CN101485233A - Method and apparatus for signal presentation - Google Patents

Publication number: CN101485233A
Authority: CN (China)
Prior art keywords: signal, address, data, light-emitting component
Legal status: Granted
Application number: CNA2007800158526A
Other languages: Chinese (zh)
Other versions: CN101485233B
Inventors: Joseph Finney (约瑟夫·芬尼), Alan John Dix (艾伦·约翰·狄克斯)
Current Assignee: Lancaster University
Original Assignee: Lancaster University
Application filed by Lancaster University
Priority claimed from PCT/GB2007/000708 (WO2007099318A1)
Publication of CN101485233A
Application granted
Publication of CN101485233B
Status: Active

Abstract

A method and apparatus for presenting an information signal, such as an image signal or a sound signal, using a plurality of signal sources. The plurality of signal sources are located within a predetermined space, and the method comprises: receiving a respective positioning signal from each of said signal sources; generating, based upon said positioning signals, location data indicative of the locations of said plurality of signal sources; generating output data for each of said plurality of signal sources based upon said information signal and said location data; and transmitting said output data to said signal sources to present said information signal.

Description

Method and apparatus for signal presentation
Technical field
The present invention relates to methods and apparatus for locating signal sources, and to methods and apparatus for presenting an information signal using such signal sources.
Background art
Strings of lamps are widely used for decorative purposes. For example, strings of lamps have long been placed on Christmas trees for decoration. Similarly, lamps are placed on other objects such as trees and large plants in public places. More recently, such lamps have been connected to control circuits which can cause the lamps to switch on and off in various predetermined patterns. For example, all of the lamps may "flash" on and off together. Alternatively, lamps may be switched on and off in sequence relative to adjacent lamps in the string, so as to produce a "chasing" effect. Many such effects are known, and they have in common that the effect is applied to all of the lamps, to a random selection of the lamps, or to lamps selected according to their position relative to one another within the string.
Decorative lamps of the type described above are also sometimes fixed around a predetermined shape, such that when the lamps are lit they display an image determined by that shape. For example, lamps may be attached around the outline of a Christmas tree such that, when the lamps are lit, the outline of the Christmas tree can be seen. Similarly, lamps may be arranged to display letters of the alphabet, such that when a plurality of such letter arrangements are combined the lamps display a word.
Until now, the display of more complex images has used arrays of light-emitting devices in which the light-emitting elements are fixed relative to one another. A processor can then process image data, together with data indicating the fixed positions of the lamps, to determine which lamps should be lit to display the required image. Such an array may take the form of a plurality of bulbs or similar light-emitting elements; more commonly, however, the lamps are smaller and are combined to form a liquid crystal display (LCD) or plasma screen, this being the manner in which images are displayed on modern flat-screen displays, notebook computer screens and many television sets.
It should be noted that all of the methods described above rely upon a fixed relationship between the light-emitting elements, this fixed relationship being used in the image display process.
More recently, it has become quite common for television sets to be provided with audio-visual amplifiers driving a plurality of loudspeakers. Typically, in a conventional surround-sound arrangement, a front centre loudspeaker is positioned at the display screen, and front left and front right loudspeakers are arranged on either side of the display screen. In addition, at least two loudspeakers are placed behind the viewing position so as to provide a "surround sound" effect. For example, if during a video sequence an aircraft enters the displayed image at the lower left corner of the screen and, some frames later, leaves the displayed image at the upper right corner of the screen, then during playback the sound of the aircraft may first be emitted by the rear left loudspeaker and subsequently by the front right loudspeaker, such that the emitted sound gives an impression of the aircraft's motion. Effects of this kind give the viewer an enhanced impression of being immersed in the displayed image.
It should be noted that the sound to be emitted by each loudspeaker is determined when the audio-visual data is created. However, when equipment of the type described above is installed in a viewer's home, minor adjustments can be made (for example to the relative volume output by each loudspeaker) so as to compensate, for example, for differing distances between the viewing position and the front loudspeakers and between the viewing position and the rear loudspeakers.
It should be noted that surround-sound systems of the type described above always comprise a plurality of loudspeakers arranged in a predetermined manner, the only variation permitted being compensation for small differences in position and distance. In essence, therefore, such surround-sound systems allow sound to be presented using an array of loudspeakers in a predetermined configuration. In other words, such loudspeaker arrangements are the acoustic equivalent of the fixed arrays of light-emitting elements used to display images as described above.
In terms of both light and sound, the systems described above are limited, at least in part, by the requirement that the lamps and loudspeakers be arranged in a predetermined manner, and this reduces the flexibility of the systems.
Summary of the invention
It is an object of embodiments of the present invention to obviate or mitigate at least some of the problems set out above.
The invention provides a method and apparatus for presenting an information signal using a plurality of signal sources, the plurality of signal sources being located within a predetermined space. The method comprises: receiving a respective positioning signal from each of said signal sources; generating, based upon said positioning signals, location data indicative of the locations of said plurality of signal sources; generating output data for each of said plurality of signal sources based upon said information signal and said location data; and transmitting said output data to said signal sources to present said information signal.
The invention therefore provides a method which can be used to locate signal sources such as light-emitting elements and then to use such light-emitting elements to display an information signal. The light-emitting elements may be arranged in a random manner on a fixed structure such as a tree. Randomly arranged light-emitting elements can therefore be located and used to display images such as predetermined patterns or predetermined text.
Generating the location data may further comprise associating the location data with identification data identifying each signal source. Associating the location data with identification data identifying each signal source may comprise generating said identification data from the positioning signal received from each signal source.
Each positioning signal may comprise a plurality of temporally spaced pulses, in which case generating identification data for each signal source may comprise generating said identification data on the basis of said plurality of temporally spaced pulses. Each positioning signal may indicate an identification code uniquely identifying one of said plurality of signal sources. Each positioning signal may be a modulated form of the identification code of the respective signal source. For example, binary phase shift keying modulation or non-return-to-zero modulation may be used.
Receiving each positioning signal may comprise receiving a plurality of temporally spaced emissions of electromagnetic radiation. The electromagnetic radiation may take any suitable form; for example, the radiation may be visible light, infrared radiation or ultraviolet radiation.
Various references are made in this application to visible light, ultraviolet light and infrared light. Those skilled in the art will readily understand the meaning of these terms. It should nevertheless be noted that infrared light typically has a wavelength of approximately 0.7 μm to 1 mm, visible light has a wavelength of approximately 400 nm to 700 nm, and ultraviolet light has a wavelength of approximately 1 nm to 400 nm.
Receiving a positioning signal from each signal source may comprise receiving the positioning signal transmitted by each signal source at a signal receiver, the signal receiver being configured to generate two-dimensional position data locating the signal source within a detection frame. The location data may then be generated on the basis of the position within the detection frame.
Receiving the positioning signal transmitted by each signal source may comprise receiving said positioning signal using a camera. In a preferred embodiment of the invention, the camera comprises a charge-coupled device (CCD) sensitive to the electromagnetic radiation. Generating the location data may further comprise grouping frames generated by the camera over time so as to generate the identification data. Grouping a plurality of said frames so as to generate the identification data may comprise processing regions of the frames which lie within a predetermined distance of one another.
Receiving the positioning signals may further comprise receiving the positioning signal transmitted by each signal source at a plurality of signal receivers, each of the signal receivers being configured to generate two-dimensional position data locating the signal source within a respective detection frame. Generating the location data may further comprise combining the two-dimensional position data generated by the plurality of signal receivers so as to generate the location data. The two-dimensional position data may be combined by triangulation.
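Combination by triangulation can be pictured as intersecting, for each signal source, the viewing rays reported by two receivers. The following is a minimal sketch, assuming each receiver's two-dimensional detection has already been converted into a known 3-D origin and a direction vector towards the source (the function names and the example geometry are illustrative, not taken from the patent); the position estimate is the midpoint of the shortest segment joining the two rays.

```python
# Minimal triangulation sketch: each camera contributes a ray (origin,
# direction) towards the detected signal source; the 3-D estimate is the
# midpoint of the shortest segment between the two rays.
# Names and the camera geometry below are illustrative assumptions.

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def add(a, b): return [a[i] + b[i] for i in range(3)]
def scale(a, s): return [a[i] * s for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest approach of rays p1 + t*d1 and p2 + s*d2.
    The direction vectors need not be unit length."""
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # rays (almost) parallel
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))      # closest point on ray 1
    q2 = add(p2, scale(d2, s))      # closest point on ray 2
    return scale(add(q1, q2), 0.5)

# Two hypothetical cameras both looking at a source at (1, 1, 2).
print(triangulate([0, 0, 0], [1, 1, 2],
                  [2, 0, 0], [-1, 1, 2]))   # -> [1.0, 1.0, 2.0]
```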
Each signal source may be an electromagnetic element configured to cause emission of electromagnetic radiation so as to present the information signal. Transmitting the output data to the signal sources may therefore comprise transmitting instructions so as to cause some of the electromagnetic elements to emit electromagnetic radiation to present the information signal.
The electromagnetic elements may be light-emitting elements, and the instructions may cause the light-emitting elements to emit visible light. The light-emitting elements may be capable of being lit at any of a plurality of predetermined intensities, and the instructions may therefore specify the intensity at which each light-emitting element is to be lit. Each positioning signal may accordingly be represented by modulating the intensity of the electromagnetic radiation emitted by each light-emitting element while the information signal is being presented. In some embodiments of the invention such intensity modulation is preferred, given that it allows a light-emitting element to continue displaying the information signal while simultaneously outputting its positioning signal in a relatively inconspicuous manner.
The light-emitting elements may be capable of being lit so as to display any of a plurality of predetermined colours, the instructions specifying a colour for each light-emitting element. In this case, the positioning signal may be represented by modulating the hue of the light emitted by each light-emitting element while the information signal is being presented. Again, such transmission of the positioning signal is advantageous in that a light-emitting element presenting the information signal can transmit its positioning signal in a relatively inconspicuous manner. Indeed, studies have shown that humans are relatively insensitive to such hue modulation. Provided that a suitably configured camera can detect such hue modulation, it is therefore an effective means of transmitting the positioning signal.
The term signal source as used herein encompasses both sources which generate signals and sources which reflect signals. For example, each signal source may be a reflector of electromagnetic radiation, preferably a reflector having controllable reflectivity. Such controllable reflectivity may be provided by associating a variable-opacity element with each reflecting element. A liquid crystal display (LCD) may be used as the variable-opacity element.
The term "signal" as used herein also encompasses a signal produced by a plurality of signal sources. For example, a colour signal may be understood as the combined effect of red, green and blue signal sources.
The signal sources may be sound sources, in which case transmitting the output data to the signal sources so as to present the information signal comprises transmitting instructions specifying the sound data to be output, so as to cause some of the sound sources to output sound data and thereby generate a predetermined soundscape.
The invention also provides a method and apparatus for locating a signal receiver within a predetermined space. The method comprises: receiving data indicative of signal values received by the signal receiver; comparing the received data with a plurality of expected signal values, each expected signal value representing the signal expected at one of a predetermined plurality of points within the predetermined space; and locating the signal receiver on the basis of the comparison.
Thus, by storing data indicating the values expected to be received at a plurality of positions, a signal receiver can be located on the basis of the signals it actually receives. The method may be carried out at each signal receiver in a distributed manner or, alternatively, the signal receivers may provide details of the received signals to a central computer configured to locate the signal receivers.
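The comparison step can be sketched very simply if the expected values are held as a table mapping each candidate point to a vector of expected signal strengths (one entry per reference signal). The sketch below, with its small grid of candidate points and its squared-error metric, is an illustrative assumption rather than anything prescribed by the patent: the receiver is placed at the candidate point whose expected vector best matches the measured values.

```python
# Locate a receiver by comparing measured signal values against stored
# expected values at a set of candidate points (illustrative sketch).

EXPECTED = {                      # candidate point -> expected signal values
    (0.0, 0.0): [0.90, 0.10, 0.20],
    (1.0, 0.0): [0.40, 0.60, 0.30],
    (0.0, 1.0): [0.30, 0.20, 0.80],
    (1.0, 1.0): [0.20, 0.50, 0.70],
}

def locate(measured):
    """Return the candidate point whose expected values are closest
    (sum of squared differences) to the measured values."""
    def error(point):
        expected = EXPECTED[point]
        return sum((m - e) ** 2 for m, e in zip(measured, expected))
    return min(EXPECTED, key=error)

# The receiver reports these values for the three reference signals.
print(locate([0.25, 0.45, 0.75]))   # -> (1.0, 1.0)
```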
Each signal receiver may be a signal transceiver. The method may further comprise providing a signal to the signal receivers.
The method may further comprise transmitting a predetermined signal to the signal receivers, such that the signal received by each signal receiver is based upon the predetermined signal. Receiving data indicative of signal values received by the signal receiver may comprise receiving data indicative of a sound signal received by the signal receiver, although this aspect of the invention is not limited to the use of sound data.
The invention also provides a method and apparatus for locating and identifying a signal source. The method comprises: receiving, at a signal receiver, a signal transmitted by the signal source, the signal receiver being configured to generate two-dimensional position data locating the signal source within a detection frame; generating location data on the basis of the position within the detection frame; processing the received signal, the received signal comprising a plurality of temporally separated signal transmissions; and determining an identification code of the located signal source from the plurality of temporally separated signal transmissions received.
This aspect of the invention is particularly useful for monitoring the movement of people or equipment within a predetermined space. For example, a signal source may be associated with each individual or with each item of equipment.
The signal received from the signal source may take any suitable form. In particular, the signal may take the form of a positioning signal as described in relation to other aspects of the invention.
The invention also provides a method and apparatus for generating a three-dimensional soundscape using a plurality of sound sources. The method comprises: determining a desired sound pattern to be applied to a predetermined space; determining the sound to be emitted from each sound source, the determination being carried out using data indicative of the positions of the sound sources and using said sound pattern; and transmitting sound data to each sound source.
The invention therefore allows the generation of sound signals which, when output using a plurality of sound sources, produce a three-dimensional soundscape.
The sound sources used may take any suitable form. In some embodiments of the invention, a plurality of handheld devices such as mobile telephones are used to generate the sound, the sound being output by the loudspeakers associated with the mobile telephones.
The invention also provides a method and apparatus for processing addresses in an addressing system configured to address a plurality of spatial elements arranged in a hierarchy. The method uses addresses defined by a plurality of predetermined digits, and comprises: processing at least one predetermined digit of the address so as to determine the level of the hierarchy represented by the address; and determining, from the processed address, the address of a spatial element at the determined hierarchical level.
Processing at least one predetermined digit of the address to determine the hierarchical level may comprise processing at least one leading digit of the address. For example, each digit of the address may be processed in turn starting from one end, and the group of leading digits all having the same value may be used to determine the hierarchical level. For example, where binary addresses are used, the number of leading '1's in the address may be used to determine the hierarchical level.
Determining the address of the spatial element may comprise processing at least one further digit of the address. The at least one further digit to be processed may be determined by the digits indicating the hierarchical level.
The method may be used with a variety of addressing mechanisms, including IPv6 addresses.
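To make the leading-digit scheme concrete, the sketch below assumes binary addresses of a fixed 16-bit width in which the run of leading '1' bits encodes the hierarchical level and the bits after the terminating '0' identify the spatial element at that level. The width and the exact field layout are illustrative assumptions; the same idea extends to, for example, 128-bit IPv6 addresses.

```python
# Decode a hierarchical spatial address: the number of leading '1' bits
# gives the hierarchical level, the remaining bits identify the element.
# The 16-bit width and field layout are illustrative assumptions.

WIDTH = 16

def decode(address):
    bits = format(address, f'0{WIDTH}b')
    level = len(bits) - len(bits.lstrip('1'))   # count of leading '1's
    # Skip the leading '1's and the terminating '0'; the rest of the
    # address identifies the spatial element at that level.
    element_bits = bits[level + 1:]
    return level, int(element_bits, 2) if element_bits else 0

print(decode(0b0000000000000101))   # -> (0, 5)
print(decode(0b1100101000000000))   # -> (2, 2560)
```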
The invention also provides a method of allocating addresses to a plurality of devices, the method comprising: causing each of the plurality of devices to select an address; receiving data indicative of the address selected by each device; processing the data indicative of the selected addresses to determine whether a plurality of devices have selected a single address; and, if a plurality of devices have selected a single address, instructing those devices to reselect addresses.
The invention also provides a method of identifying the addresses of a plurality of devices, the addresses being set within an address range, the method comprising: generating a plurality of sub-ranges from the address range; determining whether any of the plurality of devices has an address within a first sub-range; and, if one or more devices has an address within the first sub-range, processing at least one address within the first sub-range.
It will be appreciated that features described herein in relation to one aspect of the invention may be used in combination with features of other aspects of the invention described herein. It will also be appreciated that all aspects of the invention may be implemented by methods, apparatus and devices. It will further be appreciated that the methods provided by the invention may be implemented using computer programs. Such computer programs may be carried on suitable carrier media such as CD-ROMs and other discs, and such carrier media also include communication signals carrying suitable computer programs. Aspects of the invention may also be implemented by suitably programming a programmable computer apparatus using appropriate computer program code.
Brief description of the drawings
Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Fig. 1 is a high-level schematic illustration of an embodiment of the invention;
Fig. 2 is a high-level flowchart providing an overview of the processing carried out by the embodiment of the invention shown in Fig. 1;
Fig. 3 is a schematic illustration of a process, used in the embodiment of Fig. 1, for converting spatial addresses into addresses associated with particular signal sources;
Fig. 4 is a schematic illustration of a process for presenting an image using the plurality of light sources used in the embodiment of Fig. 1;
Fig. 5 is a schematic illustration of a computer-controlled network of light-emitting elements suitable for use in embodiments of the invention;
Fig. 6 is a schematic illustration of the PC shown in Fig. 5 and used to control the apparatus of Fig. 5;
Figs. 7, 7A and 7B are schematic illustrations of the light-emitting elements shown in Fig. 5;
Fig. 8 is a flowchart of an address determination algorithm used to allocate addresses to the light-emitting elements of Fig. 5;
Figs. 8A and 8B are flowcharts of a possible variant of the address determination of Fig. 8;
Fig. 9 is a schematic illustration of an alternative computer-controlled network of light-emitting elements suitable for use in embodiments of the invention;
Fig. 9A is a schematic illustration of a pulse width modulated signal;
Fig. 9B is a schematic illustration of a data packet used to transmit commands to the light-emitting elements;
Fig. 9C is a flowchart of the processing carried out by the light-emitting elements of Fig. 5;
Fig. 9D is a flowchart of the processing carried out by the control elements of Fig. 5;
Fig. 10 is a schematic illustration of an arrangement of a camera used to locate light-emitting elements in an embodiment of the invention;
Figs. 10A and 10B are pixelated representations of frames captured using the camera shown in Fig. 10;
Fig. 11 is a schematic illustration of a camera used to locate light-emitting elements in a further embodiment of the invention;
Fig. 11A is a series of four pixelated representations of frames captured using the camera of Fig. 11 over a predetermined period of time;
Fig. 12 is a schematic illustration of a Hamming code used in some embodiments of the invention;
Fig. 13 is a schematic illustration of pulse shapes used in binary phase shift keying (BPSK) modulation;
Fig. 14 is a schematic illustration of how BPSK modulation is carried out in some embodiments of the invention;
Fig. 15 is a schematic illustration of a data frame used in an embodiment of the invention;
Fig. 16 is a schematic illustration of a plurality of cameras used to locate light-emitting elements in an embodiment of the invention;
Fig. 17 is an overview of a light-emitting element location process configured to operate on data obtained from the camera shown in Fig. 11;
Fig. 18 is a flowchart showing the frame-by-frame processing of Fig. 17 in greater detail;
Fig. 19 is a flowchart showing the temporal processing of Fig. 17 in greater detail;
Figs. 20, 20a, 20b, 20c and 20d are schematic illustrations of a method of locating light-emitting elements in an embodiment of the invention;
Fig. 21 is a flowchart of a camera calibration process used in an embodiment of the invention;
Figs. 22A to 22D are schematic illustrations of artefacts which appear when the cameras shown in Figs. 10 and 11 are not correctly calibrated;
Fig. 23 is a flowchart of an alternative light-emitting element location algorithm suitable for use with the apparatus shown in Figs. 5 and 9;
Fig. 23A is a flowchart of a process for estimating the position of a signal source;
Fig. 24 is a flowchart of a light-emitting element location process used in some embodiments of the invention;
Fig. 24A is a flowchart of a process for obtaining the data used to locate the light-emitting elements;
Fig. 24B is a flowchart of a process for locating the light-emitting elements from the data obtained using the process of Fig. 24A;
Fig. 24C is a screenshot taken from a graphical user interface used to initiate the processing shown in Figs. 24A and 24B;
Fig. 24D is a flowchart of a process for displaying an image using the located light-emitting elements;
Fig. 24E is a screenshot taken from a graphical user interface used to initiate the processing shown in Fig. 24D;
Fig. 24F is a screenshot taken from a simulator which simulates the light-emitting elements;
Fig. 24G is a screenshot showing how data defining a plurality of light-emitting elements is loaded into the simulator;
Fig. 24H is a screenshot showing how the interface of Fig. 24G is used;
Fig. 24I is a screenshot taken from a graphical user interface allowing interactive control of the light-emitting elements;
Fig. 25 is a schematic illustration of a spatial sound generation system according to the invention;
Fig. 26 is a schematic illustration of the PC used to control the system shown in Fig. 25;
Fig. 27 is a flowchart giving an overview of the processing carried out by the system of Fig. 25;
Fig. 28 is a flowchart of the initialisation processing carried out in the system of Fig. 25;
Fig. 29 is a flowchart of the processing carried out by the system of Fig. 25 to generate position data for a particular sound transceiver;
Figs. 30 and 31 are flowcharts showing how the position data generated using the process of Fig. 29 can be refined;
Fig. 32 is a flowchart of a process for generating a volume map in the system of Fig. 25;
Fig. 33 is a flowchart of a process for calculating the gain and orientation of the sound transceivers in the system of Fig. 25;
Fig. 34 is a flowchart of a process for generating sound using the system of Fig. 25;
Fig. 35 is a flowchart of the processing carried out by the sound transceivers in the system of Fig. 25;
Fig. 36 is a flowchart of an alternative process for generating sound in the system of Fig. 25;
Fig. 37 is a schematic illustration of a process for converting spatial addresses into native addresses;
Figs. 38 to 40 are schematic illustrations of 128-bit address configurations;
Fig. 41 is a schematic illustration of the process of Fig. 37 implemented over the Internet;
Fig. 42 is a schematic illustration of how spatial addressing is used in embodiments of the invention; and
Fig. 43 is a schematic illustration of an octree representation of space used in embodiments of the invention.
Detailed description of embodiments
Referring to Fig. 1, an overview of the invention is provided. A PC 1 communicates with a plurality of light-emitting elements 2 arranged in a random manner on a tree 3. The PC 1 is configured to spatially locate the light-emitting elements 2, and, having carried out this location, to use the light-emitting elements to display a user-specified pattern.
The flowchart of Fig. 2 shows, at a high level, the processing carried out by the apparatus of Fig. 1. At step S1 the light-emitting elements 2 are spatially located using a location algorithm described below. At step S2 the image to be displayed is received, typically by the user providing details of a file from which data is to be read, the data then being read from the specified file. Alternatively, the image may be read from a frame buffer, in a manner similar to that in which images to be displayed on a conventional computer display are read from a frame buffer. At step S3 some of the light-emitting elements 2 are selected to be lit so as to display the image, and having selected suitable light-emitting elements these are lit at step S4. It will be appreciated that it may be necessary to extinguish some previously lit light-emitting elements in order to display the image.
Fig. 3 schematically shows the desired output of the light-emitting element location process of step S1 of Fig. 2. It can be seen that a plurality of voxels collectively define a voxelised representation 4 of the space containing the light-emitting elements 2. The location process maps each light-emitting element 2 to one voxel of the voxelised representation 4 of the space. Having carried out the process shown schematically in Fig. 3, it will be appreciated that, provided that an image to be displayed is mapped onto the voxels of the voxelised representation 4, it can be determined which lamps should be lit for that particular image. In other words, if it is known which voxels are to be lit, the output of step S1 readily identifies the light-emitting elements which are to be lit.
The process of displaying an image on the light-emitting elements 2 is now described with reference to the schematic illustration of Fig. 4. It can be seen that image data 5, representing a three-dimensional image of a cone, is to be displayed using the light-emitting elements 2, the light-emitting elements 2 having been associated with the voxelised representation 4 as described with reference to Fig. 3. The image data 5 is mapped onto the voxelised representation 4 so as to identify a plurality of voxels which are to be lit; this corresponds to step S3 of Fig. 2. Having carried out this mapping operation, the light-emitting elements 2 which are to be lit can be determined, and the appropriate light-emitting elements can then be lit so as to display the image data 5 using the light-emitting elements 2.
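The two mappings just described (located elements onto voxels, and image voxels onto the elements to be lit) can be sketched as follows. This is a minimal illustration assuming a regular grid of unit voxels and element positions already expressed in the same coordinate frame; the voxel size, the dictionary layout and the example data are illustrative assumptions.

```python
# Sketch: map located light-emitting elements to voxels of a regular grid,
# then select the elements to light for an image expressed as "on" voxels.
# The unit voxel size and the example data are illustrative assumptions.

from math import floor

VOXEL_SIZE = 1.0

def to_voxel(position):
    """Map a 3-D position to the integer voxel it falls within."""
    return tuple(int(floor(c / VOXEL_SIZE)) for c in position)

# Located elements: element address -> 3-D position (output of step S1).
elements = {
    1: (0.2, 0.4, 1.7),
    2: (1.6, 0.1, 0.3),
    3: (2.4, 2.2, 0.9),
}

# Build voxel -> element addresses lookup (a voxel may hold several elements).
voxel_map = {}
for address, position in elements.items():
    voxel_map.setdefault(to_voxel(position), []).append(address)

# Image to display, expressed as the set of voxels to be lit (step S3).
image_voxels = {(0, 0, 1), (2, 2, 0)}

to_light = [addr for v in image_voxels for addr in voxel_map.get(v, [])]
print(sorted(to_light))   # elements to switch on at step S4 -> [1, 3]
```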
Apparatus for implementing a preferred embodiment of the invention is now described with reference to Fig. 5. The PC 1 is connected to three control elements 6, 7, 8, which are in turn connected to respective groups of light-emitting elements 2 via respective buses 9, 10, 11. The apparatus also comprises a power supply unit 12 which is likewise connected to the control elements 6, 7, 8. The PC 1 is connected to the control elements 6, 7, 8 via a serial connection. The operation of this apparatus is described in further detail below.
The structure of the PC 1 is now described with reference to Fig. 6. The PC 1 comprises a CPU 13 and random access memory (RAM) 14, the RAM 14 providing program memory 14a and data memory 14b. The PC 1 further comprises a hard disk drive 15 and an input/output (I/O) interface 16, the I/O interface 16 being used to connect input and output devices to the other components of the PC 1. In the illustrated embodiment, a keyboard 17 and a flat-screen display 18 are connected to the I/O interface 16. The PC 1 further comprises a communications interface 19 which allows the PC 1 to communicate with the control elements described in further detail below; preferably, the communications interface is a universal serial bus. The CPU 13, RAM 14, hard disk drive 15, I/O interface 16 and communications interface 19 are connected together by a bus 20, along which data and instructions can be passed between the aforementioned components.
Fig. 7 shows an exemplary light-emitting element 2 connected to the bus 9. The light-emitting element 2 comprises a light source in the form of a light-emitting diode (LED) 21 controlled by a processor 22. The processor 22 is configured to receive instructions indicating whether the LED 21 should be lit, and to operate on the basis of those instructions. The light-emitting element 2 further comprises a diode 23 and a capacitor 24. In a practical embodiment of the invention, a miniature version of the light-emitting element 2 can be made with a size similar to that of a conventional LED. Such a light-emitting element exposes two connections, along which power (a 5V DC supply) and the instructions for the processor 22 are provided. Indeed, it should be noted, as described in further detail below, that the light-emitting element 2 is connected to the bus 9 by two connectors, the light-emitting element obtaining both power and instructions from the bus 9.
It should be noted that the light-emitting element shown in Fig. 7 is merely exemplary, and that the light-emitting elements may take a variety of forms. Two alternative forms are shown in Figs. 7A and 7B. These alternative forms help to eliminate flicker and are therefore preferred in some embodiments. In particular, it should be noted that the arrangement of Fig. 7A comprises a diode 23a in series with the LED 21 and a capacitor 24a in parallel with the LED 21. Furthermore, although the light source in the illustrated light-emitting elements is an LED, any suitable light source may be used; for example, the light source may be a lamp, a neon tube or a cold-cathode lamp. It should also be noted that, although in the described embodiments of the invention both instructions and power are provided to the light-emitting elements via the bus 9, instructions and power may be provided in different ways. For example, power may be provided via the bus 9 while instructions are provided directly from the control element 6 wirelessly, for example using Bluetooth communication. Alternatively, instructions may be provided via the bus 9 while each light-emitting element has its own power supply in the form of a battery.
As mentioned above, in the described embodiment both instructions and electrical power are provided via the bus 9 to the light-emitting elements 2 connected to the bus 9. Typically this is achieved by providing a 5V DC supply on the bus 9 and modulating that supply so as to provide simplex, one-way communication to the light-emitting elements 2, allowing the control element 6 to transmit instructions to each light-emitting element. A 5V supply is preferred since otherwise more complex light-emitting elements might be needed to convert a received higher voltage into a voltage suitable for the light source.
Scalability was a principal consideration in the design of the apparatus of Fig. 5. In particular, it is important that each light-emitting element can be manufactured simply and cheaply, and that the control functions are separated from the light-emitting elements. At the same time, care must be taken to avoid a centralised solution which is difficult to scale. For this reason, overall control is carried out by the PC 1, with the control elements 6, 7, 8 being delegated responsibility for controlling the light-emitting elements to which they are connected. Referring back to Fig. 5, it can be seen that each control element 6, 7, 8 is connected to the PC 1 via a bus 25, this configuration allowing the desired balance between delegation and scalability.
The control elements 6, 7, 8 may use a variety of addressing schemes to command individual light-emitting elements 2 to switch on or off. Indeed, in some circumstances all of the light-emitting elements associated with a particular control element may need to switch on or off simultaneously, and in such circumstances the control element can use broadcast communication to control the light-emitting elements to which it is connected. It is, however, highly desirable to be able to address each light-emitting element individually. Various possible addressing schemes are described below, but it should be noted that, in general terms, the control elements 6, 7, 8 can handle relatively complex addresses (for example the IPv6 addresses described below), while the individual light-emitting elements typically operate using simple addresses generated by the corresponding control element.
Each light-emitting element must have an address which is unique on its own bus. There are several schemes by which such unique addressing can be achieved. For example, in some embodiments an address is hard-coded into each light-emitting element 2 during manufacture. This is the approach adopted for the medium access control (MAC) addresses of conventional computer networking hardware. Although such an approach is workable, it should be noted that it assumes that all addresses are globally unique, which may result in unnecessarily long addresses. This compromises the desired simplicity of the light-emitting elements. Furthermore, using such addresses requires two-way communication between the control elements 6, 7, 8 and the individual light-emitting elements 2. For reasons of complexity and cost, such two-way communication is preferably avoided.
Furthermore, in a scheme using such hard-coded addresses it may be difficult to replace a failed light-emitting element, given that the replacement would need to have the same address. This would compromise usability, requiring the user to sort light-emitting elements by address, and requiring suppliers to stock large numbers of light-emitting elements having different addresses.
Because of these problems, an alternative addressing mechanism is preferred in some embodiments of the invention. This approach involves each light-emitting element dynamically choosing an address which is unique on the bus to which it is connected. The approach uses cooperation between the light-emitting elements and the associated control element to generate an 8-bit address for each light-emitting element.
Fig. 8 is a flowchart of the address selection process. Each light-emitting element connected to a particular bus carries out steps S5 and S6. At step S5 each light-emitting element generates a succession of addresses using a pseudo-random number generator. This process is repeated for a predetermined period of time (for example one second). At the end of that period, the random number generated last is set as the address of that light-emitting element (step S6). It should be noted that inaccuracies between the on-board clocks of the processors of the individual light-emitting elements will typically mean that the resulting addresses are fairly evenly distributed across the address space.
When the aforementioned predetermined period of time has elapsed, each of the control elements 6, 7, 8 carries out its own processing. The control element cycles through each address of the address space in turn. For the selected address, any light-emitting elements 2 associated with that address are commanded to light (step S7). Given that power and instructions are transmitted on the same bus, the power drawn by the light-emitting elements can be determined at step S8, the power drawn being proportional to the number of light-emitting elements associated with the selected address. The power drawn is determined at step S8 (for example by measuring the current drawn), and the number of lit light-emitting elements is determined at step S9. This processing is repeated for each address in turn at step S10 so as to determine the number of light-emitting elements associated with each address. At step S11 a check is carried out to determine whether any address is associated with more than one light-emitting element. If no such address is found, it can be concluded that each light-emitting element has a bus-unique address, and processing ends at step S12. If, however, duplication exists, the light-emitting elements which do not have a bus-unique address are commanded (step S13) to repeat the processing of steps S5 and S6, this repeated processing being shown as step S14 in Fig. 8. After the predetermined period of time, the processing of steps S7 to S12 is repeated so as to ensure that all light-emitting elements have unique addresses. If this processing again determines that addresses are duplicated, the processing of step S13 is carried out once more, and the process therefore continues until all light-emitting elements on the particular bus have bus-unique addresses. In order to improve the rate of convergence, the control element can at step S13 specify a set of addresses which are not to be used; the light-emitting elements then refrain from selecting their addresses from that set, thereby reducing the risk of address duplication.
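The loop of Fig. 8 can be simulated as below. This is a sketch only: direct counting of elements per address stands in for the power measurement of steps S8 and S9, elements are assumed to reselect uniformly from the addresses not yet marked as allocated, and none of the names come from the patent.

```python
# Simulation sketch of the Fig. 8 address selection loop: elements pick
# random 8-bit addresses; the controller detects duplicated addresses
# (here by direct counting, standing in for the power measurement) and
# asks the colliding elements to choose again, avoiding allocated addresses.

import random
from collections import Counter

ADDRESS_SPACE = range(256)          # 8-bit addresses
NUM_ELEMENTS = 80

addresses = [random.choice(ADDRESS_SPACE) for _ in range(NUM_ELEMENTS)]  # steps S5/S6

rounds = 0
while True:
    counts = Counter(addresses)                              # steps S7-S10
    duplicated = {a for a, n in counts.items() if n > 1}
    if not duplicated:                                       # step S11
        break
    allocated = {a for a, n in counts.items() if n == 1}
    free = [a for a in ADDRESS_SPACE if a not in allocated]  # step S13
    for i, addr in enumerate(addresses):                     # step S14
        if addr in duplicated:
            addresses[i] = random.choice(free)
    rounds += 1

print(f"unique addresses for {NUM_ELEMENTS} elements after {rounds} extra rounds")
```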
In some embodiments of the invention the light-emitting elements are provided with non-volatile storage and store the address they last used. This avoids having to carry out the full processing of Fig. 8 every time a lighting configuration is used. Care should, however, be taken to ensure that all of the light-emitting elements are still connected to the bus to which they were connected when last used. In some embodiments of the invention, consistency between the light-emitting elements connected to a particular bus and the last-used addresses is verified simply by carrying out the processing of steps S7 to S12 of Fig. 8.
An alternative method of identifying the individual addresses in use is now described with reference to Figs. 8A and 8B. The processing described with reference to Figs. 8A and 8B in effect replaces the processing of steps S7 to S10 of Fig. 8 described above. This alternative method is particularly suited to situations in which the address space is sparsely used, and in particular to situations in which the address space is considerably larger than the number of light-emitting elements to which addresses are to be allocated. The alternative method avoids the linear pass through the set of possible addresses required by the processing described with reference to Fig. 8. Indeed, where the address space is large, a linear pass through the possible addresses may be computationally infeasible; for example, where a 32-bit address space is used, a linear pass at a rate of 100 addresses per second would take more than a year. The alternative method described with reference to Figs. 8A and 8B instead adopts a hierarchical scheme to determine whether any address conflicts exist.
Referring now to Fig. 8A, the range of addresses is determined at step S100. At step S101 sub-ranges within the determined address range are generated. This can conveniently be achieved by using suitable prefixes. For example, if the range determined at step S100 is to be divided into two sub-ranges, a first sub-range can be defined as the addresses beginning with a '0'-valued bit and a second sub-range as the addresses beginning with a '1'-valued bit. If it is desired to generate more than two sub-ranges from the range determined at step S100, prefixes comprising more than one bit can be used; for example, prefixes comprising two bits provide four sub-ranges.
At step S102 the addresses within each sub-range are processed, as described in further detail below. Step S103 determines whether there are further sub-ranges still to be processed. If there are no such sub-ranges still to be processed, processing returns to step S11 of Fig. 8. If, however, there are further sub-ranges to be processed, processing returns from step S103 to step S102.
Fig. 8B shows the processing of step S102 in greater detail. At step S104 the light-emitting elements within the currently processed address sub-range are commanded to light. At step S105 the power drawn by the lit light-emitting elements is determined, and this determined power is used at step S106 to determine the number of light-emitting elements which have been lit. At step S107 a check is carried out to determine whether any lamps have been lit. If no lamps have been lit, data can be recorded indicating that no light-emitting elements have addresses within the currently processed sub-range; data indicating this is stored at step S108, and no further processing of the addresses within the processed sub-range is required. If, however, the check of step S107 determines that some light-emitting elements have been lit, processing passes from step S107 to step S109. Here a check is carried out to determine whether the currently processed address range comprises only a single address. If so, processing passes from step S109 to step S110, where a check is carried out to determine whether more than one light-emitting element has been lit. If it is determined that more than one light-emitting element has been lit, processing passes to step S111, where data indicating this fact is stored; this data is then handled in the manner described above with reference to Fig. 8. If, however, only a single light-emitting element has been lit, its address is recorded, and at step S112 that address is marked as allocated.
If the check of step S109 determines that the currently processed range comprises more than one address, processing passes from step S109 to step S113. Here sub-ranges are generated from the currently processed address range before those sub-ranges are processed at step S114. The processing of step S114 itself comprises carrying out the processing of Fig. 8B for each of the sub-ranges generated at step S113. It can therefore be seen that steps S109, S113 and S114 have the effect that, when light-emitting elements are located within a sub-range, further processing is carried out to determine the addresses of those light-emitting elements.
It should be noted that the complexity of the process described with reference to Figs. 8A and 8B is related to the number of light-emitting elements and to the logarithm of the number of addresses; the complexity is not linear in the total number of addresses. For very large address ranges, the processing of Figs. 8A and 8B is therefore computationally feasible.
It will be appreciated that the processing of Figs. 8A and 8B can be used however the addresses have been allocated (whether statically or dynamically). The processing of Figs. 8A and 8B provides an effective means of determining the address used by each light-emitting element.
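The subdivision of Figs. 8A and 8B can be sketched as a recursive search, as below. The sketch assumes the controller can observe only how many elements light within a commanded address range (standing in for the power measurement of steps S105 and S106); it collects the addresses found to be in use and any single addresses shared by more than one element. The function names, the 16-bit address space and the example addresses are illustrative assumptions.

```python
# Sketch of the Figs. 8A/8B hierarchical search: recursively split the
# address range, skipping sub-ranges in which nothing lights, until each
# occupied range contains a single address. `count_lit` stands in for the
# power measurement of steps S105/S106; names and sizes are illustrative.

IN_USE = {3, 1500, 40000, 40001}          # element addresses (unknown to controller)

def count_lit(lo, hi):
    """Number of elements whose address lies in [lo, hi] (simulated)."""
    return sum(lo <= a <= hi for a in IN_USE)

def find_addresses(lo, hi, found, conflicts):
    n = count_lit(lo, hi)                 # steps S104-S106
    if n == 0:                            # steps S107/S108: empty sub-range
        return
    if lo == hi:                          # step S109: single address
        if n > 1:
            conflicts.append(lo)          # step S111: shared address
        else:
            found.append(lo)              # step S112: mark as allocated
        return
    mid = (lo + hi) // 2                  # steps S113/S114: recurse on halves
    find_addresses(lo, mid, found, conflicts)
    find_addresses(mid + 1, hi, found, conflicts)

found, conflicts = [], []
find_addresses(0, 2 ** 16 - 1, found, conflicts)
print(found, conflicts)                   # -> [3, 1500, 40000, 40001] []
```

The number of range queries grows with the number of occupied addresses and with the logarithm of the size of the address space, in line with the complexity remark above.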
The preceding description has concentrated on the manner in which addresses are determined so as to allow the control elements 6, 7, 8 to control each light-emitting element 2 individually. As described, the buses 9, 10, 11 also carry power (typically a 5V supply). Data in the form of addresses and instructions is provided to the buses 9, 10, 11 along the bus 25: the PC 1 communicates with a bridge 25a via a USB connection, and the bridge 25a is connected to the control elements 6, 7, 8 via the bus 25. Power is provided to the buses 9, 10, 11 along a bus 26 connected to the power supply unit 12. Although the buses 25 and 26 could be a single common bus, currently preferred embodiments of the invention use two distinct buses 25, 26.
The power supply unit 12 is a 36V DC supply. Each control element 6, 7, 8 comprises means for converting this 36V DC supply into the 5V supply required by its bus. Using a 5V supply allows standard processors to be used. The control elements 6, 7, 8 are also provided with means for modulating the supply so as to carry instructions.
A typical LED light-emitting element consumes a current of 30 mA. At 5V, a string of 80 light-emitting elements will therefore draw a current of 2.4 A. Such a requirement can be met using inexpensive narrow-gauge cabling.
The linear relationship between current and the number of light-emitting elements limits the scalability of a single string of light-emitting elements. Scalability is further limited by the fact that the greater the number of lamps, the greater the volume of data to be transmitted, and hence the higher the frequency at which the power supply must be modulated. If the number of lamps is too great, this frequency becomes too high.
In view of these limits on the scalability of a single string of light-emitting elements, the apparatus of Fig. 5 allows eight control elements to be connected to a single 36V power supply unit. Each control element can control 80 lamps, meaning that the configuration of Fig. 5 can be used to provide 640 light-emitting elements. The control elements can be connected together using cabling such as standard CAT 5 cabling.
If 640 light-emitting elements are insufficient, the apparatus of Fig. 5 can be connected to other similar apparatus under the control of a central control element. Fig. 9 illustrates such a configuration. Here, two apparatuses 27, 28 (each configured as shown in Fig. 5) are connected together by a high-bandwidth interconnection 29. A central control element 30 then provides overall control of the configuration, providing instructions to the PCs 31, 32 of the respective apparatuses 27, 28.
As described above, both power and instructions are provided to the light-emitting elements along the buses 9, 10, 11. This is achieved using pulse width modulation techniques. Fig. 9A shows an illustrative example pulse sequence. It can be seen that, in general, a voltage of +5V is provided. When data is to be transmitted, this voltage is pulled down to ground, and the value transmitted is represented by the length of time for which the voltage remains at ground. In particular, it can be seen from Fig. 9A that a relatively short pulse is used to represent a '0' bit and a relatively long pulse is used to represent a '1' bit.
Furthermore, when such modulation is used with a relatively high-voltage supply (for example a 36V supply), the voltage need not be pulled down to ground but can simply be reduced to a lower level. For example, if the maximum voltage is 36V, the voltage can be reduced to 31V to represent data.
This form of data transmission is advantageous because it avoids the voltage remaining at 0V, or below the desired value, for long periods. In other words, by keeping the pulse durations relatively short, any variation in the power supplied is kept small.
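A software sketch of the encoding of Fig. 9A is given below. The specific durations (one time unit low for a '0', three units low for a '1', two units high between bits) are illustrative assumptions; the description above only requires that a short low pulse represent a '0' and a longer low pulse a '1', with the line otherwise held at the supply level.

```python
# Pulse-width encoding sketch (Fig. 9A): the line normally sits high; each
# bit is a low pulse whose duration encodes its value.  The durations used
# here (1 unit for '0', 3 units for '1', 2 units high between bits) are
# illustrative assumptions.

LOW_0, LOW_1, GAP = 1, 3, 2

def encode(bits):
    """Return a list of (level, duration) pairs for the given bit string."""
    line = []
    for bit in bits:
        line.append(('low', LOW_1 if bit == '1' else LOW_0))
        line.append(('high', GAP))
    return line

def decode(line):
    """Recover the bit string by thresholding the low-pulse durations."""
    threshold = (LOW_0 + LOW_1) / 2
    return ''.join('1' if d > threshold else '0'
                   for level, d in line if level == 'low')

frame = encode('10110010')
print(decode(frame))   # -> '10110010'
```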
The buses 9, 10, 11 communicate at a rate of 50 kbps. This rate allows the data to be handled by a relatively cheap 4 MHz processor. Data passed between the control elements on the bus 25 is transmitted at a rate of 500 kbps.
The format of the data transmitted to the light-emitting elements is now described. Fig. 9B shows a data packet. It can be seen that the packet comprises an 8-bit destination field 100 specifying the address to which the data is to be sent, an 8-bit command field 101 indicating the command associated with the packet, and an 8-bit length field 102 indicating the length of the packet. A checksum field 103 provides a checksum for the packet, and a payload field 104 carries the data transmitted in the packet.
The value taken by the destination field 100 indicates a light-emitting element address. The destination field 100 may, however, take the value 0, indicating that the destination of the packet is the control element on the particular bus, or the value 255, indicating a broadcast packet.
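The layout of Fig. 9B can be sketched as follows. The field order (destination, command, length, checksum, then payload) follows the description above, but the checksum algorithm and the numeric command codes are not specified in the text, so a simple sum modulo 256 and an arbitrary code are assumed here purely for illustration; the length field is likewise taken here to count the payload bytes.

```python
# Sketch of the Fig. 9B packet: 8-bit destination, command and length
# fields, a checksum field, then the payload.  The checksum algorithm
# (sum mod 256) and the command code below are illustrative assumptions.

BROADCAST = 255          # destination 255 = all elements, 0 = control element
CMD_SET_BRIGHTNESS = 3   # assumed numeric code

def checksum(data):
    return sum(data) % 256

def build_packet(destination, command, payload=b''):
    header = bytes([destination, command, len(payload)])
    return header + bytes([checksum(header + payload)]) + payload

def parse_packet(packet):
    destination, command, length, received_checksum = packet[:4]
    payload = packet[4:4 + length]
    if checksum(packet[:3] + payload) != received_checksum:
        raise ValueError("bad checksum")
    return destination, command, payload

pkt = build_packet(destination=42, command=CMD_SET_BRIGHTNESS, payload=bytes([200]))
print(parse_packet(pkt))   # -> (42, 3, b'\xc8')
```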
Various commands can be specified in the command field of the packet of Fig. 9B, as will now be described.
The command ON switches on the one or more light-emitting elements identified by the address in the destination field 100, and the command OFF switches off the one or more light-emitting elements identified by the address in the field 100.
Initially, a SELF_ADDRESS command having a blank payload field 104 is broadcast to all light-emitting elements, triggering the light-emitting elements to allocate themselves addresses in the manner described above (step S6 of Fig. 8). When an address conflict is detected, a further SELF_ADDRESS command is broadcast, but here the payload field 104 carries a bit pattern indicating which addresses have been allocated; that is, the bit pattern may comprise one bit for each possible address. Upon receiving a second packet comprising the SELF_ADDRESS command, a light-emitting element determines, by processing the bit pattern provided in the payload field 104, whether the address it has selected is indicated as having been allocated. If its selected address is not indicated as allocated, it can conclude that the selected address has caused a conflict with the address of another light-emitting element, and the light-emitting element therefore selects a different address.
When selecting a different address, the light-emitting element can take account of the addresses indicated as allocated in the payload field 104, so as to eliminate further address conflicts.
The command SELF_NORMALISE is used to normalise the allocated addresses. As with the SELF_ADDRESS command described above, a packet carrying the SELF_NORMALISE command has a payload indicating the addresses which have been allocated. The SELF_NORMALISE command adjusts the addresses so that they are contiguous. This is achieved by each light-emitting element processing the payload field 104 so as to identify the bit associated with its own address; the bits preceding that address are counted, one is added to the count, and the resulting value provides the new address for that particular light-emitting element.
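The renumbering carried out on receipt of SELF_NORMALISE can be sketched as below, assuming the payload is a 256-bit bitmap with one bit per possible 8-bit address (bit i set when address i is allocated), as suggested by the SELF_ADDRESS description above. Each element counts the allocated addresses preceding its own and adds one to obtain its new, contiguous address; the byte ordering of the bitmap is an illustrative assumption.

```python
# SELF_NORMALISE sketch: each element finds the bit for its own address in
# the allocation bitmap, counts the set bits before it and adds one to get
# its new, contiguous address.  The bitmap layout (bit i = address i,
# least-significant bit first within each byte) is an illustrative assumption.

def bit_is_set(bitmap, i):
    return (bitmap[i // 8] >> (i % 8)) & 1

def normalised_address(bitmap, own_address):
    """New address = number of allocated addresses before own_address, plus 1."""
    preceding = sum(bit_is_set(bitmap, i) for i in range(own_address))
    return preceding + 1

# Bitmap in which addresses 7, 42 and 200 are marked as allocated.
bitmap = bytearray(32)
for allocated in (7, 42, 200):
    bitmap[allocated // 8] |= 1 << (allocated % 8)

for address in (7, 42, 200):
    print(address, '->', normalised_address(bitmap, address))
# 7 -> 1, 42 -> 2, 200 -> 3: the three elements end up with contiguous addresses.
```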
The command SET_BRIGHTNESS is used to set the brightness of a light-emitting element. A packet carrying this command has a payload field 104 indicating the brightness and a suitably configured destination field 100. Similarly, the command SET_ALL_BRIGHTNESS is used to set the brightness of all light-emitting elements.
The command CALIBRATE causes each light-emitting element to emit a series of pulses which, as described below, can be used to identify the light-emitting element for calibration purposes. The command FACTORY_DEFAULT is processed by a light-emitting element so as to return its settings to the factory default values.
Having described how instructions are transmitted to the light-emitting elements, the operation of the light-emitting elements and of the control elements is now described in greater detail.
Fig. 9 C shows the flow chart of the operation of light-emitting component.S1 powers up light-emitting component in step, carries out hardware initialization at step S121.At step S122, attempt from storage device, being written into the address of this light-emitting component.When using static address, or when light-emitting component stores the data of its last address of using of indication, from storage device, be written into the address at step S122.
A plurality of somes executable operations in the processing of Fig. 9 C are set the brightness of LED.This has comprised the frequency of control to the LED excitation effectively, so that the brightness of required expectation to be provided.Carry out such processing at step S123.
At step S124, carry out and check to determine whether light-emitting component can receive lock-out pulse on the bus that it was connected.If do not receive such pulse, then handle and return step S123.Yet, if receive lock-out pulse, handle to proceed to step S125, at step S125, reading of data bit on the bus.At step S126, carry out and check to determine whether having read 8 Bit datas (1 byte).If also do not read 1 byte, then handle and return step S125.When reading 1 byte, before step S128 upgrades checksum value based on handled byte, the brightness of disposing LED at step S127 once more.At step S129, the byte that storage is received, but it should be noted that this processing of configuration, only to store the interested byte of specific light-emitting component at step S129.
Processing passes from step S129 to step S130, where a check is carried out to determine whether the 4 most recently processed bytes represent a packet header, that is, whether they represent the destination field 100, command field 101, length field 102 and checksum field 103 described with reference to Figure 9B. If it is determined that the recently processed bytes do represent a packet header, processing passes to step S131, where the header is parsed. Processing then passes to step S132, where a check is carried out based on the value of the command field 101 of the processed header. If the command field 101 indicates that the packet contains a multiplexed payload, processing passes to step S133; otherwise processing returns to step S125, where further data is read from the bus. A multiplexed payload is a payload directed at all of the light-emitting elements targeted by the packet, such as the payload provided in the SET_ALL_BRIGHTNESS command described above. When a packet contains a multiplexed payload, the processing of step S133 calculates the offset within the payload at which the data of interest to this light-emitting element is to be found. Such a payload will be relatively long, and the light-emitting element may not have sufficient memory to store the whole payload; the processing of step S133 therefore identifies the offset within the payload at which the data of interest is located, and the offset determined at step S133 can be used in subsequent processing to decide whether a given data byte should be stored at step S129.
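A sketch of how such an offset might be computed; the payload layout (contiguous, fixed number of bytes per element, starting from a known first address) is an assumption made for illustration only.

```python
def payload_offset(my_address, first_address, bytes_per_element=1):
    """Offset (in bytes) of this element's data within a multiplexed
    payload carrying bytes_per_element bytes per element, starting at
    first_address. All parameters are assumed, not taken from the patent."""
    return (my_address - first_address) * bytes_per_element
```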
If the check of step S130 determines that the 4 most recently received bytes do not represent a packet header, processing passes to step S134, where a check is carried out to determine whether the bytes received so far collectively represent a complete packet. If not, processing returns to step S123 and continues as described above. If the check of step S134 determines that a complete packet has been received, processing passes to step S135, where a check is carried out to determine whether the checksum value calculated by the processing of step S128 is valid. If the checksum value is invalid, processing returns to step S123. Otherwise, processing passes to step S136, where a check is carried out to determine whether the received packet should be processed by this particular light-emitting element. If it should not, processing returns to step S123. Otherwise, subsequent processing determines the nature of the received packet and the action required.
At step S137, a check is carried out to determine whether the received packet represents an ON command or an OFF command. If it does, the state of the LED is updated at step S138 before processing returns to step S123.
At step S139, a check is carried out to determine whether the received packet represents a SET_BRIGHTNESS command. If it does, the brightness information used at steps S123 and S127 described above is updated at step S140 before processing returns to step S123.
At step S141, a check is carried out to determine whether the received packet represents a FACTORY_DEFAULT command. If it does, processing passes to step S142, where the settings of the light-emitting element are reset. Processing then returns to step S123.
At step S143, a check is carried out to determine whether the received packet represents a SELF_ADDRESS command. If it does, processing passes to step S144, where the payload is processed to obtain the data indicating whether the light-emitting element's address has been allocated. If that address is marked as allocated, it can be determined that there is no address conflict; if it is not marked as allocated, it can be determined that an address conflict has indeed occurred. At step S145, a check is carried out to determine whether the data associated with the light-emitting element's address indicates that an address conflict has occurred. If there is no such conflict, processing passes to step S123. If an address conflict has occurred, processing passes from step S145 to step S146, where a different address is selected for the light-emitting element, the selected address being one that is not marked as allocated in the payload of the received data packet.
At step S147, a check is carried out to determine whether the received packet represents a SELF_NORMALISE command. If it does, processing passes to step S148, where the payload of the packet is processed to determine how many lower-valued addresses have been allocated to other light-emitting elements. Then, at step S149, the address of the current light-emitting element is calculated by counting the allocated lower-valued addresses and adding 1 to the count.
At step S150, a check is carried out to determine whether the received packet represents a CALIBRATE command. If it does, processing passes to step S151, where the code to be emitted by visible light is determined. Then, at step S152, the determined code is provided to the LED. The processing of step S153 ensures that the code is emitted 3 times. The generation and use of this code are described in more detail below.
Having described the operation of the light-emitting elements, the operation of the control elements 6, 7, 8 is now described with reference to Figure 9D.
At step S155, a control element is powered up, and at step S156 the hardware of the control element is initialised. At step S157, the control element receives a frame of data on the bus 25 to which it is connected. The frame read at step S157 is decoded at step S158 and validated at step S159. If the validation of step S159 is unsuccessful, processing returns to step S157. Otherwise, processing passes from step S159 to step S160, where a checksum value is calculated. The checksum value is validated at step S161; if it is invalid, processing returns to step S157. If the checksum value is valid, processing passes to step S162, where the frame is parsed. At step S163, a check is carried out to determine whether the received frame should be processed by the current control element. If not, processing passes to step S164, where a check is carried out to determine whether the received frame should be forwarded to the light-emitting elements under the control of this control element. If so, the frame is forwarded at step S165 before processing returns to step S157. If the control element handling the frame should not forward it, processing passes from step S164 to step S157.
If the check of step S163 determines that the frame being processed should be handled by this particular control element, processing passes to a series of checks configured to determine the nature of the received command.
At step S166, a check is carried out to determine whether the received frame represents a ping message. If it does, the control element generates a response to the ping message at step S167 and transmits that response at step S168.
At step S169, a check is carried out to determine whether the received frame is a request for data indicating the current presently being drawn from this control element by the light-emitting elements connected to it, that is, whether the received frame is a request for data indicating power consumption. If it is, the current drain is read at step S170, and the value read is provided in a response at step S171 before processing returns to step S157.
At step S172, a check is carried out to determine whether the received frame is a request for current calibration, that is, whether the received frame asks the control element to carry out a calibration operation to determine the current levels associated with no light-emitting elements lit, one light-emitting element lit and two light-emitting elements lit; such current levels can be used as described above. If the check of step S172 determines that the received frame is a request for current calibration, processing passes to step S173, where all light-emitting elements are switched off by a broadcast message. At step S174, the current drain with no light-emitting elements lit is measured. One light-emitting element is lit at step S175, and the resulting current drain is measured at step S176. At step S177, two light-emitting elements are lit, and their current drain is measured at step S178. Then, at step S179, data representing the current drain with no light-emitting elements lit, with one light-emitting element lit and with two light-emitting elements lit is stored, before processing returns to step S157.
At step S180, a check is carried out to determine whether the received frame represents a request to carry out an addressing operation. If it does, processing passes to step S181, where all light-emitting elements under the control of this control element are switched off. At step S182, an address is selected and a command is issued to light any light-emitting element associated with the selected address. At step S183, the current drawn by the lit light-emitting element(s) is measured, so as to determine whether an address conflict has occurred. At step S184, the lit light-emitting element(s) are switched off, and at step S185 an address map is updated to indicate whether a single light-emitting element is associated with the processed address, no light-emitting element is associated with the processed address, or a plurality of light-emitting elements are associated with the processed address (i.e. there is an address conflict). At step S185a, a check is carried out to determine whether there are further addresses to process. If so, processing returns to step S182. When there are no further addresses to process, processing passes to step S186, where a check is carried out to determine whether any address conflicts exist. If there are no address conflicts, it can be determined that the light-emitting elements have unique allocated addresses, and processing passes to step S157. If one or more address conflicts do exist, processing passes from step S186 to step S187, where a self-address message is sent to all light-emitting elements, the payload of the message indicating the address allocation in the manner described above. At step S188, the control element waits for a predetermined period to allow the light-emitting elements to select new addresses, before processing returns to step S183.
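The following Python sketch illustrates the address-scan loop described above. The bus functions (send_command, measure_current) and the current thresholds are hypothetical placeholders, used only to show the shape of the classification.

```python
def scan_addresses(bus, num_addresses, i_none, i_one):
    """Build an address map by lighting each address in turn and
    classifying the measured current: near i_none -> no element,
    near i_one -> one element, higher -> address conflict."""
    address_map = {}
    bus.send_command("SET_ALL_OFF")            # assumed broadcast "off"
    for addr in range(num_addresses):
        bus.send_command("ON", target=addr)
        current = bus.measure_current()
        bus.send_command("OFF", target=addr)
        if current < (i_none + i_one) / 2:
            address_map[addr] = "unused"
        elif current < i_one + (i_one - i_none) / 2:
            address_map[addr] = "single"
        else:
            address_map[addr] = "conflict"
    return address_map
```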
At step S189, a check is carried out to determine whether the received message is a request for the control element to generate the data forming the basis of a SELF_NORMALISE command to the light-emitting elements, as described above. If it is, processing passes to step S190, where all light-emitting elements are commanded off and any previously stored address map is cleared. At step S191, a command is issued to light the light-emitting element at a selected address. At step S192, the current consumed in response to that command is measured, and at step S193 the light-emitting element is switched off. At step S194, the address map is updated to indicate whether a light-emitting element is associated with the address currently being processed, this being determined from the current measured at step S192. Processing passes from step S194 to step S194a, where a check is carried out to determine whether there are further addresses to process. If so, processing returns to step S191. When there are no further addresses to process, a SELF_NORMALISE command for the light-emitting elements is generated at step S195, the address map that has been built up being provided in the data packet carrying that command.
Much of the preceding description concerns light-emitting elements connected by fixed wiring. It should be noted that the address allocation method described above is applicable to any collection of devices in which it is possible to send a broadcast message to all devices and to distinguish whether zero, one or more devices are responding. In particular, in the case of light-emitting elements, whether a particular light-emitting element is lit can be determined by detecting the light emitted by the element itself using a suitable camera. Using the emitted light to determine whether a lamp is lit is especially valuable in wireless arrangements, in which the power consumed by each light-emitting element cannot be monitored. It should also be noted that such a scheme avoids the need for the light-emitting elements to actively transmit data, which is particularly advantageous from the point of view of complexity and power consumption.
The preceding description has explained how a plurality of lamps can conveniently be connected together so as to provide distributed control of, and power to, each lamp.
Returning to Figure 2, it can be seen that at step S1 the light-emitting elements 2 are located in space. The following part of this description describes various location algorithms. In general terms, the location algorithms operate by using a plurality of cameras (either in turn or simultaneously) to capture images of light patterns, those images then being used in the location processing.
Figure 10 is a schematic illustration of five light-emitting elements P, Q, R, S, T observed by two cameras 33, 34. Light-emitting elements P, Q, R, S are in the field of view of camera 33, while light-emitting elements Q, R, S and T are in the field of view of camera 34. Figure 10A shows an example image captured by camera 33. It can be seen that 4 pixels are lit, each corresponding to one of the four light sources P, Q, R, S. Figure 10B shows an example image captured by camera 34. Here, again, 4 pixels are lit, representing light-emitting elements Q, R, S, T. Although in the images of Figures 10A and 10B individual pixels are associated with individual light-emitting elements, it cannot be determined which pixel is associated with which light-emitting element. A solution to this problem is now described, first with reference to Figure 11, in which four light-emitting elements A, B, C, D are in the field of view of a camera 35. Each of the light-emitting elements A, B, C, D has an identification code that is unique among the four elements to be located. The identification code takes the form of a binary sequence. During location of the light-emitting elements A, B, C, D, each element presents its identification code by switching on and off in accordance with that code.
Identification codes are assigned to the four light-emitting elements A, B, C, D as shown in Table 1:
Light-emitting element    Identification code
A                         1001
B                         0101
C                         0111
D                         0011
Table 1
Figure 11A shows the images captured by camera 35 as each of the light-emitting elements A, B, C, D presents its identification code. It is assumed here that the light-emitting elements A, B, C, D present their identification codes in synchronism with one another, that camera 35 and the light-emitting elements are stationary relative to one another, and that each light-emitting element illuminates one or more pixels in the captured images. Figure 11A comprises four images generated at four different times, the time between images being sufficient for each light-emitting element to present the next bit of its identification code.
At time t=1, camera 35 detects light-emitting element A. At time t=2, camera 35 detects two light-emitting elements, which differ from the lamp detected at time t=1 (they are light-emitting elements B and C), so that three lamps have been detected in total. At time t=3, camera 35 again detects two light-emitting elements, but this time elements C and D. Thus, after the image at time t=3, all four light-emitting elements A, B, C, D have been detected, and they can be distinguished from one another by virtue of their spatial positions in the generated images. At time t=4, all four previously located light-emitting elements A, B, C, D are detected.
By combining the data from all four images, the identification code of each light-emitting element can be determined, and this allows the light-emitting elements to be distinguished from one another even if camera 35 moves, or if the elements are observed from a different camera.
It can be seen that light-emitting element A is detected at times t=1 and t=4, but not at times t=2 and t=3. The identification code of light-emitting element A is therefore determined to be 1001, as shown in Table 1. Light-emitting element B is detected at times t=2 and t=4, but not at times t=1 and t=3; its identification code is therefore determined to be 0101, again as shown in Table 1. Light-emitting element C is detected at times t=2, t=3 and t=4, but not at time t=1; its identification code is therefore determined to be 0111, as shown in Table 1. Finally, light-emitting element D is detected at times t=3 and t=4, but not at times t=1 and t=2; its identification code is therefore determined to be 0011, again as shown in Table 1.
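A minimal sketch of this read-back, assuming (for illustration only) that each element maps to a fixed pixel position across the four captured frames:

```python
def recover_codes(frames, positions):
    """frames: list of sets of lit pixel positions, one set per time step.
    positions: {element_name: pixel_position}. Returns the bit string
    observed for each element."""
    codes = {}
    for name, pos in positions.items():
        codes[name] = "".join("1" if pos in frame else "0" for frame in frames)
    return codes

frames = [{(3, 5)}, {(7, 2), (9, 9)}, {(9, 9), (1, 4)},
          {(3, 5), (7, 2), (9, 9), (1, 4)}]
positions = {"A": (3, 5), "B": (7, 2), "C": (9, 9), "D": (1, 4)}
print(recover_codes(frames, positions))  # {'A': '1001', 'B': '0101', 'C': '0111', 'D': '0011'}
```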
It will be appreciated that the simple 4-bit codes described above can provide distinct codes for only 16 light-emitting elements. It will also be appreciated that simply detecting lamps in the manner described above is prone to problems and errors. For example, a falling object such as a leaf may obscure the camera's view of a light-emitting element, causing its identification code to be determined incorrectly; indeed, even particulate matter may obscure a light-emitting element. Conversely, detection of an external light source may be mistaken for detection of a light-emitting element. Various coding mechanisms intended to improve the resilience of this identification process are described below.
In some preferred embodiments of the invention, the light-emitting element identification codes are encoded using Hamming codes. Hamming codes are preferred in some embodiments because the complexity of the encoding and decoding processes is relatively low. This is important because each light-emitting element is required to generate a code, and, as described above, the light-emitting elements are designed to have very low complexity so as to improve scalability. A Hamming code guarantees that up to 2 bit errors in each coded transmission are detected, or that a single bit error can be corrected without requiring a further transmission. In about 50% of cases a coded transmission containing 3 or more errors can be detected. Hamming codes are generally used where sporadic bit errors are relatively common.
A Hamming code is a form of block parity mechanism, which is now described by way of background. The use of a single parity bit is one of the simplest forms of error detection. Given a codeword, a single additional bit is added to the codeword, that bit being used only for error control. The value of this bit (referred to as a parity bit) is set according to whether the number of bits in the codeword having the value '1' is odd or even. On receipt of a codeword including a parity bit, the value of the parity bit is checked against the parity of the codeword to determine whether an error has occurred in transmission.
Although the simple parity bit mechanism described above provides one-bit error detection, it cannot provide any error correction capability: it cannot determine which bit is in error, nor whether more than one error has occurred.
A Hamming code uses a plurality of complementary parity bits to provide a more robust code; this is referred to as a block parity mechanism. A Hamming code adds n additional parity bits to a value. For n of 3 or more, a Hamming codeword has a length of 2^n - 1 bits (i.e. 7, 15, 31, ...). Of the (2^n - 1) bits, (2^n - 1 - n) bits are used for data transmission and n bits are used for error detection and correction data. In other words, a 4-bit message can be Hamming coded to form a 7-bit codeword, in which 4 bits represent the data to be transmitted and 3 bits represent error detection and correction data. Similarly, an 11-bit message can be Hamming coded to form a 15-bit codeword, in which 11 bits represent the useful data and 4 bits represent error detection and correction data.
Hamming coding is now described. The parity bits are generated by taking the parity of subsets of the data bits. Each parity bit considers a different subset, and the subsets are chosen such that a single bit error produces an inconsistency in at least two parity bits. Such an inconsistency not only indicates the presence of an error, but also provides enough information to identify which bit is incorrect, thereby allowing the error to be corrected.
An example of the encoding process is now given with reference to Figure 12. Here, the four 4-bit identification codes of Table 1 are Hamming coded to produce 7-bit codewords. The four identification codes shown in Table 1 form the input data 36 to a parity bit generator 37. The parity bit generator 37 outputs 3 parity bits 38 for each input identification code. The input data 36 and the parity bits 38 are then combined to produce Hamming coded identification codes 39.
The operation of the parity bit generator 37 is now described in more detail. For each input codeword 36, three parity bits are generated, each parity bit being calculated by summing three bits of the input codeword and taking the least significant bit of the resulting binary number. With the bits of an input code 36 labelled c1 to c4 (where c1 is the most significant bit), as shown in Figure 12, the parity bits p1, p2 and p3 are calculated as follows:
p1 = c1 + c2 + c4
p2 = c1 + c3 + c4
p3 = c2 + c3 + c4
Having calculated these three parity bits for each identification code, the Hamming coded identification codes 39 are produced by combining the three parity bits generated for each identification code with that identification code to produce a 7-bit value. In general, the parity bits are usually interleaved with the bits of the identification code, so that the parity data is not all lost in a burst error. In Figure 12, 3 bits 40 of each 7-bit value represent the error detection and correction data, and the remaining 4 bits 41 represent the identification code.
Although not presented in detail here, the generation of 15-bit codewords from 11-bit values can be carried out in a very similar way, and such codings will be apparent to those of ordinary skill in the art.
A Hamming code can also be extended to form an extended Hamming code. This involves adding a final parity bit to the code, that bit operating on the parity bits generated as described above. At the cost of one additional bit, this allows the code to detect (though not correct) double-bit errors in a single transmission while retaining the ability to correct single-bit errors. An extended Hamming code can be used to produce 16-bit coded values from 11-bit values, and 8-bit coded values from 4-bit values.
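By way of illustration, a Python sketch of the 7-bit encoding using the parity equations above; the bit ordering within the codeword, and the use of an overall parity bit for the extension, are assumptions (the description notes only that the parity bits are usually interleaved).

```python
def hamming_encode_4bit(code):
    """Encode a 4-bit identification code (e.g. '1001') into a 7-bit
    codeword using p1, p2, p3 as defined above."""
    c1, c2, c3, c4 = (int(b) for b in code)
    p1 = (c1 + c2 + c4) % 2
    p2 = (c1 + c3 + c4) % 2
    p3 = (c2 + c3 + c4) % 2
    bits = [p1, p2, c1, p3, c2, c3, c4]   # assumed interleaving order
    return "".join(str(b) for b in bits)

def extend(codeword):
    """Append an overall parity bit (one common form of the extension)."""
    overall = sum(int(b) for b in codeword) % 2
    return codeword + str(overall)

print(hamming_encode_4bit("1001"))          # '0011001'
print(extend(hamming_encode_4bit("1001")))  # '00110011'
```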
In a preferred embodiment of the invention, the light-emitting elements have associated 11-bit identification codes, and these identification codes are encoded using an extended Hamming code to produce 16-bit coded identification codes. 11-bit identification codes provide 2^11 (2048) distinct codes, which means that 2048 light-emitting elements can be used and distinguished from one another. By using an extended Hamming code, each code has good resilience to errors, and error detection and correction functionality is provided. The use of such an extended Hamming code provides a good balance between the robustness required when light patterns are transmitted over the air (a noisy channel) and the need for an efficient coding mechanism preserving the simplicity of each light-emitting element. The relatively small overhead imposed by the extended Hamming code (namely 5 bits) does not unduly increase the time each light-emitting element requires to transmit its code visibly.
Although 16-bit codes of the type described above are preferred in some embodiments of the invention, other codes can be used; for example, identification codes 4 bits in length can be encoded using an 8-bit extended Hamming code. Such a code provides only 16 distinct identification codes, meaning that only 16 light-emitting elements can be used at the same time, but because the code length is reduced, the chance of accurately identifying a code increases. One possible solution, balancing the improved identification characteristics of shorter codes against the need for a larger number of distinct identification codes, is for each light-emitting element to transmit two 8-bit extended Hamming codes. Such a technique would provide 255 distinct identifiers, each identifier comprising two codes, while retaining the good error resilience associated with the shorter codes.
In alternative embodiments of the invention, a very large number of distinct identification codes may be required. In that case, each light-emitting element can be allocated a 26-bit identification code, which can be encoded as a 31-bit extended Hamming code. Such a code allows 2^26 (approximately 67 million) light-emitting elements to be used.
As described above, a light-emitting element transmits its identification code visibly to one or more cameras by switching its light source on and off. To improve scalability and minimise system complexity, the light-emitting elements and the cameras operate asynchronously; that is, no timing signal is passed between the light-emitting elements and the cameras. The instants at which a light-emitting element changes state are therefore not synchronised with the instants at which a camera captures a frame.
When asynchronous transmission of the type described above is used, the rate (frequency) at which codes are transmitted must be carefully controlled relative to the frame rate of the camera, so as to ensure that at least one frame of video data is captured for each transition; otherwise data may be lost, resulting in an inaccurate codeword being received. More specifically, following the Nyquist criterion, the frequency at which codes are transmitted must be no more than half the camera frame rate. Typically a camera operates at a frame rate of 25 frames per second, and identification codewords are therefore typically transmitted at a frequency of no more than 12 Hz.
In preferred embodiments of the invention, one of two modulation techniques is used in transmitting the codes. A modulation technique is the manner in which a codeword (a series of 0s and 1s) is converted into a physical effect (in this case, the flashing of a light-emitting element). The first modulation technique is non-return-to-zero (NRZ) coding, and the second is binary phase shift keying (BPSK). Both modulation techniques are described in more detail below.
NRZ coding is a simple modulation technique for data transmission: a '1' is converted into a high pulse and a '0' into a low pulse. In preferred embodiments of the invention, transmitting a '1' involves switching the light-emitting element on, while transmitting a '0' involves switching it off. This is the modulation technique described above with reference to Figures 11 and 11A.
NRZ modulation is not usually associated with asynchronous communication, because a long run of consecutive 0s or 1s in a codeword leaves the signal state (here, the state of the light-emitting element) unchanged for a long period. As a result, clock drift between transmitter and receiver may cause some bits to be "missed". In addition, as described in more detail below, in the context of the present invention such modulation can make it difficult to detect the start of a transmission.
Nevertheless, there are benefits to using NRZ modulation in embodiments of the present invention. First, the data transfer rate is very slow (12 Hz), so that, compared with the clock accuracy of current processors, clock drift can be regarded as negligible. Secondly, NRZ modulation is relatively efficient: one bit of data can be sent per cycle, so at 12 Hz, 12 bits can be sent per second. Therefore, despite the disadvantages noted above, NRZ modulation is used in some embodiments of the invention.
The second modulation technique mentioned above is BPSK modulation, which is another relatively simple modulation technique. An advantage of BPSK modulation is that a code transmission using it contains no long periods without a transition. BPSK modulation is now described.
BPSK modulation operates by transmitting fixed-length pulses (in the present case, light pulses) regardless of whether a '0' or a '1' is being sent. BPSK encodes the values '0' and '1' in a particular way and then uses that encoding to transmit data. BPSK is now described with reference to an example, in which a '0' is encoded as a low period followed by a high period, and a '1' is encoded as a high period followed by a low period. Figure 13 shows this encoding, in which the pulse shapes used to represent '0' and '1' can be seen.
Figure 14 illustrates two coded pulse streams 42, 43 generated using the encoding of Figure 13. It can be seen that each pulse stream comprises four pulses, each having a duration of two clock cycles. Pulse stream 42 comprises a '1' pulse, followed by a '0' pulse, followed by another '0' pulse, followed by a '1' pulse; pulse stream 42 therefore indicates the code 1001. Pulse stream 43 comprises a '0' pulse followed by three '1' pulses, and therefore indicates the code 0111.
Referring to Figure 14, it can be seen that, whatever the data, there is never a period of more than two clock cycles without a transition, which makes accurate data transfer easier to achieve. It should be noted, however, that two clock cycles are now needed to transmit a single bit, giving a lower effective data rate of 6 bits per second.
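A Python sketch of the two modulation schemes, expressed as on/off levels per clock cycle; this is an illustration only, following the example encoding above.

```python
def nrz_modulate(bits):
    """One clock cycle per bit: '1' -> lamp on, '0' -> lamp off."""
    return [int(b) for b in bits]

def bpsk_modulate(bits):
    """Two clock cycles per bit: '0' -> low then high, '1' -> high then low."""
    levels = []
    for b in bits:
        levels += [1, 0] if b == "1" else [0, 1]
    return levels

print(nrz_modulate("1001"))   # [1, 0, 0, 1]
print(bpsk_modulate("1001"))  # [1, 0, 0, 1, 0, 1, 1, 0]  (pulse stream 42)
```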
The preceding description has provided details of the two modulation schemes, NRZ and BPSK. NRZ modulation is suitable for embodiments of the invention in which the light-emitting elements are fixed relative to the cameras (i.e. the cameras and light-emitting elements are stationary and not subject to camera shake, wind or other similar effects). At a 12 Hz transfer rate, the time to identify a 16-bit identification code using NRZ modulation is approximately 1.5 seconds. BPSK modulation provides a much more robust scheme that supports greater mobility, at the cost of a slightly longer recognition time: for a 16-bit code the recognition time is about 3 seconds. Since this time difference is negligible in most scenarios, BPSK modulation is likely to be preferred in many embodiments of the invention.
As in many data communication systems, the data transmitted from a light-emitting element to a camera in the form of visible light is placed in a frame, which takes the form shown in Figure 15. To allow synchronisation between the light-emitting element and the camera (which are otherwise asynchronous), the first part of the framed data is a silence period 44, during which no data is transmitted. Typically, the duration of this silence period is equal to 5 pulse periods. After the silence period, a single bit of data 45 is transmitted as a start bit. This indicates that data is about to be transmitted, and the bit can take the form of a '0' pulse or a '1' pulse. After the start bit 45 has been transmitted, the data to be communicated is transmitted. As described above, this data typically comprises 16 bits of data 46 formed by extended-Hamming coding an 11-bit value. After the data 46 has been transmitted, a stop bit is transmitted to indicate that the transmission is complete.
It should be noted that, where the invention is implemented using NRZ modulation, it may be necessary to encode the data 46 further, so as to ensure that the data 46 does not contain a run of '0's long enough to be mistaken for the silence period. Suitable encoding schemes for achieving this are Manchester encoding or 4B5B encoding. Given the pulse shapes used for BPSK modulation, no such encoding is needed when BPSK modulation is adopted.
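A sketch of how such a frame might be assembled before modulation; the 5-cycle silence period follows the description above, while the particular start-bit and stop-bit values are assumptions.

```python
def build_frame(code_bits, silence_cycles=5, start_bit="1", stop_bit="1"):
    """Return (silence_length, bit_string) for one transmission frame:
    a silence period, a start bit, the 16 coded bits, then a stop bit."""
    assert len(code_bits) == 16
    return silence_cycles, start_bit + code_bits + stop_bit

silence, bits = build_frame("0011001110101001")
# "bits" can then be passed to the NRZ or BPSK modulator sketched earlier,
# preceded by "silence" cycles with the lamp off.
```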
Having described how identification codes are generated for the light-emitting elements and how those codes are communicated between the light-emitting elements and the cameras, the processing used to identify light-emitting elements in the images generated by the cameras is now described. Figure 16 schematically shows apparatus suitable for carrying out this processing, in which three cameras 50, 51, 52 are connected to a PC 53. Preferably the cameras 50, 51, 52 are connected to the PC 53 wirelessly, which aids camera mobility. The cameras are configured to transmit the image data they capture to the PC 53, which may in practice have the configuration described above with reference to Figure 6.
The processing carried out by the PC 53 on the received image data is now described with reference to Figures 17 to 19. The processing is described with reference to camera 50; it will be appreciated that similar processing is carried out independently for cameras 51 and 52. Figure 17 provides a schematic overview of the processing. The PC 53 includes a frame buffer in which received frames of image data are stored and processed frame by frame; this frame-by-frame processing is indicated by reference numeral 55 in Figure 17. It can be seen that the frame buffer holds the most recently received frame 56 and the immediately preceding frame 57, both of which are used by the frame-by-frame processing 55 now described with reference to Figure 18.
At step S15, the received image data is timestamped. This is important because many cameras cannot capture frames at precisely regular intervals; an assumption that frames are captured at exact 1/25-second intervals may therefore be incorrect, and the applied timestamps provide a more accurate means of determining the time interval between frames.
After the received image has been timestamped, at step S16 the image is filtered in colour space using a narrow band-pass filter, to eliminate all colours other than the colour matching the light-emitting elements being located. Typically, this involves filtering the image so as to exclude all light other than pure white light.
At step S17, differential filtering is applied to the most recently received image with respect to the previously received image. This filtering compares the intensity of each pixel (after the filtering of step S16) with the intensity of the corresponding pixel of the previously processed frame. If the intensity difference is greater than a predetermined threshold, the pixel is marked as having changed. The processing of step S17 applied to the frame currently being processed therefore produces a list of possible light transitions.
The assumption made above, that each light-emitting element maps to a single image pixel, may be too simplistic; therefore, at step S18, pixels lying within a predetermined distance of one another are clustered together. Typically this distance is only a few pixels. After clustering, a set of transition regions is produced, each transition region potentially corresponding to a single light-emitting element. This set of transition regions is the output of the frame-by-frame processing 55. The processing is carried out for a plurality of frames, transition region data 58 being generated for each processed frame.
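A compact sketch of steps S17 and S18, assuming greyscale frames held as 2-D lists and a deliberately simple adjacency-based clustering; the threshold and distance values are illustrative only.

```python
def transitions(prev, curr, threshold=50):
    """Return coordinates of pixels whose intensity changed by more than threshold."""
    return [(x, y)
            for y, row in enumerate(curr)
            for x, value in enumerate(row)
            if abs(value - prev[y][x]) > threshold]

def cluster(points, max_dist=3):
    """Group changed pixels lying within max_dist of one another into transition regions."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(abs(p[0] - q[0]) <= max_dist and abs(p[1] - q[1]) <= max_dist
                   for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```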
The transition region data 58 is input to a temporal processing method 59, which is shown in the flowchart of Figure 19. For each transition region recorded in the first processed set of transition region data 58, spatio-temporal filtering is carried out (step S19), in which transition regions in the set being processed are matched with transition regions detected in the other sets of transition region data 58. This filtering operation is carried out by locating, within a spatio-temporal tolerance, corresponding transition regions in the other sets of transition region data. A motion compensation algorithm can be used at this stage. Then, at step S20, the transitions are grouped in time to form codewords.
At step S21, the codewords produced are validated. Typically, this validation includes checks for matching start and stop bits, a valid silence period, and a valid extended Hamming code. Once validation is complete, the identity of the light-emitting element is known. The position of the light-emitting element in the image is easily computed by determining the centre of the corresponding transition region in the processed images.
It should be appreciated that, because the information is converted into and recorded in the time domain in the form of the transition region data 58, the processing described with reference to Figures 17 to 19 requires very little video data to be stored: only a single preceding frame is needed.
The description above explains how a single camera can be used to locate light-emitting elements and determine their identification codes. In some cases a single camera is sufficient to locate the light-emitting elements in three dimensions, for example where all the light-emitting elements are known to lie on a 2D plane or surface. In other cases, however, the information obtained from a single camera is not sufficient to locate the light-emitting elements in three dimensions. Further processing is then required, operating on data obtained from a plurality of cameras. For example, referring to Figure 20, both cameras 50 and 51 detect a light-emitting element X in the images they generate. The element is detected on one or more pixels of each generated image, and it is known from its identification code (as described above) that it is the same element. Using a triangulation algorithm, and knowing the orientations of the cameras, processing is carried out to construct imaginary lines extending from the cameras. This processing is now described.
Referring to Figure 20, it can be seen that the lens of the first camera 50 is located at coordinates (C1x, C1y, C1z). Similarly, the lens of the second camera 51 is located at coordinates (C2x, C2y, C2z). Figure 20 also shows a line 52 extending from the lens of camera 50 and passing through the position of light-emitting element X. A line 53 extends from the lens of camera 51, again passing through light-emitting element X. The triangulation algorithm is configured to detect the intersection point of lines 52 and 53, that intersection point indicating the position of light-emitting element X. The algorithm is now described. The algorithm refers to imaginary planes 54a, 54b, located 1 metre from the lens of the first camera 50 and from the lens of the second camera 51 respectively. These planes are arranged orthogonal to the directions in which the respective cameras point. The line 52 extending from the first camera 50 to light-emitting element X passes through plane 54a at a point with coordinates (T1x, T1y, T1z). Similarly, the point in plane 54b through which line 53 passes has coordinates (T2x, T2y, T2z). Therefore, taking the first camera as origin, the coordinates of the point in plane 54a through which line 52 passes are:
R1x = T1x - C1x
R1y = T1y - C1y
R1z = T1z - C1z
Similarly, taking the second camera as origin, the coordinates of the point in plane 54b through which line 53 passes are:
R2x = T2x - C2x
R2y = T2y - C2y
R2z = T2z - C2z
Having defined the points in planes 54a, 54b in this way, the equation of line 52 can be expressed as:
(C1x + t1·R1x, C1y + t1·R1y, C1z + t1·R1z)
where t1 is a scalar parameter indicating distance along line 52.
Similarly, line 53 is defined by the equation:
(C2x + t2·R2x, C2y + t2·R2y, C2z + t2·R2z)
where t2 is a scalar parameter indicating distance along line 53.
It can be seen that the values of t1 and t2 are 1 when the equations of the lines define the points at which the respective lines pass through their imaging planes.
Assuming perfect accuracy, the lines 52 and 53 should be found to intersect, and the intersection point is the point X. Such an intersection point can be determined by taking the equations of lines 52 and 53 in two of the dimensions and using those values to form a pair of simultaneous equations. Given that the values of all the C and R terms are known, this pair of simultaneous equations contains two unknowns (t1, t2); the equations can therefore be solved to determine values of (t1, t2) which, inserted into the equation of line 52 or line 53, yield the coordinates of light-emitting element X.
More specifically, at the intersection point the equations of lines 52 and 53 are equal to one another in the x, y and z coordinates. The following equations therefore hold at the intersection point:
C1x + t1·R1x = C2x + t2·R2x
C1y + t1·R1y = C2y + t2·R2y
C1z + t1·R1z = C2z + t2·R2z
Since there are only two unknowns (t1, t2), any two of the above equations can be used to determine them; for example, using the equations in the x and y coordinates:
C1x + t1·R1x = C2x + t2·R2x
C1y + t1·R1y = C2y + t2·R2y
Again, since the values of all the C and R terms are known, these equations can be solved in a well-known manner to determine the values of t1 and t2. Once such values have been produced, the intersection point of the lines (i.e. the point X) can be determined.
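A sketch of this intersection computation (exact intersection assumed, i.e. noise-free measurements); the function name and argument conventions are chosen for illustration only.

```python
def intersect(c1, r1, c2, r2):
    """Solve C1 + t1*R1 = C2 + t2*R2 using the x and y equations, then
    return the 3-D intersection point. c1, c2 are the camera positions
    and r1, r2 the direction vectors defined above."""
    # t1*R1x - t2*R2x = C2x - C1x
    # t1*R1y - t2*R2y = C2y - C1y
    det = r1[0] * (-r2[1]) - (-r2[0]) * r1[1]
    bx, by = c2[0] - c1[0], c2[1] - c1[1]
    t1 = (bx * (-r2[1]) - (-r2[0]) * by) / det
    return tuple(c1[i] + t1 * r1[i] for i in range(3))

print(intersect((0, 0, 0), (1, 1, 1), (4, 0, 0), (-1, 1, 1)))  # (2.0, 2.0, 2.0)
```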
It should be noted that in some applications there may be errors such that the lines do not intersect perfectly. It is then necessary to determine the point of closest approach between the two lines, or alternatively to use a similar estimate.
For example, in one embodiment of the invention, the equations of lines 52, 53 defined above are transformed into a coordinate system in which one line lies along the z direction and the orthogonal component of the other line forms the y direction. The x-axis crossing of these lines then gives the point of closest approach, which can be transformed back into the original coordinates. This coordinate system is described in more detail below with reference to Figures 20a, 20b, 20c and 20d.
Figures 20a and 20b show the first camera 50 and the second camera 51 in plan view and side view respectively. The vectors r1, r2 and c2 are shown in the figures. The vector c2 defines the position of the cameras 50, 51 relative to one another. The vectors r1 and r2 define lines extending from cameras 50 and 51 in the approximate direction of light-emitting element X. Note that, to reflect a slight error in the sensed positions, the vectors r1 and r2 are drawn so as to deviate slightly from the true position of light-emitting element X; it can be seen that there is an error in both the plan view and the side view.
Relative to the first camera 50, the vector r1 along the approximate line to light-emitting element X is defined as:
r1 = (R1x, R1y, R1z)
Relative to the second camera 51, the vector r2 along the approximate line to light-emitting element X is defined as:
r2 = (R2x, R2y, R2z)
The vector from the first camera 50 (taken as origin) to the second camera 51 is defined as:
c2 = (C2x - C1x, C2y - C1y, C2z - C1z)
Three unit vectors are defined to transform the coordinate system. The unit vector in the direction of r1 is defined as:
z = r1 / |r1|
where |r1| denotes the Euclidean norm (i.e. the length) of r1.
The unit vector y, orthogonal to r1 but lying in the y-z plane containing r1 and r2, is defined as:
y = (r2 - (r2·z)z) / |r2 - (r2·z)z|
The unit vector orthogonal to both y and z is defined as:
x = z × y
where z × y denotes the vector cross product of z and y.
The vectors x, y and z define a coordinate system in which the point of closest approach is particularly easy to calculate.
It should be noted that the unit vector y is well defined provided the vectors r1 and r2 are not parallel. However, for two cameras spaced any distance apart (for example the first camera 50 and the second camera 51), the lines of sight from the cameras to a single source (for example light-emitting element X) should never be parallel. Therefore, if the definition of the unit vector y "fails", one of the cameras 50, 51 has detected the position of light-emitting element X erroneously.
Although the coordinate system has been constructed mathematically, it can be understood more easily by considering a motion of the first camera 50 (i.e. panning, tilting and/or rolling, with the position of the first camera 50 unchanged). Figures 20c and 20d illustrate this. A reference frame RF is shown to aid understanding of the calculation of this coordinate system and of the point of closest approach; this reference frame corresponds, for example, to what is seen through the viewfinder of the first camera 50.
As shown in Figure 20c, the first camera 50 is moved so that its sensed position X1 of light-emitting element X (i.e. not the true position) lies exactly at the centre of the field of view. The position X1 thereby forms the origin of the new coordinate system. The z direction of this coordinate system (the direction away from the first camera 50) is then the direction of the vector r1 (as defined in the equations above).
The first camera 50 is then rotated until the line of sight r2 of the second camera 51 is "upright" relative to the first camera 50, i.e. r2 is now parallel to the y direction. Figure 20d depicts this situation. It will be appreciated that Figure 20d is a two-dimensional depiction of this coordinate system, and that the transformed line of sight r2 of the second camera 51 may also have a component in the z direction. It can clearly be seen from Figure 20d that the point of closest approach is exactly where the line of sight r2 of the second camera 51 crosses the x axis.
More mathematically, in this new coordinate system the equation of the line r2 from the second camera 51 towards the sensed position X1 of light-emitting element X is:
((c2·x), (c2·y) + t2·(r2·y), (c2·z) + t2·(r2·z))
where t2 is the parameter varying along the line r2, as above. The equation, in the same coordinates, of the line r1 from the first camera 50 is:
(0, 0, t1·(r1·z))
For any value of t2, the value of t1 can be adjusted so that the z coordinates of the two equations defined above are equal. The point of closest approach is therefore where the y coordinate is zero:
(c2·y) + t2·(r2·y) = 0
t2 = -(c2·y) / (r2·y)
At that point, the distance between the lines r1 and r2 is:
(c2·x)
The midpoint Xm between the lines r1 and r2 at the point of closest approach can be found by substituting this value of t2 into the equation of line r2 (the z coordinate being matched on line r1), giving:
Xm = ((c2·x)/2, 0, (c2·z) - (r2·z)(c2·y)/(r2·y))
This point can now be converted back into the original coordinate system.
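A sketch of the closest-approach computation following the transformed-coordinate construction above; it is written in plain Python with no external libraries, and the choice of the first camera as local origin is an assumption consistent with the equations above.

```python
def closest_point(c1, r1, c2_pos, r2):
    """Midpoint of closest approach between lines c1 + t1*r1 and
    c2_pos + t2*r2, using the x, y, z basis defined above."""
    def dot(a, b): return sum(p * q for p, q in zip(a, b))
    def sub(a, b): return [p - q for p, q in zip(a, b)]
    def scale(a, s): return [p * s for p in a]
    def unit(a): return scale(a, 1.0 / sum(p * p for p in a) ** 0.5)

    z = unit(r1)
    y = unit(sub(r2, scale(z, dot(r2, z))))   # component of r2 orthogonal to z
    x = [z[1]*y[2] - z[2]*y[1], z[2]*y[0] - z[0]*y[2], z[0]*y[1] - z[1]*y[0]]
    c2 = sub(c2_pos, c1)                      # camera 2 relative to camera 1
    t2 = -dot(c2, y) / dot(r2, y)
    xm_local = [dot(c2, x) / 2.0, 0.0, dot(c2, z) + t2 * dot(r2, z)]
    # transform the local (x, y, z) coordinates back to world coordinates
    world = [sum(basis[i] * v for basis, v in zip((x, y, z), xm_local))
             for i in range(3)]
    return [p + q for p, q in zip(world, c1)]
```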
The processing described above indicates how a light-emitting element can be located uniquely in three dimensions. Before carrying out that processing, however, it must be ensured that the cameras used to locate the light-emitting elements are correctly calibrated. Figure 21 is a flowchart showing the steps of the camera calibration process. At step S22, calibration is carried out to take account of the attributes of each individual camera. Such calibration can be carried out when the camera is manufactured and/or immediately before use, and includes configuring for attributes such as aberration and zoom.
The calibration of step S22 must take account of various camera artefacts. For example, some lenses exhibit distortion at their edges (such as a fish-eye effect). Ideally, such distortion should be determined when the camera is manufactured. However, alternative methods can be used; for example, a large test card with a known colour pattern can be placed in front of the camera and the resulting image processed. In alternative embodiments of the invention, where the expected image is known in advance, this calibration is carried out by reference to the light-emitting elements sensed by the camera.
In addition, some cameras may have a manually adjustable zoom factor that cannot be sensed directly. Since the zoom can be adjusted in the field, correction may be required. This can also be achieved by using a test target at a known distance, or by using an arrangement of the light-emitting elements.
Although the processing described above allows light-emitting elements to be located relative to a camera, data about the camera positions is needed if absolute positions in space are required. At step S23, the camera positions are calibrated.
The processing of step S23 can be carried out in a number of ways. A first method involves physically measuring the camera positions and subsequently marking them on a map. An alternative position calibration method involves locating the cameras electronically; for example, for an outdoor installation, a single camera with GPS and a digital compass can be used.
The methods above determine the absolute camera positions in space. This in turn allows the cameras to be located relative to one another and, as described above, allows the lamps to be located relative to the cameras. An alternative method of locating the cameras relative to one another involves positioning the cameras by reference to a plurality of light-emitting elements. Since the light-emitting elements can be detected simultaneously, needing only to be observed from different angles and distances, this information can be used to obtain the relative positions of the cameras. Such a plurality of light-emitting elements can be the very elements being located. This method of obtaining relative position data can also make use of a special light-emitting element configuration of known dimensions; for example, a wire cube or pyramid with lamps placed at its corners can be used. Because the dimensions are known, it is easier to calibrate the camera angles relative to one another with respect to the known sources. The cameras can also be located relative to one another by pointing them at each other, each camera carrying a visible or invisible light source; the cameras can then be located relative to one another by triangulation.
The process of locating the cameras relative to one another described above can be enhanced by using devices such as laser pointers mounted on the cameras. For example, a laser pointer mounted on each camera can allow the centre of each camera's field of view to be focused on a single known location. If a small array of light sources (visible or invisible to the human eye) is placed on each camera, and the cameras are pointed at one another (while holding their positions), their relative orientations can be calculated, and hence the relative positions of the cameras determined.
The location methods described above have various shortcomings, and some of the methods described cannot provide unambiguous data in all circumstances. For example, as described above, if the cameras are located by reference to light-emitting elements (whether their configuration is known or not), a linear scaling of the particular configuration of cameras and lamp positions produces identical images on every camera. This means that at least one measurement must be known or obtained by some other method. Although such methods may therefore not provide unambiguous data, this may be unimportant in practice; for example, in some embodiments of the invention only relative dimensions matter.
A similar problem arises when two cameras are calibrated relative to one another: even when the camera positions are known, there are multiple configurations of lamp positions and camera orientations that produce identical content on each camera. Therefore, for accurate location, at least 3 camera positions should generally be used (3 cameras are not required; a single camera can be placed at 3 different positions in turn). Again, whether this matters in practice depends on which embodiments of the invention adopt this method.
Returning to Figure 21, the final stage of camera calibration is the fine correction carried out at step S24. Typically, this fine correction is concerned with ensuring that the cameras are correctly aligned with one another, and general-purpose algorithms can be used. For example, techniques such as simulated annealing, hill climbing or genetic algorithms can be used to minimise the differences between the positions of the light-emitting elements as sensed by the different cameras. Alternatively, a simpler heuristic can be used to perform a multi-step correction (effectively a form of hill climbing); such a method is described below.
The fine correction method is based on comparing the estimated positions of the light-emitting elements, projected onto the camera plane, with the measured positions of the light-emitting elements. By measuring particular systematic deviations, particular aspects of the assumed position and orientation of a camera can be corrected.
Figures 22A to 22D illustrate four different types of deviation. In each image, 5 light-emitting elements are detected. In these images, the expected position of each light-emitting element is shown as a filled circle, and the actual position of each light-emitting element is shown as an open circle.
Figure 22A illustrates a deviation caused by a systematic error in the horizontal direction, i.e. the X direction. It can be seen that each filled circle lies to the left of the corresponding open circle, but is perfectly aligned in the vertical (Y) direction. Such an error is caused by rotation of the camera about its vertical axis (yaw) or by translation in the X plane. The two can be distinguished according to whether the effect is uniform for all lamps or depends on the distance to the lamp.
Figure 22B illustrates a deviation caused by a systematic error in the Y direction. It can be seen that each filled circle lies directly above the corresponding open circle. Such an error is caused by the up/down orientation of the camera (pitch) or by an error in the height of the camera position.
Figure 22C illustrates a proportional deviation in the X and Y directions. Such an error is caused by the assumed roll of the camera about its viewing axis. Figure 22D illustrates a deviation caused by the zoom factor of the camera.
Once the images of Figures 22A to 22D have been processed, the required corrections determined and those corrections applied (step S24), the cameras are correctly configured.
The processing described above can then be used to detect light-emitting elements and their positions in space. It will be appreciated that the various processes described above can be modified in many ways. Some such modifications are now described.
It may be desirable for a light-emitting element to transmit its identification code in a way that is invisible, or at least not directly apparent, to a human observer. For example, it may be desirable to transmit identification codes while the light-emitting elements are being used to display an image. In this case the identification codes should be transmitted in a way that does not disturb the image visible to a human observer. One technique for achieving this is to transmit the identification code by modulating the intensity of the light-emitting element. For example, if a light-emitting element has an intensity range from 0 to 1, the image can be displayed using intensities between 0 and 0.75, and light can be emitted at full intensity (i.e. 1) when the identification code is transmitted. Only a small difference therefore separates light emitted to display the image from light emitted to transmit the identification code. A human observer is unlikely to perceive such a small difference, but a camera used to locate the light-emitting elements can detect it relatively easily through a simple modification of the image-processing methods described above.
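By way of illustration only, the following Python sketch shows one way the intensity-modulation idea described above might be realised; the 0.75 display ceiling is taken from the example above, while the frame-per-bit timing and function names are assumptions made for the sketch.

```python
# Minimal sketch of the intensity-modulation idea: display brightness is
# confined to [0, 0.75] and code bits are sent as brief excursions to 1.0.
# The 0.75 ceiling and the frame-per-bit timing are illustrative assumptions.

DISPLAY_CEILING = 0.75

def frame_intensity(display_level, code_bit):
    """Return the drive level for one frame.

    display_level -- desired image brightness in [0, 1]
    code_bit      -- current bit of the identification code (0 or 1)
    """
    level = min(display_level, 1.0) * DISPLAY_CEILING  # image never exceeds 0.75
    return 1.0 if code_bit else level                   # a '1' bit overrides to full power

# Example: a mid-grey pixel while the element clocks out the code 1011
code = [1, 0, 1, 1]
drive = [frame_intensity(0.5, b) for b in code]
print(drive)   # [1.0, 0.375, 1.0, 1.0]
```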
Where colour light-emitting elements are used in an embodiment of the invention, operations in colour space can be exploited, since the human eye is typically less sensitive to such operations. For example, the human eye is typically less sensitive to changes in hue (spectral colour) than to differences in brightness. This phenomenon is exploited in various image encodings, such as the JPEG image format, in which fewer bits are used to encode the hue of an image signal. The human eye is far less likely to notice small variations in hue at constant brightness and saturation than similar fluctuations in brightness or saturation. Identification codes can therefore be transmitted effectively, without disturbing the image perceptible to a human observer, by using hue variations to carry the code.
The preceding description has concentrated on locating light-emitting elements on the basis of an identification code transmitted as visible light emitted through the atmosphere by the light-emitting element. In alternative embodiments of the invention, invisible light is used instead to transmit the identification code. For example, in addition to the visible light source described above, each light-emitting element may also comprise an infrared light source which transmits the light-emitting element's identification code in the manner described above. The use of infrared light is particularly convenient where a digital camera uses a charge-coupled device (CCD) to generate images, since CCDs detect infrared light well and the detected infrared light appears as a pure white region in the captured image.
Using infrared light to transmit the identification code in this way (or using controlled intensity as described above) means that the identification code is transmitted in a manner invisible, or barely perceptible, to the human eye. This means that identification codes can be transmitted without interrupting an image being displayed by the light-emitting elements. In a similar way, other forms of electromagnetic radiation can be used; for example, an ultraviolet light source can be used to transmit the identification code.
Using such non-visible light sources (or controlled intensity transmission as described above) means that a light-emitting element can transmit its identification code regularly, or even continuously, without the transmission disturbing a human observer. Such continuous or regular transmission of identification codes has several advantages. For example, in some embodiments of the invention the light-emitting elements are not arranged in a fixed configuration but move while the image is displayed. It is then desirable to track the light-emitting elements as their positions change, using a suitable tracking algorithm.
An example of tracking using images produced by cameras of the type described above is now given. Using the processes described above, once a position has been identified as belonging to a light-emitting element, any subsequent transition at that position, within a predetermined spatial tolerance, has a higher probability of having been emitted by the same source. If the identification code is transmitted continuously or regularly, then, assuming the expected identification code of the light-emitting element is known, the identity of the light-emitting element responsible for a detected transition can be verified frame by frame to ensure that this assumption is correct.
This additional information provides much more up-to-date positional information about the position of a light-emitting element. It allows the identity of a light-emitting element to be verified more quickly than by waiting for a complete identification code to be received, and thus allows embodiments of the invention to react more quickly to movement of the light-emitting elements.
In embodiments of the invention in which identification codes are transmitted irregularly or discontinuously, the light emitted by the light-emitting elements during operation still allows a degree of tracking. More specifically, assuming the approximate position of a light-emitting element is known (from the processing described above), a degree of tracking can be provided by observing the output of the frequency band-pass filters described above. This is particularly useful for embodiments of the invention in which the light-emitting elements are not highly mobile but drift slightly over time.
Use of a BPSK modulation scheme assists the tracking algorithm, because BPSK modulation produces a higher transition rate and therefore provides more up-to-date position information during tracking.
In some circumstances it is useful to disregard the error-correcting capability of the Hamming codes used, as described above, to transmit the identification codes. For example, when an identification code is first detected, the processing will typically ensure that the received codeword contains no errors, and will continue processing until an error-free identification code has been received; this reduces the probability of false positives. Once the identification code has been established, however, embodiments of the invention can accept codewords in which one or more bits may be in error as evidence of position.
In some embodiments of the invention the location of the light-emitting elements can be carried out using a single camera which is moved to a number of different positions, the images produced at those different positions being used collectively to perform the position determination. Indeed, many of the processes described above can be performed either off-line or on-line; that is, the processing can be carried out on-line while the camera is directed at the light-emitting elements, or alternatively off-line using previously recorded data. Equally, the data can be gathered by sequential observations from a single camera or by simultaneous observations from several cameras. It should be noted, however, that in general, where the light-emitting elements are moving, at least two cameras are usually required for accurate localization.
The description so far has considered light-emitting elements whose optical effect coincides, in practice, with the element itself and its associated controller. It should be noted that the optical effect created by a light-emitting element need not coincide with the light-emitting element itself or its associated controller. For example, an LED may emit light through one or more optical fibre channels, so that the optical effect of lighting the LED appears at a place remote from the position of the LED. Similarly, light emitted by a light-emitting element may be reflected from a reflective surface which provides the optical effect of the light-emitting element at a place in space different from the place in space occupied by the element itself. Provided there is a one-to-one relationship between a light-emitting element and the place at which it produces its effect, it will be appreciated that the techniques described above can be used to locate the light-emitting element appropriately.
Some light-emitting elements, however, produce their optical effect over a relatively large area, so that they cannot be treated as point light sources. Indeed, relatively diffuse light sources can be used, and locating them is correspondingly more complex. In some cases, prior knowledge of the light source positions is useful, or even essential, in order to reduce the computational requirements and reduce ambiguity.
In some cases it can be assumed that the diffuse light from a single source lies approximately in a plane; this is the case, for example, when a spotlight illuminates part of a wall. Here each camera can compute the centroid of the light source, and the algorithms described above can then be applied to that centroid. The spread of the light around the centroid can be used to determine the orientation of the plane. Multiple light sources effectively build up a 3D model of the illuminated surface, which can be fed back to refine the points associated with a particular light source where that source illuminates the corners of several objects.
In some cases the need to determine the 3D extent of a diffuse light source can be avoided. If the light falls on a known surface, a single camera can determine the two-dimensional extent of the light source. Even where this is not the case, observations from a single viewpoint may be all that matters, in which case the two-dimensional extent of the source's effect can itself be used as significant position information.
Where diffuse light sources are used, generating an image also involves additional complexity. Because the light sources are not point-like, simply turning on those lamps whose effect falls entirely within the region to be illuminated may result in no sources being turned on at all, since every light source also produces some effect outside the region to be illuminated. Some form of closest-match procedure is therefore needed to determine which light-emitting elements to light.
A least-squares approximation (common in statistics) can be used to determine which light-emitting elements to light. The three-dimensional or two-dimensional space of interest is divided into a number of voxels or pixels (N_p). Each voxel or pixel is labelled P_k, where k = 1...N_p. A number N_l of light sources is provided, each labelled I_i, where i = 1...N_l.
For each light source I_i and each voxel/pixel P_k, the illumination level produced at that voxel/pixel by the light-emitting element is determined; this level is denoted M_ki and is based on full illumination by light source I_i. If each light source is lit to a level IL_i (illumination being assumed to be measured on a normalised scale between 0 and 1), then the illumination at a particular voxel/pixel P_k is given by the following equation:
$$IP_k = \sum_{i=1}^{N_l} M_{ki} \, IL_i$$
Assuming that the required illumination pattern at the voxels/pixels is given by DP_k, the light source illumination levels are determined such that the sum of squared errors is minimised. The sum of squared errors is given by the following equation:
$$E = \sum_{k=1}^{N_p} \left( DP_k - \sum_{i=1}^{N_l} M_{ki} \, IL_i \right)^2$$
The above equation can be solved using standard methods. The solution is:
$$IL = Q \, M^T \, DP$$
where Q is the inverse of the symmetric positive definite matrix M^T M, DP is the vector of required illumination levels, and IL is the vector of determined light source illumination levels. This squared-error solution can be computed using multiple linear regression, as described in Freund J. & Walpole R.: "Mathematical Statistics", Longman, 1986, ISBN-10: 0135620759, pp. 480 et seq.
Note that the above method can yield impossibly high brightness values for certain light sources, and may yield negative brightness values for others. In such cases thresholding is used to set the illumination levels appropriately.
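The least-squares solution and the thresholding described above can be illustrated with the following numpy sketch; the matrix sizes and random data are purely illustrative, and numpy.linalg.lstsq is used in place of forming the explicit inverse Q.

```python
import numpy as np

# M[k, i] is the illumination produced at voxel/pixel k by light source i at
# full power; DP[k] is the desired illumination at voxel/pixel k.
# Dimensions and data below are arbitrary illustrative values.
rng = np.random.default_rng(0)
M = rng.random((200, 12))          # N_p = 200 voxels, N_l = 12 sources
DP = rng.random(200)               # desired illumination pattern

# Solve min ||DP - M @ IL||^2.  lstsq is numerically preferable to forming
# the explicit inverse Q = (M^T M)^-1 used in the closed-form expression.
IL, *_ = np.linalg.lstsq(M, DP, rcond=None)

# Thresholding: the unconstrained solution may contain negative or
# impossibly large levels, so clip to the normalised range [0, 1].
IL = np.clip(IL, 0.0, 1.0)

achieved = M @ IL                                 # illumination actually produced
print(float(np.mean((DP - achieved) ** 2)))       # residual mean squared error
```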
In some cases several light sources may not be independently controllable. This will be the case, for example, where the control arrangement does not allow the light sources to be switched on and off independently. Alternatively, each light source may have an associated reflection. In such cases each camera may detect several two-dimensional points for a single address. Given two cameras, triangulation can be performed for each pair of potential points detected for a single source in the first and second cameras, and an error value computed at step S103 of Figure 23A. In general, two-dimensional points detected at different source positions give a higher error value and can therefore be discarded. Occasionally a spurious position may happen to coincide, producing a false positive, but this is regarded as a potential problem which can be overcome by using a larger number of cameras.
In the embodiments of the invention described above, each light-emitting element has an address. Each light-emitting element also transmits an identification code, which is used in the localisation process. This identification code may be the address of the light-emitting element, or alternatively may differ from it; where the identification code and the address differ, they can be linked, for example, by a look-up table. In some embodiments of the invention, however, the light-emitting elements do not transmit identification codes under their own control. Instead the localisation process is controlled by a master controller on the basis of the addresses of the light-emitting elements. Such a process is now described with reference to Figure 23.
Referring to Figure 23, at step S25 all light-emitting elements are commanded to emit light, so that the cameras used in the detection process have a complete picture of all the light sources. At step S26 all light-emitting elements are switched off. At step S27 a counting variable i is initialised to 1; during the processing this counting variable is incremented from 1 to N, where N is the number of bits in the address of each light-emitting element. At step S28 all light-emitting elements whose address has bit i set to '1' are lit, and the resulting image is recorded at step S29. Step S30 determines whether further bits remain to be processed: if i equals N, all bits have been processed and processing passes to step S31 (described below); otherwise i is incremented at step S32 and processing returns to step S28.
At step S31 a sequence of N images has been processed. These images take the form shown in Figure 11A, and can be processed using the methods described above to determine the address of each light-emitting element.
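The following Python sketch illustrates how an address might be recovered from the N recorded bit-plane images once the element's pixel position is known; the data layout, bit ordering and brightness threshold are assumptions made for the illustration rather than details taken from the description above.

```python
# Sketch of decoding an element's address from the N bit-plane images of
# Figure 23.  'frames' is assumed to be a list of N brightness images
# (bit 1 first), and (row, col) the element's position found from the
# all-on reference image; the 0.5 threshold is an illustrative value.

def decode_address(frames, row, col, threshold=0.5):
    address = 0
    for bit_index, frame in enumerate(frames):        # frame i corresponds to address bit i+1
        lit = frame[row][col] > threshold              # element lit => this address bit is 1
        if lit:
            address |= 1 << bit_index
    return address

# Example with 3-bit addresses and a single pixel per frame:
frames = [[[0.9]], [[0.1]], [[0.8]]]                   # first and third bits set
print(decode_address(frames, 0, 0))                    # 5  (binary 101)
```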
In an alternative embodiment of the invention the light-emitting elements may transmit codes under their own control, but do so when prompted by the master controller.
The methods described above for locating light-emitting elements from the generated images use conventional triangulation algorithms. Such algorithms can suffer from a number of problems. For example, some light-emitting elements may be occluded from the field of view of some cameras. If only two cameras are used in the triangulation process, this means that some light-emitting elements cannot be correctly located. Where a larger number of cameras is used, however, this problem can be overcome by performing the triangulation simply on the basis of the images produced by those cameras that can actually see the light-emitting element.
Another problem with triangulation of the type described above arises from noise, camera precision and digitisation errors. These may mean that the imaginary lines projected from the cameras do not intersect exactly. Some form of 'closest point' method is therefore needed to determine an approximate position from the lines produced. For example, a three-dimensional position can be chosen such that the sum of the squares of the differences between the projection of the estimated position onto each camera and the respective measured position is minimised.
One algorithm based on such a 'closest point' method operates, for example, as follows. Taking a single light-emitting element as an example, for each camera that has recorded that light-emitting element, an imaginary line is projected from the camera through the detected point of the light-emitting element. For each pair of cameras that recorded the selected light-emitting element, the points of closest approach between the projected lines are computed, and the midpoint between the lines is taken as an estimate of the actual position of the light-emitting element. Each pair of cameras thus produces an estimated position for the light-emitting element, together with the distance between the lines at the point of closest approach, which provides a useful error metric. If the error metric of any of the estimated points is substantially greater than that of the others, those points are ignored; each such point was produced by a particular pair of cameras and typically corresponds to a false positive at the pre-processing stage in one of those cameras. The estimates from the remaining camera pairs are averaged to give an overall estimated position for the light-emitting element. The algorithm is then repeated for each detected light-emitting element. Figure 23A shows a suitable process.
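The pairwise closest-approach calculation just described can be illustrated as follows; this is a numpy sketch with illustrative camera data, returning the midpoint of the closest-approach segment as the position estimate and the length of that segment as the error metric.

```python
import numpy as np

def closest_approach(p1, d1, p2, d2):
    """Midpoint and separation of the closest-approach segment between two
    rays p1 + s*d1 and p2 + t*d2 (a sketch; assumes the rays are not parallel)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                      # ~0 only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1, q2 = p1 + s * d1, p2 + t * d2          # nearest points on each ray
    midpoint = (q1 + q2) / 2                   # position estimate for this camera pair
    error = np.linalg.norm(q1 - q2)            # ray separation: the error metric
    return midpoint, error

# Illustrative example: two cameras viewing a point near (0, 0, 5)
est, err = closest_approach(np.array([-1.0, 0.0, 0.0]), np.array([0.2, 0.0, 1.0]),
                            np.array([ 1.0, 0.0, 0.0]), np.array([-0.2, 0.01, 1.0]))
print(est, err)
```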
Referring to Figure 23A, at step S100 an empty results_set array is initialised. Each element of this array stores a pair comprising an estimate of the source position and an error metric. At step S101 a counting variable c is initialised to zero. At step S102 the position estimate for the camera pair denoted by counting variable c is computed, and at step S103 the error metric for that camera pair is also computed. At step S104 a pair comprising the position estimate computed at step S102 and the error metric computed at step S103 is added to the results_set array. The counting variable c is incremented at step S105, and at step S106 a check is carried out to determine whether further camera pairs remain to be processed. If camera pairs remain, processing returns to step S102; otherwise processing proceeds to step S107, at which the mean of the error metrics over all elements of the results_set array is computed.
After this mean has been computed at step S107, a further counting variable p is initialised to zero at step S108; this counting variable indexes the elements of the results_set array in turn. At step S109 the mean error value computed at step S107 is subtracted from the error value associated with element p of the results_set array, and a check is carried out to determine whether the result of this subtraction exceeds a predetermined threshold. If it does, element p of the results_set array represents an outlying value; such outlying values are removed at step S110, and the mean error over the remaining elements of the array is then recomputed at step S111. If the check at step S109 is not satisfied, processing passes directly to step S112, at which the counting variable p is incremented, and then to step S113, at which a check is carried out to determine whether further elements p remain to be processed. If so, processing returns to step S109; otherwise processing proceeds to step S114.
At step S114 the mean position estimate over all elements of the results_set array is computed. At step S115 the counting variable p is reset to zero, and each element of the results_set array is then processed in turn. At step S116 the corresponding element of a distance array is set equal to the difference between the position estimate associated with element p of the results_set array and the mean estimate. The counting variable p is incremented at step S117, and at step S118 a check is carried out to determine whether further array elements remain to be processed. If so, processing returns to step S116; otherwise processing proceeds to step S119, at which the mean distance of all points from the mean estimate computed at step S114 is determined.
Processing then passes to step S120, at which the counting variable p is again set to zero. At step S121 a check is carried out to determine whether the difference between this mean distance and the distance associated with element p of the distance array exceeds a bound. If it does, element p of the distance array and element p of the results_set array are deleted at step S122, and the mean distance is recomputed at step S123 before the counting variable p is incremented at step S124. If the check at step S121 is not satisfied, processing passes directly from step S121 to step S124. At step S124 a check is carried out to determine whether further elements of the distance array remain to be processed; if so, processing returns to step S121, otherwise processing passes via step S125 to step S126, at which the position is computed using the average of the estimates in the remaining elements of the array.
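The outlier-removal stages of Figure 23A can be summarised by the following sketch. It is a batch simplification with illustrative margins: Figure 23A recomputes the means element by element, whereas the sketch applies each test in a single pass.

```python
import numpy as np

def combine_estimates(estimates, errors, error_margin=0.05, distance_margin=0.05):
    """Combine per-camera-pair estimates in the spirit of Figure 23A (sketch only;
    the margins are illustrative, not values taken from the description).

    estimates -- (n, 3) array of position estimates, one per camera pair
    errors    -- (n,) array of closest-approach distances (error metrics)
    """
    est = np.asarray(estimates, dtype=float)
    err = np.asarray(errors, dtype=float)

    # Pass 1: drop estimates whose error metric is well above the mean error.
    keep = err - err.mean() <= error_margin
    est, err = est[keep], err[keep]

    # Pass 2: drop estimates lying far from the mean of the survivors.
    mean_pos = est.mean(axis=0)
    dist = np.linalg.norm(est - mean_pos, axis=1)
    keep = dist - dist.mean() <= distance_margin
    est = est[keep]

    # Final position: average of the remaining estimates.
    return est.mean(axis=0)

pairs = [[0.0, 0.0, 5.0], [0.01, 0.0, 5.02], [0.3, 0.2, 5.5]]   # last estimate an outlier
print(combine_estimates(pairs, [0.01, 0.02, 0.4]))
```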
It will be appreciated that the process described with reference to Figure 23A is merely exemplary, and that various similar processes can be used. For example, in some embodiments of the invention further outlier removal can be carried out at each stage of the process.
If, from the viewpoint of a particular camera, two or more light-emitting elements are aligned, that camera will effectively produce an image which is the logical OR of the codes transmitted by the two light-emitting elements. If the codes are sufficiently sparse, such erroneous detections can typically be identified. Moreover, provided the light-emitting elements are not aligned from the viewpoint of at least one other camera, so that the imaginary lines produced do not intersect, the triangulation process can detect the error even if a camera has determined what is in fact a valid code caused by two aligned light-emitting elements.
An alternative triangulation scheme which attempts to address the problem of aligned light-emitting elements is now described with reference to Figure 24. The method of Figure 24 operates on the images produced by the cameras in the manner described above, operating in turn on pairs of images captured simultaneously by different cameras. At step S33 a variable f, used as a frame counter to count each captured frame in turn, is initialised to 1. At step S34, starting with the first camera, an imaginary line is projected for each pixel at which a light-emitting element is detected. Imaginary lines are projected in the same way at step S35, but from the second camera. Where a line projected from the first camera intersects a line projected from the second camera, the point of intersection is taken to be a detected light-emitting element. This constitutes a logical AND operation, which is carried out at step S36. If the AND operation is successful, the light-emitting element is recorded at step S37; if the AND operation is unsuccessful, no light-emitting element is recorded at step S38. Processing then passes to step S39, at which a check is carried out to determine whether all frames have been processed. If frames remain unprocessed, the frame counter f is incremented at step S41 and processing returns to step S34. If all frames have been processed, processing ends at step S40.
The processing described above for locating the light-emitting elements is carried out under the control of the PC 1. Figure 24A shows the processing carried out by the PC 1. At step S200 a camera is connected to the PC 1. At step S201 a command is issued to the light-emitting elements to be located, causing them to emit light representing their identification codes in the manner described above. This is achieved by providing suitable commands to the control elements 6, 7, 8 (Figure 5), which in turn pass the commands to the light-emitting elements along the buses 9, 10, 11 in the form of the CALIBRATE command described with reference to Figures 9B and 9C.
At step S202 data is received from the connected camera, and at step S203 a check is carried out to determine whether an acceptable number of light-emitting elements has been identified. At step S204 a check is carried out to determine whether the image currently being processed is the first image to be processed. If it is, then at step S205 the position of the camera is used as the origin, and at step S206 data is stored indicating that this camera position is the origin and further indicating the positions of the light-emitting elements relative to that origin. If the check at step S204 determines that this is not the first image to be processed, processing passes to step S207, at which the position of the camera currently being processed is determined, for example using the camera-positioning techniques described above. Processing then passes from step S207 to step S206, at which data indicating the camera and light-emitting element positions is stored.
Processing passes from step S206 to step S208, at which a check is carried out to determine whether further images (that is, camera positions) remain to be processed. If so, processing returns to step S200; otherwise processing ends at step S209.
Figure 24B shows the processing carried out by the PC 1 to locate the light-emitting elements using the data stored by the processing of Figure 24A. At step S215 a check is carried out to determine whether further light-emitting elements remain to be located. If no such light-emitting element exists, processing ends at step S216. If such a light-emitting element does exist, a light-emitting element is selected for location at step S217, and at step S218 the images containing the light-emitting element to be located are identified. At step S219 images with anomalous readings are discarded. At step S220 a check is carried out to determine whether more than one image contains the light-emitting element to be located. If not, the light-emitting element cannot be correctly located and processing therefore returns to step S215. If, however, more than one image is found to contain the lamp to be located, a pair of images is selected for processing at step S221, and the triangulation described above is carried out at step S222. At step S223 the position data obtained from the triangulation operation is stored.
At step S224 a check is carried out to determine whether further images containing this light-emitting element exist. If such images do exist, processing returns to step S221, and further position data is obtained. When no further images remain to be processed, processing proceeds to step S225, at which a statistical analysis is carried out to remove anomalous position data. At step S226 the remaining position data is aggregated, and the finally determined position data is stored at step S227.
Figure 24C is a screenshot of a graphical user interface provided by an application running on the PC 1, which allows the calibration processes described with reference to Figures 24A and 24B to be carried out. It can be seen that the interface provides a calibrate button 150, which can be used to cause the light-emitting elements to emit their identification codes so that the identification operation can be carried out. An area 151 is provided to allow camera positions and parameters to be configured.
The position data obtained using the processing described can be stored in an XML file. This XML file comprises a number of <light id> tags, each of which takes the form:
<light id="65823" x="0.0005" y="0.6811" z="6.565"> where the number following "light id" is the light-emitting element identifier, and the numbers following x, y and z are the corresponding coordinates. It should be noted that in a preferred embodiment of the invention the coordinates are stored to a higher precision than that shown above.
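By way of illustration, such a file can be read back into a mapping from light identifier to coordinates as in the following sketch; the file name, the self-closing tag form and the enclosing root element are assumptions, since only the form of the individual tags is given above.

```python
import xml.etree.ElementTree as ET

# Assumes a file of the form (illustrative):
#   <lights>
#     <light id="65823" x="0.0005" y="0.6811" z="6.565"/>
#     ...
#   </lights>
def load_positions(path="positions.xml"):
    root = ET.parse(path).getroot()
    return {
        int(tag.get("id")): (float(tag.get("x")), float(tag.get("y")), float(tag.get("z")))
        for tag in root.iter("light")
    }

# positions = load_positions(); positions[65823] -> (0.0005, 0.6811, 6.565)
```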
Returning to Figure 2, it can be seen that the position information determined using the methods described above can be used to display images using the light-emitting elements. The process of displaying an image can take a variety of different forms, depending on the nature and positions of the light-emitting elements, but in general it should be noted that, once the image to be displayed has been mapped onto a representation of the space (as shown in Figure 4) and the positions of the light-emitting elements within that representation are known, placing the image for display is relatively straightforward. It should be noted that in some embodiments of the invention an address is allocated to each voxel of the spatial representation. As described above, each light-emitting element also has an address, and the light-emitting elements are then placed in the space through the relationship between light-emitting element addresses and voxel addresses. Addressing schemes are discussed in more detail below.
The light-emitting elements can be arranged in a variety of different configurations and positions. For example, in some embodiments of the invention the light-emitting elements can be arranged on a tree or similar structure, in the manner of the traditional 'fairy lights' conventionally used, as described above, to decorate Christmas trees and objects in public places. Alternative embodiments of the invention use light-emitting devices that are more mobile, and such devices need not be connected together by wired means. For example, at events attended by large crowds, many people carry 'light sticks' or luminous devices in the form of lamps attached to items of clothing such as hats; a mobile telephone with a backlit LCD screen can also serve as a light-emitting element. Such events include stadium events such as football matches, and the opening ceremonies of major sporting events such as the Olympic Games. Although it is well known for members of the public attending such events to carry such illuminating devices, at present those devices operate independently of one another. In embodiments of the present invention these illuminating devices are used to display images, as is now described.
Each illuminating device has a unique address and is located using the methods described above. In a preferred embodiment, location is achieved by all the illuminating devices transmitting their identification codes continuously; this can be achieved, for example, by providing the illuminating devices with infrared or ultraviolet light sources of the type described above. It should be noted that in a stadium-based application the holders of the illuminating devices may be located along one side of the stadium, that is, they may lie approximately in a single plane. A single camera may therefore be sufficient to locate the illuminating devices; in other words, the triangulation methods described above may not be needed. A larger stadium, however, may require several cameras for the localisation process, each camera capturing a different part of the stadium.
Once the illuminating devices have been located, so that their positions and addresses are known, each illuminating device, or more likely each group of illuminating devices, is commanded to emit light. These instructions can be transmitted using any wireless data transport protocol that provides sufficient addressability. In a preferred embodiment of the invention the illuminating devices can emit light of several different colours, and in such embodiments the instructions also include colour data. The holder of an illuminating device is aware that his own device has been switched on or off, or is emitting a different colour, and is aware that the devices of those nearby are undergoing similar changes. However, while the holder of an illuminating device perceives only these local changes, a person situated on the opposite side of the stadium at that moment can observe the stadium-sized image displayed collectively by the illuminating devices. For example, patterns, football club crests, national flags or even text such as song lyrics can be displayed.
A process by which the light-emitting elements are controlled to display a predetermined image is now described with reference to Figure 24D. At step S230 a model indicating the content to be displayed is created. The model is created using conventional graphics techniques employing two-dimensional and/or three-dimensional graphics primitives. The model is updated at step S231, and when the model is complete it is stored as the application model 155.
At step S233 data indicating the positions of the light-emitting elements is read. At step S234 the light-emitting elements lying within the region represented by the model 155 are determined. At step S235 a check is carried out to determine whether a simulation of the light-emitting elements is to be provided; such simulation is described in detail below. Where simulation is provided, a visualisation of the model is supplied to the simulator at step S236, after which the appropriate light-emitting elements are lit at step S237. If no simulation is required, processing passes directly from step S235 to step S237.
Figure 24E is a screenshot taken from a graphical user interface which allows the light-emitting elements to be controlled in the manner described above. It can be seen that an open button 160 is provided to allow a model data file to be opened. In addition, an area 161 allows various standard effects to be displayed using the light-emitting elements.
Figure 24F is a screenshot taken from the simulator provided by the invention, mentioned above. It can be seen that all the light-emitting elements are displayed, those that are lit being shown brighter. It can be seen that the light-emitting elements are controlled to display an image of a fish.
The application provided to control the light-emitting elements also allows interactive control. In particular, Figure 24G allows data defining the arrangement of the light-emitting elements to be loaded; this arrangement is loaded into and displayed in the simulator, as shown in Figure 24H, where it can be seen that the light-emitting elements are arranged on a Christmas tree. The interface shown in Figure 24I allows the user to select a brush, which can then be used to 'paint' in the window of Figure 24H, the window of Figure 24H allowing the appropriate light-emitting elements to be selected for lighting.
As noted above, the illuminating devices may move as their holders move. Typical movements, however, are likely to be slow and relatively infrequent, but it may nevertheless be necessary to recalibrate the positions of the illuminating devices from time to time. Such recalibration can be carried out using an invisible light source (for example infrared or ultraviolet) as described above, or alternatively by varying the light intensity, also as described above.
It should be noted that embodiments of the invention based on movable illuminating devices can minimise the complexity of the illuminating devices, since the devices need only receive (and not transmit) data, any transmission from the device being effected using light (visible or invisible) alone.
Referring to Figure 5, it was described above how instructions for lighting individual light-emitting elements are sent from the PC 1 to the light-emitting elements 2 via the control elements 6, 7, 8, with certain data-transmission tasks delegated to the control elements 6, 7, 8. It will be appreciated that a similar hierarchy can be created in embodiments of the invention using wireless illuminating devices. Where wireless illuminating devices are used, however, the light-emitting elements may need to make dynamic or ad-hoc connections to different and changing wireless base stations.
In the embodiments of the invention described, details of the mapping of addresses to positions are stored on the PC 1 or on the control elements 6, 7, 8. In alternative embodiments of the invention, however, once the position of a light-emitting element or device has been determined, that position is sent to the light-emitting element or device itself or, optionally, to an appropriate control element. Instructions are then sent by means of broadcast or multicast messages. For example, if the space containing the lamps is divided into a four-level hierarchy, positions can be represented as four-tuples. More generally, if the space containing the light-emitting elements is divided into a multi-level hierarchy, particular regions can be represented using an IP-based octree or quadtree; such methods are described in more detail below. An instruction can be sent indicating all lamps within the unit defined by an element at any level of the hierarchy. On receiving such an instruction, each light-emitting element determines whether it lies within any of the specified elements, and hence whether it should light, and may also determine in which colour it should light.
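The following sketch illustrates one possible encoding of such a hierarchical address, using a quadtree in which each level contributes one quadrant digit, so that "all lamps within a region" reduces to a prefix test on the address; the encoding and the four-level depth are assumptions made for the illustration and are not taken from the description above.

```python
# Illustrative quadtree addressing: each level contributes one quadrant digit
# (0-3), and a region at any level of the hierarchy is simply an address prefix.
# The encoding and the 4-level depth are assumptions for the sketch.

def quadtree_address(x, y, levels=4):
    """Return the quadrant digits of point (x, y) within the unit square."""
    digits = []
    for _ in range(levels):
        x, y = x * 2, y * 2
        qx, qy = int(x), int(y)          # 0 or 1 in each axis
        digits.append(qy * 2 + qx)       # quadrant index 0..3
        x, y = x - qx, y - qy
    return digits

def in_region(element_addr, region_prefix):
    """An element lies in a region if its address starts with the region's prefix."""
    return element_addr[:len(region_prefix)] == region_prefix

lamp = quadtree_address(0.30, 0.70)
print(lamp, in_region(lamp, [2, 1]))     # [2, 1, 2, 2] True: a broadcast to prefix [2, 1] reaches this lamp
```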
It will be appreciated that several sets of light-emitting elements can be used together to produce a larger display.
The methods described above for locating light-emitting elements for the purpose of displaying images have various other applications, one of which is now described. For example, positioning devices emitting invisible light can be used to track people or equipment around a predetermined location. Such positioning devices can be located using the methods described above, although it should be noted that such positioning devices may move considerably more than the light-emitting elements described above.
In an embodiment of the invention in which people are to be located (for example around a construction site), each person wears a badge carrying an LED configured to emit infrared light. The badge is also configured to transmit continuously an identification code of the type described above, suitably encoded and modulated. As the person moves around the construction site, cameras detect this identification code; the infrared light is invisible to a human observer but can be detected clearly by the cameras. If the transmitted code is detected by a single camera, this allows the person associated with the badge carrying the detected identification code to be located at least within the field of view of that camera. If the transmitted identification code is detected by two or more cameras, triangulation methods of the type described above can be used to locate the person absolutely in space.
If the transmitted code is detected by only a single camera, this alone may be sufficient to locate the person in space. This can be achieved by assuming that the badge remains approximately one metre above the ground (as may well be the case) and that the camera is placed substantially more than one metre above the ground (for example on the ceiling of the building under construction); the assumed one-metre height can then be used to locate the person on the plane one metre above the ground. In other words, the image can be used together with the height assumption to locate the badge.
As described above, triangulation using two cameras yields a line equation of the following form:
$$(C_x + tR_x,\; C_y + tR_y,\; C_z + tR_z)$$
In the situation described above, the target is known to be approximately one metre above the ground. Assuming that heights are defined along the z dimension, it is then known that:
$$C_z + tR_z = 1$$
Assuming the values of C_z and R_z are known, the value of t can readily be derived. Once such a value has been derived, it will be appreciated that substituting it into the equation above allows the x and y coordinate values to be derived.
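A short worked example of this substitution is given below; the camera position C and ray direction R are illustrative values.

```python
# Single-camera localisation under the assumed badge height z = 1 metre.
# C is the camera position, R the direction of the ray through the detected
# badge pixel; both are illustrative values.
C = (2.0, 3.0, 4.0)
R = (0.5, -0.2, -1.0)

t = (1.0 - C[2]) / R[2]          # solve C_z + t*R_z = 1 for t
x = C[0] + t * R[0]
y = C[1] + t * R[1]
print(t, x, y)                   # t = 3.0, badge at (3.5, 2.4, 1.0)
```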
The example described above concerns locating people around a construction site in which a number of cameras are installed. Very similar techniques can be used to locate items of equipment. Each item of equipment to be located is fitted with a small tagging device, which has the appearance of a small black button and contains an infrared transmitter. The transmitter continuously transmits a unique identification code, and suitably placed cameras detect this identification code to determine the position of the equipment. It will be appreciated that the transmitter may transmit its unique identification continuously or, alternatively, intermittently or periodically. Again, if the transmitted code is detected by at least one pair of cameras, triangulation can be used to locate the item of equipment. Where a single camera is used, an assumption about the height level (in this case ground level may be an appropriate assumption) can be used, as described above, together with the image captured by the single camera to locate the equipment.
It should be noted that the embodiments of the invention described above need not rely on additional hardware. Indeed, existing components can be used to achieve the desired position-determination purposes. In particular, devices such as computers can use their existing screens, and devices such as mobile telephones can use the LED that conventionally indicates their power state.
In the localisation examples described above, reference was made to infrared transmitters. It should be noted that in some embodiments of the invention ultraviolet or infrared reflectors shuttered by an LCD are used. For example, suitable reflective surfaces can be used in place of the light-emitting elements in the embodiments of the invention described above. Any light source can be shone onto these reflective surfaces, thereby producing a plurality of light-emitting elements, each of which can appear as a point light source in a manner similar to an LED. In order to control such reflective surfaces, their reflectivity must be controllable. Such reflectivity control can be achieved by providing a surface of controllable opacity (such as an LCD) over a highly reflective surface (such as a mirror). This produces a lower-power light-emitting element which emits (reflects) light rather than generating it.
The embodiments of the invention described above relate to the use of visible or invisible light to locate light-emitting elements, and some embodiments relate to the use of the located elements to emit visible light in order to display images. It should be noted, however, that some embodiments of the present invention use sound instead of light; such embodiments are now described.
Figure 25 provides an overview of hardware which uses a number of sound transceivers that are located and then used to emit sound according to their positions, thereby producing a three-dimensional soundscape. The hardware of Figure 25 comprises a controller PC 55, which is shown in more detail in Figure 26. It can be seen that the structure of the PC 55 is very similar to that of the PC 1 shown in Figure 6, and primed versions of similar reference numerals are used to denote similar components. These similar components, namely the CPU 13', RAM 14', hard disk drive 15', I/O interface 16', keyboard 17', display 18', communications interface 19' and bus 20', are not described in further detail here. It should be noted, however, that the PC 55 additionally comprises a sound card 56 having an input 57 via which sound data is received, and an output 58 via which sound data is output, for example to loudspeakers.
Returning to Figure 25, it can be seen that the PC 55 is connected to loudspeakers 59, 60, 61, 62, which are connected to the output 58 of the sound card 56. The PC 55 is also connected to microphones 63, 64, 65, 66, which are connected to the input 57 of the sound card 56. The PC 55 is further configured to communicate wirelessly with a number of sound transceivers which, in the embodiment described, take the form of mobile telephones 67, 68, 69, 70. It should be noted that, although only four mobile telephones are shown in Figure 25, practical embodiments of the invention may include a much larger number of mobile telephones or other suitable sound transceivers. The connection between the mobile telephones 67, 68, 69, 70 and the PC 55 can take any convenient form, including the use of a mobile telephone network (for example a GSM network) or the use of wireless connections using other protocols such as WLAN (assuming the PC 55 and the mobile telephones 67, 68, 69, 70 are all equipped with appropriate interfaces). Indeed, in some embodiments of the invention the PC 55 and the mobile telephones 67, 68, 69, 70 may be connected together by wired connections. The use of the apparatus shown in Figure 25 to produce a three-dimensional soundscape is now described.
An embodiment of the invention in which the PC 55 directs the soundscape generation is described first, with reference initially to Figure 27, which shows a flow chart giving an overview of the processing; the processing carried out at each step is described in more detail below. At step S45 the mobile telephones 67, 68, 69, 70 each connect to the PC 55. At step S46 an initial calibration is carried out to locate the mobile telephones 67, 68, 69, 70 in space, and at step S47 this initial calibration is refined. At step S48 the mobile telephones are calibrated with respect to output volume and orientation. After these various calibration processes have been carried out, the mobile telephones are used to present sound at step S49.
Figure 28 shows the processing of step S45 of Figure 27 in more detail. At step S50 the PC 55 waits to receive a connection request from one of the mobile telephones 67, 68, 69, 70. When such a request is received, processing passes to step S51, at which the PC 55 creates data, stored in a data repository, indicating that a connection has been made with the mobile telephone and indicating the address of the mobile telephone, so that data communication can be carried out with it. It should be noted that the connection request generated by a mobile telephone can take any convenient form. For example, where communication between the mobile telephones 67, 68, 69, 70 and the PC 55 is carried out over a telephone network, a mobile telephone can, when a connection is required, dial a predetermined number, the call to the predetermined number constituting the connection request; a telephone call then exists between the mobile telephone and the PC 55 for the duration of the connection. Such a telephone call may be made to a premium rate telephone number. It should also be noted that the addresses allocated to the mobile telephones 67, 68, 69, 70 may depend on the communication mechanism used; for example, where communication is over a telephone network, telephone numbers can be used as addresses.
After the connections between the mobile telephones 67, 68, 69, 70 and the PC 55 have been established, calibration is carried out at step S46 of Figure 27. Figure 29 shows this calibration, as carried out by the PC 55, in more detail. At step S52 the PC 55 causes the loudspeakers 59, 60, 61, 62 to play predetermined tones; the microphones of the mobile telephones 67, 68, 69, 70 detect these tones and transmit the detected tones to the PC 55. The following processing is then carried out in turn for each telephone from which data is received. At step S53 data indicating the detected tones is received. At step S54 the received data is correlated with the tones output by each of the loudspeakers 59, 60, 61, 62, and the output of this correlation is used to calculate the distance between the telephone and each of the loudspeakers 59, 60, 61, 62. Then, at step S56, the position of the telephone is determined from this distance data by triangulation. Step S57 determines whether further telephones require calibration; if so, processing returns to step S53, otherwise processing ends at step S58.
The triangulation and distance-calculation processes are now described in more detail. Each process can take a number of different forms, depending on the nature of the sound produced by the loudspeakers 59, 60, 61, 62. In general terms, however, the localisation process involves matching the sound produced by each loudspeaker against the actual sound received by a mobile telephone's microphone, the received sound being a combination of the sounds produced. The received sound is then processed to identify the sound component produced by each loudspeaker.
If the loudspeakers 59, 60, 61, 62 output simple tones, the identification process can be relatively straightforward: a number of band-pass filters, each tuned to one of the expected frequencies, can be applied to the received signal to separate the sounds produced by the different loudspeakers. If the signal output by each loudspeaker is switched on and off, or otherwise modulated, the time elapsed between the transmission and the reception of these modulations gives a good indication of the time taken for the sound to travel through the air from the loudspeakers 59, 60, 61, 62 to the mobile telephones 67, 68, 69, 70. If this time is known, then, since the speed of sound in air is known, the distance between the loudspeakers 59, 60, 61, 62 and the mobile telephones 67, 68, 69, 70 can be determined. In addition, the relative strength within the received signal of the signal identified by each band-pass filter provides a measure of relative distance.
The information described above allows positions to be determined in a number of different ways.
If the instant at which the loudspeakers 59, 60, 61, 62 transmit a sound and the instant at which one of the mobile telephones receives the same sound are both known, an absolute distance measurement between the mobile telephone and each loudspeaker can be determined. For each loudspeaker it can therefore be determined that the mobile telephone lies on a sphere centred on that loudspeaker whose radius is the identified distance. The intersection of three such spheres identifies the position of the mobile telephone as one of two three-dimensional positions, one of which can usually be discarded because it lies below ground level. If more than three loudspeakers are used (for example the four loudspeakers shown in Figure 25), the accuracy of the unique determination can be further improved.
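One standard way of intersecting such spheres is to subtract one sphere equation from the others, which linearises the problem; the following numpy sketch (with illustrative loudspeaker positions and noiseless distances) recovers the telephone position by least squares.

```python
import numpy as np

def trilaterate(speakers, distances):
    """Estimate a position from distances to known loudspeaker positions.

    Subtracting the first sphere equation |x - p_i|^2 = d_i^2 from the others
    gives a linear system in x, solved here by least squares.  A sketch only;
    the loudspeaker coordinates and distances below are illustrative.
    """
    p = np.asarray(speakers, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four non-coplanar loudspeakers give a unique solution with this linearisation.
speakers = [[0, 0, 0], [5, 0, 2], [0, 5, 2], [5, 5, 0]]
phone = np.array([2.0, 3.0, 1.0])
dists = [np.linalg.norm(phone - np.array(s)) for s in speakers]
print(trilaterate(speakers, dists))        # ~[2.0, 3.0, 1.0]
```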
If the transmitter and receiver clocks are not synchronised, measurements based on the time of flight through the air are still possible. For example, if the instant at which each loudspeaker transmits its signal is known, and the relative instants at which one of the mobile telephones receives the same signals are also known, the differences in distance from the various loudspeakers to the mobile telephone can be determined. A pair of loudspeakers then locates a particular mobile telephone on a more complex 3D surface (typically a hyperboloid of revolution, that is, a hyperbola rotated about its principal axis), and the intersections of such 3D surfaces can be used to determine a unique 3D position.
Relative positions can also be determined on the basis of the volume of the signals received by the microphones 63, 64, 65, 66. It should be noted, however, that because sound tends to be directional, such measurements may not be as robust.
The techniques described above work well where the loudspeakers 59, 60, 61, 62 output simple tones that can be separated from one another using band-pass filters. Where the loudspeakers 59, 60, 61, 62 produce more complex sounds (for example music), a more complex correlation process is needed. For example, the expected sound from a particular loudspeaker can be determined, offset by a particular delay, multiplied by the actual sound received, and summed over a short time window. This produces an offset covariance, which serves as a measure of the signal strength at that delay; the delay with the highest signal strength then corresponds to the time of flight through the air.
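The delay search described above amounts to a cross-correlation; the following sketch (synthetic signals, with an assumed 8 kHz sample rate) recovers the flight time as the offset giving the largest summed product.

```python
import numpy as np

# Sketch of the delay search described above: correlate the expected
# loudspeaker signal against the received signal at each candidate offset
# and pick the offset with the greatest covariance.  Signals are synthetic
# and the 8 kHz sample rate is an illustrative assumption.
fs = 8000
rng = np.random.default_rng(1)
expected = rng.standard_normal(fs)              # 1 s of the known loudspeaker output
true_delay = 240                                # samples (~30 ms, ~10 m of flight)
received = np.concatenate([np.zeros(true_delay), expected])[:fs]
received += 0.5 * rng.standard_normal(fs)       # other loudspeakers / noise

def best_delay(expected, received, max_delay):
    scores = [np.dot(expected[: len(expected) - d], received[d:]) for d in range(max_delay)]
    return int(np.argmax(scores))

d = best_delay(expected, received, 800)
print(d, d / fs * 343.0)                        # ~240 samples, ~10.3 m at 343 m/s
```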
In an alternative embodiment of the invention the correlation and distance calculations are not carried out in the manner described above. Instead the PC 55 computes the expected sound at each point in the space; such a calculation is possible because the sound output from each loudspeaker is known. The received sound can then be the subject of a search over each of the expected points, and the telephone is determined to be located at the point whose expected sound most closely matches the received sound.
Operations on hue and brightness were described above in connection with the process of locating light-emitting elements. For locating sound sources, inaudible manipulations of the sound can be used, making it easier to create localisation signals that can be detected while 'normal' sound is being played. For example, inaudible high- or low-frequency pulses can be mixed with the sound source, or the time/frequency characteristics of the sound can be modified in an inaudible way, in a manner similar to the methods used to compress MPEG-3 recordings.
After the processing shown in Figure 29 has been carried out, the position of each telephone is known, and the PC 55 can store this data alongside the address data for each telephone. Once the position data has been determined, it is refined at step S47 of Figure 27; this processing is shown in Figures 30, 31, 32 and 33.
Referring to Figure 30, at step S59 the PC 55 computes a spatial sound map, which determines the expected sound at each point in the space. Once such a spatial sound map has been determined, the following processing is carried out for each mobile telephone in turn. Using the position data generated as described above, the sound to be played through the loudspeaker of that mobile telephone is determined (step S60), and at step S61 the determined sound is supplied to the mobile telephone. Step S62 determines whether further telephones remain to be processed; if so, processing returns to step S60, otherwise processing ends at step S63.
While the processing of Figure 30 is being carried out, the processing of Figure 31 is carried out concurrently for each telephone in turn. At step S64 the telephone being processed is silenced, so that it temporarily stops emitting any sound. The mobile telephone then uses its microphone to capture the sound in its vicinity, and the captured sound is transmitted to the PC 55, which receives it at step S65. The received sound is correlated with the spatial sound map computed at step S59 (Figure 30), and the result of this correlation is used to refine the data stored by the PC 55 indicating the spatial position of the telephone. Step S68 determines whether further telephones remain to be processed; if so, processing returns to step S64, otherwise processing ends at step S69. This processing is carried out periodically to ensure that accurate position data is maintained.
The processing of Figure 32 is carried out concurrently with the processing of Figures 30 and 31. At step S70 the PC 55 receives the sounds detected by the microphones 63, 64, 65, 66. At step S71 the received sounds are correlated with the spatial sound map computed at step S59 of Figure 30, and this correlation is used to determine a map indicating the relative volume at each of the points in the space occupied by the telephones (step S72).
Typically, the loudspeakers of some mobile telephones are louder than others and, in addition, some regions will contain more mobile telephones than others. It may therefore be necessary to adjust the volume at which each mobile telephone plays in order to achieve the required soundscape. To this end, the actual volume of the sound produced by all the telephones in each region must be calculated, to produce a volume map for that region.
In a simple case the volume map can be produced by arranging for the mobile telephones in a particular region to produce a fixed tone. The volume of the sound produced by these fixed tones is then measured from a number of known positions (using the fixed microphones or, alternatively, the microphones of other mobile telephones). By comparing the measured sound with the volume expected from a loudspeaker of known position and known power, the effective power at that position can be determined. Carrying out this processing for each region in turn produces the volume map.
Although the method described above works well, it is relatively disruptive and is therefore not preferred in some embodiments of the invention. A more complex technique, based on band-pass filtering or correlation of the sound received over the whole region, can therefore be used. In much the same way as the signal from each telephone is extracted from the fixed loudspeaker outputs in the localisation method described above, the signals from the fixed microphones can be filtered or examined for the sound generated in each region, and the signal strength generated in each region can then be compared with the expected strength described above to determine the output power in that particular region.
Figure 33 illustrates further processing used to refine the calibration. This processing is carried out for each phone in turn, and corresponds to step S48 of Figure 27. At step S73 the phone is muted, so that it outputs no sound. At step S74 the PC 55 receives the sound captured by the phone's microphone. At step S75 the received data is combined with the position data for that phone. These data are used to calculate the orientation of the mobile phone at step S76, and its gain at step S77. Step S78 determines whether there are further phones to be processed; if so, processing returns to step S73, otherwise processing ends at step S79.
The calculation of the gain of a particular phone's microphone was mentioned above. Once the position of the mobile phone has been calculated, the volume of the signal received by that mobile phone can be compared with the signal that a reference receiver would be expected to receive at that known location. This allows the gain of the mobile phone's microphone to be calculated. In other words, if a microphone of reference sensitivity would be expected to receive a signal of intensity 50 at that known location, and the signal strength actually received is 35, the mobile phone can be considered to have a sensitivity of 70%. If signals from that mobile phone are used later (for example when refining the volume map or the positions), this known gain value can be used to adjust the received figures, converting the received values into the values that would be expected from a microphone of reference sensitivity.
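The gain calculation can be sketched as follows (Python); the figures in the comments reproduce the example above, and the function names are illustrative only.

```python
def microphone_gain(received_strength, expected_strength):
    """E.g. expected 50, received 35 -> gain 0.7 (70% sensitivity)."""
    return received_strength / expected_strength

def normalise_reading(raw_reading, gain):
    """Convert a later reading from this phone into the value that a
    microphone of reference sensitivity would have reported."""
    return raw_reading / gain

# Example from the text: microphone_gain(35, 50) == 0.7
```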
The determination of the orientation of each mobile phone was also mentioned above. If a mobile phone is known to be equidistant from two loudspeakers producing sound of equal volume, and the intensity of the signal received from one loudspeaker is higher than that from the other, it can be inferred that the microphone is facing the loudspeaker from which the stronger signal is received. Typically, taking similar readings from several loudspeakers will give a more accurate estimate of rotation. It should be noted that although orientation can be calculated in this way, a hand-held mobile phone may change orientation rapidly over time, so such information may not be of great value. However, in alternative embodiments in which the devices have a more fixed orientation, this level of calibration can allow directional, spatially organised sound generation.
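One possible way of forming such an estimate is sketched below (Python). The weighted-vector formulation is an assumption made for illustration only; it simply combines unit vectors towards each loudspeaker, weighted by the strength heard from that loudspeaker.

```python
import math

def estimate_orientation(phone_pos, speaker_readings):
    """speaker_readings: list of (speaker_position, received_strength).
    Returns a unit vector in the estimated facing direction."""
    acc = [0.0, 0.0, 0.0]
    for speaker_pos, strength in speaker_readings:
        d = [s - p for s, p in zip(speaker_pos, phone_pos)]
        norm = math.sqrt(sum(c * c for c in d)) or 1.0
        # Weight the unit vector towards each speaker by the strength heard.
        acc = [a + strength * c / norm for a, c in zip(acc, d)]
    total = math.sqrt(sum(c * c for c in acc)) or 1.0
    return [c / total for c in acc]
```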
Figure 34 illustrates the processing carried out by the PC 55, corresponding to step S49 of Figure 27, in which the mobile phones are used to produce the desired sound. At step S80 the desired spatial sound is calculated, and at step S81 this spatial sound map is combined with the expected volume map, so as to produce a modified spatial sound map at step S82. The following processing is then carried out for each phone in turn. The position of the mobile phone (as previously determined) is obtained. Using this position data, a lookup operation is carried out on the spatial sound map produced at step S82, so as to determine the sound to be output by that phone (step S83). The desired sound is then provided to that phone at step S84. Step S85 determines whether there are further phones to be processed; if so, processing returns to step S84, otherwise processing ends at step S86.
The processing described above with reference to Figures 28 to 34 is carried out by the PC 55. The processing carried out by one of the mobile phones 67, 68, 69, 70 is now described with reference to the flow chart of Figure 35. At step S87 the mobile phone connects to the PC 55, using processing of the type described above. The mobile phone then executes two processing streams in parallel. The first processing stream receives audio data from the PC 55 (step S88) and outputs the received audio data through the mobile phone's loudspeaker (step S89), so that the mobile phone, in combination with the other mobile phones, produces the three-dimensional soundscape. The second processing stream captures sound using the mobile phone's microphone (step S90) and sends it to the PC 55 (step S91); this stream therefore provides the PC 55 with data allowing the position data to be maintained and refined.
In the embodiments of the invention described above, the three-dimensional soundscape is produced by the central PC 55 determining the sound to be output from each phone and providing the appropriate audio data. In alternative embodiments of the invention, the phones can themselves determine which sound to output. Figure 36 illustrates such an embodiment.
Referring to Figure 36, at step S92 calibration data used to calibrate the mobile phone is downloaded. This calibration data can comprise data indicating the tones that the mobile phone is to produce during the calibration process, and can also comprise data indicating the sounds expected to be produced by other devices at different spatial locations. At step S93 the sounds produced by other mobile phones are received through the mobile phone's microphone, and at step S94 a correlation operation is carried out using the calibration data and the received sound. The correlation operation can be carried out as described above, although it should be noted that, given the relatively limited processing capacity of a mobile phone, correlation operations requiring relatively little computing power are generally preferred. Once such a correlation operation has been carried out, the position of the mobile phone can be determined at step S95.
Having carried out the above process, the mobile phone is configured to take part in the generation of a soundscape of the type described above. Accordingly, at step S96 audio data indicating the sound to be produced is downloaded. At step S97 the determined position data is used to process the received audio data and to determine the sound to be output by the mobile phone. The determined sound is then output at step S98.
It should be noted that although steps S96 to S98 are shown as taking place after steps S92 to S95, in some embodiments of the invention the processing of steps S96 to S98 is carried out in parallel with the processing of steps S92 to S95.
Having described embodiments of the invention using light and sound, addressing schemes suitable for use in embodiments of the invention are now described. As explained above (for example with reference to Figure 5), the control of the light-emitting elements is preferably handled hierarchically, each control element 6, 7, 8 preferably controlling the light-emitting elements within a predetermined part of the illuminated space. In other words, if a suitable addressing mechanism is used, each level of the hierarchy need only process part of the address. For example, a first part of the address can simply indicate one of the control elements; this is the only part of the address that the master controller PC 1 needs to process. The control element can then use a second part of the address, which identifies the individual light-emitting elements in detail, to instruct the precise light-emitting element. The addressing scheme is now described in more detail.
A spatial addressing system is currently preferred, in which light-emitting elements can be addressed on the basis of a spatial address; for example, an instruction can be given to turn on all lamps within a 10cm cube centred on the coordinate (12, -3, 7). Referring to Figure 37, it can be seen that a spatial address 75 can be converted into a plurality of native addresses 76, each native address being associated with a light-emitting element located at the position indicated by the spatial address.
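A minimal sketch of this conversion is given below (Python), assuming that a controller keeps a simple table of (position, native address) pairs for the elements it manages; the table representation, units and function name are illustrative only.

```python
def native_addresses_for_region(elements, centre, half_side):
    """elements: iterable of ((x, y, z), native_address) pairs.
    Returns the native addresses of all elements inside the axis-aligned
    cube of side 2 * half_side centred on `centre`."""
    cx, cy, cz = centre
    return [addr for (x, y, z), addr in elements
            if abs(x - cx) <= half_side
            and abs(y - cy) <= half_side
            and abs(z - cz) <= half_side]

# e.g. "turn on all lamps within a 10cm cube centred on (12, -3, 7)"
# (coordinates in metres): native_addresses_for_region(table, (12, -3, 7), 0.05)
```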
It should further be noted that currently preferred embodiments of the invention use IPv6 addresses. As shown in Figure 38, an IPv6 address is 128 bits (16 bytes) long and is typically made up of two logical parts: a 64-bit network prefix 77 and a 64-bit host addressing suffix 78.
The 64-bit host addressing suffix 78 is not interpreted outside the network indicated by the 64-bit network prefix 77, so the host addressing suffix 78 can be used to encode information directly relevant to the network indicated by the network prefix 77. The 64-bit suffix can be used to encode three-dimensional location data, as shown in Figure 39, where it can be seen that the 64-bit host addressing suffix comprises a first component 79 indicating an x coordinate, a second component 80 indicating a y coordinate and a third component 81 indicating a z coordinate. Each of the three components comprises 21 bits, and 1 bit is unused. Providing 21 bits for each of the x, y and z coordinates allows individual cubes of 1 cubic millimetre to be addressed within a cube of approximately 2km. Equally, this addressing scheme can provide three-dimensional addressing of the earth, allowing many useful resolution mappings, for example 1 metre resolution in longitude and latitude with 1 metre height resolution up to 10,000 metres, or 10 metre height resolution up to 100,000 metres, which is sufficient to locate, for example, any aircraft or ship.
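A sketch of such an encoding is given below (Python). The ordering of the three fields within the suffix, and the placement of the unused bit, are assumptions made for illustration.

```python
def encode_suffix(x, y, z):
    """Pack x, y, z (each 0 <= value < 2**21) into a 64-bit host suffix."""
    for c in (x, y, z):
        if not 0 <= c < 2 ** 21:
            raise ValueError("coordinate out of range for 21 bits")
    return (x << 43) | (y << 22) | (z << 1)   # lowest bit left unused

def decode_suffix(suffix):
    mask = 2 ** 21 - 1
    return ((suffix >> 43) & mask, (suffix >> 22) & mask, (suffix >> 1) & mask)

# With 21 bits per axis at 1mm resolution, the addressable cube is about
# 2**21 mm = 2.1 km on each side, as noted above.
```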
Such addressing is considerably finer-grained than most uses require. In practice, smaller, non-cubic addressing can be used, and the coordinate frame for such applications is usually defined relative to some point in the display, or to the position of the camera used for the original calibration.
In an alternative embodiment, the host addressing suffix 78 can be divided into two parts of 32 bits each, indicating two-dimensional position data. Indeed, it will be appreciated that the network indicated by the network prefix 77 can interpret the host addressing suffix 78 in any convenient manner; the host addressing suffix 78 can therefore represent, for example, a combination of spatial position, time and direction, and in some embodiments can even represent a combination of a book ISBN and a page number.
Figure 40 illustrates a two-dimensional encoding of longitude and latitude, in which the host addressing suffix 78 comprises two parts. A first part 82 comprises 31 bits and represents longitude, and a second part 83 comprises 32 bits and represents latitude; there is also a third part comprising an unused bit. Such an addressing scheme provides addresses associated with 1 square centimetre areas of the earth's surface. It should be noted that the second part 83, representing latitude, comprises one bit more than the first part 82; this is because the circumference of the earth is approximately 40,000km, while the distance from the north pole to the south pole is 20,000km. The addressing scheme shown in Figure 40 allows a network to be represented in which a virtual web server is provided for each point on the earth's surface, such web servers providing data such as altitude and land use. Alternatively, such web servers could provide geospatial URIs for semantic web applications.
Referring to Figure 41, IPv6 addresses of the type described above can be transmitted between a first computer 84 and a second computer 85 via the Internet 86. Although the host addressing suffix of such an address can represent spatial information, the Internet 86 routes using only the network prefix 77, so addresses of the type described above can be transmitted transparently across the Internet 86.
When the address arrives at the network indicated by the network prefix 77, the 64-bit suffix is converted into local, non-spatial addresses. This conversion is shown schematically in Figure 37.
In alternative embodiments of the invention, suitably configured routers and network controllers within the network can themselves interpret IPv6 addresses representing spatial information, such routers and network controllers being aware of the way in which spatial addressing is carried out. Embodiments of such networks operate by maintaining ranges of spatial addresses in the routers, so that broadcast and multicast messages can be controlled so as to be delivered only to the relevant network nodes. Figure 42 shows such an embodiment of the invention.
Referring to Figure 42, it can be seen that a first router 87, a second router 88 and a third router 89 are connected to a network 90, and that data addressed to 2001:630:80:A000:FFFF:5856:4329:1254 is transmitted on the network. This data and its associated address are delivered to the three routers 87, 88, 89. As described above, the address encapsulates spatial data. Assuming that the routers 87, 88 have been configured spatially, they can determine that their respective connected devices 91, 92 do not require the data associated with this spatial location, and accordingly the routers 87, 88 do not forward the data. In contrast, the router 89 determines that the component 93 connected to it does require the data for this spatial location, and accordingly the router 89 forwards the data to the component 93.
It should be noted that the operation of the invention shown in Figure 42 requires the use of a routing protocol with spatial knowledge. Such a protocol can include transforming data from one coordinate system to another.
One such spatial routing protocol usable in embodiments of the invention associates each router 87, 88, 89 with a three-dimensional bounding box containing all of the devices connected to that router. For a router relatively high in the hierarchy, a bounding box is calculated which contains the bounding boxes of all of the connected routers. In such a system, a spatial address can therefore be compared with a router's bounding box; if the addressed region lies within that bounding box, the message is passed on to the lower-level routers, where the process is repeated.
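A sketch of this bounding-box filtering is given below (Python). The region and box representations, and the attributes of the router object, are illustrative assumptions rather than part of any particular protocol.

```python
def overlaps(region, box):
    """Each argument is ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    (rmin, rmax), (bmin, bmax) = region, box
    return all(rmin[i] <= bmax[i] and rmax[i] >= bmin[i] for i in range(3))

def route(message_region, router):
    """router: object with .bounding_box, .children (sub-routers) and
    .devices; recurses down the hierarchy only where the region overlaps."""
    if not overlaps(message_region, router.bounding_box):
        return []                       # drop the message at this router
    reached = list(router.devices)
    for child in router.children:
        reached.extend(route(message_region, child))
    return reached
```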
Using a high-resolution spatial addressing scheme as described above does present some problems. Volumetric data sets can be very large, so, given the limits of widely available computing power, it is not always possible to present a whole scene by addressing each constituent volume separately. For example, generating a black/white voxel map at cubic millimetre resolution for a volume of 10 cubic metres would take 12 days at a transfer rate of 1Mbit per second. In addition, in the case of light-emitting elements, the distance between lamps may be much greater than the resolution; an instruction to turn on the light-emitting elements within a particular 1mm cube may therefore have no effect, since it is unlikely that a light-emitting element is located within that 1mm cube.
The present invention overcomes some of these problems in a number of ways. For example, different resolutions can be used for different lighting networks, or larger amounts of descriptive data can be sent, describing the scene as mark-up similar to X3D or as some other form of three-dimensional (solid) model.
However, some embodiments of the present invention use a hierarchical data structure, creating a multi-resolution encoding within a single spatial address. This is based on the fact that the number of bits required for an address falls rapidly as the resolution is reduced.
For example, a hierarchical data structure can be used to encode position where 8 bits are used to represent a position on a 1 metre rule (a one-dimensional spatial address). In this 8-bit encoding, the number of '1's before the first '0' produces a 'level indicator'. Seven '1's indicate the highest level (the whole rule), the next level down is indicated by six '1's followed by a '0', and the lowest level (level 8) is indicated by a single leading '0'. The bits not used to indicate the level are used for the actual address of the required range. Using this hierarchy, the most precise way of indicating a position is to use a spatial address beginning with '0'. This allows a range of approximately 8mm to be specified:
1000mm / 2^7 ≈ 8mm
Similarly, leading bits of '10' mean that the remaining 6 bits can represent a 16mm range, '110' gives a 32mm range, and so on. This means that any individual 8mm section of the rule can be represented, or any 16mm section, or the first or second half (a precision of about 500mm), or simply the whole rule. This is shown in Table 2 below, and a short decoding sketch follows the table.
Leading bits | Position bits required | Number of addressable positions | Precision / mm
0 | 7 | 128 | 8
10 | 6 | 64 | 16
110 | 5 | 32 | 32
1110 | 4 | 16 | 63
11110 | 3 | 8 | 125
111110 | 2 | 4 | 250
1111110 | 1 | 2 | 500
11111110 | 0 | 1 | 1000
Table 2
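A sketch decoding these 8-bit one-dimensional addresses is given below (Python); it follows Table 2 directly, with the nominal 8mm segment corresponding to 1000mm / 2^7. The function name is illustrative only.

```python
def decode_1d(address):
    """address: integer 0..255. Returns (segment_start_mm, segment_size_mm)."""
    bits = format(address, '08b')
    ones = len(bits) - len(bits.lstrip('1'))   # leading 1s before the first 0
    if ones == 8:
        raise ValueError("no level terminator present")
    position_bits = 7 - ones                   # bits left after the level marker
    index = int(bits[ones + 1:], 2) if position_bits else 0
    size_mm = 1000 / (2 ** position_bits)      # ~8mm, 16mm, ... 1000mm
    return index * size_mm, size_mm

# decode_1d(0b00000000) -> (0.0, 7.8125)     i.e. the first ~8mm segment
# decode_1d(0b10000001) -> (15.625, 15.625)  i.e. the second ~16mm segment
# decode_1d(0b11111110) -> (0.0, 1000.0)     i.e. the whole rule
```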
For a three-dimensional system, the equivalent of this spatial addressing method is to use a data structure known as an octree.
An octree is a data structure in which each node represents a cubic volume, each node representing one eighth of its parent node. Figure 43 shows such a structure schematically. It can be seen that a top-level volume 94 comprises 8 component volumes 95, and each of these 8 component volumes itself comprises 8 component volumes 96.
For a 64-bit encoding (that is, an encoding that can be accommodated in the host addressing suffix of an IPv6 address), the number of '1's before the first '0' again produces the 'level indicator'. Twenty-one '1's indicate the highest level; that is, the cube 94 is addressed as a whole. The next level down is indicated by twenty '1's followed by a '0'; this level provides 3 bits, which can be used to represent a volume 95 in terms of x, y and z values. Figure 43 shows such values alongside the volumes 95.
The next level is indicated by nineteen '1's followed by a '0'. This level provides 6 bits, which can be used to address the volumes 96 independently, although their further subdivisions cannot be addressed independently.
At the lowest level (level 21), individual voxels can be addressed. This level is indicated by a single leading '0'. Such a lowest-level address is the same as the address shown in Figure 39, with the remaining bit used to indicate the level of the address.
The levels of the addressing hierarchy, and the associated resolutions, are shown in Table 3 below:
Number of leading 1s | Leading bits | Bits per x, y, z coordinate | Position bits required | Segments representable per x, y, z | Total addressable volumetric regions | Resolution
0 | 0 | 21 | 63 | 2^21 | 8^21 | 2^0
1 | 10 | 20 | 60 | 2^20 | 8^20 | 2^1
2 | 110 | 19 | 57 | 2^19 | 8^19 | 2^2
3 | 1110 | 18 | 54 | 2^18 | 8^18 | 2^3
4 | 1111 0 | 17 | 51 | 2^17 | 8^17 | 2^4
5 | 1111 10 | 16 | 48 | 2^16 | 8^16 | 2^5
6 | 1111 110 | 15 | 45 | 2^15 | 8^15 | 2^6
7 | 1111 1110 | 14 | 42 | 2^14 | 8^14 | 2^7
8 | 1111 1111 0 | 13 | 39 | 2^13 | 8^13 | 2^8
9 | 1111 1111 10 | 12 | 36 | 2^12 | 8^12 | 2^9
10 | 1111 1111 110 | 11 | 33 | 2^11 | 8^11 | 2^10
11 | 1111 1111 1110 | 10 | 30 | 2^10 | 8^10 | 2^11
12 | 1111 1111 1111 0 | 9 | 27 | 2^9 | 8^9 | 2^12
13 | 1111 1111 1111 10 | 8 | 24 | 2^8 | 8^8 | 2^13
14 | 1111 1111 1111 110 | 7 | 21 | 2^7 | 8^7 | 2^14
15 | 1111 1111 1111 1110 | 6 | 18 | 2^6 | 8^6 | 2^15
16 | 1111 1111 1111 1111 0 | 5 | 15 | 2^5 | 8^5 | 2^16
17 | 1111 1111 1111 1111 10 | 4 | 12 | 2^4 | 8^4 | 2^17
18 | 1111 1111 1111 1111 110 | 3 | 9 | 2^3 | 8^3 | 2^18
19 | 1111 1111 1111 1111 1110 | 2 | 6 | 2^2 | 8^2 | 2^19
20 | 1111 1111 1111 1111 1111 0 | 1 | 3 | 2^1 | 8^1 | 2^20
21 | 1111 1111 1111 1111 1111 10 | 0 | 0 | 2^0 | 8^0 | 2^21
Table 3
In Table 3, the 'number of leading 1s' column (column 1) indicates the number of '1's in the address before the first '0'. The 'leading bits' column (column 2) shows the initial bits of the address that uniquely identify the level of the addressing hierarchy; these consist of the number of '1's given in column 1, followed by a single '0'. The 'bits per x, y, z coordinate' column (column 3) gives the number of bits used for a single coordinate; because each level of the hierarchy has a different resolution, more or fewer bits are needed to store the x, y, z coordinates. The 'position bits required' column (column 4) is three times the number in column 3, since three coordinates are needed to address a volumetric region at each hierarchical level. Each level of the hierarchy contains a different number of cubic regions. The 'segments representable per x, y, z' column (column 5) indicates how many of these cubic regions fit along a single dimension: for example, in Figure 43, only one cube fits in the x direction at the highest level, but the level below has 2 cubes in the x direction, and the next level 4. The 'total addressable volumetric regions' column (column 6) gives the total number of cubes that can be represented at each level of the hierarchy; for example, in Figure 43 there is 1 cube at the highest level, 8 at the second level and 64 at the next level. This column is simply the value given in column 5 (segments representable per x, y, z) raised to the third power. The 'resolution' column (column 7) gives the side length of the cubes addressed at each level, expressed relative to the smallest addressable region; that is, the lowest level has 'size' 1. The physical size of these regions, and indeed whether they map uniformly and linearly onto physical space, depends on the precise application. For example, if the scheme is used for relatively large-scale geographical addressing, x and y may be longitude and latitude and the z direction may be height, in which case the precision of each region in metres will vary with position.
Using the addressing scheme described above, a message can be addressed to any octree cube, from a single voxel up to the whole space.
For example, an instruction can be sent to light all of the light-emitting elements in the volume 11111111 11111111 11100000 00000000 00000000 00000000 00000000 00011010. The nineteen '1's at the start of the address indicate the level. As shown in the table above, 2 bits (that is, 2^2 = 4 segments) are used to encode the range in each of the x, y and z directions. The last 6 bits of the address (01, 10, 10) indicate the x, y and z coordinates of the volume.
This will address all voxels in the following address ranges:
2^19 ≤ x < 2^20 (position 01, resolution 2^19 voxels)
2^20 ≤ y < 2^20 + 2^19 (position 10, resolution 2^19 voxels)
2^20 ≤ z < 2^20 + 2^19 (position 10, resolution 2^19 voxels)
Looking at these ranges in more detail, it should be noted that the nineteen leading '1's indicate that the addressed volume is 2^19 times the width of a basic voxel. The encoded x coordinate is binary 01, and therefore refers to the region with x coordinates between 1 × 2^19 and 2 × 2^19, that is from 0 1000 0000 0000 0000 0000 to 0 1111 1111 1111 1111 1111.
Compared with addressing each individual voxel in the range separately, far less data needs to be transmitted when the octree is used.
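A sketch decoding a 64-bit octree address of this kind is given below (Python). The placement of the coordinate bits at the end of the address is an assumption consistent with the worked example above; the printed result reproduces the x, y and z ranges just given.

```python
def decode_octree(address):
    """address: 64-bit integer. Returns (level_ones, (x, y, z), cube_size),
    where cube_size is in units of the smallest voxel."""
    bits = format(address, '064b')
    n = len(bits) - len(bits.lstrip('1'))      # number of leading 1s (0..21)
    coord_bits = 21 - n
    cube_size = 2 ** n
    if coord_bits == 0:
        return n, (0, 0, 0), cube_size         # the whole space
    tail = bits[-3 * coord_bits:]              # x, y, z indices at the end
    x, y, z = (int(tail[i * coord_bits:(i + 1) * coord_bits], 2)
               for i in range(3))
    # Convert region indices into voxel coordinates of the region origin.
    return n, (x * cube_size, y * cube_size, z * cube_size), cube_size

# The worked example: 19 ones, a terminating 0, 38 unused zeros, then 01 10 10.
addr = int('1' * 19 + '0' + '0' * 38 + '011010', 2)
print(decode_octree(addr))   # -> (19, (524288, 1048576, 1048576), 524288)
```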
An alternative mapping (still using the octree data structure) keeps the initial bit positions of the x, y and z coordinates fixed, and uses the final bits to determine the level. This has advantages for bounding-box filtering at routers. For example, the x, y, z position described above would instead be encoded as 01000000 00000000 00000100 00000000 00000000 00100111 11111111 11111111.
At low resolutions these compact mappings have enough 'spare' bits to allow various other shapes, rotations or offset regions to be included within the same address range.
The foregoing description relates to the addressing of regions of space. A message sent to such a spatial address will usually carry some payload; for example, it may comprise a message of the form "turn on all lamps in this region" or "all lamps in this region should turn blue".
It will be appreciated that the present invention is applicable to a relatively wide range of signal source sizes, allowing devices of the present invention to be reduced to the micron or nanometre scale. Such small-scale devices create the ability to develop, deploy, calibrate and control large arrays of micron- or nano-scale signal sources of the present invention. For example, such small-scale signal sources can be used to construct displays, in the manner of cathode ray tubes, LCDs and plasma screens. It will be appreciated that, using such miniaturised signal sources, such display apparatus can be deployed in a self-organising manner. For example, it has been shown that miniaturised signal sources can be sprayed from a canister onto a supporting structure (for example a wall) and then calibrated using the techniques of the present invention. It will be appreciated that, in such self-organising applications, the small signal sources can draw power from a substrate deposited before, or together with, the signal sources, the substrate itself being connected to a power supply.
Various embodiments of the present invention have been described above by way of example. It will be appreciated that the features of the various described embodiments can be combined in many different ways, and such combinations will be apparent to those skilled in the art. It should be noted that the description given above is not to be considered limiting, but merely illustrative; modifications will be apparent to those skilled in the art, and such modifications are within the spirit and scope of the present invention. In particular, it will be appreciated that although features of the invention have been described with reference to light-emitting elements, some such features are equally applicable to any suitable device. For example, although a scheme for addressing light-emitting elements has been described, such an addressing method can similarly be used for other devices.

Claims (127)

1. A method of presenting an information signal using a plurality of signal sources, the plurality of signal sources being located within a predetermined space, the method comprising:
receiving a respective positioning signal from each of said signal sources;
generating, based upon said positioning signals, location data indicative of the locations of said plurality of signal sources;
generating output data for each of said plurality of signal sources, based upon said information signal and said location data; and
transmitting said output data to said signal sources to present said information signal.
2. A method according to claim 1, wherein generating the location data further comprises, for each signal source:
associating said location data with identification data identifying said signal source.
3. A method according to claim 2, wherein associating said location data with identification data identifying said signal source comprises:
generating said identification data from the positioning signal received from each signal source.
4. A method according to claim 3, wherein each of said positioning signals comprises a plurality of temporally spaced pulses.
5. A method according to claim 4, wherein generating said identification data for each signal source comprises:
generating said identification data based upon said plurality of temporally spaced pulses.
6. A method according to any preceding claim, wherein each of said positioning signals encodes an identification code, the identification code uniquely identifying one signal source among said plurality of signal sources.
7. A method according to claim 6, wherein each of said positioning signals is a modulated form of the identification code of the respective signal source.
8. A method according to claim 7, wherein each of said positioning signals is a binary phase shift keyed or non-return-to-zero modulated form of the identification code of the respective signal source.
9. A method according to claim 2 or any claim dependent thereon, wherein each of said signal sources has an associated address, and the identification data of each signal source has a predetermined relationship with the respective address.
10. A method according to claim 9, wherein the identification data of each signal source is the address of that signal source.
11. A method according to any preceding claim, wherein receiving each said positioning signal comprises: receiving a plurality of temporally spaced emissions of electromagnetic radiation.
12. A method according to claim 11, wherein said electromagnetic radiation is visible light.
13. A method according to claim 12, wherein said electromagnetic radiation is infrared or ultraviolet radiation.
14. A method according to any preceding claim,
wherein receiving a positioning signal from each signal source comprises:
receiving the positioning signal transmitted by each said signal source at a signal receiver, the signal receiver being configured to generate two-dimensional position data locating said signal source within a detection frame;
and wherein generating said location data comprises:
generating location data based upon said two-dimensional position data.
15. A method according to claim 14, wherein said detection frame defines an array of pixels, and said signal receiver generates data indicating at least one pixel within said array of pixels.
16. A method according to claim 14 or 15, wherein receiving the positioning signal transmitted by each said signal source comprises:
receiving said positioning signal using a camera,
wherein said positioning signal comprises an emission of electromagnetic radiation detectable by the camera.
17. A method according to claim 16, wherein receiving said positioning signal using a camera comprises:
receiving said positioning signal using a charge-coupled device (CCD) sensitive to the electromagnetic radiation.
18. A method according to claim 16 or 17 when dependent on claim 2, wherein generating said location data further comprises: grouping frames generated by said camera in time so as to generate said identification data.
19. A method according to claim 18, wherein grouping a plurality of said frames in time to generate said identification data comprises: processing regions of said frames that lie within a predetermined distance of one another.
20. A method according to any one of claims 14 to 19, wherein receiving said positioning signals further comprises:
receiving the positioning signal transmitted by each said signal source at a plurality of signal receivers, each of said signal receivers being configured to generate, within a respective detection frame, two-dimensional position data locating said signal source.
21. A method according to claim 20, wherein generating said location data further comprises: combining the two-dimensional position data generated by said plurality of signal receivers to generate said location data.
22. A method according to claim 21, wherein combining said two-dimensional position data comprises: combining said two-dimensional position data by triangulation or trilateration.
23. A method according to any preceding claim, wherein each said signal source is an electromagnetic element configured to present said information signal by emitting electromagnetic radiation.
24. A method according to claim 23, wherein transmitting said output data to said signal sources to present said information signal comprises:
transmitting instructions causing some of said electromagnetic elements to emit electromagnetic radiation.
25. A method according to claim 24, wherein said electromagnetic elements are light-emitting elements, and said instructions cause said light-emitting elements to emit visible light.
26. A method according to claim 25, wherein said light-emitting elements can be lit at any of a plurality of predetermined intensities, and said instructions specify the intensity at which each light-emitting element is to be lit.
27. A method according to claim 25 or 26, wherein the intensity of the electromagnetic radiation emitted by each light-emitting element is modulated so as to represent the respective positioning signal while said information signal is presented.
28. A method according to claim 25, 26 or 27, wherein said light-emitting elements can be lit so as to display any of a plurality of predetermined colours, and said instructions specify a colour for each light-emitting element.
29. A method according to claim 28, wherein the hue of the light emitted by each light-emitting element is modulated so as to represent the respective positioning signal while said information signal is presented.
30. A method according to any one of claims 1 to 24, wherein each of said signal sources is a reflector of electromagnetic radiation.
31. A method according to claim 30, wherein each of said signal sources is a reflector of electromagnetic radiation having a controllable reflectivity.
32. A method according to claim 30, wherein each of said signal sources comprises a reflective surface and a variable opacity element, the variable opacity element being configured to control the reflectivity of said signal source.
33. A method according to any preceding claim, wherein each of said signal sources comprises a sound source, and
transmitting said output data to said signal sources to present said information signal comprises: transmitting instructions causing some of said sound sources to output sound data so as to generate a predetermined soundscape.
34. A method according to any preceding claim, wherein receiving said positioning signals comprises: receiving sound signals from said plurality of signal sources.
35. A method according to any preceding claim, wherein receiving said positioning signals comprises:
transmitting sound signals to at least some of said plurality of signal sources; and
receiving from said signal sources data indicating the sound signals received at said at least some of said plurality of signal sources.
36. A method according to claim 35, wherein transmitting sound signals to at least some of said plurality of signal sources comprises: transmitting a plurality of sound signals to each of said at least some of said plurality of signal sources, said plurality of sound signals being transmitted from different spatial locations.
37. A method according to claim 36, wherein each of said plurality of sound signals is different.
38. A method according to claim 37, wherein generating said location data comprises:
processing the data indicating the sound signals received at said at least some of said plurality of signal sources, so as to generate said location data.
39. A method according to claim 38, wherein processing said data comprises:
filtering the received data so as to generate components derived from the plurality of different sound signals transmitted to said signal sources.
40. A method according to claim 39, wherein processing said data further comprises:
generating said location data based upon the relative strengths of said components.
41. A method according to claim 38 or 39, wherein said plurality of sound signals are transmitted at predetermined times, and
processing said data comprises: determining the time difference between the time at which each sound signal was transmitted and the time at which that sound signal was received at a signal source.
42. A carrier medium carrying computer readable program code configured to cause a computer to carry out a method according to any preceding claim.
43. A computer apparatus for presenting an information signal using a plurality of signal sources, the apparatus comprising:
a program memory storing processor readable instructions; and
a processor configured to read and execute instructions stored in said program memory;
wherein said processor readable instructions comprise instructions controlling the processor to carry out a method according to any one of claims 1 to 41.
44. Apparatus for presenting an information signal using a plurality of signal sources, the plurality of signal sources being located within a predetermined space, the apparatus comprising:
a receiver configured to receive a respective positioning signal from each of said signal sources;
a processor configured to generate, based upon said positioning signals, location data indicative of the locations of said plurality of signal sources, and to generate output data for each of said plurality of signal sources based upon said information signal and said location data; and
a transmitter configured to transmit said output data to said signal sources to present said information signal.
45. Apparatus according to claim 44, wherein said processor is configured to associate said location data with identification data identifying said signal sources.
46. Apparatus for presenting an information signal using a plurality of signal sources, the apparatus comprising:
a plurality of signal sources located within a predetermined space;
a receiver configured to receive a respective positioning signal from each of said signal sources;
a processor configured to generate, based upon said positioning signals, location data indicative of the locations of said plurality of signal sources, and to generate output data for each of said plurality of signal sources based upon said information signal and said location data; and
a transmitter configured to transmit said output data to said signal sources to present said information signal.
47. Apparatus according to claim 46, wherein each of said plurality of signal sources is configured to generate a respective positioning signal.
48. Apparatus according to claim 47, wherein each of said plurality of signal sources stores address data and is configured to generate its respective positioning signal based upon said address data.
49. Apparatus according to claim 46, 47 or 48, wherein each of said signal sources is a source of electromagnetic radiation.
50. Apparatus according to claim 49, wherein each of said signal sources is a source of visible light.
51. Apparatus according to any one of claims 46 to 48, wherein each of said signal sources is a sound source.
52. the method for a location signal receiver in predetermined space, described method comprises:
The data that the signal value that reception receives described signal receiver is indicated;
The data and a plurality of expected signal value that are received are compared, and each expected signal value is illustrated in the signal of the respective point place expectation in the interior a plurality of predetermined points of described predetermined space; And
Based on described comparison, locate described signal receiver.
53. method as claimed in claim 52, wherein, described signal receiver is the signal transmitting and receiving machine.
54. method as claimed in claim 53 also comprises: provide signal to described signal transmitting and receiving machine.
55., also comprise as each described method in the claim 52 to 54:
Send prearranged signals to described signal receiver, make the signal that receives at each described signal receiver be based on described prearranged signals.
56., wherein, receive the data that the signal value of described signal receiver reception is indicated and comprise: receive the data that the voice signal of described signal receiver reception is indicated as each described method in the claim 52 to 55.
57. A carrier medium carrying computer readable program code configured to cause a signal receiver to carry out a method according to any one of claims 52 to 56.
58. A signal receiver for generating position information, the signal receiver comprising:
a program memory storing processor readable instructions; and
a processor configured to read and execute instructions stored in said program memory;
wherein said processor readable instructions comprise instructions controlling the processor to carry out a method according to any one of claims 52 to 56.
59. A method of locating and identifying a signal source, the method comprising:
receiving a signal transmitted by said signal source at a signal receiver, the signal receiver being configured to generate, within a detection frame, two-dimensional position data locating said signal source;
generating position data based upon said two-dimensional position data;
processing the received signal, the received signal comprising a plurality of temporally separated signal transmissions; and
determining an identification code of the located signal source from the plurality of temporally separated signal transmissions received.
60. A method according to claim 59, wherein said plurality of temporally separated signal transmissions constitute a modulated form of the identification code of the signal source.
61. A method according to claim 60, wherein said plurality of temporally separated signal transmissions constitute a binary phase shift keyed or non-return-to-zero modulated form of the identification code of the signal source.
62. A method according to claim 59, 60 or 61, wherein said signal source has an associated address, and the identification data of the signal source has a predetermined relationship with that address.
63. A method according to claim 62, wherein the identification data of each signal source is the address of that signal source.
64. A method according to any one of claims 59 to 63, wherein receiving the signal transmitted by said signal source at a signal receiver comprises: receiving a plurality of temporally spaced emissions of electromagnetic radiation.
65. A method according to claim 64, wherein said electromagnetic radiation is visible light.
66. A method according to claim 64, wherein said electromagnetic radiation is infrared or ultraviolet radiation.
67. A method according to any one of claims 59 to 66, wherein said detection frame defines an array of pixels, and said signal receiver generates data indicating at least one pixel within said array of pixels.
68. A method according to any one of claims 59 to 67, wherein receiving the signal transmitted by said signal source at a signal receiver comprises:
receiving said signal using a camera,
wherein said signal comprises an emission of electromagnetic radiation detectable by the camera.
69. A method according to claim 68, wherein receiving said signal using a camera comprises:
receiving said signal using a charge-coupled device (CCD) sensitive to the electromagnetic radiation.
70. A method according to claim 68 or 69, wherein generating said identification code comprises: grouping in time a plurality of frames captured by said camera so as to generate said identification code.
71. A method according to claim 70, wherein grouping a plurality of frames in time to generate said identification code comprises: processing regions of said frames that lie within a predetermined distance of one another.
72. A method according to any one of claims 59 to 71, wherein receiving said signal further comprises:
receiving the signal transmitted by said signal source at a plurality of signal receivers, each of said signal receivers being configured to generate, within a respective detection frame, two-dimensional position data locating said signal source.
73. A method according to claim 72, wherein generating said position data further comprises: combining the two-dimensional position data generated by said plurality of signal receivers to generate said position data.
74. A method according to claim 73, wherein combining said two-dimensional position data comprises: combining said two-dimensional position data by triangulation or trilateration.
75. A method according to any one of claims 59 to 73, wherein generating said position data comprises: generating three-dimensional position data from said two-dimensional position data.
76. A method according to claim 75, wherein generating three-dimensional position data from said two-dimensional position data comprises: using an assumed position in one of the three dimensions as a basis for said three-dimensional position data.
77. A method according to any one of claims 59 to 76, wherein said signal source is associated with a person or an item of equipment.
78. A carrier medium carrying computer readable program code configured to cause a computer to carry out a method according to any one of claims 59 to 77.
79. A computer apparatus for locating and identifying a signal source, the apparatus comprising:
a program memory storing processor readable instructions; and
a processor configured to read and execute instructions stored in said program memory;
wherein said processor readable instructions comprise instructions controlling the processor to carry out a method according to any one of claims 59 to 77.
80. Apparatus for locating and identifying a signal source, the apparatus comprising:
a receiver for receiving, at a signal receiver, a signal transmitted by said signal source, the signal receiver being configured to generate, within a detection frame, two-dimensional position data locating said signal source; and
a processor configured to generate position data based upon said position within said detection frame, and to process the received signal, the received signal comprising a plurality of temporally separated signal transmissions, the processor determining an identification code of the located signal source from the plurality of temporally separated signal transmissions received.
81. A method of generating a three-dimensional soundscape using a plurality of sound sources, the method comprising:
determining a desired sound pattern to be applied to a predetermined space;
determining the sound to be emitted from each sound source, said determination being carried out using data indicating the positions of the sound sources and using said sound pattern; and
transmitting sound data to each sound source.
82. A method according to claim 81, wherein determining the sound to be emitted from each sound source comprises: determining the sound output power of each sound source.
83. A method according to claim 82, wherein determining the sound output power of each sound source comprises:
receiving a sound signal output by each sound source; and
comparing said sound signal with a sound signal output by a sound source of reference power.
84. A method according to claim 81, 82 or 83, wherein determining the sound to be emitted from each sound source comprises: determining the orientation of each sound source.
85. A method according to any one of claims 81 to 84, further comprising: generating data indicating the positions of the sound sources.
86. A method according to claim 85, wherein generating data indicating the positions of the sound sources comprises:
receiving from each sound source data indicating its respective position.
87. A method according to claim 85, wherein each sound source further comprises a device for receiving sound data,
and wherein generating data indicating the positions of the sound sources comprises:
transmitting a sound signal to each sound source;
receiving data indicating the sound signal received by each sound source; and
processing the received data to generate said sound source positions.
88. A method according to claim 87, wherein transmitting a sound signal to each sound source comprises: transmitting a plurality of sound signals to each sound source, each sound signal being transmitted from a different spatial location.
89. A method according to claim 88, further comprising:
recording the transmission time of each of the plurality of transmitted sound signals;
receiving from each signal source data indicating the reception time of each sound signal; and
generating said position data based upon the time differences between said transmission times and the reception times indicated by said data.
90. A method according to claim 88 or 89, further comprising:
processing the received data so as to distinguish the plurality of transmitted sound signals received at one of said sound sources;
determining the signal strength of each transmitted sound signal received at each signal source; and
generating said position data based upon the determined signal strengths.
91. A carrier medium carrying computer readable program code configured to cause a computer to carry out a method according to any one of claims 81 to 90.
92. A computer apparatus for generating a three-dimensional soundscape using a plurality of sound sources, the apparatus comprising:
a program memory storing processor readable instructions; and
a processor configured to read and execute instructions stored in said program memory;
wherein said processor readable instructions comprise instructions controlling the processor to carry out a method according to any one of claims 81 to 90.
93. Apparatus for generating a three-dimensional soundscape using a plurality of sound sources, the apparatus comprising a processor configured to:
determine a desired sound pattern to be applied to a predetermined space;
determine the sound to be emitted from each sound source, said determination being carried out using data indicating the positions of the sound sources and using said sound pattern; and
transmit sound data to each sound source.
94. Apparatus according to claim 93, further comprising a plurality of said sound sources.
95. Apparatus according to claim 93 or 94, wherein each sound source is a sound transceiver.
96. Apparatus according to claim 95, wherein each sound source is a mobile phone.
97. A method of processing an address in an addressing system, the addressing system being configured to address a plurality of hierarchically arranged spatial elements, the method using addresses defined by a plurality of predetermined digits, the method comprising:
processing at least one predetermined digit of said address to determine the level of said hierarchy represented by said address; and
determining, from the processed address, the address of a spatial element at the determined hierarchical level.
98. A method according to claim 97, wherein processing at least one predetermined digit of said address to determine the hierarchical level comprises: processing at least one leading digit of said address.
99. A method according to claim 98, wherein processing at least one predetermined digit of said address to determine the hierarchical level comprises: processing a group of leading digits having a predetermined value.
100. A method according to claim 99, wherein the processing comprises: processing each digit of said address in turn, starting from a first end of said address, said group of leading digits comprising each processed digit having an equal value.
101. A method according to any one of claims 97 to 100, wherein said address is a binary number.
102. A method according to claim 101 when dependent on claim 100, wherein the or each leading digit has the value "1".
103. A method according to any one of claims 97 to 102, wherein determining the address of a spatial element comprises: processing at least one further digit of said address.
104. A method according to claim 103, wherein said at least one further digit to be processed is determined by the digits indicating said hierarchical level.
105. A method according to any one of claims 97 to 104, wherein said address is an IPv6 address.
106. A carrier medium carrying computer readable program code configured to cause a computer to carry out a method according to any one of claims 97 to 105.
107. A computer apparatus for carrying out address processing in an addressing system, the addressing system being configured to address a plurality of hierarchically arranged spatial elements, the apparatus comprising:
a program memory storing processor readable instructions; and
a processor configured to read and execute instructions stored in said program memory;
wherein said processor readable instructions comprise instructions controlling the processor to carry out a method according to any one of claims 97 to 105.
108. A method of allocating addresses to a plurality of devices, the method comprising:
causing each of a plurality of devices to select an address;
receiving data indicating the address selected by each device;
processing the data indicating the selected addresses to determine whether a single address has been selected by a plurality of devices; and
if a single address has been selected by a plurality of devices, instructing said plurality of devices to reselect addresses.
109. A method according to claim 108, wherein instructing said plurality of devices to reselect addresses comprises: transmitting data to each of said plurality of devices.
110. A method according to claim 109, wherein the data transmitted to each of said plurality of devices identifies said plurality of devices.
111. A method according to claim 110, wherein the data transmitted to each of said plurality of devices comprises data indicating allocated addresses, and each of said plurality of devices processes said data to determine whether its selected address is indicated as allocated.
112. A method according to claim 111, wherein an address selected by a plurality of devices is not indicated as allocated.
113. A method according to claim 112, wherein reselecting an address comprises: selecting an address that is not indicated as allocated.
114. A method according to any one of claims 108 to 113, wherein each of said plurality of devices is a light-emitting element.
115. A carrier medium carrying computer readable program code configured to cause a computer to carry out a method according to any one of claims 108 to 114.
116. A computer apparatus for allocating addresses to a plurality of devices, the apparatus being configured to:
cause each of a plurality of devices to select an address;
receive data indicating the address selected by each device;
process the data indicating the selected addresses to determine whether a single address has been selected by a plurality of devices; and
if a single address has been selected by a plurality of devices, instruct said plurality of devices to reselect addresses.
117. A method of allocating an address to a device, the method comprising:
receiving data causing an address to be selected;
receiving data indicating whether the selected address has been allocated; and
if the selected address is not indicated as allocated, reselecting an address.
118. A method for identifying device addresses of a plurality of devices, said addresses being set within an address range, the method comprising:
generating a plurality of subranges from said address range;
determining whether one of said plurality of devices has an address in a first subrange; and
processing at least one address in said first subrange if and only if one or more devices have an address in said first subrange.
119. A method according to claim 118, further comprising:
if said first subrange comprises a plurality of addresses, generating a plurality of subranges from said first subrange;
determining whether one of said plurality of devices has an address in a second subrange of said first subrange; and
processing at least one address in said second subrange if and only if one or more devices have an address in said second subrange.
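As a reading aid only, the recursive subrange search of claims 118 and 119 can be sketched in Python as follows. The probe callback stands in for the step of determining whether any device holds an address in a given subrange; it is an assumed helper, not terminology from the claims.

def identify_addresses(lo, hi, probe):
    """Return the addresses in the range [lo, hi) held by at least one device.

    probe(lo, hi) must return True if and only if one or more devices hold an
    address in [lo, hi) -- see the power-consumption based sketch further below.
    """
    if lo >= hi or not probe(lo, hi):
        return []                    # claim 118: an unoccupied subrange is never processed further
    if hi - lo == 1:
        return [lo]                  # a single occupied address has been identified
    mid = (lo + hi) // 2
    # Claim 119: split the occupied subrange again and recurse only into
    # halves that actually contain a device address.
    return identify_addresses(lo, mid, probe) + identify_addresses(mid, hi, probe)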
120. A method according to claim 118 or 119, wherein determining whether one of said plurality of devices has an address in a predetermined subrange comprises monitoring power consumption of said devices.
121. A method according to claim 120, further comprising issuing a command to devices having addresses in the predetermined subrange, and monitoring the power consumption.
122. A method according to claim 121, wherein monitoring the power consumption comprises monitoring current consumption.
123. A method according to any one of claims 118 to 122, wherein said devices are light-emitting elements.
124. A method according to claim 123 when dependent on claim 118 or 119, wherein determining whether one of said plurality of devices has an address in a predetermined subrange comprises commanding devices having addresses in said predetermined subrange to illuminate, and monitoring illumination of said devices.
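Again only as an illustration, a probe of the kind used in the sketch above could be built from the power-consumption test of claims 120 to 124. The helpers command_illuminate and read_supply_current, and the 50 mA threshold, are hypothetical names and values assumed for this sketch; the claims specify none of them.

def make_probe(command_illuminate, read_supply_current, threshold_amps=0.05):
    """Build a probe(lo, hi) usable with identify_addresses() above.

    command_illuminate(lo, hi) and read_supply_current() are hypothetical helpers:
    the first orders every element whose address lies in [lo, hi) to light,
    the second returns the measured supply current in amperes.
    """
    def probe(lo, hi):
        baseline = read_supply_current()
        command_illuminate(lo, hi)      # claim 124: command the subrange to illuminate
        delta = read_supply_current() - baseline
        command_illuminate(0, 0)        # empty range: switch the elements off again
        return delta > threshold_amps   # claims 120-122: a rise in current reveals occupancy
    return probe

A caller could then combine the two sketches, for example identify_addresses(0, 256, make_probe(command_illuminate, read_supply_current)).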
125. A method according to any one of claims 118 to 124, wherein, if it is determined that a particular address has been allocated to a plurality of light-emitting elements, further addresses are allocated to said plurality of light-emitting elements.
126. A carrier medium carrying computer readable program code, the program code being configured to cause a computer to carry out a method according to any one of claims 118 to 125.
127. A computer apparatus for identifying device addresses of a plurality of devices, said addresses being set within an address range, the apparatus being configured to:
generate a plurality of subranges from said address range;
determine whether one of said plurality of devices has an address in a first subrange; and
process at least one address in said first subrange if and only if one or more devices have an address in said first subrange.
CN200780015852.6A 2006-03-01 2007-03-01 Method and apparatus for signal presentation Active CN101485233B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
GB0604076A GB0604076D0 (en) 2006-03-01 2006-03-01 Method and apparatus for signal presentation
GB0604076.0 2006-03-01
US78112206P 2006-03-09 2006-03-09
US60/781,122 2006-03-09
PCT/GB2007/000708 WO2007099318A1 (en) 2006-03-01 2007-03-01 Method and apparatus for signal presentation

Publications (2)

Publication Number Publication Date
CN101485233A true CN101485233A (en) 2009-07-15
CN101485233B CN101485233B (en) 2013-01-16

Family

ID=36218902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200780015852.6A Active CN101485233B (en) 2006-03-01 2007-03-01 Method and apparatus for signal presentation

Country Status (2)

Country Link
CN (1) CN101485233B (en)
GB (1) GB0604076D0 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7446671B2 (en) * 2002-12-19 2008-11-04 Koninklijke Philips Electronics N.V. Method of configuration a wireless-controlled lighting system

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014522A (en) * 2009-09-04 2011-04-13 李志海 Network monitoring system and method and corresponding location label thereof
CN103583054A (en) * 2010-12-03 2014-02-12 弗兰霍菲尔运输应用研究公司 Sound acquisition via the extraction of geometrical information from direction of arrival estimates
US10109282B2 (en) 2010-12-03 2018-10-23 Friedrich-Alexander-Universitaet Erlangen-Nuernberg Apparatus and method for geometry-based spatial audio coding
US9396731B2 (en) 2010-12-03 2016-07-19 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Sound acquisition via the extraction of geometrical information from direction of arrival estimates
CN103583054B (en) * 2010-12-03 2016-08-10 弗劳恩霍夫应用研究促进协会 For producing the apparatus and method of audio output signal
US10091863B2 (en) 2013-09-10 2018-10-02 Philips Lighting Holding B.V. External control lighting systems based on third party content
CN105766062A (en) * 2013-09-10 2016-07-13 飞利浦灯具控股公司 External control lighting systems based on third party content
CN105766062B (en) * 2013-09-10 2019-05-28 飞利浦灯具控股公司 Based on control lighting system outside third party content
CN106255284A (en) * 2015-06-11 2016-12-21 哈曼国际工业有限公司 Automatically identifying and localization of wireless luminous element
CN106255284B (en) * 2015-06-11 2020-10-09 哈曼国际工业有限公司 Method for positioning light-emitting element, computing device and computer readable storage medium
WO2019036858A1 (en) * 2017-08-21 2019-02-28 庄铁铮 Method and system for controlling electronic device having smart identification function
US10856374B2 (en) 2017-08-21 2020-12-01 Tit Tsang CHONG Method and system for controlling an electronic device having smart identification function
CN113939712A (en) * 2019-06-06 2022-01-14 赛峰电子与防务公司 Method and apparatus for resetting a transport device inertial unit based on information transmitted by a transport device viewfinder
CN113939712B (en) * 2019-06-06 2023-11-28 赛峰电子与防务公司 Method and apparatus for resetting a transport inertial unit based on information transmitted by a transport viewfinder

Also Published As

Publication number Publication date
CN101485233B (en) 2013-01-16
GB0604076D0 (en) 2006-04-12

Similar Documents

Publication Publication Date Title
EP1989926B1 (en) Method and apparatus for signal presentation
US10952296B2 (en) Lighting system and method
CN101485233B (en) Method and apparatus for signal presentation
US10230466B2 (en) System and method for communication with a mobile device via a positioning system including RF communication devices and modulated beacon light sources
CN105592310B (en) Method and system for projector calibration
JP5059026B2 (en) Viewing environment control device, viewing environment control system, and viewing environment control method
CA2982946C (en) Mesh over-the-air (ota) driver update using site profile based multiple platform image
CN110476148B (en) Display system and method for providing multi-view content
US20170368459A1 (en) Ambient Light Control and Calibration via Console
CN107534486A (en) Signal decoding method, signal decoding apparatus and program
US10218440B2 (en) Method for visible light communication using display colors and pattern types of display
CN106464361A (en) Light-based communication transmission protocol
CN109076680A (en) Control lighting system
CN103168505A (en) A method and a user interaction system for controlling a lighting system, a portable electronic device and a computer program product
CN106574959A (en) Light based positioning
CN106255284A (en) Automatic identification and localization of wireless luminous elements
CN110011731A (en) System and method for tiled free-space optical transmission
CN106443585A (en) Accelerometer combined LED indoor 3D positioning method
WO2019214643A1 (en) Method for guiding autonomously movable machine by means of optical communication device
WO2019214642A1 (en) System and method for guiding autonomous machine
CN107707898B (en) Image distortion correction method for a laser projector, and laser projector
CN106301555A (en) Signal transmitting method and transmitter for light projection
CN108120435A (en) Plant-area positioning system and positioning method based on visible light
JP2009021847A (en) Viewing environment control device, system, and method
US10609365B2 (en) Light ray based calibration system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant