CN101485233B - Method and apparatus for signal presentation - Google Patents


Info

Publication number
CN101485233B
Authority
CN
China
Prior art keywords
signal
light
light-emitting element
data
address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200780015852.6A
Other languages
Chinese (zh)
Other versions
CN101485233A (en)
Inventor
Joseph Finney
Alan John Dix
Current Assignee
Lancaster University
Original Assignee
Lancaster University
Priority date
Filing date
Publication date
Application filed by Lancaster University
Priority claimed from PCT/GB2007/000708 (WO2007099318A1)
Publication of CN101485233A
Application granted
Publication of CN101485233B
Legal status: Active
Anticipated expiration

Landscapes

  • Optical Communication System (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A method and apparatus for presenting an information signal, such as an image signal or a sound signal, using a plurality of signal sources. The plurality of signal sources are located within a predetermined space, and the method comprises receiving a respective positioning signal from each of said signal sources, generating location data indicative of locations of said plurality of signal sources based upon said positioning signals, generating output data for each of said plurality of signal sources based upon said information signal and said location data, and transmitting said output data to said signal sources to present said information signal.

Description

Method and apparatus for signal presentation
Technical field
The present invention relates to a method and apparatus for locating signal sources, and to a method and apparatus for presenting an information signal using such signal sources.
Background
Strings of lights used for decorative purposes are well known. For example, strings of lights have long been placed on Christmas trees for decoration. Similarly, lights are placed on trees, large plants and other objects in public places. More recently, such lights have been connected to control circuitry that can switch the lights on and off in various predetermined patterns. For example, all of the lights may "flash" on and off together. Alternatively, lights may be switched on and off in sequence relative to their neighbours in the string, so as to produce a "chasing" effect. Many such effects are known; what they have in common is that the effect is applied to all of the lights, to a random selection of the lights, or to lights selected according to their positions relative to one another within the string.
Decorative lights of the type described above are also sometimes fixed around a predetermined shape so that, when illuminated, the lights display an image determined by that shape. For example, lights may be fixed around the outline of a Christmas tree so that, when illuminated, the outline of the tree can be seen. Similarly, lights can be arranged to display letters of the alphabet, so that when a plurality of such arrangements of letters are combined, the lights display a word.
Hitherto, more complex images have been displayed using arrays of light-emitting devices in which the elements are fixed relative to one another. A processor can then process image data, together with data indicating the fixed positions of the lights, to determine which lights should be illuminated to display the required image. Such an array may take the form of a plurality of bulbs or similar light-emitting elements; more commonly, however, the lights are smaller and are combined to form a liquid crystal display (LCD) or plasma screen, which is the manner in which images are displayed on modern flat-screen displays, laptop screens and many television sets.
It should be noted that all of the methods described above rely on a fixed relationship between the light-emitting elements, and that this fixed relationship is used in the image display process.
More recently, it has become quite common for television sets to be provided with audio-visual amplifiers driving a plurality of loudspeakers. Typically, in a conventional surround sound arrangement, a front centre loudspeaker is co-located with the display screen, while front left and front right loudspeakers are arranged on either side of the screen. In addition, at least two loudspeakers are placed behind the viewing position so as to provide a "surround sound" effect. For example, if during a video sequence an aircraft enters the displayed image at the lower left corner of the screen and, some frames later, leaves the displayed image at the upper right corner, then during the video display the sound of the aircraft may first be emitted by the rear left loudspeaker and subsequently by the front right loudspeaker, so that the emitted sound gives an impression of the aircraft's motion. Effects of this kind give the viewer an enhanced impression of being immersed in the displayed image.
It should be noted that the sound to be emitted by each loudspeaker is determined when the audio-visual data is created. When equipment of the kind described above is installed in a viewer's home, only minor adjustments can be made (for example to the relative volume output by each loudspeaker), to compensate for, say, differing distances between the viewing position and the front loudspeakers, and between the viewing position and the rear loudspeakers.
It should be noted that surround sound systems of the type described above always comprise a plurality of loudspeakers arranged in a predetermined manner; only minor variations of position and distance can be compensated for. Such surround sound systems therefore essentially present sound using a loudspeaker array of predetermined configuration. In other words, such a loudspeaker arrangement is the acoustic equivalent of displaying images using the fixed arrays of modulating elements described above.
In both their light-emitting and sound-emitting aspects, the systems described above are thus constrained, at least in part, by the requirement that the lights and loudspeakers be arranged in a predetermined manner, which reduces the flexibility of the systems.
Summary of the invention
It is an object of embodiments of the present invention to obviate or at least mitigate some of the problems set out above.
The invention provides a method and apparatus for presenting an information signal using a plurality of signal sources, the plurality of signal sources being located within a predetermined space. The method comprises: receiving a respective positioning signal from each of the signal sources; generating, on the basis of the positioning signals, location data indicative of the locations of the plurality of signal sources; generating output data for each of the plurality of signal sources on the basis of the information signal and the location data; and transmitting the output data to the signal sources so as to present the information signal.
The invention therefore provides a method that can be used to locate signal sources, such as light-emitting elements, and then to display an information signal using those elements. The light-emitting elements may be arranged in a random fashion on a fixed structure such as a tree. Randomly arranged light-emitting elements can therefore be located and used to display a predetermined pattern, such as an image, or predetermined text.
Generating the location data may further comprise associating the location data with identification data identifying each signal source. Associating the location data with the identification data identifying each signal source may comprise generating the identification data from the positioning signal received from each signal source.
Each positioning signal may comprise a plurality of temporally spaced pulses, in which case generating identification data for each signal source may comprise generating the identification data on the basis of the plurality of temporally spaced pulses. Each positioning signal may indicate an identification code uniquely identifying one signal source among the plurality of signal sources. Each positioning signal may be a modulated form of the identification code of the respective signal source; for example, binary phase shift keying (BPSK) modulation or non-return-to-zero modulation may be used.
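By way of illustration, the carrying of an identification code by BPSK modulation can be sketched as follows. This is a minimal sketch, not the patent's implementation: the samples-per-bit figure, the 8-bit code and all function names are assumptions made for the example.

```python
import math

def bpsk_modulate(bits, samples_per_bit=16):
    """Map each bit to one carrier cycle: phase 0 for '1', phase pi for '0'."""
    signal = []
    for bit in bits:
        phase = 0.0 if bit else math.pi
        for n in range(samples_per_bit):
            signal.append(math.sin(2 * math.pi * n / samples_per_bit + phase))
    return signal

def bpsk_demodulate(signal, samples_per_bit=16):
    """Correlate each bit period with the reference carrier; the sign of the
    correlation recovers the bit."""
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        corr = sum(signal[i + n] * math.sin(2 * math.pi * n / samples_per_bit)
                   for n in range(samples_per_bit))
        bits.append(1 if corr > 0 else 0)
    return bits

code = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical 8-bit identification code
waveform = bpsk_modulate(code)
assert bpsk_demodulate(waveform) == code
```

A real emitter would shape these samples into light-intensity pulses rather than an ideal sinusoid, but the phase-reversal principle is the same.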
Receiving each positioning signal may comprise receiving a plurality of temporally spaced emissions of electromagnetic radiation. The electromagnetic radiation may take any suitable form; for example, the radiation may be visible light, infrared radiation or ultraviolet radiation.
Various references are made in this application to visible, ultraviolet and infrared light. The meanings of these terms will be readily understood by those skilled in the art. It should nevertheless be noted that infrared light typically has a wavelength of approximately 0.7 μm to 1 mm, visible light a wavelength of approximately 400 nm to 700 nm, and ultraviolet light a wavelength of approximately 1 nm to 400 nm.
Receiving a positioning signal from each signal source may comprise receiving, at a signal receiver, the positioning signal transmitted by each signal source, the signal receiver being configured to generate two-dimensional position data locating the signal source within a detection frame. Location data may then be generated on the basis of the position within the detection frame.
Receiving the positioning signal transmitted by each signal source may comprise receiving the positioning signal with a camera. In a preferred embodiment of the invention, the camera comprises a charge-coupled device (CCD) sensitive to the electromagnetic radiation. Generating the location data may further comprise grouping, over time, the frames generated by the camera so as to produce the identification data. Grouping a plurality of such frames to produce the identification data may comprise processing regions of the frames that lie within a predetermined distance of one another.
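The grouping of camera frames over time can be sketched roughly as follows: bright-pixel detections that fall within a small pixel tolerance of one another across frames are treated as the same source, and that source's on/off pattern over the frame sequence becomes its identification bit sequence. The tolerance value, data layout and names are illustrative assumptions, not the patent's method.

```python
def decode_blinking_sources(frames, tolerance=2):
    """
    frames: a list, one entry per camera frame, of sets of (x, y) pixel
    coordinates detected as bright in that frame. Detections within
    `tolerance` pixels of an earlier detection are assumed to be the same
    source; each source accumulates one bit per frame (1 = lit, 0 = dark).
    Returns a list of (representative_xy, bit_list) pairs.
    """
    sources = []
    for frame_index, detections in enumerate(frames):
        for (x, y) in detections:
            for (sx, sy), bits in sources:
                if abs(sx - x) <= tolerance and abs(sy - y) <= tolerance:
                    bits[frame_index] = 1
                    break
            else:
                bits = [0] * len(frames)
                bits[frame_index] = 1
                sources.append(((x, y), bits))
    return sources

frames = [{(10, 10), (50, 20)}, {(11, 10)}, {(10, 11), (50, 21)}]
result = decode_blinking_sources(frames)
# recovers one source near (10, 10) with pattern [1, 1, 1]
# and one near (50, 20) with pattern [1, 0, 1]
```

A practical decoder would also have to tolerate missed detections and camera jitter; this sketch only shows the grouping principle.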
Receiving the positioning signals may further comprise receiving, at a plurality of signal receivers, the positioning signal transmitted by each signal source, each of the signal receivers being configured to generate two-dimensional position data locating the signal source within a respective detection frame. Generating the location data may further comprise combining the two-dimensional position data generated by the plurality of signal receivers so as to produce the location data. The two-dimensional position data may be combined by triangulation.
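Triangulation from two receivers can be sketched under a simplifying assumption: two calibrated cameras lie in a common plane, and each two-dimensional detection reduces to a bearing angle towards the source. The function and its parameters are hypothetical; real systems intersect rays in three dimensions and handle near-parallel sightlines.

```python
import math

def triangulate_2d(p1, angle1, p2, angle2):
    """
    Intersect two bearing rays in the plane. p1, p2: camera positions (x, y);
    angle1, angle2: bearings (radians) from each camera towards the source.
    Returns the intersection point, i.e. the estimated source position.
    """
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    # Solve p1 + t*d1 == p2 + s*d2 for t using Cramer's rule.
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - ry * (-d2[0])) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Cameras at (0, 0) and (4, 0) both sight a source at (2, 2).
est = triangulate_2d((0.0, 0.0), math.atan2(2.0, 2.0),
                     (4.0, 0.0), math.atan2(2.0, -2.0))
# est is approximately (2.0, 2.0)
```

The division by `denom` fails when the rays are parallel, which is why practical systems prefer camera baselines that view the space from well-separated angles.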
Each signal source may be an electromagnetic element configured to present the information signal by causing the emission of electromagnetic radiation. Transmitting the output data to the signal sources so as to present the information signal may therefore comprise transmitting instructions so as to cause some of the electromagnetic elements to emit electromagnetic radiation.
The electromagnetic elements may be light-emitting elements, and the instructions may cause the light-emitting elements to emit visible light. The light-emitting elements may be capable of being lit at a plurality of predetermined intensities, in which case the instructions may specify the intensity at which each light-emitting element is to be lit. Each positioning signal may therefore be represented by modulating the intensity of the electromagnetic radiation emitted by each light-emitting element while the information signal is being presented. In some embodiments of the invention such intensity modulation is preferred, given that it allows a light-emitting element to continue displaying the information signal while simultaneously outputting its positioning signal in a relatively unobtrusive manner.
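As a rough sketch, such intensity modulation might superimpose a small-amplitude bit pattern on the brightness demanded by the displayed image, keeping the flicker shallow enough to be unobtrusive to a viewer yet detectable by a camera. The modulation depth and all names below are assumptions made for the example.

```python
def modulate_intensity(base_level, bits, depth=0.05):
    """
    Superimpose an identification bit pattern on a display brightness.
    base_level: the brightness (0.0 to 1.0) demanded by the image being shown;
    depth: the small modulation amplitude. Returns one brightness sample per
    bit, clamped to the valid range.
    """
    return [min(1.0, max(0.0, base_level + (depth if b else -depth)))
            for b in bits]

samples = modulate_intensity(0.6, [1, 0, 1])
# approximately [0.65, 0.55, 0.65]
```

Note that an element driven at full or zero brightness leaves no headroom for the modulation in one direction, which is one reason a shallow depth and clamping are used here.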
The light-emitting elements may be capable of being lit so as to display any of a plurality of predetermined colours, the instructions specifying a colour for each light-emitting element. In this case, a positioning signal may be represented by modulating the hue of the light emitted by each light-emitting element while the information signal is being presented. Again, this manner of transmitting the positioning signal is advantageous in that it allows a light-emitting element that is presenting the information signal to transmit its positioning signal in a relatively unobtrusive manner. Indeed, studies have shown that humans are relatively insensitive to such hue modulation. Given that a suitably configured camera can detect such hue modulation, it is therefore an effective means of transmitting a positioning signal.
The term "signal source" as used herein encompasses both signal-generating sources and signal-reflecting sources. For example, each signal source may be a reflector of electromagnetic radiation, preferably a reflector with controllable reflectivity. Such controllable reflectivity may be provided by associating a variable-opacity element with each reflecting element. A liquid crystal display (LCD) may be used as the variable-opacity element.
The term "signal" as used herein encompasses signals produced by a plurality of signal sources. For example, a colour signal may be understood as the combined effect of red, green and blue signal sources.
The signal sources may be sound sources, in which case transmitting the output data to the signal sources so as to present the information signal comprises transmitting instructions including the sound data to be output, so as to cause certain sound sources to output the sound data and thereby produce a predetermined soundscape.
The invention also provides a method and apparatus for locating a signal receiver within a predetermined space. The method comprises: receiving data indicative of signal values received by the signal receiver; comparing the received data with a plurality of expected signal values, each expected signal value representing the signal expected at one of a predetermined plurality of points within the predetermined space; and locating the signal receiver on the basis of the comparison.
Thus, by storing data indicating the signals expected to be received at each of a plurality of positions, the signal receiver can be located on the basis of the signal it actually receives. The method may be carried out within each signal receiver in a distributed manner or, alternatively, the signal receivers may provide details of the received signal to a central computer configured to locate them.
Each signal receiver may be a signal transceiver. The method may further comprise providing a signal to the signal receivers.
The method may further comprise transmitting a predetermined signal to the signal receivers, so that the signal received by each signal receiver is based upon the predetermined signal. Receiving data indicative of the signal values received by the signal receiver may comprise receiving data indicative of a sound signal received by the signal receiver, although this aspect of the invention is not limited to the use of sound data.
The invention also provides a method and apparatus for locating and identifying a signal source. The method comprises: receiving, at a signal receiver, a signal transmitted by the signal source, the signal receiver being configured to generate two-dimensional position data locating the signal source within a detection frame; generating location data on the basis of the position within the detection frame; processing the received signal, the received signal comprising a plurality of temporally spaced signal transmissions; and determining, from the received plurality of temporally spaced signal transmissions, an identification code of the located signal source.
This aspect of the invention has particular utility in monitoring the movement of people or equipment within a predetermined space. For example, a signal source may be associated with each person or each item of equipment.
The signal received from the signal source may take any suitable form. In particular, in accordance with other aspects of the invention, the signal may take the form of a positioning signal as described above.
The invention also provides a method and apparatus for generating a three-dimensional soundscape using a plurality of sound sources. The method comprises: determining a desired sound pattern to be applied to a predetermined space; determining the sound to be emitted from each sound source, the determination being carried out using the sound pattern together with data indicating the positions of the sound sources; and transmitting sound data to each sound source.
The invention thus allows the generation of sound signals to be output using a plurality of sound sources so as to produce a three-dimensional soundscape.
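As an illustrative sketch of how the sound for each source might be determined from the sound pattern and the position data, one could weight each physical source by its distance from the point at which a sound should appear to originate. The inverse-distance law, the normalisation and all names are assumptions made for the example, not the patent's method.

```python
import math

def source_gains(virtual_pos, source_positions, rolloff=1.0):
    """
    For a sound that should appear to come from `virtual_pos`, compute a
    gain for each physical sound source: nearer sources play louder.
    Gains follow a simple inverse-distance law and are normalised to sum
    to 1, so the overall level is independent of the source layout.
    """
    raw = []
    for p in source_positions:
        d = math.dist(virtual_pos, p)
        raw.append(1.0 / (1.0 + rolloff * d))
    total = sum(raw)
    return [g / total for g in raw]

gains = source_gains((0.0, 0.0, 0.0),
                     [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0)])
# the co-located source dominates; the gains sum to 1
```

A production system would add per-source delays and frequency shaping as well, but the positional weighting above is the core of placing a virtual sound in a space of arbitrarily positioned sources.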
The sound sources employed may take any suitable form. In some embodiments of the invention, sound is generated using a plurality of handheld devices, such as mobile telephones, the sound being output by the loudspeakers associated with those devices.
The invention also provides a method and apparatus for address processing in an addressing system configured to address a plurality of hierarchically arranged spatial elements. The method uses addresses defined by a plurality of predetermined digits, and comprises: processing at least one predetermined digit of an address so as to determine the level of the hierarchy represented by that address; and determining, from the processed address, the address of a spatial element at the determined hierarchical level.
Processing at least one predetermined digit of the address to determine the hierarchical level may comprise processing at least one leading digit of the address. For example, each digit of the address may be processed in turn, starting from a first end, and all of the processed leading digits having the same value may form a group of leading digits used to determine the hierarchical level. When binary addresses are used, for instance, the number of leading "1" digits in the address may be used to determine the hierarchical level.
Determining the address of the spatial element may comprise processing at least one further digit of the address. The at least one further digit to be processed may be determined by the digits indicating the hierarchical level.
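For binary addresses, the reading of the hierarchical level from the run of leading "1" digits, followed by extraction of the element address from the remaining digits, can be sketched as follows. The 16-bit width and the convention that a single "0" terminates the run are illustrative assumptions made for the example.

```python
def parse_spatial_address(address, width=16):
    """
    Decode a binary spatial address of `width` bits: the run of leading '1'
    bits gives the hierarchical level, and the bits after the terminating
    '0' give the element's address within that level.
    """
    bits = format(address, f'0{width}b')
    level = 0
    while level < width and bits[level] == '1':
        level += 1
    element_bits = bits[level + 1:] if level < width else ''
    element = int(element_bits, 2) if element_bits else 0
    return level, element

# Two leading 1s -> level 2; the bits after the separator encode element 5.
assert parse_spatial_address(0b1100000000000101) == (2, 5)
```

One consequence of this encoding, shared with schemes such as IPv6 prefix classes, is that deeper hierarchical levels leave fewer digits for the element address, so the address space is traded between depth and breadth.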
The method may be used with a variety of addressing mechanisms, including IPv6 addresses.
The invention also provides a method of allocating addresses to a plurality of devices, the method comprising: causing each of the plurality of devices to select an address; receiving data indicating the address selected by each device; processing the data indicating the selected addresses so as to determine whether more than one device has selected the same address; and, if a plurality of devices have selected the same address, instructing those devices to reselect their addresses.
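The allocation scheme just described can be sketched as follows: each device picks an address at random, the controller detects collisions, and only the colliding devices pick again until every address is unique. The address-space size and all names are assumptions made for the example.

```python
import random
from collections import Counter

def allocate_addresses(num_devices, address_space=256, rng=random):
    """
    Simulate the allocation round-trip: devices choose random addresses,
    the controller finds duplicates, and colliding devices reselect.
    Returns one unique address per device.
    """
    addresses = [rng.randrange(address_space) for _ in range(num_devices)]
    while True:
        counts = Counter(addresses)
        colliders = [i for i, a in enumerate(addresses) if counts[a] > 1]
        if not colliders:
            return addresses
        for i in colliders:  # only colliding devices pick again
            addresses[i] = rng.randrange(address_space)

addrs = allocate_addresses(10)
assert len(set(addrs)) == 10
```

The loop terminates with probability 1 provided the address space exceeds the device count; in practice the space is chosen much larger so that few rounds are needed.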
The invention also provides a method of identifying the addresses of a plurality of devices, the addresses lying within an address range, the method comprising: generating a plurality of subranges from the address range; determining whether any of the plurality of devices has an address within a first subrange; and, if one or more devices has an address within the first subrange, processing at least one address within the first subrange.
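The subrange search can be sketched as a recursive split of the address range, descending only into subranges in which a broadcast probe reports at least one device. The probe interface `any_in_range` is an illustrative assumption standing in for whatever bus query the system provides.

```python
def discover_addresses(any_in_range, lo, hi):
    """
    Find every occupied address in [lo, hi) by recursively splitting the
    range and descending only into halves that report an occupant.
    `any_in_range(lo, hi)` models a broadcast probe: True if any device
    holds an address in [lo, hi).
    """
    if not any_in_range(lo, hi):
        return []          # empty subrange: prune the whole branch
    if hi - lo == 1:
        return [lo]        # a single occupied address has been isolated
    mid = (lo + hi) // 2
    return (discover_addresses(any_in_range, lo, mid) +
            discover_addresses(any_in_range, mid, hi))

occupied = {3, 9, 14}
probe = lambda lo, hi: any(lo <= a < hi for a in occupied)
assert discover_addresses(probe, 0, 16) == [3, 9, 14]
```

When the devices are sparse in a large range, this takes far fewer probes than scanning each address individually, since empty subranges are eliminated in a single query.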
It should be understood that features described herein in relation to one aspect of the invention may be combined with features of other aspects of the invention. It will also be appreciated that all aspects of the invention may be realised as methods, apparatus and devices, and that the methods provided by the invention may be implemented by means of computer programs. Such computer programs may be embodied on suitable carrier media, such as CD-ROMs and other discs; such carrier media also include communication signals carrying suitable computer programs. Aspects of the invention may also be realised by suitably programming a programmable computer apparatus using appropriate computer program code.
Description of drawings
Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Fig. 1 is a high-level schematic illustration of an embodiment of the invention;
Fig. 2 is a high-level flowchart giving an overview of the processing carried out by the embodiment of the invention shown in Fig. 1;
Fig. 3 is a schematic illustration of a process used in the embodiment of Fig. 1 to convert spatial addresses into addresses associated with particular signal sources;
Fig. 4 is a schematic illustration of a process for presenting an image using a plurality of light sources, as used in the embodiment of Fig. 1;
Fig. 5 is a schematic illustration of a computer-controlled network of light-emitting elements suitable for use in embodiments of the invention;
Fig. 6 is a schematic illustration of the PC shown in Fig. 5 and used to control the apparatus of Fig. 5;
Figs. 7, 7A and 7B are schematic illustrations of light-emitting elements shown in Fig. 5;
Fig. 8 is a flowchart of an address determination algorithm for allocating addresses to the light-emitting elements of Fig. 5;
Figs. 8A and 8B are flowcharts of possible variants of the address determination of Fig. 8;
Fig. 9 is a schematic illustration of an alternative computer-controlled network of light-emitting elements suitable for use in embodiments of the invention;
Fig. 9A is a schematic illustration of a pulse width modulated signal;
Fig. 9B is a schematic illustration of a data packet for transmitting commands to light-emitting elements;
Fig. 9C is a flowchart of the processing carried out by the light-emitting elements of Fig. 5;
Fig. 9D is a flowchart of the processing carried out by the control elements of Fig. 5;
Fig. 10 is a schematic illustration of an arrangement of a camera used to locate light-emitting elements in an embodiment of the invention;
Figs. 10A and 10B are pixelated representations of frames captured using the camera shown in Fig. 10;
Fig. 11 is a schematic illustration of a camera used to locate light-emitting elements in another embodiment of the invention;
Fig. 11A is a series of four pixelated representations of frames captured over a predetermined period of time using the camera of Fig. 11;
Fig. 12 is a schematic illustration of a Hamming code used in some embodiments of the invention;
Fig. 13 is a schematic illustration of pulse shapes used in binary phase shift keying (BPSK) modulation;
Fig. 14 is a schematic illustration of how BPSK modulation is carried out in some embodiments of the invention;
Fig. 15 is a schematic illustration of a data frame used in embodiments of the invention;
Fig. 16 is a schematic illustration of a plurality of cameras used to locate light-emitting elements in embodiments of the invention;
Fig. 17 is an overview of a light-emitting element location process configured to operate on data obtained from the camera shown in Fig. 11;
Fig. 18 is a flowchart showing the frame-by-frame processing of Fig. 17 in greater detail;
Fig. 19 is a flowchart showing the temporal processing of Fig. 17 in greater detail;
Figs. 20, 20a, 20b, 20c and 20d are schematic illustrations of a method for locating light-emitting elements in an embodiment of the invention;
Fig. 21 is a flowchart of a camera calibration process used in embodiments of the invention;
Figs. 22A to 22D are schematic illustrations of artefacts that arise when the cameras shown in Figs. 10 and 11 are not correctly calibrated;
Fig. 23 is a flowchart of an alternative light-emitting element location algorithm suitable for use with the apparatus shown in Figs. 5 and 9;
Fig. 23A is a flowchart of a process for estimating the positions of signal sources;
Fig. 24 is a flowchart of a light-emitting element location process used in some embodiments of the invention;
Fig. 24A is a flowchart of a process for obtaining the data used to locate light-emitting elements;
Fig. 24B is a flowchart of a process for locating light-emitting elements from the data obtained using the process of Fig. 24A;
Fig. 24C is a screenshot of a graphical user interface used to carry out the processing shown in Figs. 24A and 24B;
Fig. 24D is a flowchart of a process for displaying an image using the located light-emitting elements;
Fig. 24E is a screenshot of a graphical user interface used to carry out the processing shown in Fig. 24D;
Fig. 24F is a screenshot of a simulator that simulates light-emitting elements;
Fig. 24G is a screenshot showing how data defining a plurality of light-emitting elements is loaded into the simulator;
Fig. 24H is a screenshot showing how the interface of Fig. 24G is used;
Fig. 24I is a screenshot of a graphical user interface allowing interactive control of the light-emitting elements;
Fig. 25 is a schematic illustration of a spatial sound generation system according to the invention;
Fig. 26 is a schematic illustration of a PC for controlling the system shown in Fig. 25;
Fig. 27 is a flowchart giving an overview of the processing carried out by the system of Fig. 25;
Fig. 28 is a flowchart of initialisation processing carried out in the system shown in Fig. 25;
Fig. 29 is a flowchart of processing carried out by the system of Fig. 25 to generate position data for a particular sound transceiver;
Figs. 30 and 31 are flowcharts showing how the position data generated using the process of Fig. 29 can be refined;
Fig. 32 is a flowchart of a process for generating a volume map in the system of Fig. 25;
Fig. 33 is a flowchart of a process for calculating the gain and orientation of the sound transceivers of the system of Fig. 25;
Fig. 34 is a flowchart of a process for generating sound using the system of Fig. 25;
Fig. 35 is a flowchart of the processing carried out by the sound transceivers in the system of Fig. 25;
Fig. 36 is a flowchart of an alternative process for generating sound in the system of Fig. 25;
Fig. 37 is a schematic illustration of a process for converting spatial addresses into native addresses;
Figs. 38 to 40 are schematic illustrations of 128-bit address configurations;
Fig. 41 is a schematic illustration of the process of Fig. 37 implemented over the Internet;
Fig. 42 is a schematic illustration of how spatial addressing is used in embodiments of the invention; and
Fig. 43 is a schematic illustration of an octree representation of space used in embodiments of the invention.
Detailed description
Referring to Fig. 1, an overview of the invention is provided. A PC 1 communicates with a plurality of light-emitting elements 2 arranged in a random fashion on a tree 3. The PC 1 is configured to spatially locate the light-emitting elements 2, and carries out such location so that a user-specified pattern can be displayed using the light-emitting elements.
The flowchart of Fig. 2 shows the high-level processing carried out by the apparatus of Fig. 1. At step S1, the light-emitting elements 2 are spatially located using a location algorithm described below. At step S2, the image to be displayed is received, typically by the user providing details of a file from which data is to be read, and by reading the data from the specified file. Alternatively, the image to be displayed can be read from a frame buffer, in a manner similar to that used with conventional computer displays. At step S3, certain light-emitting elements 2 are selected to be lit so as to display the image; having selected the appropriate light-emitting elements, they are lit at step S4. It will be appreciated that, in order to display the image, it may be necessary first to extinguish some light-emitting elements that were previously lit.
Fig. 3 schematically shows the desired output of the light-emitting element location process of step S1 of Fig. 2. It can be seen that a plurality of voxels collectively define a voxel representation 4 of the space containing the light-emitting elements 2. The location process maps each light-emitting element 2 to one voxel of the voxel representation 4 of the space. Having carried out the process schematically shown in Fig. 3, it will be appreciated that if an image to be displayed is mapped onto voxels of the representation 4, it can then be determined which lights should be lit for that particular image. In other words, once it is known which voxels are to be lit, the output of step S1 readily identifies the light-emitting elements to be lit.
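The mapping of located elements onto a voxel representation might, as a sketch, bin each element's measured coordinates into a regular grid. The grid parameters, identifiers and dictionary layout are assumptions made for the example, not the patent's data structures.

```python
def build_voxel_map(element_positions, origin, voxel_size):
    """
    Map each located light-emitting element to a voxel of a regular grid.
    element_positions: {element_id: (x, y, z)} from the location process;
    origin: corner of the voxel grid; voxel_size: edge length of one voxel.
    Returns {voxel_index: [element_ids]} so that, given the voxels an image
    requires to be lit, the elements to switch on can be looked up directly.
    """
    voxel_map = {}
    for element_id, (x, y, z) in element_positions.items():
        voxel = (int((x - origin[0]) // voxel_size),
                 int((y - origin[1]) // voxel_size),
                 int((z - origin[2]) // voxel_size))
        voxel_map.setdefault(voxel, []).append(element_id)
    return voxel_map

positions = {'led_a': (0.1, 0.2, 0.1), 'led_b': (1.6, 0.3, 0.2)}
vmap = build_voxel_map(positions, origin=(0.0, 0.0, 0.0), voxel_size=1.0)
assert vmap == {(0, 0, 0): ['led_a'], (1, 0, 0): ['led_b']}
```

Because the elements are randomly placed, some voxels will hold several elements and others none; the display step simply works with whatever elements each required voxel happens to contain.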
A process for displaying an image using the light-emitting elements 2 will now be described with reference to the schematic illustration of Fig. 4. It can be seen that image data 5, representing a three-dimensional image of a cone, is to be displayed using the light-emitting elements 2 which, as described with reference to Fig. 3, have been associated with the voxel representation 4. The image data 5 is mapped onto the voxel representation 4 so as to identify a plurality of voxels to be lit; this corresponds to step S3 of Fig. 2. Having carried out this mapping operation, the light-emitting elements 2 to be lit can be determined, and the appropriate light-emitting elements can then be lit so as to display the image data 5 using the light-emitting elements 2.
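The selection of elements to light for a given image (step S3 of Fig. 2) can then be sketched as a lookup from the voxels the image requires into per-voxel element lists produced by the location step; the names and data layout are illustrative assumptions.

```python
def elements_to_light(required_voxels, voxel_map):
    """
    required_voxels: voxel indices that the mapped image needs lit;
    voxel_map: {voxel_index: [element_ids]} from the location step.
    Returns the ids of the light-emitting elements to switch on; voxels
    containing no element are simply skipped.
    """
    on = []
    for voxel in required_voxels:
        on.extend(voxel_map.get(voxel, []))
    return on

voxel_map = {(0, 0, 0): ['led_a'], (1, 0, 0): ['led_b', 'led_c']}
assert elements_to_light([(1, 0, 0), (2, 2, 2)], voxel_map) == ['led_b', 'led_c']
```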
Apparatus for implementing a preferred embodiment of the invention is now described with reference to Fig. 5. A PC 1 is connected to three control elements 6, 7, 8, which in turn are connected to respective groups of light-emitting elements 2 via corresponding buses 9, 10, 11. The apparatus also comprises a power supply unit 12, which is likewise connected to the control elements 6, 7, 8. The PC 1 is connected to the control elements 6, 7, 8 via a serial connection. The operation of this apparatus is described in more detail below.
The structure of the PC 1 is now described with reference to Fig. 6. The PC 1 comprises a CPU 13 and random access memory (RAM) 14, the RAM 14 providing program storage 14a and data storage 14b. The PC 1 further comprises a hard disk drive 15 and an input/output (I/O) interface 16, which connects input and output devices to the other components of the PC 1. In the illustrated embodiment, a keyboard 17 and a flat-screen display are connected to the I/O interface 16. The PC 1 also comprises a communication interface 19, which allows the PC 1 to communicate with the control elements 6, 7, 8 described in more detail below; preferably, this communication interface is a universal serial bus. The CPU 13, RAM 14, hard disk drive 15, I/O interface 16 and communication interface 19 are linked together by a bus 20, along which data and instructions can be transferred between the aforementioned components.
Fig. 7 shows an exemplary light-emitting element 2 connected to the bus 9. The light-emitting element 2 comprises a light source in the form of a light-emitting diode (LED) 21 controlled by a processor 22. The processor 22 is configured to receive instructions indicating whether the LED 21 is to be lit, and operates on the basis of those instructions. The light-emitting element 2 also comprises a diode 23 and a capacitor 24. In a practical embodiment of the invention, a miniature version of the light-emitting element 2 can be manufactured, similar in size to a conventional LED. Such a light-emitting element exposes two connections, along which both power (a 5V DC supply) and instructions for the processor 22 are provided. Indeed, it should be noted that, as described in more detail below, the light-emitting element 2 is connected to the bus 9 by two connectors, and obtains both power and instructions from the bus 9.
It should be noted that the light-emitting element shown in Fig. 7 is merely exemplary, and light-emitting elements can take a variety of forms. Two alternative forms are shown in Figs. 7A and 7B. Because these alternative forms help to eliminate flicker, they are preferred in some embodiments. In particular, it should be noted that the arrangement of Fig. 7A comprises a diode 23a in series with the LED 21 and a capacitor 24a in parallel with the LED 21. Further, although the light source in the illustrated light-emitting elements is an LED, any suitable light source may be used; for example, the light source may be a lamp, a neon tube or a cold-cathode lamp. It should also be noted that, although in the described embodiment of the invention both instructions and power are supplied to the light-emitting elements via the bus 9, instructions and power may be provided in other ways. For example, power may be provided via the bus 9 while instructions are provided directly from the control element 6 wirelessly, for instance using Bluetooth communication. Alternatively, instructions may be provided via the bus 9, with each light-emitting element having its own power supply in the form of a battery.
As noted above, in the described embodiment both instructions and electrical energy are provided via the bus 9 to the light-emitting elements 2 connected to that bus. Typically this is achieved by providing a 5V DC supply on the bus 9 and modulating that supply so as to provide simplex, one-way communication to the light-emitting elements 2, allowing the control element 6 to send instructions to each light-emitting element. A 5V supply is preferred; otherwise more complex light-emitting elements might be required, able to convert a received higher voltage to a voltage suitable for the light source.
Scalability was a primary consideration in the design of the apparatus of Fig. 5. In particular, it is important that each light-emitting element is simple and cheap to manufacture, and that control functionality is separated from the light-emitting elements. At the same time, care must be taken to avoid a centralised solution that is difficult to scale. For this reason, overall control is exercised by the PC 1, with the control elements 6, 7, 8 being delegated responsibility for controlling the light-emitting elements connected to them. Referring back to Fig. 5, it can be seen that each control element 6, 7, 8 is connected to the PC 1 via a bus 25, a configuration that provides the desired balance between delegation and scalability.
The control elements 6, 7, 8 can use various addressing schemes to command individual light-emitting elements 2 to switch on or off. Indeed, in some circumstances it may be necessary for all the light-emitting elements associated with a particular control element to switch on or off simultaneously, and in such circumstances the control element can control the light-emitting elements connected to it using broadcast communication. It is, however, highly desirable to be able to address each light-emitting element individually. Various possible addressing schemes are described below, but it should be noted that, in general terms, the control elements 6, 7, 8 can handle relatively complex addresses (for example the IPv6 addresses described below), while each light-emitting element typically operates with a simple address generated by the corresponding control element.
Each light-emitting element must have an address that is unique on its own bus, and various schemes can achieve such unique addressing. For example, in some embodiments an address is hard-coded into each light-emitting element 2 at manufacture; this is the approach taken by the medium access control (MAC) addresses of conventional computer networking hardware. Although such a method is feasible, it should be noted that it assumes that all addresses are globally unique, which may result in unnecessarily long addresses and so compromise the desired simplicity of the light-emitting elements. Moreover, the use of such addresses requires two-way communication between the control elements 6, 7, 8 and each light-emitting element 2, which is preferably avoided for reasons of complexity and cost.
Furthermore, in a scheme using such hard-coded addresses, given that a failed light-emitting element would have to be replaced by an element having the same address, replacement may be difficult. This would compromise usability, requiring users to sort light-emitting elements by address, and requiring suppliers to stock large numbers of light-emitting elements with different addresses.
Because of these problems, an alternative addressing mechanism is preferred in some embodiments of the invention. In this approach, each light-emitting element dynamically chooses an address that is unique on the bus to which it is connected. The method uses cooperation between the light-emitting elements and the associated control element to generate an 8-bit address for each light-emitting element.
Fig. 8 is a flow chart of the address selection process. Each light-emitting element connected to a particular bus performs steps S5 and S6. At step S5, each light-emitting element generates a series of addresses using a pseudo-random number generator, and this process is repeated for a predetermined period of time (for example 1 second). When the period ends, the random number generated last is set as the address of each light-emitting element (step S6). It should be noted that inaccuracies between the on-board clocks of the elements' processors will typically mean that the resulting addresses are fairly evenly distributed across the address space.
When the predetermined period of time has elapsed, the control elements 6, 7, 8 each carry out their own processing. Each control element cycles through the addresses of the address space in turn. For the selected address, any light-emitting element 2 associated with that address is commanded to light (step S7). Given that power and instructions are transmitted on the same bus, the power drawn by the light-emitting elements can be determined at step S8, the power drawn being proportional to the number of light-emitting elements associated with the given address. Having determined the power drawn at step S8 (for example by measuring the current drawn), the number of lit light-emitting elements is determined at step S9. Step S10 repeats this processing for each address in turn, so as to determine the number of light-emitting elements associated with each address. At step S11, a check is made to determine whether any address is associated with more than one light-emitting element. If no such address is found, it can be concluded that each light-emitting element has a bus-unique address, and processing ends at step S12. If, however, duplicates exist, all light-emitting elements that do not have bus-unique addresses are commanded to repeat the processing of steps S5 and S6; this repeat processing is shown as step S14 in Fig. 8. After the predetermined period, the processing of steps S7 to S12 is repeated to ensure that all light-emitting elements have unique addresses. If this processing again determines that duplicate addresses exist, the processing of step S13 is performed again, and the process therefore continues until all light-emitting elements on the particular bus have bus-unique addresses. To improve the rate of convergence, the control element can, at step S13, specify a set of addresses that are not to be used; the light-emitting elements can then select their addresses from outside this excluded set, reducing the risk of duplicate addresses.
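The cooperative addressing loop described above can be modelled as follows. This is a hedged simulation, not the patented firmware: the control element is modelled as only being able to count how many lamps light per address (as it would infer from current draw), and the 8-bit address space, seed and retry policy are illustrative assumptions.

```python
import random

def assign_addresses(n_elements, space=256, rng=random.Random(42)):
    """Simulate the randomised address selection of Fig. 8 (steps
    S5-S14).  Elements pick random addresses; the control element
    detects collisions by counting lamps lit per address and asks
    colliding elements to retry, excluding addresses already known
    to be uniquely assigned (the step S13 exclusion set)."""
    addrs = [rng.randrange(space) for _ in range(n_elements)]  # steps S5-S6
    taken = set()
    while True:
        counts = {}
        for a in addrs:                        # steps S7-S10: scan addresses
            counts[a] = counts.get(a, 0) + 1   # lamps lit ~ current drawn
        dup = {a for a, c in counts.items() if c > 1}
        if not dup:                            # steps S11-S12: all unique
            return addrs
        taken |= {a for a, c in counts.items() if c == 1}
        free = [a for a in range(space) if a not in taken]  # step S13
        addrs = [a if a not in dup else rng.choice(free) for a in addrs]
```

With far fewer elements than addresses, the loop converges in a handful of rounds, mirroring the convergence behaviour the text describes.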
In some embodiments of the invention, the light-emitting elements are provided with non-volatile storage capacity and store the address they last used. This avoids having to carry out the whole of the processing of Fig. 8 each time a lighting configuration is used. It should be noted, however, that care should be taken to ensure that all the light-emitting elements are still connected to the same bus as at their last use. In some embodiments of the invention, consistency between the light-emitting elements connected to a particular bus and the last-used addresses is verified simply by carrying out the processing of steps S7 to S12 of Fig. 8.
An alternative, reusable method of identifying individual addresses is now described with reference to Figs. 8A and 8B. The processing described with reference to Figs. 8A and 8B essentially replaces steps S7 to S10 of Fig. 8 described above. This alternative method is particularly suitable where a significantly larger address space is used; in particular, it is suitable where the address space is considerably larger than the number of light-emitting elements to which addresses are to be allocated. The alternative method avoids the linear pass through the set of possible addresses required by the processing described with reference to Fig. 8. Indeed, for a large address space, a linear pass through the possible addresses may be computationally infeasible: for example, with a 32-bit address space, a linear pass at 100 addresses per second would take more than a year. The alternative method described with reference to Figs. 8A and 8B instead adopts a hierarchical scheme to determine whether any address conflicts exist.
Referring now to Fig. 8A, the address range is determined at step S100. At step S101, sub-ranges of the determined address range are generated. This can conveniently be achieved using suitable prefixes. For example, if the range determined at step S100 is to be divided into two sub-ranges, the first sub-range may be defined as the addresses beginning with a '0' bit and the second sub-range as the addresses beginning with a '1' bit. If more than two sub-ranges are to be generated from the range determined at step S100, prefixes comprising more than one bit can be used; for example, prefixes of two bits provide four sub-ranges.
At step S102, the addresses in each sub-range are processed, as described in more detail below. Step S103 determines whether further sub-ranges remain to be processed. If no such sub-ranges remain, processing returns to step S11 of Fig. 8; if further sub-ranges remain to be processed, processing returns from step S103 to step S102.
Fig. 8B shows the processing of step S102 in more detail. At step S104, the light-emitting elements in the address sub-range currently being processed are commanded to light. At step S105, the power drawn by the lit light-emitting elements is determined, and this determined power is used at step S106 to determine the number of elements that have lit. At step S107, a check is made to determine whether any lamps have lit. If no lamps have lit, data can be recorded indicating that no light-emitting element has an address in the sub-range currently being processed; at step S108, data indicating this situation is stored, and no further processing of the addresses in that sub-range is required. If, however, the check of step S107 determines that some light-emitting elements have lit, processing passes from step S107 to step S109, where a check is made to determine whether the address range currently being processed comprises only a single address. If so, processing passes from step S109 to step S110, where a check is made to determine whether more than one light-emitting element has lit. If it is determined that more than one element has lit, processing passes to step S111, where data indicating this fact is stored; that data is then processed in the manner described above with reference to Fig. 8. If, however, only a single light-emitting element has lit, its address is recorded, and at step S112 that address is marked as allocated.
If the check of step S109 determines that the range currently being processed comprises more than one address, processing passes from step S109 to step S113. Here, sub-ranges are generated from the address range currently being processed, before those sub-ranges are processed at step S114. The processing of step S114 itself comprises applying the processing of Fig. 8B to each of the sub-ranges generated at step S113. It can therefore be seen that steps S109, S113 and S114 mean that, whenever a light-emitting element lies within a sub-range, further processing is carried out to determine the address of that light-emitting element.
It should be noted that the complexity of the process described with reference to Figs. 8A and 8B depends on the number of light-emitting elements and on the logarithm of the number of addresses. This complexity is not linear in the total number of addresses, and the processing of Figs. 8A and 8B is therefore computationally feasible even for very large address ranges.
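The hierarchical search of Figs. 8A and 8B can be sketched as a recursive prefix subdivision. As with the earlier sketch, the control element is modelled as only observing how many lamps light when a sub-range is commanded on; binary (one-bit prefix) splitting is assumed, though the text notes wider prefixes are equally possible.

```python
def discover(addrs, lo, hi, found=None):
    """Recursive sub-range search of Figs. 8A/8B over addresses in
    [lo, hi).  Cost is O(n log A) for n elements in an address space
    of size A, rather than the O(A) linear pass of Fig. 8."""
    if found is None:
        found = []
    lit = sum(lo <= a < hi for a in addrs)    # steps S104-S106: lamps lit
    if lit == 0:                              # steps S107-S108: empty range
        return found
    if hi - lo == 1:                          # step S109: single address
        if lit == 1:
            found.append(lo)                  # step S112: mark as allocated
        # lit > 1 would be a conflict (step S111), handled as in Fig. 8
        return found
    mid = (lo + hi) // 2                      # steps S113-S114: split, recurse
    discover(addrs, lo, mid, found)
    discover(addrs, mid, hi, found)
    return found
```

For three elements in a 256-address space, this visits only the sub-ranges that actually contain lamps, illustrating why the method scales to very large address spaces.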
It will be appreciated that the processing of Figs. 8A and 8B can be used however the addresses have been allocated, whether statically or dynamically. The processing of Figs. 8A and 8B provides an efficient means of determining the address used by each light-emitting element.
The preceding description has concentrated on determining addresses so as to allow the control elements 6, 7, 8 to control each light-emitting element 2 individually. As described, the buses 9, 10, 11 also carry power (typically a 5V supply). Data providing addresses and instructions is supplied to the buses 9, 10, 11 along a bus 25. The PC 1 communicates with a bridge 25a via a USB connection, and the bridge 25a is connected to the control elements 6, 7, 8 via the bus 25. Power is supplied to the buses 9, 10, 11 along a bus 26 connected to the power supply unit 12. Although the buses 25 and 26 could be a single common bus, the currently preferred embodiment of the invention uses two distinct buses 25, 26.
The power supply unit 12 is a 36V DC supply. Each control element 6, 7, 8 includes means for converting this 36V DC supply to the 5V supply required by each bus; using a 5V supply allows standard processors to be used. The control elements 6, 7, 8 are also provided with means for modulating the supply so as to carry instructions.
A typical LED light-emitting element consumes a current of 30mA. At 5V, a string of 80 light-emitting elements will therefore draw a current of 2.4A, a requirement that can be met using cheap narrow-gauge cabling.
The linear relationship between current and the number of light-emitting elements limits the scalability of a single string of elements. Scalability is further limited by the fact that the greater the number of lamps, the greater the quantity of data to be transmitted, and hence the higher the frequency at which the supply must be modulated; if the number of lamps is too great, this frequency becomes impractically high.
Given these limits on the scalability of a single string of light-emitting elements, the apparatus of Fig. 5 allows eight control elements to be connected to a single 36V power supply unit. Each control element can control 80 lamps, meaning that the configuration of Fig. 5 can provide 640 light-emitting elements. The control elements can be linked together using cabling such as standard CAT5 cabling.
If 640 light-emitting elements are insufficient, the apparatus of Fig. 5 can be linked to further similar apparatus under the control of a central control element. Fig. 9 illustrates such a configuration. Here, two apparatuses 27, 28 (each configured as shown in Fig. 5) are linked by a high-bandwidth interconnect 29. A central control element 30 then provides overall control of the configuration, supplying instructions to the PCs 31, 32 of the apparatuses 27, 28.
As described above, both power and instructions are supplied to the light-emitting elements along the buses 9, 10, 11. This is achieved using a pulse-width modulation technique. Fig. 9A shows an illustrative example pulse sequence. It can be seen that, in general, a voltage of +5V is provided. When data is to be sent, this voltage is dropped to ground, and the value sent is represented by the length of time for which the voltage remains at ground. In particular, it can be seen from Fig. 9A that a '0' bit is represented by a relatively short pulse and a '1' bit by a relatively long pulse.
Furthermore, when such modulation is used with a relatively high-voltage supply (for example a 36V supply), the voltage need not be dropped to ground, but simply to a lower level. For example, if the maximum voltage is 36V, the voltage could be dropped to 31V to represent data.
This manner of transmitting data is advantageous because it avoids the voltage sitting at 0V, or below the desired value, for long periods. In other words, by keeping the pulse widths relatively short, only minimal variation in the power delivered results.
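The pulse-width scheme of Fig. 9A can be sketched as follows. The specific durations (in arbitrary clock ticks) and the decoding threshold are illustrative assumptions, since the text specifies only that a '0' is a relatively short low pulse and a '1' a relatively long one, with the line otherwise held high.

```python
def encode_bits(bits, short=2, long=6, high=10):
    """Encode a bit string per Fig. 9A: the line idles at the supply
    level (modelled as 1); each bit is a low pulse whose duration
    carries the value (short low = '0', long low = '1')."""
    samples = []
    for b in bits:
        samples += [0] * (long if b == '1' else short)  # low pulse
        samples += [1] * high                           # return to +5V
    return samples

def decode_samples(samples, threshold=4):
    """Recover bits by measuring the length of each low run, as a
    light-emitting element's processor might."""
    bits, run = [], 0
    for s in samples + [1]:          # trailing high flushes the final run
        if s == 0:
            run += 1
        elif run:
            bits.append('1' if run > threshold else '0')
            run = 0
    return ''.join(bits)
```

Note how the line spends most of its time high, which is the property the text identifies as keeping the delivered power nearly constant.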
The buses 9, 10, 11 communicate at a speed of 50kbps, which allows the data to be handled by relatively cheap 4MHz processors. Data passing between the control elements on the bus 25 is transmitted at 500kbps.
The format of the data sent to the light-emitting elements is now described. Fig. 9B shows a data packet. It can be seen that the packet comprises an 8-bit destination field 100, specifying the address to which the data is to be sent; an 8-bit command field 101, indicating the command associated with the packet; and an 8-bit length field 102, indicating the length of the packet. A checksum field 103 provides a checksum for the packet, and a payload field 104 stores the data carried by the packet.
The value taken by the destination field 100 indicates a light-emitting element address. The destination field 100 may, however, take the value 0, indicating that the destination of the packet is the control element on the particular bus, or the value 255, indicating a broadcast packet.
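A packet in the layout of Fig. 9B might be assembled as sketched below. The field sizes and the special destination values follow the text, but the checksum algorithm and the exact meaning of the length field are not specified in the source, so a one-byte modular sum and a length covering header plus payload are assumptions.

```python
def build_packet(dest, command, payload=b''):
    """Assemble a Fig. 9B packet: destination 100, command 101,
    length 102, checksum 103, payload 104 (all header fields 8-bit).
    Checksum = low byte of the sum over header and payload; this
    algorithm is an assumption, not taken from the source."""
    assert 0 <= dest <= 255        # 0 = control element, 255 = broadcast
    length = 4 + len(payload)      # header (4 bytes) + payload: an assumption
    header = bytes([dest, command, length])
    checksum = sum(header + payload) & 0xFF
    return header + bytes([checksum]) + payload
```

A receiving element can then validate a packet by recomputing the sum over everything except the checksum byte, which matches the running-checksum behaviour described for step S128 below.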
Various commands can be specified in the command field of the packet of Fig. 9B, as will now be described.

The command ON switches on the one or more light-emitting elements identified by the address in the destination field 100, and the command OFF switches off the one or more light-emitting elements identified by the address in the field 100.
Initially, a SELF_ADDRESS command with a blank payload field 104 is broadcast to all light-emitting elements, triggering the elements to allocate themselves addresses in the manner described above (step S6 of Fig. 8). When an address conflict is detected, a further SELF_ADDRESS command is broadcast, but here the payload field 104 carries a bit pattern indicating which addresses have been allocated; that is, the bit pattern may comprise one bit for each possible address. On receiving this second packet containing the SELF_ADDRESS command, a light-emitting element determines, by processing the bit pattern provided in the payload field 104, whether its selected address is shown as allocated. If its selected address is not shown as allocated, the element can conclude that its selection has caused a conflict with the address of another light-emitting element, and it therefore selects a different address.
In selecting a different address, the light-emitting element can take account of the addresses indicated as allocated in the payload field 104, so as to avoid further address conflicts.
The command SELF_NORMALISE is used to normalise the address allocation. Like the SELF_ADDRESS command described above, a packet carrying the SELF_NORMALISE command has a payload indicating which addresses have been allocated. The SELF_NORMALISE command adjusts the addresses so that they are contiguous. This is achieved by each light-emitting element processing the payload field 104 to identify the bit associated with its own address; the allocated bits preceding that address are counted, and 1 is added to the count to give the light-emitting element its address.
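The counting rule each element applies under SELF_NORMALISE can be sketched as follows. The bitmap layout follows the text's one-bit-per-address description; that the resulting contiguous addresses start at 1 is an inference from the "add 1 to the count" rule.

```python
def normalised_address(own_addr, allocated_bitmap):
    """Compute an element's new address under SELF_NORMALISE.
    `allocated_bitmap` is one boolean per possible address, as
    carried in payload field 104: count the allocated addresses
    below one's own and add 1, yielding contiguous addresses."""
    assert allocated_bitmap[own_addr], "own address must be marked allocated"
    return sum(allocated_bitmap[:own_addr]) + 1
```

Because every element applies the same deterministic rule to the same broadcast bitmap, the sparse addresses left by random selection collapse to 1, 2, 3, ... without any further bus traffic.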
The command SET_BRIGHTNESS is used to set the brightness of a light-emitting element. A packet carrying this command has a payload field 104 indicating the brightness and a suitably configured destination field 100. Similarly, the command SET_ALL_BRIGHTNESS is used to set the brightness of all light-emitting elements.
The command CALIBRATE causes each light-emitting element to emit a series of pulses which, as described below, can be used to identify the light-emitting element for calibration purposes. The command FACTORY_DEFAULT is processed by a light-emitting element so as to restore its settings to factory default values.
Having described how instructions are passed to the light-emitting elements, the operation of the light-emitting elements and the control elements is now described in more detail.
Fig. 9C is a flow chart of the operation of a light-emitting element. The light-emitting element is powered up, and hardware initialisation is carried out at step S121. At step S122, an attempt is made to load the element's address from storage; an address is loaded from storage at step S122 when static addresses are used, or when the light-emitting element has stored data indicating the address it last used.
At a number of points in the processing of Fig. 9C, operations are performed to set the brightness of the LED. This in effect involves controlling the frequency with which the LED is driven, so as to provide the desired brightness. Such processing is carried out at step S123.
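The brightness control at step S123 can be modelled as a software duty cycle, a common way of realising "controlling the frequency with which the LED is driven". The 0-255 brightness scale and the 8-tick period are illustrative assumptions, not details from the source.

```python
def drive_pattern(brightness, period=8):
    """Illustrative on/off drive pattern for step S123: map a
    brightness value (assumed 0-255) to a duty cycle over `period`
    ticks.  Averaged by the eye, more on-ticks appear brighter."""
    on_ticks = round(brightness / 255 * period)
    return [1] * on_ticks + [0] * (period - on_ticks)
```

In a real element this loop would run continuously between bus reads, which is consistent with the flow chart returning to step S123 whenever the element is idle.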
At step S124, a check is made to determine whether the light-emitting element has received a synchronisation pulse on the bus to which it is connected. If no such pulse has been received, processing returns to step S123. If a synchronisation pulse has been received, processing proceeds to step S125, where a data bit is read from the bus. At step S126, a check is made to determine whether 8 bits (1 byte) of data have been read. If a byte has not yet been read, processing returns to step S125. When a byte has been read, the brightness of the LED is reconfigured at step S127, before the checksum value is updated on the basis of the processed byte at step S128. At step S129, the received byte is stored; it should be noted, however, that this processing is configured so that only bytes of interest to the particular light-emitting element are stored at step S129.
Processing passes from step S129 to step S130, where a check is made to determine whether the 4 most recently processed bytes represent a packet header; that is, a check is made to determine whether the 4 most recently processed bytes represent the destination field 100, command field 101, length field 102 and checksum field 103 described with reference to Fig. 9B. If the most recently processed bytes do represent a packet header, processing passes to step S131, where the header is parsed. Processing then passes to step S132, where a check is made on the basis of the value of the command field 101 of the processed header. If the command field 101 indicates that the packet contains a multiplexed payload, processing passes to step S133; otherwise processing returns to step S125, where further data is read from the bus. A multiplexed payload is one whose packet is directed at a plurality of target light-emitting elements, such as the payload provided in the SET_ALL_BRIGHTNESS command described above. When a packet contains a multiplexed payload, the processing of step S133 computes the appropriate offset within the payload at which the data of interest to the light-emitting element can be found. That is, such a payload will be relatively long, and the light-emitting element may not have sufficient storage capacity to store the whole payload; the processing of step S133 therefore identifies the offset within the payload at which the data of interest is found. The offset determined at step S133 can be used in subsequent processing to determine whether a data byte should be stored at step S129.
If the check of step S130 determines that the 4 most recently received bytes do not represent a packet header, processing passes to step S134, where a check is made to determine whether the recently received bytes collectively represent a complete packet. If not, processing returns to step S123 and continues as described above. If, however, the check of step S134 determines that a complete packet has been received, processing passes to step S135, where a check is made to determine whether the checksum value calculated by the processing of step S128 is valid. If the checksum value is invalid, processing returns to step S123. Otherwise, processing proceeds to step S136, where a check is made to determine whether the received packet should be processed by this particular light-emitting element. If the received packet should not be processed by this particular element, processing returns to step S123. Otherwise, subsequent processing is carried out to determine the nature of the received packet and the action required.
At step S137, a check is made to determine whether the received packet represents an ON command or an OFF command. If so, the state of the LED is updated at step S138 before processing returns to step S123.
At step S139, a check is made to determine whether the received packet represents a SET_BRIGHTNESS command. If so, the brightness information used at steps S123 and S127 described above is updated at step S140 before processing returns to step S123.
At step S141, a check is made to determine whether the received packet represents a FACTORY_DEFAULT command. If so, processing passes to step S142, where the settings of the light-emitting element are reset. Processing then returns to step S123.
At step S143, a check is made to determine whether the received packet represents a SELF_ADDRESS command. If so, processing proceeds to step S144, where the payload is processed to obtain data indicating whether the light-emitting element's address has been allocated. If the address has been allocated, it can be concluded that no address conflict exists; if the address is unallocated, it can be concluded that an address conflict has indeed occurred. At step S145, a check is made to determine whether the data associated with the element's address indicates that an address conflict has occurred. If there is no such conflict, processing proceeds to step S123. If, however, an address conflict has occurred, processing passes from step S145 to step S146, where a different address is selected for the light-emitting element, the selected address being one not marked as allocated in the payload of the received data packet.
At step S147, a check is made to determine whether the received packet represents a SELF_NORMALISE command. If so, processing proceeds to step S148, where the payload of the packet is processed to determine how many lower-valued addresses have been allocated to other light-emitting elements. Then, at step S149, the address of the current light-emitting element is calculated by counting the allocated lower-valued addresses and adding 1 to the count.
At step S150, a check is made to determine whether the received packet represents a CALIBRATE command. If so, processing passes to step S151, where a code to be emitted by visible light is determined. Then, at step S152, the determined code is supplied to the LED. The processing of step S153 ensures that the code is emitted three times. The generation and use of this code are described in more detail below.
Having described the operation of the light-emitting elements, the operation of the control elements 6, 7, 8 is now described with reference to Fig. 9D.
At step S155, the control element is powered up, and at step S156 its hardware is initialised. At step S157, the control element receives a frame of data on the bus 25 to which it is connected. The frame read at step S157 is decoded at step S158 and verified at step S159. If the verification of step S159 is unsuccessful, processing returns to step S157. Otherwise, processing passes from step S159 to step S160, where a checksum value is calculated. The checksum value is verified at step S161; if it is invalid, processing returns to step S157. If the checksum value is valid, processing proceeds to step S162, where the frame is parsed. At step S163, a check is made to determine whether the received frame should be processed by the current control element. If not, processing passes to step S164, where a check is made to determine whether the received frame should be forwarded to the light-emitting elements under the control of this control element. If so, the frame is forwarded at step S165 before processing returns to step S157. If the control element processing the frame should not forward it, processing passes from step S164 to step S157.
If the check of step S163 determines that the frame currently being processed should be processed by the particular control element, processing passes to a series of checks configured to determine the nature of the received command.
At step S166, a check is made to determine whether the received frame represents a ping message. If so, the control element generates a response to the ping message at step S167 and sends that response at step S168.
At step S169, a check is performed to determine whether the received frame is a request for data indicating the current presently being drawn from the control element by the light-emitting elements connected to it; that is, whether the received frame is a request for data indicating power consumption. If so, the current consumption is read at step S170 and, at step S171, the read current is provided in a response before processing returns to step S157.
At step S172, a check is performed to determine whether the received frame is a request for current calibration; that is, whether the received frame requests that the control element perform a calibration operation to determine the current levels associated with no light-emitting elements being lit, one light-emitting element being lit and two light-emitting elements being lit, such current levels being usable as described above. If the check of step S172 determines that the received frame is a request for current calibration, processing passes to step S173, at which all light-emitting elements are switched off by a broadcast message. At step S174, the current consumption with no elements lit is measured. One light-emitting element is lit at step S175, and the resulting current consumption is measured at step S176. At step S177 two light-emitting elements are lit, and the current consumption of those two elements is measured at step S178. Then, at step S179, data representing the current consumption with no elements lit, with one element lit and with two elements lit is stored, before processing returns to step S157.
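The calibration sequence of steps S173 to S179 can be sketched as below. The bus is simulated; the helper names and the simulated current values are assumptions made for illustration and are not taken from the patent.

```python
# Hypothetical sketch of the calibration sequence of steps S173-S179.
state = {"lit": 0}

def broadcast_off():            # S173: broadcast switches all elements off
    state["lit"] = 0

def light_elements(n):          # command n elements to light
    state["lit"] = n

def read_current():             # simulated supply-current measurement (mA)
    return 5 + 20 * state["lit"]

def calibrate():
    broadcast_off()                                   # S173
    none_lit = read_current()                         # S174
    light_elements(1)                                 # S175
    one_lit = read_current()                          # S176
    light_elements(2)                                 # S177
    two_lit = read_current()                          # S178
    return {"none": none_lit, "one": one_lit, "two": two_lit}  # S179: stored

assert calibrate() == {"none": 5, "one": 25, "two": 45}
```

The three stored levels are what later allow the control element to tell "no element", "one element" and "more than one element" apart from a single current reading.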
At step S180, a check is performed to determine whether the received frame represents a request to carry out an addressing operation. If so, processing proceeds to step S181, at which all light-emitting elements under the control of this control element are switched off. At step S182, an address is selected and a command is issued to light any light-emitting element associated with the selected address. At step S183, the current consumption of the lit elements is measured, to determine whether an address conflict has occurred. At step S184 the lit elements are switched off, and at step S185 an address map is updated to indicate whether a single light-emitting element is associated with the processed address, no light-emitting element is associated with the processed address, or a plurality of light-emitting elements are associated with the processed address (i.e. there is an address conflict). At step S185a, a check is performed to determine whether further addresses remain to be processed. If so, processing returns to step S182. When no addresses remain to be processed, processing passes to step S186, at which a check is performed to determine whether any address conflicts exist. If there are no address conflicts, it can be determined that the light-emitting elements have uniquely allocated addresses, and processing proceeds to step S157. If, however, one or more address conflicts do exist, processing passes from step S186 to step S187, at which a self-address message is sent to all light-emitting elements, the payload of the message indicating the address allocation in the manner described above. At step S188, the control element delays for a predetermined period to allow the light-emitting elements to reallocate their addresses, before processing returns to step S183.
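The per-address scan of steps S182 to S185a can be sketched as below. The classification thresholds, derived from the calibration levels described above, and the helper callbacks are assumptions for illustration only.

```python
# Hypothetical sketch of the address scan (steps S182-S185a): light whatever
# responds at each address and classify the measured current as indicating
# zero, one, or several elements (an address conflict).
def scan_addresses(addresses, light_address, read_current,
                   none_level, one_level):
    margin = (one_level - none_level) / 2
    address_map = {}
    for a in addresses:
        light_address(a)                      # S182
        current = read_current()              # S183
        if current < none_level + margin:
            address_map[a] = "none"           # no element answered
        elif current < one_level + margin:
            address_map[a] = "single"         # unique allocation
        else:
            address_map[a] = "conflict"       # several elements share a
    return address_map                        # S185: stored address map

# Simulated bus: one element at address 3, two (a conflict) at address 5.
elements_at = {3: 1, 5: 2}
lit = {"n": 0}
result = scan_addresses(
    range(8),
    lambda a: lit.update(n=elements_at.get(a, 0)),
    lambda: 5 + 20 * lit["n"],
    none_level=5, one_level=25)
assert result[3] == "single" and result[5] == "conflict" and result[0] == "none"
```

Any address mapped to "conflict" would then trigger the self-address message of step S187 and a further scan.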
At step S189, a check is performed to determine whether the received message requests that the control element generate the data used to form the basis of the SELF_NORMALISE command to the light-emitting elements, as described above. If so, processing passes to step S190, at which a command switches off all light-emitting elements and any previously stored address map is cleared. At step S191, a command is issued to light the light-emitting element at a selected address. At step S192 the current consumed in response to that command is measured, and at step S193 the lamp is switched off. At step S194, the address map is updated to indicate whether a light-emitting element is associated with the address currently being processed, this processing being based upon the current measured at step S192. Processing passes from step S194 to step S194a, at which a check is performed to determine whether further addresses remain to be processed. If so, processing returns to step S191. When no further addresses remain to be processed, at step S195 a SELF_NORMALISE command is generated for the light-emitting elements, the resulting address map being provided in the data packet conveying that command.
Much of the preceding description concerns light-emitting elements connected by fixed wiring. It should be noted that the address allocation method described above is applicable to any collection of devices having the ability to receive broadcast messages sent to all devices, together with some means of distinguishing whether zero, one or more devices are in use. In particular, where the light-emitting elements are detected by a suitable camera, whether a particular light-emitting element is lit can be determined from the light emitted by the element itself. Determining whether a lamp is lit from the emitted light is especially valuable in wireless arrangements, in which the power consumed by each light-emitting element cannot be monitored. It should also be noted that such a scheme avoids the need for light-emitting elements to actively transmit data, which is particularly advantageous from the viewpoints of complexity and power consumption.
The preceding description has explained how a plurality of lamps can be connected together to achieve distributed control of each lamp and to conveniently provide power to each lamp.
Returning to Figure 2, it can be seen that at step S1 the light-emitting elements 2 are located in space. The following part of this description describes various location algorithms. In general terms, the location algorithms operate by using a plurality of cameras (either in turn or simultaneously) to capture images of illumination patterns, and then using those images in a localisation process.
Figure 10 is a schematic illustration of five light-emitting elements P, Q, R, S, T observed by two cameras 33, 34. Light-emitting elements P, Q, R, S are in the field of view of camera 33, while light-emitting elements Q, R, S and T are in the field of view of camera 34. Figure 10A shows an example image captured by camera 33. It can be seen that four pixels are lit, each corresponding to one of the four light sources P, Q, R, S. Figure 10B shows an example image captured by camera 34. Here, again four pixels are lit, representing light-emitting elements Q, R, S, T. Although in the images of Figures 10A and 10B individual pixels correspond to individual light-emitting elements, it cannot be determined which pixel is associated with which light-emitting element. A solution to this problem is now described, with reference first to Figure 11, in which four light-emitting elements A, B, C, D are in the field of view of a camera 35. Each of the four light-emitting elements A, B, C, D to be located has a unique identification code. The identification code takes the form of a binary sequence. During location of the light-emitting elements A, B, C, D, each element presents its identification code by switching on and off in accordance with that code.
Identification codes are allocated to the four light-emitting elements A, B, C, D as shown in Table 1:
Light-emitting element    Identification code
A                         1001
B                         0101
C                         0111
D                         0011

Table 1
Figure 11A shows the images captured by camera 35 as each of the light-emitting elements A, B, C, D presents its identification code. It is assumed here that the light-emitting elements A, B, C, D present their identification codes in synchronism with one another, that camera 35 and the light-emitting elements are stationary relative to one another, and that each light-emitting element illuminates one or more pixels of the captured image. Figure 11A comprises four images generated at four different times, the time between images being sufficient for each light-emitting element to present the next bit of its identification code.
At time t=1, camera 35 detects light-emitting element A. At time t=2, camera 35 detects two light-emitting elements, and these differ from the lamp detected at time t=1 (being light-emitting elements B and C), such that three lamps have been detected in total. At time t=3, camera 35 again detects two light-emitting elements, but this time elements C and D. Thus, after the image of time t=3, all four light-emitting elements A, B, C, D have been detected and, by virtue of their spatial positions in the generated images, the lamps can be distinguished from one another. At time t=4, all four previously located light-emitting elements A, B, C, D are detected.
By combining the data of all four images, the identification code of each light-emitting element can be determined; this allows the light-emitting elements to be distinguished from one another even if camera 35 moves, or even if the elements are observed by different cameras.
It can be seen that light-emitting element A is detected at times t=1 and t=4, but not at times t=2 and t=3. The identification code of light-emitting element A is therefore determined to be 1001, as shown in Table 1. Light-emitting element B is detected at times t=2 and t=4, but not at times t=1 and t=3; its identification code is therefore determined to be 0101, again as shown in Table 1. Light-emitting element C is detected at times t=2, t=3 and t=4, but not at time t=1; its identification code is therefore determined to be 0111, as shown in Table 1. Finally, light-emitting element D is detected at times t=3 and t=4, but not at times t=1 and t=2; its identification code is therefore determined to be 0011, again as shown in Table 1.
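The identification step above can be sketched as follows: each element's code is read off as the sequence of frames in which its (spatially tracked) pixel position was seen lit. The helper name and the per-frame sets are illustrative assumptions, not part of the patent.

```python
# Sketch of recovering identification codes from per-frame detections.
def codes_from_detections(detections):
    """detections: list of per-frame sets of element labels seen lit."""
    elements = set().union(*detections)
    return {
        e: "".join("1" if e in frame else "0" for frame in detections)
        for e in elements
    }

frames = [{"A"}, {"B", "C"}, {"C", "D"}, {"A", "B", "C", "D"}]  # t=1..4
codes = codes_from_detections(frames)
# Recovers A=1001, B=0101, C=0111, D=0011, matching Table 1.
assert codes == {"A": "1001", "B": "0101", "C": "0111", "D": "0011"}
```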
It will be appreciated that the simple 4-bit codes described above are sufficient to provide distinct codes for only 16 light-emitting elements. It will also be appreciated that simply detecting lamps in the manner described above may be problematic and prone to error. For example, a falling object such as a leaf may obscure the camera's view of a light-emitting element, causing its identification code to be determined incorrectly. Indeed, even particulate matter may obscure the visibility of a light-emitting element. Conversely, detection of an extraneous light source may erroneously be taken as detection of a light-emitting element. Various encoding mechanisms intended to improve the resilience of this identification process are now described.
In some preferred embodiments of the invention, the light-emitting element identification codes are encoded using Hamming codes. Hamming codes are preferred in some embodiments of the invention because the complexity of the encoding and decoding processes is relatively low. This is important because each light-emitting element is required to generate its code and, as described above, the light-emitting elements are designed to have very low complexity so as to improve scalability. Hamming codes guarantee that up to two bit errors in each coded transmission are detected, or that a single bit error is corrected without the need for a further transmission. Coded transmissions containing three or more errors can be detected in approximately 50% of cases. Hamming codes are typically used where sporadic bit errors are relatively common.
Hamming codes are a form of block parity mechanism, which is now described by way of background. The use of a single parity bit is one of the simplest forms of error detection. Given a codeword, a single additional bit, used only for error control, is added to that codeword. The value of this bit (known as the parity bit) is set according to whether the number of bits having the value '1' in the codeword is odd (odd parity) or even (even parity). On receipt of a codeword including a parity bit, the parity of the codeword is checked against the value of the parity bit, to determine whether an error has occurred in transmission.
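The single-parity-bit scheme above can be sketched minimally, here using even parity (the choice of even rather than odd parity is an assumption for illustration):

```python
# Minimal even-parity sketch: the appended bit makes the count of 1s even.
def add_parity(bits):
    return bits + [sum(bits) % 2]

def check(word):
    return sum(word) % 2 == 0   # True unless an odd number of bits flipped

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
assert check(word)
word[1] ^= 1                      # a single transmission error
assert not check(word)            # detected, but not locatable
```

As the following paragraph notes, this detects a single error but cannot say which bit flipped, nor detect an even number of errors.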
Although the simple parity bit mechanism described above provides single-bit error detection, it cannot provide any error correction capability: it cannot determine which bit is in error, nor can it determine whether more than one error has occurred.
Hamming codes exploit a plurality of complementary parity bits to provide a more robust code. This is the block parity mechanism referred to above. A Hamming code adds n additional parity bits to a value. For n ≥ 3, Hamming codewords have a length of 2^n - 1 bits (for example 7, 15, 31, ...). Of those (2^n - 1) bits, (2^n - 1 - n) bits are used for data transmission, and n bits are used for error detection and correction data. In other words, a 4-bit message can be Hamming coded to form a 7-bit codeword, in which 4 bits represent the data to be transmitted and 3 bits represent error detection and correction data. Equally, an 11-bit message can be Hamming coded to form a 15-bit codeword, in which 11 bits represent useful data and 4 bits represent error detection and correction data.
Hamming coding is now described. Parity bits are generated by taking the parity of subsets of the data bits. Each parity bit considers a different subset, the subsets being formally chosen so that a single-bit error produces an inconsistency in at least two parity bits. Such an inconsistency not only indicates the presence of an error, but also provides sufficient information to identify which bit is incorrect. This therefore allows the error to be corrected.
An example of the encoding process is now given with reference to Figure 12. Here, the four 4-bit identification codes of Table 1 are Hamming coded to produce 7-bit codewords. The four identification codes shown in Table 1 form the input data 36 to a parity bit generator 37. The parity bit generator 37 outputs three parity bits 38 for each input identification code. The input data 36 and the parity bits 38 are then combined to produce Hamming-coded identification codes 39.
The operation of the parity bit generator 37 is now described in more detail. For each input codeword 36, three parity bits are generated, each being calculated by summing three bits of the input codeword and taking the least significant bit of the resulting binary number. With the bits of the input code 36 labelled c1 to c4 as shown in Figure 12 (where c1 is the most significant bit), the parity bits p1, p2 and p3 are calculated as follows:

p1 = c1 + c2 + c4
p2 = c1 + c3 + c4
p3 = c2 + c3 + c4
Having calculated these three parity bits for each identification code, the Hamming-coded identification codes 39 are produced by combining the three parity bits generated for each identification code with that identification code, producing 7-bit values. In practice, these parity bits are usually interleaved with the bits of the identification code itself, so that the parity data is not entirely lost in a burst error. Of each 7-bit value, three bits 40 represent error detection and correction data, and the remaining four bits 41 represent the identification code.
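The Hamming(7,4) scheme above can be sketched as follows, using the parity equations just given. The codeword layout (data bits followed by parity bits) and the syndrome table are assumptions made for illustration; as noted, the bits are usually interleaved in practice.

```python
# Sketch of Hamming(7,4) encoding and single-error correction, using
# p1 = c1+c2+c4, p2 = c1+c3+c4, p3 = c2+c3+c4 (all mod 2).
def encode(c):
    c1, c2, c3, c4 = c
    p1 = (c1 + c2 + c4) % 2
    p2 = (c1 + c3 + c4) % 2
    p3 = (c2 + c3 + c4) % 2
    return [c1, c2, c3, c4, p1, p2, p3]

def correct(w):
    """Recompute each parity; the pattern of mismatches (the syndrome)
    identifies the single flipped bit, which is then corrected."""
    c1, c2, c3, c4, p1, p2, p3 = w
    syndrome = ((c1 + c2 + c4 + p1) % 2,
                (c1 + c3 + c4 + p2) % 2,
                (c2 + c3 + c4 + p3) % 2)
    # Which bit position each syndrome pattern implicates:
    flip = {(1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3,
            (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}
    w = list(w)
    if syndrome in flip:
        w[flip[syndrome]] ^= 1
    return w[:4]                # recovered data bits

word = encode([1, 0, 0, 1])     # identification code of element A
damaged = word.copy()
damaged[2] ^= 1                 # a single bit error in transmission
assert correct(damaged) == [1, 0, 0, 1]
```

Note how an error in any single data bit disturbs at least two parity checks, as the text describes, which is what makes the error locatable rather than merely detectable.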
Although not presented in detail here, 15-bit codewords can be produced from 11-bit values in a very similar manner; such codings will be apparent to those of ordinary skill in the art.
Hamming codes can also be extended to form extended Hamming codes. This involves adding to the code a final parity bit, which operates on the parity bits generated as described above. At the cost of one additional bit, this allows the code to detect (but not correct) double-bit errors in a single transmission, while retaining the ability to correct single-bit errors. Using extended Hamming codes, a 16-bit coded value can be produced from an 11-bit value, and an 8-bit coded value from a 4-bit value.
In a preferred embodiment of the invention, each light-emitting element has an associated 11-bit identification code, and these identification codes are encoded using an extended Hamming code to produce 16-bit coded identification codes. The 11-bit identification codes provide 2^11 (2048) distinct identification codes, meaning that 2048 light-emitting elements can be used and distinguished from one another. By using an extended Hamming code, each code has good resilience to errors, and error detection and correction functionality is provided. The use of such an extended Hamming code provides a good balance between the robustness required when transmitting light patterns over the air (a noisy channel) and the need for an efficient coding mechanism that preserves the simplicity of each light-emitting element. The relatively small overhead imposed by the extended Hamming code (namely 5 bits) does not excessively increase the time required for a light-emitting element to transmit its code visibly.
Although 16-bit codes of the type described above are preferred in some embodiments of the invention, alternative codes can be used, such as identification codes 4 bits in length encoded as 8-bit extended Hamming codes. Such a code provides only 16 distinct identification codes, meaning that only 16 light-emitting elements can be used simultaneously, but the reduced code length increases the chance of a code being identified accurately. A possible solution which balances the improved identification characteristics of shorter codes against the need for a larger number of distinct identification codes is for each light-emitting element to send two 8-bit extended Hamming codes. Such a technique would provide 255 distinct identifiers, each identifier comprising two codes. Furthermore, such a technique would retain the good error resilience associated with shorter codes.
In alternative embodiments of the invention, a very large number of distinct identification codes may be required. In such cases, each light-emitting element can be allocated a 26-bit identification code, which can be encoded as a 31-bit extended Hamming code. Such a code allows 2^26 (approximately 67 million) light-emitting elements to be used.
As described above, a light-emitting element transmits its identification code visibly to one or more cameras by switching its light source on and off. To improve scalability and minimise system complexity, the light-emitting elements and cameras operate asynchronously; that is, no timing signals pass between the light-emitting elements and the cameras. The moments at which a light-emitting element changes state are therefore not synchronised with the moments at which a camera captures frames.
When asynchronous transmission of the type described above is used, the rate (frequency) at which codes are sent must be carefully controlled relative to the frame rate of the camera, to ensure that at least one frame of video data is captured for each transition. Otherwise, data may be lost, resulting in inaccurate codewords being received. More specifically, in accordance with the Nyquist theorem, the frequency at which codes are sent must be no greater than half the camera frame rate. Typically, cameras operate at a frame rate of 25 frames per second; identification codewords are therefore typically sent at a frequency of no more than 12Hz.
In preferred embodiments of the invention, one of two modulation techniques is used in the code transmission process. A modulation technique is the means by which a codeword (a series of 0s and 1s) is converted into a physical effect (in this case, the flashing of a light-emitting element). The first modulation technique is non-return-to-zero (NRZ) coding, and the second is binary phase shift keying (BPSK). Both modulation techniques are described in more detail below.
NRZ coding is a simple modulation technique for data transmission. A '1' is converted to a high pulse, and a '0' to a low pulse. In a preferred embodiment of the invention, transmitting a '1' involves switching a light-emitting element on, while transmitting a '0' involves switching it off. This is the modulation technique described above with reference to Figures 11 and 11A.
NRZ modulation is not normally associated with asynchronous communication, because a long run of consecutive 0s or 1s in a codeword results in the signal state (here, the state of the light-emitting element) remaining constant for a long period. Owing to clock drift between transmitter and receiver, some bits may consequently be "missed". Moreover, as described in more detail below, in the context of the present invention such modulation can make detection of the start of a transmission problematic.
There are nevertheless benefits in using NRZ modulation in embodiments of the present invention. First, the data transfer rate is very slow (12Hz); compared with the clock accuracy of current processors, clock drift can therefore be considered insignificant. Secondly, the efficiency of NRZ modulation is relatively high: one bit of data can be sent per cycle, so that at 12Hz, 12 bits can be sent per second. Thus, despite the drawbacks set out above, NRZ modulation is used in some embodiments of the invention.
The second modulation technique mentioned above is BPSK modulation, another relatively simple modulation technique. The advantage of BPSK modulation is that a code transmission using BPSK modulation contains no long periods without a transition. BPSK modulation is now described.
BPSK modulation operates by sending fixed-length pulses (in the context of the present invention, pulses of light) regardless of whether a '0' or a '1' is to be sent. BPSK encodes the values '0' and '1' in a particular way, and data is then sent using that encoding. BPSK is now described with reference to an example. In this example, a '0' is encoded as a low period followed by a high period, and a '1' as a high period followed by a low period. This encoding is shown in Figure 13, in which the pulse shapes used to represent '0' and '1' can be seen.
Figure 14 shows two coded pulse streams 42, 43 produced using the coding of Figure 13. It can be seen that each pulse stream comprises four pulses, each pulse having a duration of two clock cycles. Pulse stream 42 comprises a '1' pulse, followed by a '0' pulse, followed by another '0' pulse, followed by a '1' pulse; pulse stream 42 therefore represents the code 1001. Pulse stream 43 comprises a '0' pulse followed by three '1' pulses; pulse stream 43 therefore represents the code 0111.
Referring to Figure 14, it can be seen that, regardless of the data, there is never a period of more than two clock cycles without a transition, meaning that accurate data transfer can be achieved more easily. It should be noted, however, that two clock cycles are now needed to transmit a single bit. This gives a low effective data rate of 6 bits per second.
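The pulse coding of Figure 13 can be sketched as below, assuming one list entry per clock cycle (1 = element on, 0 = element off):

```python
# Sketch of the BPSK pulse coding of Figure 13: each bit occupies two
# clock cycles, '1' as high-then-low and '0' as low-then-high.
def bpsk_modulate(bits):
    out = []
    for b in bits:
        out += [1, 0] if b == "1" else [0, 1]
    return out

assert bpsk_modulate("1001") == [1, 0, 0, 1, 0, 1, 1, 0]  # pulse stream 42
assert bpsk_modulate("0111") == [0, 1, 1, 0, 1, 0, 1, 0]  # pulse stream 43
```

Because every bit contains a mid-pulse transition, the output never stays constant for more than two clock cycles, which is exactly the property the text relies on for asynchronous reception.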
The preceding description has provided details of two modulation schemes, NRZ modulation and BPSK modulation. NRZ modulation is suitable for embodiments of the invention in which the light-emitting elements are fixed relative to the cameras (that is, where cameras and light-emitting elements are fixed and not subject to camera shake, wind or other similar effects). At a 12Hz transfer rate, identifying a 16-bit identification code using NRZ modulation takes approximately 1.5 seconds. BPSK modulation provides a much more robust scheme supporting greater mobility, at the cost of a slightly longer identification time of 3 seconds for a 16-bit code. Since this time difference is negligible in most scenarios, BPSK modulation is likely to be preferred in many embodiments of the invention.
As in many data communication systems, the data sent from a light-emitting element to a camera in the form of visible light is placed into frames, each frame taking the form shown in Figure 15. To allow synchronisation between a light-emitting element and a camera (the two being otherwise asynchronous), the first part of the framed data is a silence period 44, during which no data is sent. Typically, the duration of this silence period is equal to five pulse periods. After the silence period, a single bit of data 45 is sent by way of a start bit. This indicates that data is about to be sent, and the bit may take the form of a '0' pulse or a '1' pulse. After the start bit 45 has been sent, the data to be communicated is sent. As described above, this data typically comprises 16 bits of data 46 resulting from extended Hamming coding of an 11-bit value. After the data 46 has been sent, a stop bit is sent to indicate that transmission is complete.
It should be noted that, where the invention is realised using NRZ modulation, the data 46 may need to be further encoded to ensure that it does not contain a run of '0's long enough to be mistaken for a silence period. Suitable encoding schemes for achieving this are Manchester coding or 4B5B coding. Given the pulse shapes used in BPSK modulation, no such coding is needed when BPSK modulation is adopted.
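The frame layout of Figure 15 under BPSK modulation can be sketched as below. The use of '1' pulses for the start and stop bits, and the sample 16-bit codeword, are assumptions made for illustration.

```python
# Sketch of the Figure 15 frame under BPSK: a 5-pulse-period silence,
# a start bit, the 16 coded data bits, then a stop bit. Each pulse
# period is two clock cycles; silence means the element stays off.
SILENCE_PERIODS = 5

def build_frame(coded_bits):
    frame = [0] * (2 * SILENCE_PERIODS)     # silence period 44
    for b in "1" + coded_bits + "1":        # start bit 45, data 46, stop bit
        frame += [1, 0] if b == "1" else [0, 1]
    return frame

frame = build_frame("1001100111010110")     # a hypothetical 16-bit codeword
assert len(frame) == 2 * SILENCE_PERIODS + 2 * (1 + 16 + 1)
```

Because each BPSK bit always contains an 'on' half-cycle, no run of data bits can ever look like the all-off silence period, which is why the extra Manchester/4B5B layer is unnecessary here.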
Having described how identification codes are generated for the light-emitting elements, and how those identification codes are communicated between the light-emitting elements and the cameras, the processing by which light-emitting elements are identified from the images generated by the cameras is now described. Figure 16 schematically shows apparatus suitable for carrying out this processing, in which three cameras 50, 51, 52 are connected to a PC 53. Preferably, the cameras 50, 51, 52 are connected to the PC 53 wirelessly, which aids camera mobility. The cameras are configured to send the captured image data to the PC 53, which may in essence have the configuration described above and shown in Figure 6.
The processing carried out by the PC 53 on the received image data is now described with reference to Figures 17 to 19. The processing is described with reference to camera 50; it will be appreciated that similar processing is carried out independently for cameras 51, 52. Figure 17 provides a schematic overview of the processing. The PC 53 comprises a frame buffer 52, in which received frames of image data are stored and processed frame by frame. Reference numeral 55 in Figure 17 denotes this frame-by-frame processing. It can be seen that the frame buffer contains the most recently received frame 56 and the immediately preceding frame 57, both of which are used by the frame-by-frame processing 55, which will now be described with reference to Figure 18.
At step S15, the received image data is timestamped. This is important because many cameras cannot capture frames at precisely regular intervals. An assumption that frames are captured at synchronous intervals of 1/25 second may therefore be incorrect, and the applied timestamps are used as a more accurate mechanism for determining the time interval between frames.
After the received image has been timestamped, at step S16 the image is filtered in colour space using a narrow band-pass filter, to eliminate all colours other than those matching the light-emitting elements being located. Typically, this comprises filtering the image to exclude all light other than pure white light.
At step S17, the most recently received image is differenced against the previously received image. This filtering compares the intensity of each pixel (after the filtering of step S16) with the intensity of the corresponding pixel in the previously processed frame. If the intensity difference is greater than a predetermined threshold, the pixel is marked as having changed. The processing of the current frame at step S17 thus produces a list of possible light transitions.
The assumption made above that each light-emitting element maps to a single image pixel may be simplistic, and at step S18 pixels within a predetermined distance of one another are therefore clustered together. Typically, this distance is only a few pixels. After clustering, a set of transition regions is produced (each transition region potentially corresponding to a single light-emitting element). This set of transition regions is the output of the frame-by-frame processing 55. The processing is carried out for a plurality of frames, transition region data 58 being generated for each processed frame.
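The clustering of step S18 can be sketched as below. The patent does not specify the algorithm; a simple greedy grouping with a few-pixel distance threshold is assumed here for illustration.

```python
# Sketch of step S18: group changed pixels that lie within a few pixels
# of an existing cluster into one transition region.
MAX_DISTANCE = 3  # pixels; "typically only a few pixels"

def cluster(changed_pixels):
    clusters = []
    for x, y in changed_pixels:
        for c in clusters:
            if any(abs(x - cx) <= MAX_DISTANCE and abs(y - cy) <= MAX_DISTANCE
                   for cx, cy in c):
                c.append((x, y))   # joins an existing transition region
                break
        else:
            clusters.append([(x, y)])   # starts a new transition region
    return clusters

regions = cluster([(10, 10), (11, 10), (40, 25), (12, 11)])
assert len(regions) == 2   # two transition regions, as three pixels merge
```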
The transition region data 58 is input to a temporal processing method 59, shown in the flowchart of Figure 19. For each transition region recorded in the first processed set of transition region data 58, spatiotemporal filtering is carried out (step S19), matching the transition regions of the set being processed against transition regions detected in the other sets of transition region data 58. This filtering operation is carried out by locating, within a spatiotemporal tolerance of the transition region being processed, transition regions in the other sets of transition region data. Motion compensation algorithms can be used at this stage. Then, at step S20, the transitions are grouped in time to form codewords.
At step S21, the resulting codewords are validated. Typically, this validation comprises checks for matching start and stop bits, a valid silence period and a valid extended Hamming code. Once validation is complete, the identity of the light-emitting element is known. The position of the light-emitting element in the image is easily computed by determining the centre of the corresponding transition region in the processed images.
It should be appreciated that, because information is transformed into and recorded in the time domain in the form of the transition region data 58, the processing described with reference to Figures 17 to 19 requires very little video data storage: only a single preceding frame.
The above description has explained how a single camera can be used to locate a light-emitting element and determine its identification code. In some cases, a single camera is sufficient to locate a light-emitting element in three dimensions, for example where all light-emitting elements are known to lie on a 2D plane or surface. In other cases, however, the information obtained using a single camera alone is insufficient to locate a light-emitting element in three dimensions. Further processing is then required, operating on data obtained from a plurality of cameras. For example, referring to Figure 20, both cameras 50 and 51 detect light-emitting element X in the images they generate. The element is detected at one or more pixels of the generated images, and its identification code (as described above) establishes that it is the same element. Using a triangulation algorithm, and knowing the orientations of the cameras, processing is carried out to construct imaginary lines extending from the cameras. This processing is now described.
Referring to Figure 20, it can be seen that the lens of the first camera 50 is located at coordinates (C1x, C1y, C1z). Similarly, the lens of the second camera 51 is located at coordinates (C2x, C2y, C2z). Figure 20 also shows a line 52 which extends from the lens of camera 50 and passes through the position of light-emitting element X. A line 53 extends from the lens of camera 51, again passing through light-emitting element X. The triangulation algorithm is configured to detect the intersection of lines 52 and 53, this intersection indicating the position of light-emitting element X. The algorithm is now described. The algorithm makes reference to imaginary planes 54a and 54b, located one metre from the lenses of the first camera 50 and the second camera 51 respectively. These planes are arranged orthogonal to the directions in which the cameras point. The line 52 extending from the first camera 50 to light-emitting element X passes through plane 54a, and the point in plane 54a through which line 52 passes has coordinates (T1x, T1y, T1z). Similarly, the point in plane 54b through which line 53 passes has coordinates (T2x, T2y, T2z). Relative to the first camera as origin, the coordinates of the point in plane 54a through which line 52 passes are therefore:
R1x = T1x - C1x
R1y = T1y - C1y
R1z = T1z - C1z
Similarly, relative to the second camera as origin, the coordinates of the point in plane 54b through which line 53 passes are:
R2x = T2x - C2x
R2y = T2y - C2y
R2z = T2z - C2z
With the points in planes 54a and 54b defined in this way, the equation of line 52 can be expressed as:

(C1x + t1·R1x, C1y + t1·R1y, C1z + t1·R1z)

where:

t1 is a scalar parameter indicating distance along line 52.
Similarly, line 53 is defined by the equation:

(C2x + t2·R2x, C2y + t2·R2y, C2z + t2·R2z)

where:

t2 is a scalar parameter indicating distance along line 53.
It can be seen that the values of t1 and t2 will be 1 where the line equations above define the points at which each line passes through its imaginary plane.
Assuming perfect precision, a point at which lines 52 and 53 intersect should be found, this point being the point X. Such an intersection can be determined by taking the equations of lines 52 and 53 in two dimensions and using them to form a pair of simultaneous equations. Given that the values of all C and R are known, this pair of simultaneous equations contains two unknowns (t1, t2). The equations can therefore be solved to determine values of (t1, t2) which, inserted into the equation of line 52 or line 53, yield the coordinates of light-emitting element X.
More specifically, at the point of intersection the equations of lines 52 and 53 are equal to one another in each of the x, y and z coordinates. At the intersection, the following equations therefore hold:

C1x + t1·R1x = C2x + t2·R2x
C1y + t1·R1y = C2y + t2·R2y
C1z + t1·R1z = C2z + t2·R2z
Since there are only two unknowns (t1, t2), any two of the above equations can be used to determine them, for example the equations in the x and y coordinates:

C1x + t1·R1x = C2x + t2·R2x
C1y + t1·R1y = C2y + t2·R2y
Again, since the values of all C and R are known, the above equations can be solved in a well-known manner to determine the values of t1 and t2. Having obtained these values, the point of intersection of the lines (that is, point X) can be determined.
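The simultaneous solution described above can be sketched as follows. This is an illustration only, not the patent's implementation: the use of `numpy` and the function name are our assumptions, and the x and y equations are used exactly as in the text.

```python
import numpy as np

def triangulate_exact(c1, r1, c2, r2):
    """Solve c1 + t1*r1 = c2 + t2*r2 in the x and y components for
    (t1, t2), then return the point on line 52 at parameter t1.
    Assumes the lines intersect exactly and the 2x2 system is
    non-singular (i.e. the lines are not parallel in x-y projection)."""
    a = np.array([[r1[0], -r2[0]],
                  [r1[1], -r2[1]]])
    b = np.array([c2[0] - c1[0], c2[1] - c1[1]])
    t1, t2 = np.linalg.solve(a, b)
    return c1 + t1 * r1
```

With camera 50 at the origin and camera 51 at (10, 0, 0), two lines aimed at a common point recover that point exactly.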
It should be noted that in some applications there may be errors, such that the lines do not intersect exactly. It is then necessary to determine the point of closest approach between the two lines, or alternatively to use a similar estimate.
For example, in one embodiment of the invention, the equations of lines 52 and 53 defined above are transformed into a coordinate system in which one line lies in the z direction and the orthogonal component of the other line forms the y direction. The x separation of the lines then gives the point of closest approach, which can be transformed back into the original coordinates. This coordinate system is described in more detail with reference to Figures 20a, 20b, 20c and 20d.
Figures 20a and 20b show the first camera 50 and the second camera 51 in plan view and side view respectively. The vectors r1, r2 and c2 are shown in the figures. The vector c2 defines the positions of cameras 50 and 51 relative to one another. The vectors r1 and r2 define lines extending from cameras 50 and 51 in the approximate direction of light-emitting element X. Note that, to reflect slight errors in the sensed positions, the vectors r1 and r2 are drawn so as to miss the true position of light-emitting element X slightly; it can be seen that there is error in both the plan view and the side view.
Relative to the first camera 50, the vector r1 of the approximate line towards light-emitting element X is defined as:

r1 = (R1x, R1y, R1z)

Relative to the second camera 51, the vector r2 of the approximate line towards light-emitting element X is defined as:

r2 = (R2x, R2y, R2z)

The vector from the first camera 50 (taken as origin) to the second camera 51 is defined as:

c2 = (C2x - C1x, C2y - C1y, C2z - C1z)
Three unit vectors are defined to transform the coordinate system. The unit vector in the r1 direction is defined as:

z = r1/|r1|

where |r1| denotes the Euclidean norm (that is, the length) of r1.
The unit vector y, orthogonal to r1 but chosen so that the y-z plane contains both r1 and r2, is defined as:

y = (r2 - (r2·z)z) / |r2 - (r2·z)z|

where r2·z denotes the scalar (dot) product of r2 and z.
The unit vector orthogonal to both y and z is defined as:

x = z × y

where z × y denotes the vector cross product of z and y.

The vectors x, y and z define a coordinate system in which the point of closest approach is particularly easy to compute.
It should be noted that the unit vector y is well defined provided that the vectors r1 and r2 are not parallel. For two cameras separated by any distance (for example the first camera 50 and the second camera 51), the lines of sight from the cameras to a single source (for example light-emitting element X) should never be parallel. Therefore, if the definition of the unit vector y "fails", one of the cameras 50, 51 has detected the position of light-emitting element X incorrectly.
Although the coordinate system has been constructed mathematically, it can be understood more easily by considering motions of the first camera 50 (that is, panning, tilting and/or rolling, with the position of the first camera 50 unchanged). Figures 20c and 20d illustrate this. A reference frame RF is shown to aid understanding of the coordinate system and of the computation of the point of closest approach. This reference frame corresponds, for example, to what is seen through the viewfinder of the first camera 50.
As shown in Figure 20c, the first camera 50 is moved so that its sensed position X1 of light-emitting element X (that is, not the true position) lies exactly at the centre of the field of view. The position X1 thus forms the origin of the new coordinate system. The z direction of this coordinate system (the direction away from the first camera 50) is then the direction of the vector r1 (as defined by the equation above).
The first camera 50 is then rotated until the line of sight r2 of the second camera 51 points "upwards" relative to the first camera 50, that is, until r2 is parallel to the y direction. Figure 20d depicts this situation. It will be appreciated that Figure 20d is a two-dimensional depiction of the coordinate system, and that the transformed line of sight r2 of the second camera 51 also has a component in the z direction. It can clearly be seen from Figure 20d that the closest approach occurs exactly where the line of sight r2 of the second camera 51 crosses the x axis.
More mathematically, in this new coordinate system the equation of the line r2 from the second camera 51 to the sensed position X1 of light-emitting element X is:

r2 = ((c2·x), (c2·y) + t2(r2·y), (c2·z) + t2(r2·z))

where t2 is the parameter varying along line r2, as above.
In these coordinates, the equation of the line r1 from the first camera 50 is:

r1 = (0, 0, t1(r1·z))
For any value of t2, the value of t1 can be adjusted so that the z coordinates of the two equations defined above are equal. The closest approach therefore occurs where the y coordinate is zero:

(c2·y) + t2(r2·y) = 0
t2 = -(c2·y)/(r2·y)
At that point, the distance between lines r1 and r2 is:

(c2·x)
By substituting t2 into the z coordinate of line r2, the midpoint Xm between lines r1 and r2 at the point of closest approach can be found. This point is:

((c2·x)/2, 0, (c2·z) - ((r2·z)(c2·y)/(r2·y)))
This point can now be transformed back into the conventional coordinate system.
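The construction above can be sketched end to end as follows. This is a minimal illustration under our own assumptions (the use of `numpy` and the function name are ours); it builds the z, y and x unit vectors, evaluates the patent's midpoint formula, and transforms the result back into the original coordinates.

```python
import numpy as np

def closest_approach_midpoint(c1, r1, c2pos, r2):
    """Midpoint of closest approach between line c1 + t1*r1 and line
    c2pos + t2*r2, using the z/y/x unit-vector construction above.
    Assumes r1 and r2 are not parallel (as the text notes, parallel
    sight lines indicate a detection error)."""
    c2 = c2pos - c1                      # camera 2 relative to camera 1
    z = r1 / np.linalg.norm(r1)
    y_raw = r2 - np.dot(r2, z) * z       # component of r2 orthogonal to z
    y = y_raw / np.linalg.norm(y_raw)
    x = np.cross(z, y)
    # parameter along r2 at which its y coordinate vanishes
    t2 = -np.dot(c2, y) / np.dot(r2, y)
    # midpoint Xm in the (x, y, z) system, per the formula above
    xm_local = np.array([np.dot(c2, x) / 2.0,
                         0.0,
                         np.dot(c2, z) + t2 * np.dot(r2, z)])
    # transform back into the original coordinate system
    return c1 + xm_local[0] * x + xm_local[1] * y + xm_local[2] * z
```

For lines that intersect exactly, the midpoint coincides with the intersection; for skew lines, it is the midpoint of the shortest connecting segment.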
The processing described above indicates how a light-emitting element can be located uniquely in three dimensions. Before this processing is carried out, however, it must be ensured that the cameras used to locate light-emitting elements are correctly calibrated. Figure 21 is a flowchart of the steps performed by the camera calibration process. At step S22, calibration is carried out to take account of the attributes of each individual camera. Such calibration can be carried out when the camera is manufactured and/or immediately before use, and covers attributes such as chromatic aberration and zoom.
The calibration of step S22 must take account of various camera artefacts. For example, some lenses may exhibit distortion at their edges (for example a fisheye effect). Ideally, such distortion should be determined when the camera is manufactured. Alternative methods can, however, be used. For example, a large test card with a known colour pattern can be placed in front of the camera and the resulting image processed. In an alternative embodiment of the invention, this calibration is carried out by sensing the light-emitting elements with the camera in a situation in which the expected image is known in advance.
In addition, some cameras may have a manually adjustable zoom factor that cannot be sensed directly. Since the zoom may be adjusted in the field, correction may be needed. This too can be achieved using a test target at a known distance, or using the arrangement of light-emitting elements itself.
Although the processing described above allows light-emitting elements to be located relative to the cameras, data on the camera positions is needed if absolute positions in space are required. At step S23, the camera positions are calibrated.
The processing of step S23 can be performed in many ways. A first method comprises physically measuring the camera positions and subsequently marking those positions on a map. An alternative position calibration method comprises locating the cameras electronically: for an outdoor installation, for example, a camera equipped with GPS and a digital compass can be used.
The methods above determine absolute camera positions in space. This in turn allows the cameras to be located relative to one another and, as described above, allows lamps to be located relative to the cameras. An alternative method of locating the cameras relative to one another comprises locating them by reference to a plurality of light-emitting elements. Since the light-emitting elements are detected simultaneously, albeit observed from different angles and distances, this information can be used to derive the relative positions of the cameras. Such a plurality of light-emitting elements can be the very elements being located. This method of obtaining relative position data can also be used with a special light-emitting element configuration of known dimensions: for example, a wire cube or a pyramid with lamps placed at its vertices can be used. Because the dimensions are known, it is relatively easy to calibrate the camera angles relative to one another with respect to the known sources. The cameras can also be located relative to one another by pointing them at each other, each camera carrying a visible or invisible light source; the cameras can then be located relative to one another by triangulation.
The above process for locating cameras relative to one another can be enhanced by using a device such as a laser pointer mounted on each camera. For example, a laser pointer mounted on each camera can allow the centre of each camera's field of view to be focused on a single known location. If a small array of light sources (visible or invisible to the human eye) is placed on each camera, and the cameras are pointed at each other (while holding their positions), their relative orientations can be computed, and hence the relative positions of the cameras can be determined.
The localisation methods described above have various shortcomings, and some of the methods cannot provide unambiguous data in all circumstances. For example, as noted above, if the cameras are located relative to light-emitting elements (whether the configuration of those elements is known or not), a linear scaling of the particular configuration of cameras and lamp positions leaves the image at each camera unchanged. This means that at least one measurement must be known, or must be measured by some other method. Although such a method may therefore provide ambiguous data, this may be unimportant in practice: in some embodiments of the invention, only relative sizes matter.
A similar problem arises when two cameras are calibrated relative to one another: even when the camera positions are known, there are multiple configurations of lamp positions and camera orientations that produce identical content at each camera. For accurate localisation, therefore, at least three camera positions should generally be used (three cameras are not needed; a single camera can be placed at three different positions in turn). Again, whether this matters in practice depends on the embodiment of the invention in which the method is employed.
Returning to Figure 21, the final stage of camera calibration is the fine correction carried out at step S24. Typically, this fine correction is concerned with ensuring that the cameras are correctly aligned with one another, and general-purpose algorithms can be used. For example, techniques such as simulated annealing or hill climbing, or other general algorithms, can be used to minimise the differences between the positions of light-emitting elements as sensed by different cameras. Alternatively, a multi-step correction (effectively a form of hill climbing) can be carried out using simpler heuristics. Such a method is now described.
The fine correction method is based on comparing the estimated positions of the light-emitting elements, projected onto the camera plane, with their measured positions. By measuring particular systematic deviations, corrections can be made to particular aspects of the assumed position and orientation of a camera.
Figures 22A to 22D illustrate four different types of deviation. In each image, five light-emitting elements are detected. In these images, the expected position of each light-emitting element is represented by a filled circle, and the measured position of each light-emitting element is represented by an open circle.
Figure 22A illustrates a deviation caused by a systematic error in the horizontal, or X, direction. It can be seen that each filled circle lies to the left of the corresponding open circle, but is perfectly aligned in the vertical, or Y, direction. Such an error is caused by the left-right orientation (yaw) of the camera, or by a translation in the X plane. The two cases can be distinguished by checking whether the effect is uniform for all lamps or is correlated with the distance to each lamp.
Figure 22B illustrates a deviation caused by a systematic error in the Y direction. It can be seen that each filled circle lies directly above the corresponding open circle. Such an error is caused by the up-down orientation (pitch) of the camera, or by an error in the height of the camera position.
Figure 22C illustrates a proportional deviation in the X and Y directions. Such an error is caused by the orientation (roll) of the camera's assumed plane. Figure 22D illustrates a deviation caused by the camera's zoom factor.
Once the images of Figures 22A to 22D have been processed, the required corrections determined, and those corrections carried out (step S24), the cameras are correctly configured.
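As an illustration of how such systematic deviations might be detected automatically, the following sketch classifies the dominant deviation between expected and measured 2D positions in the spirit of Figures 22A to 22D. It is written under our own assumptions (function name, tolerances and the scale test about the pattern centre are ours), not taken from the patent.

```python
import numpy as np

def diagnose_deviation(expected, measured, tol=1e-6):
    """Classify the dominant systematic deviation between expected and
    measured 2D positions (Figures 22A-22D style)."""
    expected = np.asarray(expected, dtype=float)
    measured = np.asarray(measured, dtype=float)
    delta = measured - expected
    dx, dy = delta[:, 0], delta[:, 1]
    if np.allclose(dy, 0, atol=tol) and not np.allclose(dx, 0, atol=tol):
        return "x-offset (yaw or x-translation)"       # Figure 22A
    if np.allclose(dx, 0, atol=tol) and not np.allclose(dy, 0, atol=tol):
        return "y-offset (pitch or height error)"      # Figure 22B
    # proportional deviation: measured ~ scale * expected about the centre
    centre = expected.mean(axis=0)
    ratios = np.linalg.norm(measured - centre, axis=1) / \
             np.linalg.norm(expected - centre, axis=1)
    if np.allclose(ratios, ratios[0], atol=1e-3) and not np.isclose(ratios[0], 1.0):
        return "scale (zoom factor)"                   # Figure 22D
    return "other (e.g. roll)"
```

The returned label would then select which aspect of the camera's assumed position and orientation to correct.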
The processing described above can then be used to detect light-emitting elements and their positions in space. It will be appreciated that the various processes described above can be modified in many ways. Some such modifications are now described.
It may be desirable for a light-emitting element to transmit its identification code in a manner that is invisible to a human observer, or at least is not directly apparent. For example, it may be desirable for identification codes to be transmitted while the light-emitting elements are displaying an image. In this case, the identification codes should be transmitted in a manner that does not disturb the image visible to the human observer. One technique for achieving this comprises transmitting the identification code by modulating the intensity of the light-emitting element. For example, if a light-emitting element has an intensity range from 0 to 1, the image can be displayed using intensities between 0 and 0.75. When the identification code is being transmitted, light can be emitted at full intensity (that is, 1). The light emitted to display the image and the light emitted to convey the identification code are thus distinguished only by a small difference. A human observer is unlikely to perceive so small a difference, but a camera used to locate the light-emitting elements can detect it relatively easily through a simple modification of the image processing methods described above.
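The intensity-headroom scheme described above can be sketched as follows. This is a minimal sketch using the 0 to 0.75 display range given in the text; the function names and the simple thresholding decoder are our own illustrative assumptions.

```python
def drive_level(image_level, code_bit, headroom=0.75):
    """Map a displayed image level (0..1) and one identification-code
    bit onto the element's physical intensity range.  The image uses
    only 0..headroom; a code bit of 1 pushes output to full intensity."""
    display = image_level * headroom        # image confined to 0..0.75
    return 1.0 if code_bit else display

def decode_sample(intensity, headroom=0.75):
    """Camera side: anything above the display headroom is a code bit."""
    return 1 if intensity > headroom else 0
```

Since the brightest image level (0.75) never exceeds the headroom, the decoder's threshold cleanly separates image light from code light.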
Where coloured light-emitting elements are used in an embodiment of the invention, operations in colour space can be exploited, the human eye typically being less sensitive to such operations. For example, the human eye is typically less sensitive to changes of hue (spectral colour) than to differences of brightness. This phenomenon is exploited in various image coding schemes, such as the JPEG image format, in which hue is encoded using fewer bits of the image signal. The human eye is far less likely to notice a small change of hue at constant brightness and saturation than a comparable fluctuation in brightness or saturation. Therefore, by transmitting the identification code through hue changes, the code can be transmitted effectively without disturbing the image perceived by the human observer.
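One way such a hue-shift encoding might look, using the standard library's `colorsys` conversions, is sketched below. The function name and the size of the hue nudge are our own assumptions; the point illustrated is that lightness and saturation are held constant while only the hue carries the code bit.

```python
import colorsys

def encode_bit_in_hue(r, g, b, bit, delta=0.02):
    """Nudge the hue of an RGB pixel (components in 0..1) by a small
    amount to carry one code bit, keeping lightness and saturation
    unchanged so the change is hard for a human observer to notice."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    if bit:
        h = (h + delta) % 1.0     # small hue rotation encodes a '1'
    return colorsys.hls_to_rgb(h, l, s)
```

A camera-side decoder would compare the observed hue against the expected image hue and threshold the difference.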
The preceding description has concentrated on locating light-emitting elements on the basis of identification codes transmitted via visible light emitted by the elements through the atmosphere. In alternative embodiments of the invention, the identification codes are instead transmitted using invisible light. For example, in addition to the visible light source described above, each light-emitting element can include an infrared source which transmits the light-emitting element's identification code in the manner described above. The use of infrared light is convenient where the digital camera uses a charge coupled device (CCD) to produce images, since infrared light is detected well and the detected infrared light appears in the captured images as pure white regions.
Transmitting identification codes in this way using infrared light (or using controlled intensity as described above) means that the codes are transmitted in a manner invisible, or barely perceptible, to the human eye. This means that identification codes can be transmitted without interrupting an image being displayed using the light-emitting elements. In a similar way, other forms of electromagnetic radiation can be used; for example, an ultraviolet source can be used to transmit the identification codes.
Using such a non-visible light source (or controlled-intensity transmission as described above) means that a light-emitting element can transmit its identification code regularly, or even continuously, without the transmission disturbing a human observer. Such continuous or regular transmission of identification codes has several advantages. For example, in some embodiments of the invention the light-emitting elements are not arranged in a fixed manner but move while the image is being displayed. It is then desirable to track each light-emitting element as its position changes, using a suitable tracking algorithm.
An example of tracking using images produced by a camera of the type described above is now given. Once a transition region has been identified as a light-emitting element by the process described above, any subsequent transition at that position, within a predetermined spatiotemporal tolerance, has a high probability of originating from the same source. If, however, identification codes are transmitted continuously or regularly then, given that the expected identification code of the light-emitting element is known, the identity of the light-emitting element causing each detected transition can be verified frame by frame to ensure that this assumption is correct.
This additional information provides more up-to-date information from which the position of the light-emitting element can be extrapolated. It allows the identity of a light-emitting element to be verified more quickly than by waiting for a complete identification code to be received. This allows embodiments of the invention to react more quickly to movement of the light-emitting elements.
In embodiments of the invention in which identification codes are transmitted irregularly or discontinuously, the light emitted by an operating light-emitting element still allows some tracking to be carried out. More specifically, given that the approximate position of a light-emitting element is known (from the processing described above), some tracking capability can be provided by observing the output of the frequency band-pass filter described above. This is particularly useful in embodiments of the invention in which the light-emitting elements are not highly mobile, but drift slightly over time.
Use of the BPSK modulation scheme is advantageous for the tracking algorithm, because BPSK modulation produces a higher transition rate and therefore provides more up-to-date positional information during tracking.
In some circumstances it is useful to disregard the error correcting capability of the Hamming code used, as described above, to transmit the identification codes. For example, when an identification code is first detected, the processing will typically ensure that the received codeword is error-free, repeating the necessary processing until an error-free identification code is received; this reduces the probability of false positives. Once the identification code has been determined, however, embodiments of the invention can accept codewords in which one or more bits may be in error.
In some embodiments of the invention, the location of light-emitting elements can be carried out using a single camera which is moved to a plurality of different positions, the images produced at the different positions being used together to perform the location. Indeed, many of the processes described above can be performed either offline or online. In other words, the processing can be carried out online while the camera is directed at the light-emitting elements, or alternatively carried out offline using previously recorded data. Likewise, the data can be collected by sequential observations of a single camera, or by simultaneous observations of a plurality of cameras. It should be noted, however, that in general, where the light-emitting elements move, at least two cameras are usually needed for accurate location.
The description so far has considered light-emitting elements whose optical effect is substantially coincident with the element itself and its associated controller. It should be noted that the optical effect established by a light-emitting element may not be coincident with the element itself or its associated controller. For example, an LED may emit light through one or more optical fibre channels, so that the optical effect of lighting the LED appears at a place remote from the position of the LED. Similarly, light emitted by a light-emitting element may be reflected from a reflective surface, the reflective surface providing the optical effect of the light-emitting element at a spatial location different from that occupied by the element itself. Provided there is a one-to-one relationship between a light-emitting element and the place at which it takes effect, it will be appreciated that the techniques described above can be used to locate the light-emitting element appropriately.
Some light-emitting elements, however, produce their optical effect over a relatively large area, such that they cannot be treated as point sources. Indeed, relatively diffuse light sources can be used, which makes their location comparatively complex. In some cases, prior knowledge of the light source positions is useful, or even essential, in order to reduce the computational requirements and to reduce ambiguity.
In some cases, the diverging light from a single source can be assumed to lie approximately in one plane. This is the situation where a spotlight illuminates part of a wall. Here, each camera can compute the centroid of the light source, and the algorithms described above can then be applied to that centroid. The spread of light around the centroid can be used to determine the angle of the plane. A plurality of light sources effectively establishes a 3D model of the illuminated surface, which can be fed back to refine the points associated with the particular light sources illuminating the corners of a plurality of objects.
In some cases, determining the 3D extent of a divergent light source can be avoided. If the light falls on a known surface, a single camera can determine the two-dimensional extent of the source. Even where this is not the case, observations from only a single viewpoint remain valuable; in that case, the two-dimensional extent of the source's effect can be used as useful positional information.
The generation of images also involves additional complexity when divergent light sources are used. Because the sources are not points, simply turning fully on those lamps whose effect falls within the region to be illuminated will not give the desired result, since every source also produces some effect outside the region to be illuminated. Some form of best-fit determination of which light-emitting elements to light is therefore required.
A least squares approximation (common in statistics) can be used to determine which light-emitting elements to light. The three-dimensional or two-dimensional space of interest is divided into a number Np of voxels or pixels. Each voxel or pixel is labelled Pk, where k = 1...Np. A plurality (N) of light sources is provided, each light source being labelled Ii, where i = 1...N.
For each light source Ii and each voxel/pixel Pk, the level of illumination caused in that voxel/pixel by the light source is determined. This level is denoted Mki, and is based on full illumination of light source Ii. If each light source is lit to a level ILi (illumination being assumed to be measured on a normalised scale between 0 and 1), then the illumination IPk of a particular voxel/pixel Pk is given by:
IPk = Σ (i = 1 ... N) Mki · ILi
Given that the required illumination pattern over the voxels/pixels is given by DPk, the illumination level of each light source is determined so that the sum of squared errors is minimised. The sum of squared errors is given by:
E = Σ (k = 1 ... Np) [ DPk - Σ (i = 1 ... N) Mki · ILi ]²
The above equation can be solved by standard methods. The solution is:

IL = Q·MT·DP
where Q is the inverse of the symmetric positive definite matrix MT·M, DP is the vector of required illumination levels, and IL is the vector of determined light source illumination levels. This sum-of-squared-errors solution can be carried out using multiple linear regression, as described in Freund J. & Walpole R.: "Mathematical Statistics", Longman, 1986, ISBN-10: 0135620759, pp. 480 et seq.
Note that the method above can yield impossibly high brightness values for particular light sources, and may yield negative brightness values for others. In such cases, thresholding is used to set appropriate illumination levels.
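The least squares solution and the thresholding step can be sketched together as follows. This is an illustrative sketch, not the patent's implementation: `numpy.linalg.lstsq` computes the same minimiser as IL = (MT·M)^-1·MT·DP without forming the inverse explicitly, and clipping to [0, 1] stands in for the thresholding described above.

```python
import numpy as np

def source_levels(M, DP):
    """Least-squares source levels IL minimising ||M @ IL - DP||^2
    (equivalent to IL = (M^T M)^-1 M^T DP for full-rank M), then
    thresholded into the normalised range [0, 1]."""
    IL, *_ = np.linalg.lstsq(M, DP, rcond=None)
    return np.clip(IL, 0.0, 1.0)   # threshold impossible/negative levels
```

Here M is the Np-by-N matrix of per-voxel illumination contributions Mki and DP is the vector of desired levels DPk.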
In some cases, a plurality of light sources may not be independently controllable. This is the situation, for example, where the control of the light sources does not allow them to be switched on and off independently. Alternatively, each light source may have an associated reflection. In either case, each camera may detect several two-dimensional points for a single address. Given two cameras, triangulation can be carried out for each pair of candidate points detected for a single source (one point from the first camera, one from the second), and an error value computed at step S103 of Figure 23A. In general, pairs of two-dimensional points arising from different source positions give a high error value and can therefore be discarded. Occasionally, an odd coincidence of positions can produce a false positive; this is regarded as a potential problem, which can be overcome by using a large number of cameras.
In the embodiments of the invention described above, each light-emitting element has an address. Each light-emitting element also transmits an identification code, which is transmitted by the element and used in the location process. This identification code can be the light-emitting element's address, or alternatively can be different. Where the identification code and the address differ, they can be linked, for example, by a look-up table. In some embodiments of the invention, however, the light-emitting elements do not transmit identification codes under their own control. Instead, the location process is controlled by a master controller on the basis of the light-emitting elements' addresses. Such a process is now described with reference to Figure 23.
Referring to Figure 23, at step S25 all light-emitting elements are commanded to emit light, so that the cameras used in the process have a complete picture of all light sources. At step S26, all light-emitting elements are switched off. At step S27, a counting variable i is initialised to 1; during the process, this counting variable is incremented from 1 to N, where N is the number of bits in the address of each light-emitting element. At step S28, all light-emitting elements whose address has bit i set to "1" are lit. The resulting image is recorded at step S29. Step S30 determines whether further bits remain to be processed: if i is equal to N, all bits have been processed and processing passes to step S31 (described below); otherwise, i is incremented at step S32 and processing returns to step S28.
By step S31, a sequence of N images has been processed. These images take the form shown in Figure 11A, and can be processed to determine the address of each light-emitting element using the method described above.
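As a sketch of how the N recorded bit-plane images can be decoded, assuming each image has already been reduced to a set of detected blob centres (the function and data layout here are illustrative, not taken from the patent):

```python
# Decode the N-bit address of each element from the bit-plane images
# captured at steps S28/S29 of Figure 23 (illustrative data layout).
def decode_addresses(positions, bit_images):
    """positions: (x, y) blob centres seen in the all-on image (step S25).
    bit_images: list of N sets; bit_images[i] holds the centres that were
    lit when address bit i was commanded on (bit 0 = least significant)."""
    addresses = {}
    for pos in positions:
        address = 0
        for i, lit in enumerate(bit_images):
            if pos in lit:            # element was lit when bit i was '1'
                address |= 1 << i
        addresses[pos] = address
    return addresses
```

Each detected position thus accumulates one address bit per image, recovering the full address after N exposures rather than one exposure per element.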
In an alternative embodiment of the invention, the light-emitting elements may transmit their codes under their own control, but do so in response to a prompt from the master controller.
The method described above for locating light-emitting elements from the captured images uses a conventional triangulation algorithm. Such algorithms can suffer from a number of problems. For example, some light-emitting elements may be occluded from the field of view of some cameras. If only two cameras are used in the triangulation process, this means that some light-emitting elements cannot be located correctly. When a larger number of cameras is used, however, this problem can be overcome simply by triangulating from images produced by those cameras which can in fact see the light-emitting element.
A further problem with triangulation of the type described above arises from noise, camera accuracy and digitisation errors. These may mean that the imaginary lines projected from the cameras do not intersect exactly. A 'closest point' method of some form is therefore required to determine an approximate position from the lines produced. For example, a three-dimensional position can be chosen such that the sum of the squares of the differences between the projections of the estimated position onto each camera and the respective measured positions is minimised.
For example, one algorithm based on the 'closest point' method operates as follows. Taking a single light-emitting element as an example, for each camera which has recorded that element, an imaginary line is projected from the camera through the detected point of the element. For each pair of cameras which have recorded the selected element, the points of closest approach between the projected lines are calculated, and the midpoint between the lines is taken as an estimate of the true position of the element. This yields one estimated position of the element per camera pair. It also yields the distance between the lines at their closest approach, which provides a useful error metric. If the error metric of any estimated point is substantially greater than that of the other points, that point is discarded; each such point is typically produced by a camera pair in which one camera contributed a false positive at the pre-processing stage. The estimates of the remaining camera pairs are averaged to give an overall estimated position for the element. The algorithm is then repeated for each detected light-emitting element. A suitable process is shown in Figure 23A.
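A minimal sketch of the closest-approach computation for one camera pair, using the standard line-to-line formulation (the names and parametrisation are assumptions, not taken from the patent):

```python
import math

def closest_approach(p1, d1, p2, d2):
    """Midpoint of the points of closest approach between two camera rays
    p1 + t*d1 and p2 + s*d2, plus the line-to-line distance (error metric)."""
    sub = lambda a, b: [x - y for x, y in zip(a, b)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b            # zero if the two rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p1[i] + t * d1[i] for i in range(3)]   # closest point on ray 1
    q2 = [p2[i] + s * d2[i] for i in range(3)]   # closest point on ray 2
    mid = [(q1[i] + q2[i]) / 2.0 for i in range(3)]
    return mid, math.dist(q1, q2)
```

The returned distance is the error metric described above: rays that genuinely observe the same element pass close to one another, while mismatched rays give a large value.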
Referring to Figure 23A, at step S100 an empty results_set array is initialised. This array stores a pair in each element, each pair comprising an estimate of the source position and an error metric. At step S101, a counter variable c is initialised to zero. At step S102, the position estimate for the camera pair indicated by the counter c is calculated, and at step S103 the error metric for that camera pair is also calculated. At step S104, a pair comprising the position estimate calculated at step S102 and the error metric calculated at step S103 is added to the results_set array. The counter c is incremented at step S105, and at step S106 a check is carried out to determine whether further camera pairs remain to be processed. If so, processing returns to step S102; otherwise processing proceeds to step S107. At step S107, the mean of the error metrics over all elements of the results_set array is calculated.
Having calculated this mean at step S107, a further counter variable p is initialised to zero at step S108. This counter indexes each element of the results_set array in turn. At step S109, the mean error value calculated at step S107 is subtracted from the error value associated with element p of the results_set array, and a check is carried out to determine whether the result of this subtraction is greater than a predetermined threshold. If so, element p of the results_set array represents an outlying value. Such an outlying value is removed at step S110, and the mean error over the remaining elements of the array is recalculated at step S111. If the check of step S109 is not satisfied, processing passes directly to step S112, where the counter p is incremented; processing then passes to step S113, where a check is carried out to determine whether further elements remain to be processed. If so, processing returns to step S109; otherwise processing proceeds to step S114.
At step S114, a mean position estimate is calculated over all elements of the results_set array. At step S115, the counter p is reset to zero, and each element of the results_set array is then processed in turn. At step S116, the corresponding element of a distance array is set equal to the difference between the position estimate associated with element p of the results_set array and the mean estimate. The counter p is incremented at step S117, and a check is carried out at step S118 to determine whether further array elements remain to be processed. If so, processing returns to step S116; otherwise processing passes to step S119, where the mean distance of all points from the mean estimate calculated at step S114 is determined.
Processing then passes to step S120, where the counter p is again set to zero. At step S121, a check is carried out to determine whether the difference between the mean distance and the distance associated with element p of the distance array is greater than a bound. If so, element p of the distance array and element p of the results_set array are deleted at step S122, and the mean distance is recalculated at step S123 before the counter p is incremented at step S124. If the check of step S121 is not satisfied, processing passes directly from step S121 to step S124. A check is carried out at step S124 to determine whether further elements of the distance array remain to be processed. If so, processing returns to step S121; otherwise processing passes from step S125 to step S126, where the position of the light-emitting element is calculated as the mean of the remaining elements of the results array.
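The two-stage outlier rejection of Figure 23A can be sketched roughly as follows. This simplified version applies each rejection stage in a single pass, rather than recomputing the mean after every individual removal as the figure's loops do, so it only approximates the step-by-step process:

```python
import math

def robust_position(results, err_threshold, dist_bound):
    """Outlier-rejecting average in the spirit of Figure 23A.
    results: list of (position, error_metric) pairs, one per camera pair."""
    # Stage 1: drop estimates whose error metric is far above the mean
    # (cf. steps S107-S113).
    mean_err = sum(e for _, e in results) / len(results)
    kept = [(p, e) for p, e in results if e - mean_err <= err_threshold]
    # Stage 2: drop estimates far from the mean position (cf. steps S114-S124).
    mean_pos = [sum(p[i] for p, _ in kept) / len(kept) for i in range(3)]
    dists = [math.dist(p, mean_pos) for p, _ in kept]
    mean_dist = sum(dists) / len(dists)
    kept = [pe for pe, d in zip(kept, dists) if d - mean_dist <= dist_bound]
    # Final position: mean of the surviving estimates (cf. step S126).
    return [sum(p[i] for p, _ in kept) / len(kept) for i in range(3)]
```

The thresholds play the roles of the "predetermined threshold" of step S109 and the "bound" of step S121; suitable values would depend on camera geometry and noise.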
It will be appreciated that the process described with reference to Figure 23A is merely exemplary, and that various similar processes may be used. For example, in some embodiments of the invention further outlier removal may be carried out at each stage of the process.
If, from the viewpoint of a particular camera, two or more light-emitting elements are aligned, that camera will effectively produce an image which is the logical OR of the codes transmitted by the two elements. If the codes are sufficiently sparse, such erroneous detections can typically be identified. Moreover, provided that the elements are not aligned from the viewpoint of at least one other camera, so that the resulting imaginary lines do not intersect, the triangulation process can detect the error even if one camera has in fact seen an apparently valid code caused by two aligned elements.
An alternative triangulation scheme which attempts to solve the problem of aligned light-emitting elements is now described with reference to Figure 24. The method of Figure 24 operates on the images produced by the cameras in the manner described above, while operating in turn on pairs of images captured by different cameras. At step S33, a variable f is initialised to 1; this variable is used as a frame counter, counting each captured frame in turn. At step S34, starting with the first camera, an imaginary line is projected through each pixel at which a light-emitting element is detected. Imaginary lines are similarly projected at step S35, but from the second camera. The lines projected from the first and second cameras are intersected, and any point of intersection of the lines is taken to be a detected light-emitting element. This constitutes a logical AND operation, which is carried out at step S36. If the AND operation succeeds, a light-emitting element is recorded at step S37; alternatively, if the AND operation fails, no light-emitting element is recorded at step S38. Processing then passes to step S39, where a check is carried out to determine whether all frames have been processed. If not, the frame counter f is incremented at step S41 and processing returns to step S34. If all frames have been processed, processing ends at step S40.
The processing described above for locating the light-emitting elements is carried out under the control of PC1. Figure 24A shows the processing carried out by PC1. At step S200, a camera is connected to PC1. At step S201, a command is issued to the light-emitting elements to be located, causing them to emit light representing their identification codes in the manner described above. This is achieved by providing suitable commands to the control elements 6, 7, 8 (Figure 5), which in turn provide commands to the light-emitting elements along the buses 9, 10, 11, in the form of the CALIBRATE command described with reference to Figures 9B and 9C.
At step S202, data is received from the connected camera, and a check is carried out at step S203 to determine whether an acceptable number of light-emitting elements has been identified. At step S204, a check is carried out to determine whether the image currently being processed is the first image to be processed. If so, the position of the camera is used as the origin at step S205, and at step S206 data is stored indicating that this camera is at the origin, together with data indicating the positions of the light-emitting elements relative to that origin. If the check of step S204 determines that this is not the first image to be processed, processing passes to step S207, where the position of the camera currently being processed is determined, for example using the camera-positioning techniques described above. Processing then passes from step S207 to step S206, where data indicating the camera and light-emitting element positions is stored.
Processing passes from step S206 to step S208, where a check is carried out to determine whether further images (that is, camera positions) remain to be processed. If so, processing returns to step S200; otherwise processing ends at step S209.
Figure 24B shows the processing carried out by PC1 to locate the light-emitting elements from the data stored by the processing of Figure 24A. At step S215, a check is carried out to determine whether further light-emitting elements remain to be located. If no such element exists, processing ends at step S216. If such an element does exist, a light-emitting element is selected for location at step S217, and at step S218 the images containing the element to be located are identified. At step S219, images with anomalous readings are discarded. At step S220, a check is carried out to determine whether more than one image contains the element to be located. If not, the element cannot be correctly located, and processing returns to step S215. If, however, more than one image is found to contain the lamp to be located, a pair of images is selected for processing at step S221, and the triangulation described above is carried out at step S222. At step S223, the position data obtained from the triangulation operation is stored.
At step S224, a check is carried out to determine whether further images containing this light-emitting element exist. If such an image does exist, processing returns to step S221, where further position data is obtained. When no further images remain to be processed, processing proceeds to step S225, where statistical analysis is carried out to remove anomalous position data. At step S226, the remaining position data is aggregated, after which the finally determined position data is stored at step S227.
Figure 24C is a screenshot of a graphical user interface provided by an application running on PC1, which allows the calibration process described with reference to Figures 24A and 24B to be carried out. It can be seen that the interface provides a calibrate button 150, which can be used to cause the light-emitting elements to emit their identification codes so as to allow the identification operation to be carried out. A region 151 is provided to allow camera positions and parameters to be configured.
The position data obtained using the processing described can be stored in an XML file. This XML file comprises a plurality of <light id> tags, each of the form:

<light id="65823" x="0.0005" y="0.6811" z="6.565"/>

where the number following "light id" is the light-emitting element identifier, and the numbers following x, y and z are the respective coordinates. It should be noted that in preferred embodiments of the invention the coordinates are stored to a higher precision than that shown above.
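Such a file could be read back with a standard XML parser. The sketch below assumes a hypothetical enclosing <lights> root element, which the patent does not specify:

```python
import xml.etree.ElementTree as ET

# Hypothetical enclosing root; the patent shows only the per-element tag.
xml_data = """<lights>
  <light id="65823" x="0.0005" y="0.6811" z="6.565"/>
  <light id="65824" x="0.1" y="0.5" z="6.0"/>
</lights>"""

def load_positions(text):
    """Map each light-emitting element identifier to its (x, y, z) coordinates."""
    root = ET.fromstring(text)
    return {int(el.get("id")): (float(el.get("x")),
                                float(el.get("y")),
                                float(el.get("z")))
            for el in root.iter("light")}
```

The resulting identifier-to-position mapping is exactly what the display process needs when relating element addresses to voxel addresses.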
Returning to Figure 2, it can be seen that the position information determined using the methods described above can be used to display images using the light-emitting elements. The process of displaying an image can take a variety of different forms, depending on the nature and positions of the light-emitting elements. In general, however, it should be noted that once the image to be displayed has been mapped onto a spatial representation (as shown in Figure 4), and the positions of the light-emitting elements within that representation are known, generating the display is relatively straightforward. It should be noted that in some embodiments of the invention an address is allocated to each voxel in the spatial representation. As described above, each light-emitting element also has an address, and the light-emitting elements are then placed in the space by means of the relationship between light-emitting element addresses and voxel addresses. Addressing schemes are discussed in more detail below.
The light-emitting elements can be arranged in a variety of different configurations and positions. For example, in some embodiments of the invention the light-emitting elements are arranged on a tree or similar structure, in the manner of the traditional 'fairy lights' commonly used, as described above, to decorate Christmas trees and objects in public places. Alternative embodiments of the invention use more mobile light-emitting devices, and such devices need not be linked together by wired means. For example, at events attended by large crowds, many people carry luminous devices in the form of 'glow sticks' or lamps attached to articles of clothing such as hats. A mobile telephone with a backlit LCD screen can, for example, also act as a light-emitting element. Such events include stadium events such as football matches, and the opening ceremonies of major sporting events such as the Olympic Games. Although it is well known for members of the public attending such events to carry such luminous devices, at present those devices operate independently of one another. In embodiments of the present invention these luminous devices are used to display images, as is now described.
Each luminous device has a unique address, and is located using the methods described above. In a preferred embodiment, all luminous devices transmit their identification codes continuously to enable location. This can be achieved, for example, by providing the luminous devices with infrared or ultraviolet sources of the type described above. It should be noted that in stadium-based applications the holders of the luminous devices may be positioned along one side of the stadium, that is, they may lie approximately in a single plane. A single camera may therefore be sufficient to locate the luminous devices; in other words, the triangulation methods described above may not be required. A larger stadium may, however, require a plurality of cameras for the location process, each camera capturing a different part of the stadium.
Once the luminous devices have been located, so that their positions and addresses are known, each luminous device, or more likely each group of luminous devices, is commanded to emit light. These instructions can be transmitted using any wireless data transport protocol which provides sufficient addressability. In a preferred embodiment of the invention the luminous devices can emit light of a plurality of different colours, and in such embodiments the instructions also include colour data. The holder of a luminous device is aware only that his or her own device switches on or off, or emits a different colour, and that the devices of those nearby undergo similar changes. However, although each holder perceives only local changes, a person on the opposite side of the stadium can at the same time observe the stadium-sized image displayed collectively by the luminous devices. For example, patterns, football club crests, national flags or even text such as song lyrics can be displayed.
A process by which the light-emitting elements are controlled to display a predetermined image is now described with reference to Figure 24D. At step S230, a model indicating the content to be displayed is created. This model is created using conventional graphics techniques, employing two-dimensional and/or three-dimensional graphics primitives. The model is updated at step S231. When the model is complete, it is stored as the application model 155.
At step S233, data indicating the positions of the light-emitting elements is read. At step S234, the light-emitting elements located within the region represented by the model 155 are determined. At step S235, a check is carried out to determine whether a simulation of the light-emitting elements is to be provided; such simulation is described in detail below. Where simulation is provided, a visualisation of the model is supplied to the simulator at step S236, after which the appropriate light-emitting elements are illuminated at step S237. If no simulation is required, processing passes directly from step S235 to step S237.
Figure 24E is a screenshot taken from a graphical user interface which allows the light-emitting elements to be controlled in the manner described above. It can be seen that an open button 160 is provided, allowing a model data file to be opened. In addition, a region 161 allows various standard effects to be displayed using the light-emitting elements.
Figure 24F is a screenshot taken from the simulator provided by the invention as described above. It can be seen that all light-emitting elements are displayed, those which are lit being represented more brightly. It can be seen that the light-emitting elements are controlled to display an image of a fish.
The application provided to control the light-emitting elements also allows interactive control. In particular, Figure 24G allows data defining the arrangement of the light-emitting elements to be loaded. As shown in Figure 24H, this arrangement is loaded and displayed in the simulator; it can be seen that the light-emitting elements are arranged on a Christmas tree. The interface shown in Figure 24I allows a user to select a brush. The brush is then used to 'paint' in the window of Figure 24H, which allows the appropriate light-emitting elements to be selected for illumination.
As described above, the luminous devices may move with the motion of their holders. Typical motion is, however, likely to be slow and relatively infrequent. Nevertheless, it is sometimes necessary to recalibrate the positions of the luminous devices. Such recalibration can be carried out using invisible light sources (for example infrared or ultraviolet) as described above, or alternatively, also as described above, by varying light intensity.
It should be noted that, because the luminous devices need only receive (and not transmit) data, embodiments of the invention based on mobile luminous devices allow the complexity of the luminous devices to be minimised. Only light (visible or invisible) is used for transmission.
Referring to Figure 5, the transmission of instructions for illuminating each light-emitting element from PC1 to the light-emitting elements 2 via the control elements 6, 7, 8 has been described, with certain data-transfer tasks delegated to the control elements 6, 7, 8. It will be appreciated that a similar hierarchy can be created in embodiments of the invention which use wireless luminous devices. When wireless luminous devices are used, however, the light-emitting elements may need to make dynamic or ad-hoc connections to differing and changing wireless base stations.
In the described embodiments of the invention, details of the positions mapped to the addresses are stored at PC1 or at the control elements 6, 7, 8. In alternative embodiments of the invention, however, once the position of a light-emitting element or device has been determined, that position is transmitted to the light-emitting element or device itself, or alternatively to an appropriate control element. Instructions are then transmitted by means of broadcast or multicast messages. For example, if the space containing the lamps is divided into a four-level hierarchy, a position can be represented by a quadruple. More generally, if the space containing the light-emitting elements is divided into a multi-level hierarchy, a particular region can be represented using an IP-based octree or quadtree; such methods are described in more detail below. An instruction can be transmitted addressing all lamps within the unit defined by an element at any level of the hierarchy. On receiving such an instruction, each light-emitting element determines whether it lies within any relevant element, and hence whether it should illuminate, and may also determine the colour with which it should illuminate.
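One way such hierarchical region addressing might be sketched is with octant-digit addresses, where a region at any level of the hierarchy is simply an address prefix (this is an illustrative scheme, not the patent's wire format):

```python
# Each level splits the space into 8 octants, so an element's address is a
# sequence of octant digits and a region at any level is an address prefix.
def octree_address(pos, bounds, depth):
    """pos: (x, y, z); bounds: ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    lo, hi = list(bounds[0]), list(bounds[1])
    digits = []
    for _ in range(depth):
        digit = 0
        for axis in range(3):
            mid = (lo[axis] + hi[axis]) / 2.0
            if pos[axis] >= mid:      # upper half of this axis
                digit |= 1 << axis
                lo[axis] = mid
            else:                     # lower half of this axis
                hi[axis] = mid
        digits.append(digit)
    return tuple(digits)

def in_region(element_addr, region_prefix):
    """An element obeys a broadcast instruction if its own address starts
    with the region prefix carried by the instruction."""
    return element_addr[:len(region_prefix)] == region_prefix
```

With this arrangement, each element receiving a multicast instruction need only perform a prefix comparison against its own stored address to decide whether to illuminate.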
It will be appreciated that a plurality of sets of light-emitting elements can be used together to produce a larger display.
The methods described above for locating light-emitting elements for the purpose of image display have various other applications, and one class of such applications is now described. For example, positioning devices which emit invisible light can be used to track people or equipment around a predetermined location. Such positioning devices can be located using the methods described above, although it should be noted that such positioning devices may move considerably more than the light-emitting elements described above.
In an embodiment of the invention in which people are to be located (for example around a construction site), each person wears a badge provided with an LED configured to emit infrared light. The badge is also configured to transmit continuously an identification code, suitably coded and modulated as described above. As the person moves around the site, cameras detect the identification code; the infrared light is invisible to human observers, but is clearly detectable by the cameras. If the transmitted code is detected by a single camera, this allows the person associated with the badge carrying the detected identification code to be located at least within the field of view of that camera. If the transmitted code is detected by two or more cameras, absolute positioning within the space can be carried out using triangulation methods of the type described above.
Even where the transmitted code is detected by only a single camera, this alone may be sufficient to locate the person within the space. This can be achieved by assuming that the badge remains at a height of approximately 1 metre above the ground (as is likely), and that the camera is placed substantially higher than 1 metre above the ground (for example at ceiling level on a construction site): the assumed height of 1 metre can then be used to locate the person on the plane 1 metre above the ground. In other words, the image can be used together with the height assumption to locate the badge.
As described above, triangulation using two cameras produces line equations of the following form:

(Cx + tRx, Cy + tRy, Cz + tRz)

In the situation described above, the target is known to be at a height of approximately 1 metre above the ground. Assuming that height is defined along the z dimension, it is known that:

Cz + tRz = 1

Given that the values of Cz and Rz are known, the value of t can easily be derived. It will be appreciated that, once such a value has been derived, substituting it into the equation above yields the x and y coordinate values.
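The height-plane substitution can be sketched directly (variable names are illustrative):

```python
# Intersect the camera ray (Cx + t*Rx, Cy + t*Ry, Cz + t*Rz) with the
# plane z = h, the assumed badge height above the ground.
def locate_on_plane(c, r, h=1.0):
    """c: camera position, r: ray direction. Solves Cz + t*Rz = h for t,
    then substitutes t into the line equation; fails if Rz is zero."""
    t = (h - c[2]) / r[2]
    return (c[0] + t * r[0], c[1] + t * r[1], h)
```

A ray with Rz equal to zero is horizontal and never meets the plane, which corresponds to the physical requirement that the camera be mounted above (or below) the assumed badge height.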
The example above concerns locating people on a construction site on which a plurality of cameras are installed. Very similar techniques can be used to locate items of equipment. Each item of equipment to be located is fitted with a small tagging device, which has the appearance of a small black button and includes an infrared transmitter. The transmitter transmits a unique identification code, and suitably placed cameras detect this identification code to determine the position of the equipment. It will be appreciated that the transmitter may transmit its unique identification continuously, or alternatively intermittently or periodically. Again, if the transmitted code is detected by at least one pair of cameras, the equipment can be located using triangulation. Where a single camera is used, an assumption about height level (in this case, ground level may be an appropriate assumption) can be used, as described above, to locate the equipment from the image captured by the single camera.
It should be noted that the embodiments of the invention described above need not depend on additional hardware. Indeed, the desired position-determination purposes can be achieved using existing components. In particular, the existing screens of devices such as computers can be used, as can the LED which conventionally indicates the power status of devices such as mobile telephones.
In the location examples above, reference has been made to infrared transmitters. It should be noted that in some embodiments of the invention, ultraviolet or infrared reflectors shuttered by an LCD are used. For example, the light-emitting elements in the embodiments of the invention described above can be replaced by suitable reflective surfaces. Any light source can be shone onto these reflective surfaces, thereby producing a plurality of light-emitting elements, each of which can appear as a point source of light in a manner similar to an LED. In order to control such reflective surfaces, their reflectivity must be controllable. Such reflectivity control can be achieved by providing a surface of controllable opacity (such as an LCD) over a surface of high reflectance (such as a mirror). This produces a lower-power light-emitting element, which reflects light rather than generating it.
The embodiments of the invention described above relate to the location of light-emitting elements using visible or invisible light. Some embodiments relate to the use of the located elements to transmit visible light so as to display images. It should be noted, however, that some embodiments of the present invention use sound in place of light, and such embodiments are now described.
Figure 25 provides an overview of hardware in which a plurality of sound transceivers are located and then used to emit sound in dependence upon their positions, so as to produce a three-dimensional soundscape. The hardware of Figure 25 comprises a controller PC55, which is shown in more detail in Figure 26. It can be seen that the structure of PC55 is very similar to that of PC1 shown in Figure 6, and similar components are indicated with like, primed reference numerals. Such similar components, namely the CPU 13', RAM 14', hard disk drive 15', I/O interface 16', keyboard 17', display 18', communications interface 19' and bus 20', are not described in further detail here. It should be noted, however, that PC55 additionally comprises a sound card 56 having an input 57, by means of which sound data is received, and an output 58, by means of which sound data is output, for example to loudspeakers.
Returning to Figure 25, it can be seen that PC55 is connected to loudspeakers 59, 60, 61, 62, which are connected to the output 58 of the sound card 56. PC55 is also connected to microphones 63, 64, 65, 66, which are connected to the input 57 of the sound card 56. PC55 is further configured to communicate wirelessly with a plurality of sound transceivers, which in the described embodiment take the form of mobile telephones 67, 68, 69, 70. It should be noted that, although only four mobile telephones are shown in Figure 25, practical embodiments of the invention may include a considerably larger number of mobile telephones or other suitable sound transceivers. The connections between the mobile telephones 67, 68, 69, 70 and PC55 can take any convenient form, including the use of a mobile telephone network (for example a GSM network) or the use of wireless connections under other protocols such as WLAN (assuming that PC55 and the mobile telephones 67, 68, 69, 70 are all equipped with suitable interfaces). Indeed, in some embodiments of the invention, PC55 and the mobile telephones 67, 68, 69, 70 may be linked together by wired connections. The production of a three-dimensional soundscape using the apparatus of Figure 25 is now described.
An embodiment of the invention in which PC55 directs the production of the soundscape is described first, with reference initially to Figure 27, which shows a flowchart providing an overview of the processing; the processing carried out at each step is described in more detail below. At step S45, the mobile telephones 67, 68, 69, 70 each connect to PC55. At step S46, an initial calibration is carried out to locate the mobile telephones 67, 68, 69, 70 in the space, and at step S47 this initial calibration is refined. At step S48, the mobile telephones are calibrated with respect to output volume and orientation. When these various calibration processes have been carried out, sound is presented using the mobile telephones at step S49.
Figure 28 shows the processing of step S45 of Figure 27 in more detail. At step S50, PC55 waits to receive a connection request from one of the mobile telephones 67, 68, 69, 70. When such a request is received, processing passes to step S51, where PC55 creates data, stored in a data repository, indicating that a connection has been made with the mobile telephone and indicating the address of the mobile telephone, so that data communication can be carried out with it. It should be noted that the request generated by a mobile telephone can take any convenient form. For example, where communication between the mobile telephones 67, 68, 69, 70 and PC55 is carried out over a telephone network, a mobile telephone can call a predetermined number when a connection is required, the call to the predetermined number constituting the connection request; a telephone call then exists between the mobile telephone and PC55 for the duration of the connection. Such a call can be made to a premium-rate telephone number. It should also be noted that the addresses allocated to the mobile telephones 67, 68, 69, 70 may depend on the communication mechanism used. For example, where communication is over a telephone network, the telephone number can be used as the address.
After connections between the mobile telephones 67, 68, 69, 70 and PC55 have been established, calibration is carried out at step S46 of Figure 27. This calibration is shown in more detail in Figure 29, which shows the calibration processing carried out by PC55. At step S52, PC55 causes the loudspeakers 59, 60, 61, 62 to play predetermined tones; the microphones of the mobile telephones 67, 68, 69, 70 detect these tones, and data indicating the detected tones is transmitted to PC55. The following processing is then carried out in turn for each telephone from which data is received. At step S53, the data indicating tone detection is received. At step S54, the received data is correlated with the tones output by each of the loudspeakers 59, 60, 61, 62, and this correlation is used to calculate the distance between the telephone and each of the loudspeakers 59, 60, 61, 62. Then, at step S56, the position of the telephone is determined from this distance data by triangulation. Step S57 determines whether further telephones remain to be calibrated; if so, processing returns to step S53, and otherwise processing ends at step S58.
The correlation and distance-calculation processes are now described in more detail. Each process can take a number of different forms, depending upon the nature of the sounds produced by the loudspeakers 59, 60, 61, 62. In general terms, however, the positioning process comprises matching the sound produced by each loudspeaker against the actual sound received by the microphone of a mobile phone, the received sound being a combination of the produced sounds. The received sound is then processed to identify the sound component produced by each loudspeaker.
If the loudspeakers 59, 60, 61, 62 output simple tones, the identification process can be relatively direct: a plurality of band-pass filters, each tuned to an expected frequency, can be applied to the received signal to distinguish the sounds produced by the different loudspeakers. If the signal output by each loudspeaker is switched on and off, or otherwise modulated, the time elapsed between transmission and reception of the modulation gives a good indication of the time taken for sound to travel through the air from the loudspeakers 59, 60, 61, 62 to the mobile phones 67, 68, 69, 70. Given this time, and given that the speed of sound in air is known, the distance between a loudspeaker 59, 60, 61, 62 and a mobile phone 67, 68, 69, 70 can be determined. In addition, the relative strength of each band-pass-filtered signal within the received signal provides a measure of relative distance.
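The time-of-flight distance calculation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the 343 m/s figure is an assumption for room-temperature air, the text saying only that the speed of sound in air is known.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C (assumed value)

def distance_from_flight_time(t_sent, t_received):
    """Distance travelled by a sound burst, given send and receive
    timestamps in seconds on a common (synchronised) clock."""
    flight_time = t_received - t_sent
    if flight_time < 0:
        raise ValueError("receive time precedes send time")
    return SPEED_OF_SOUND * flight_time

# A burst sent at t = 0.000 s and detected 10 ms later has
# travelled 343 * 0.010 = 3.43 m.
d = distance_from_flight_time(0.000, 0.010)
```

In practice the timestamps would come from the modulation edges detected by the band-pass filters described above.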
The information described above allows position to be determined in a number of different ways.
If the time at which sound is transmitted by the loudspeakers 59, 60, 61, 62 and the time at which the same sound is received at one of the mobile phones are both known, an absolute measurement of the distance between the mobile phone and each loudspeaker can be determined. For each loudspeaker, the mobile phone can therefore be determined to lie on a sphere centred on that loudspeaker, with a radius given by the identified distance. The intersection of three such spheres identifies the position of the mobile phone as one of two three-dimensional points, one of which can usually be discarded because it lies below ground level. If more than three loudspeakers are used (for example the four loudspeakers shown in Figure 25), the accuracy of the unique determination can be further improved.
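The intersection-of-spheres calculation can be sketched as below. This is an illustrative trilateration, not the patent's code: with four speakers, subtracting the first sphere equation from the other three cancels the quadratic terms and leaves a 3×3 linear system, which sidesteps the two-solution ambiguity the text mentions for three spheres.

```python
import math

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(3):
            if r != c and m[c][c] != 0:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * v for a, v in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

def trilaterate(anchors, dists):
    """Position from 4 known speaker positions and measured distances.
    Subtracting sphere 0 from spheres 1..3 gives a linear system."""
    (x0, y0, z0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi, zi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0), 2 * (zi - z0)])
        b.append(d0 * d0 - di * di
                 + xi * xi - x0 * x0 + yi * yi - y0 * y0 + zi * zi - z0 * z0)
    return solve3(A, b)

# Four speakers at assumed positions; distances to a phone at (1, 2, 3).
speakers = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4)]
dists = [math.sqrt(14), math.sqrt(22), math.sqrt(14), math.sqrt(6)]
pos = trilaterate(speakers, dists)  # approximately (1, 2, 3)
```

The speaker layout and the exact phone position are invented for the example; only the geometric method follows the text.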
If the transmitter and receiver clocks are not synchronised, measurement based upon the time of flight through the air is still possible. For example, if the time at which each loudspeaker transmits its signal is known, together with the relative times at which one of the mobile phones receives the same signals, then differences between the loudspeaker-to-phone distances can be determined. Pairs of loudspeakers can then be used to locate a particular mobile phone on a more complex three-dimensional surface (typically a hyperboloid of rotation, that is, a hyperbola rotated about its major axis), and the intersection of such surfaces can be used to determine a unique three-dimensional position.
Relative position can also be determined from the volume of the signals received by the microphones 63, 64, 65, 66. It should be noted, however, that because sound tends to be directional, such measurements may not be as robust.
The techniques described above work well when the loudspeakers 59, 60, 61, 62 output simple tones that can be distinguished from one another using band-pass filters. Where the loudspeakers 59, 60, 61, 62 produce more complex sounds (for example music), a more sophisticated correlation process is required. For example, the expected sound from a particular loudspeaker can be determined, multiplied by the actually received sound offset by a particular delay, and summed over a short time window. This produces a covariance for that offset, which can be treated as a measure of the signal strength at that delay. The delay having the highest signal strength will then correspond to the time of flight through the air.
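The multiply-shift-and-sum correlation just described can be sketched as follows. This is a toy sample-domain version under the assumption of discrete, equally sampled signals; the function and signal names are illustrative.

```python
def best_delay(expected, received, max_delay):
    """Return the offset (in samples) of `expected` within `received`
    that maximises the windowed correlation, i.e. the flight time."""
    best, best_score = 0, float("-inf")
    for delay in range(max_delay + 1):
        # Multiply the expected signal by the received signal offset by
        # `delay` and sum over the overlap window, as the text describes.
        score = sum(e * received[i + delay]
                    for i, e in enumerate(expected)
                    if i + delay < len(received))
        if score > best_score:
            best, best_score = delay, score
    return best

# A pulse embedded 5 samples into the received stream.
expected = [0.0, 1.0, 1.0, 0.0]
received = [0.0] * 5 + expected + [0.0] * 3
delay = best_delay(expected, received, 8)  # -> 5
```

A real implementation would normalise by signal energy and use FFT-based correlation for efficiency; the brute-force loop keeps the idea visible.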
In an alternative embodiment of the invention, the correlation and distance calculations are not carried out in the manner described above. Instead, the PC 55 computes the expected sound at each point in the space. Such a calculation is possible because the sound output from each loudspeaker is known. The received sound can then be the subject of a search over the expected points, the phone being determined to be located at the point whose expected sound most closely matches the received sound.
The manipulation of hue and luminance in the process of locating light-emitting elements was described above. Analogously, inaudible manipulations of sound can be used in locating sound sources, making it easier to create and detect positioning signals while "normal" sound is playing. For example, inaudible high- or low-frequency pulses can be mixed into the sound source, or the time/frequency characteristics of the sound can be modified in inaudible ways, in a manner similar to the methods used to compress MP3 recordings.
After the processing shown in Figure 29 has been carried out, the position of each phone is known, and the PC 55 can store this data alongside the address data for each phone. After the position data has been determined, it is refined at step S47 of Figure 27; this processing is shown in Figures 30, 31, 32 and 33.
Referring to Figure 30, at step S59 the PC 55 computes a spatial sound map, which determines the expected sound at each point in the space. After such a spatial sound map has been determined, the following processing is carried out in turn for each mobile phone. Using the position data generated as described above, the sound that should be played through the loudspeaker of the mobile phone is determined (step S60), and at step S61 the determined sound is provided to the mobile phone. Step S62 determines whether further phones remain to be processed; if so, processing returns to step S60, otherwise processing ends at step S63.
While the processing of Figure 30 is carried out, the processing of Figure 31 is carried out concurrently for each phone in turn. At step S64, the phone being processed is silenced, so that it temporarily stops emitting any sound. The phone then uses its microphone to capture the sound emitted by nearby mobile phones. The captured sound is transmitted to the PC 55, which receives it at step S65. The received sound is correlated with the spatial sound map computed at step S59 (Figure 30), and this correlation is used to refine the data stored by the PC 55 indicating the spatial position of the phone. Step S68 determines whether further phones remain to be processed. If so, processing returns to step S64, otherwise processing ends at step S69. This processing is carried out periodically, to ensure that accurate position data is maintained.
The processing of Figure 32 is carried out concurrently with the processing of Figures 30 and 31. At step S70, the PC 55 receives the sound detected by the microphones 63, 64, 65, 66. At step S71, the received sound is correlated with the spatial sound map computed at step S59 of Figure 30, and this correlation is used to determine a map indicating the relative volume at each point in the space occupied by the phones (step S72).
Typically, the loudspeakers of some mobile phones are louder than others, and in addition some regions will contain more mobile phones than other regions. It may therefore be necessary to adjust the volume at which each mobile phone plays in order to achieve the required soundscape. To this end, the actual volume of the sound produced by all of the phones in each region must be calculated, so as to produce a volume map for that region.
In a simple case, the volume map can be produced by arranging for the mobile phones in a particular region to produce a fixed tone. The volume of the sound generated by these fixed tones is then measured from a number of known locations (using fixed microphones, or alternatively using the microphones of other mobile phones). By comparing the measured sound with the known volume that would be expected from a loudspeaker of known position and known power, the effective power at that position can be determined. This processing is carried out for each region in sequence, producing the volume map.
Although the method described above works well, it is not preferred in some embodiments of the invention because it is relatively disruptive. More complex techniques based upon band-pass filtering or correlation can therefore be applied to the composite sound received across the whole region. In much the same way as signals from the fixed loudspeakers are extracted from the sound captured by each phone (as used in the localisation method described above), the signals from the fixed microphones can be filtered or correlated against the sound generated by each region, producing for each region a signal strength that can be compared with the expected strength described above, so as to determine the output power in a particular region.
Figure 33 illustrates further processing used to refine the calibration. This processing, which corresponds to step S48 of Figure 27, is carried out for each phone in turn. At step S73, the phone is silenced so that it outputs no sound. At step S74, the PC 55 receives the sound captured by the microphone of the phone. At step S75, the relevant data is combined with the position data for that phone. Using this data, the orientation of the mobile phone is calculated at step S76, and its gain is calculated at step S77. Step S78 determines whether further phones remain to be processed; if so, processing returns to step S73, otherwise processing ends at step S79.
As mentioned above, the gain of the microphone of a particular phone is calculated. Once the position of the mobile phone has been calculated, the volume of the signal received by that mobile phone can be compared with the signal that a reference receiver would be expected to receive at that known location. This allows the gain of the mobile telephone's microphone to be calculated. In other words, if a microphone having the reference sensitivity would be expected to receive a signal of strength 50 at the known location, and the actually received signal strength is 35, the mobile phone can be considered to have 70% sensitivity. If signals from this mobile phone are used later (for example when refining the volume map or the position data), the received values can be manipulated using this known gain value, so as to convert them to the values that would be expected for a microphone having the reference sensitivity.
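The gain calibration above reduces to two one-line calculations, sketched here with the text's own numbers (expected strength 50, measured 35, giving 70% sensitivity); the function names are illustrative.

```python
def microphone_gain(expected_level, measured_level):
    """Sensitivity of a phone's microphone relative to a reference
    microphone at the same known location."""
    return measured_level / expected_level

def normalise(reading, gain):
    """Convert a raw reading from a phone into the value a microphone
    of reference sensitivity would have reported."""
    return reading / gain

gain = microphone_gain(50, 35)  # 0.7, the 70% sensitivity in the text
ref = normalise(21, gain)       # a later raw reading of 21 corresponds to 30.0
```

Later volume-map or position refinements would pass every reading from that phone through `normalise` before use.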
The determination of the orientation of each mobile phone was also described above. If a mobile phone is known to be equidistant from two loudspeakers producing sound of equal volume, and the strength of the signal from one loudspeaker is higher than that from the other, it can be inferred that the microphone is oriented towards the loudspeaker from which the strongest signal is received. Typically, obtaining similar readings from a number of loudspeakers will give a more accurate estimate of rotation. It should be noted that although orientation can be calculated in this way, mobile phones are hand-held, so that if orientation is assumed to change rapidly over time, such information may not be of great value. However, for alternative embodiments in which the devices have a more fixed orientation, this level of calibration can allow directional, spatially organised sound to be produced.
Figure 34 illustrates the processing of step S49 of Figure 27, carried out by the PC 55 to produce the desired sound using the mobile phones. At step S80, the desired spatial sound is calculated, and at step S81 this spatial sound map is combined with the desired volume map to produce a modified spatial sound map at step S82. The following processing is then carried out in turn for each phone. The position of the mobile phone (as previously determined) is obtained. Using this position data, a lookup operation is carried out in the spatial sound map produced at step S82, so as to determine the sound to be output by that phone (step S83). Then, at step S84, the desired sound is provided to the phone. Step S85 determines whether further phones remain to be processed. If so, processing returns to step S84, otherwise processing ends at step S86.
The processing described above with reference to Figures 28 to 34 is the processing carried out by the PC 55. The processing carried out by one of the mobile phones 67, 68, 69, 70 is now described with reference to the flowchart of Figure 35. At step S87, the mobile phone connects to the PC 55 using processing of the type described above. The mobile phone then executes two processing streams in parallel. The first processing stream comprises receiving audio data from the PC 55 (step S88) and outputting the received audio data through the loudspeaker of the mobile phone (step S89), so that this mobile phone, in combination with the other mobile phones, produces the three-dimensional soundscape. The second processing stream captures sound using the microphone of the mobile phone (step S90) and transmits it to the PC 55 (step S91). The second processing stream provides the PC 55 with data allowing the position data to be maintained and refined.
In the embodiments of the invention described above which produce a three-dimensional soundscape, the central PC 55 determines the sound to be output from each phone and provides the appropriate audio data. In alternative embodiments of the invention, the phones can themselves determine what sound to output. Figure 36 illustrates such an embodiment.
Referring to Figure 36, at step S92, calibration data for calibrating the mobile phone is downloaded. This calibration data can comprise data indicating the tones that the mobile phone is to produce during the calibration process, and can also comprise data indicating the sounds expected to be produced by other devices at different spatial locations. At step S93, the sounds produced by other mobile phones are received through the microphone of the mobile phone, and at step S94 a correlation operation is carried out between the calibration data and the received sound. The correlation operation can be carried out as described above, although it should be noted that, in general, given the relatively limited processing capacity of a mobile phone, correlation operations requiring relatively little computing power are preferred. After such a correlation operation has been carried out, the position of the mobile phone can be determined at step S95.
After the process described above has been carried out, the mobile phone is configured to take part in the generation of a soundscape of the type described above. Accordingly, at step S96, audio data indicating the sound to be produced is downloaded. At step S97, the received audio data is processed using the determined position data, and the sound to be output by this mobile phone is determined from that audio data. The determined sound is then output at step S98.
It should be noted that although steps S96 to S98 are shown as occurring after steps S92 to S95, in some embodiments of the invention the processing of steps S96 to S98 is executed in parallel with the processing of steps S92 to S95.
Having described embodiments of the invention using light and sound, addressing schemes suitable for use with embodiments of the invention are now described. As explained above (for example with reference to Figure 5), the control of light-emitting elements is preferably handled hierarchically. Preferably, each control element 6, 7, 8 controls the light-emitting elements within a predetermined part of the space to be illuminated. In other words, if a suitable addressing mechanism is used, each level of the hierarchy need only process part of the address. For example, a first part of the address can simply indicate one of the control elements; this is the only part of the address that the master controller PC 1 needs to process. A control element can then use a second part of the address, which identifies individual light-emitting elements in detail, to instruct the correct light-emitting element. The addressing scheme is now described in more detail.
At present, a spatial addressing system is preferred, in which light-emitting elements can be addressed on the basis of a spatial address; for example, an instruction can be given to turn on all lamps within a 10 cm cube centred on the coordinate (12, -3, 7). Referring to Figure 37, it can be seen that a spatial address 75 can be converted into a plurality of native addresses 76, each native address being associated with a light-emitting element indicated and located by the spatial address.
It should further be noted that the presently preferred embodiment of the invention uses IPv6 addresses. As shown in Figure 38, an IPv6 address is 128 bits (16 bytes) long and typically consists of two logical parts: a 64-bit network prefix 77 and a 64-bit host addressing suffix 78.
The 64-bit host addressing suffix 78 is not interpreted outside the network indicated by the 64-bit network prefix 77; the host addressing suffix 78 can therefore be used to encode information directly relevant to the network indicated by the network prefix 77. Three-dimensional position data can be encoded in the 64-bit suffix as shown in Figure 39, in which it can be seen that the 64-bit host addressing suffix comprises a first component 79 indicating an x coordinate, a second component 80 indicating a y coordinate, and a third component 81 indicating a z coordinate. Each of the three components comprises 21 bits, with 1 bit unused. Providing 21 bits for each of the x, y and z coordinates allows individual cubes of 1 cubic millimetre to be addressed within a 2 km cube. Alternatively, this addressing scheme can provide three-dimensional addressing of the Earth, allowing a multi-resolution mapping with 1 metre resolution in longitude and latitude, 1 metre height resolution up to 10,000 metres, and 10 metre height resolution up to 100,000 metres — sufficient to locate, for example, any aircraft or ship.
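The three-by-21-bit suffix layout of Figure 39 can be sketched as a simple pack/unpack, shown here for unsigned coordinates. The ordering (x in the high bits, then y, then z, spare bit left zero) is an assumption for illustration; the patent fixes only the 21/21/21/1 split.

```python
COORD_BITS = 21
MASK = (1 << COORD_BITS) - 1  # 2^21 - 1 = 2097151

def encode_suffix(x, y, z):
    """Pack three 21-bit unsigned coordinates into a 64-bit host suffix."""
    for c in (x, y, z):
        if not 0 <= c <= MASK:
            raise ValueError("coordinate out of 21-bit range")
    return (x << (2 * COORD_BITS)) | (y << COORD_BITS) | z

def decode_suffix(suffix):
    """Recover (x, y, z) from a packed 64-bit host suffix."""
    return ((suffix >> (2 * COORD_BITS)) & MASK,
            (suffix >> COORD_BITS) & MASK,
            suffix & MASK)

suffix = encode_suffix(12, 3, 7)
```

At millimetre resolution this addresses a 2097151 mm (roughly 2 km) cube, matching the figure quoted above; a real deployment would also fix an origin and sign convention for coordinates such as (12, -3, 7).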
This is considerably more fine-grained addressing than most applications require. In practice, smaller, non-cubic addressing can be used; the coordinate frame for such applications is typically relative to some point in the display, or to an originally calibrated camera position.
In an alternative embodiment, the host addressing suffix 78 can be divided into two parts of 32 bits each, indicating two-dimensional position data. Indeed, it will be appreciated that the network indicated by the network prefix 77 can interpret the host addressing suffix 78 in any convenient way; the host addressing suffix 78 can therefore represent, for example, a combination of spatial position, time and direction, and in certain embodiments even a combination of a book's ISBN and a page number.
Figure 40 illustrates a two-dimensional encoding of longitude and latitude, in which the host addressing suffix 78 comprises two parts. A first part 82 comprises 31 bits and represents longitude, and a second part 83 comprises 32 bits and represents latitude. There is also a third part comprising an unused bit. Such an addressing scheme provides addresses associated with 1 square centimetre of the Earth's surface. It should be noted that the second part 83, representing latitude, comprises one more bit than the first part 82. This is because the circumference of the Earth is about 40,000 km, while the distance from the North Pole to the South Pole is 20,000 km. The addressing scheme shown in Figure 40 allows a network to be represented in which a virtual web server is provided for each point on the Earth's surface, these web servers providing data such as altitude and land use. Alternatively, such web servers can be used to provide geospatial URIs for the semantic web.
Referring to Figure 41, IPv6 addresses of the type described above can be transmitted between a first computer 84 and a second computer 85 via the Internet 86. Although the host addressing suffix of such an address can represent spatial information, the Internet 86 routes using only the network prefix 77; addresses of the type described above can therefore be transmitted transparently over the Internet 86.
When an address arrives at the network indicated by its network prefix 77, the 64-bit suffix is converted into local non-spatial addresses. Figure 37 schematically shows this conversion.
In an alternative embodiment of the invention, a network of suitably configured routers and network controllers can interpret IPv6 addresses representing spatial information, the routers and network controllers being aware of the way in which spatial addressing is carried out. Embodiments of such a network can therefore handle broadcast and multicast messages by maintaining ranges of spatial addresses in the routers, so that a message is delivered only to the relevant network nodes. Figure 42 shows such an embodiment of the invention.
Referring to Figure 42, it can be seen that a first router 87, a second router 88 and a third router 89 are connected to a network 90. It can be seen that data for the address 2001:630:80:A000:FFFF:5856:4329:1254 is transmitted over the network. This data and its associated address are forwarded to the three routers 87, 88, 89. As described above, the address encapsulates spatial data. Assuming that the routers 87, 88 have been spatially configured, they can determine that their respective connected devices 91, 92 do not need the data associated with this spatial position; accordingly, the routers 87, 88 do not forward the data. The router 89, by contrast, determines that the three components connected to it do need to receive the data for this spatial position, and accordingly forwards the data to the components 93.
It should be noted that operation of the invention as shown in Figure 42 requires the use of a spatially aware routing protocol. Such a protocol can include the transformation of data from one coordinate system to another.
One such spatial routing protocol for use in embodiments of the invention associates each router 87, 88, 89 with a three-dimensional bounding box containing all of the devices connected to that router. For routers relatively high in the hierarchy, a bounding box is calculated that contains the bounding boxes of all connected routers. In such a system, a spatial address can therefore be compared with a router's bounding box; if the addressed region lies within the bounding box, the message is passed to the lower-level routers, at which the process is repeated.
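The bounding-box test at the heart of that routing decision can be sketched as follows. This is an illustrative geometry helper, not a routing protocol; the box representation ((min corner), (max corner)) and all coordinates are assumptions.

```python
def box_contains(outer, inner):
    """True if bounding box `inner` lies wholly inside `outer`.
    Boxes are ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (olo, ohi), (ilo, ihi) = outer, inner
    return (all(a <= b for a, b in zip(olo, ilo))
            and all(b <= a for a, b in zip(ohi, ihi)))

def boxes_overlap(a, b):
    """True if the addressed region `b` intersects a router's box `a`,
    in which case the message would be forwarded down the hierarchy."""
    (alo, ahi), (blo, bhi) = a, b
    return all(lo1 <= hi2 and lo2 <= hi1
               for lo1, hi1, lo2, hi2 in zip(alo, ahi, blo, bhi))

router_box = ((0, 0, 0), (10, 10, 3))   # region served by one router
target = ((2, 2, 0), (4, 4, 2))         # region named in a spatial address
forward = boxes_overlap(router_box, target)  # True: pass the message down
```

A router high in the hierarchy would apply `boxes_overlap` to each child's box and forward the message only to children whose box intersects the addressed region.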
Using a high-resolution spatial addressing scheme of the kind described above does present some problems. Because volumetric data sets can be very large, the limitations of widely available computing power mean that it is not always possible to present a whole scene by addressing each constituent volume individually. For example, generating a black/white voxel map at cubic-millimetre-class resolution for a volume of 10 cubic metres would take 12 days at a transfer rate of 1 Mbit per second. Furthermore, in the case of light-emitting elements, the distance between lamps may be much greater than the resolution. An instruction to turn on the light-emitting elements in a particular 1 mm cube may therefore have no effect, since it is unlikely that a light-emitting element is located within that 1 mm cube.
The invention overcomes some of these problems in a number of ways. For example, different resolutions can be used for different lighting networks, or larger quantities of descriptive data can be sent, such as X3D-like markup or other forms of three-dimensional model (solid modelling).
Some embodiments of the invention, however, use a hierarchical data structure, creating a multi-resolution encoding within a single spatial address. This is based upon the fact that the number of bits required for an address falls rapidly at lower resolutions.
For example, a position on a 1 metre ruler (a one-dimensional spatial address) can be represented with 8 bits, the position being encoded using a hierarchical data structure. In an 8-bit encoding, the number of '1's before the first '0' produces a "level indicator". Seven '1's represent the highest level (the whole ruler); the next level is represented by six '1's followed by a '0'; and the lowest level (level 8) is indicated by a single leading '0'. The bits not used to indicate the level are used to locate the actual address within the required range. The most precise way of representing a position in this hierarchy is to use a spatial address beginning with '0'. This allows an 8 mm range to be specified:
1000 mm / 2^7 ≈ 8 mm
Similarly, leading bits '10' mean that the remaining 6 bits can represent a 16 mm range, '110' gives a 32 mm range, and so on. This means that we can represent any individual 8 mm section of the ruler, any 16 mm section, either half of the ruler to a precision of about 500 mm, or simply the whole ruler. This is shown in Table 2 below.
  Leading bits   Position bits required   Positions specifiable   Precision / mm
  0              7                        128                     8
  10             6                        64                      16
  110            5                        32                      32
  1110           4                        16                      63
  11110          3                        8                       125
  111110         2                        4                       250
  1111110        1                        2                       500
  11111110       0                        1                       1000

  Table 2
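The 8-bit ruler scheme of Table 2 can be sketched as a small decoder. The function name is illustrative; the leading-ones count selects the row of the table, and the span values below are the exact 1000/2^n figures that Table 2 rounds.

```python
def decode_ruler_address(byte):
    """Split an 8-bit hierarchical address for a 1 m ruler into
    (level, segment, span_mm): the run of leading 1s before the first
    0 gives the level, the remaining bits locate the segment."""
    n = 0
    while n < 8 and byte & (0x80 >> n):  # count leading 1 bits
        n += 1
    if n == 8:
        raise ValueError("address has no terminating 0 bit")
    value_bits = 7 - n
    segment = byte & ((1 << value_bits) - 1)
    span_mm = 1000 / (1 << value_bits)   # Table 2 rounds these values
    return n, segment, span_mm

# 0b00000011: leading '0' -> finest level, segment 3 of 128, ~8 mm span.
level, seg, span = decode_ruler_address(0b00000011)
```

For example, `0b10000101` (leading bits '10') decodes to segment 5 of 64 at a span of 15.625 mm, and `0b11111110` addresses the whole ruler.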
For a three-dimensional system, the equivalent of this spatial addressing method is to use a data structure known as an octree.
An octree is a data structure in which each node represents a cubic volume, each node representing one eighth of its parent node. Figure 43 schematically shows such a structure. It can be seen that a top-level volume 94 comprises eight component volumes 95, each of which in turn comprises eight component volumes 96.
For a 64-bit encoding (that is, an encoding that can be contained within the host addressing suffix of an IPv6 address), the number of '1's before the first '0' produces the "level indicator". Twenty-one '1's represent the highest level, that is, the cube 94 addressed as a whole. The next level is indicated by twenty leading '1's followed by a '0'; this level provides 3 bits, which can be used to identify the volumes 95 in terms of x, y and z values. Figure 43 shows such values alongside the volumes 95.
The next level is indicated by nineteen leading '1's followed by a '0'. This level provides 6 bits, which can be used to address the volumes 96 individually, although their further subdivisions cannot be individually addressed.
At the lowest level (level 21), individual voxels can be addressed. This level is indicated by a single leading '0'. Such a lowest-level address is the same as the address shown in Figure 39, the remaining bit being used to indicate the level of the address.
The levels of the addressing hierarchy and their associated resolutions are shown in Table 3 below:
  Leading 1s   Leading bits                  Bits per x, y, z   Position bits required   Segments per x, y, z   Total addressable regions   Resolution
  0            0                             21                 63                       2^21                   8^21                        2^0
  1            10                            20                 60                       2^20                   8^20                        2^1
  2            110                           19                 57                       2^19                   8^19                        2^2
  3            1110                          18                 54                       2^18                   8^18                        2^3
  4            1111 0                        17                 51                       2^17                   8^17                        2^4
  5            1111 10                       16                 48                       2^16                   8^16                        2^5
  6            1111 110                      15                 45                       2^15                   8^15                        2^6
  7            1111 1110                     14                 42                       2^14                   8^14                        2^7
  8            1111 1111 0                   13                 39                       2^13                   8^13                        2^8
  9            1111 1111 10                  12                 36                       2^12                   8^12                        2^9
  10           1111 1111 110                 11                 33                       2^11                   8^11                        2^10
  11           1111 1111 1110                10                 30                       2^10                   8^10                        2^11
  12           1111 1111 1111 0              9                  27                       2^9                    8^9                         2^12
  13           1111 1111 1111 10             8                  24                       2^8                    8^8                         2^13
  14           1111 1111 1111 110            7                  21                       2^7                    8^7                         2^14
  15           1111 1111 1111 1110           6                  18                       2^6                    8^6                         2^15
  16           1111 1111 1111 1111 0         5                  15                       2^5                    8^5                         2^16
  17           1111 1111 1111 1111 10        4                  12                       2^4                    8^4                         2^17
  18           1111 1111 1111 1111 110       3                  9                        2^3                    8^3                         2^18
  19           1111 1111 1111 1111 1110      2                  6                        2^2                    8^2                         2^19
  20           1111 1111 1111 1111 1111 0    1                  3                        2^1                    8^1                         2^20
  21           1111 1111 1111 1111 1111 10   0                  0                        2^0                    8^0                         2^21

  Table 3
In Table 3, the "Leading 1s" column (column 1) indicates the number of '1's in the address before the first '0'. The "Leading bits" column (column 2) shows the initial bits of the address that uniquely identify the level within the addressing hierarchy; these consist of the number of '1's given in column 1, followed by a single '0'. The "Bits per x, y, z" column (column 3) shows the number of bits used for a single coordinate; because each level of the hierarchy has a different resolution, more or fewer bits are needed to store the x, y, z coordinates. The "Position bits required" column (column 4) is three times the number in column 3, because three coordinates are needed to address a volumetric region at each level of the hierarchy. Each level of the hierarchy contains a different number of cubic regions, and the "Segments per x, y, z" column (column 5) shows how many of these cubic regions fit along a single dimension. For example, in Figure 43, at the highest level only one cube fits in the x direction, but the level below has 2 cubes in the x direction, and the next again has 4. The "Total addressable regions" column (column 6) gives the total number of cubes that can be represented at each level of the hierarchy; for example, in Figure 43 there is 1 cube at the highest level, 8 at the second level and 64 at the next level. Precisely, this column is the value given in column 5 (the number of segments per x, y, z) raised to the power of 3. The "Resolution" column (column 7) gives the side length of the cubes at each level of the addressing hierarchy, expressed relative to the smallest addressable region; that is, the lowest level has a "size" of 1. The physical size of these regions, and indeed whether they map uniformly and linearly onto physical space, depends upon the precise application. For example, if used for larger-scale geographical addressing, x and y may be longitude and latitude, and the z direction may be height; the precision of each region in metres will then vary according to position.
Using the addressing scheme described above, a message can be addressed to any octree cube, from a single voxel up to the whole space.
For example, an instruction can be sent to light all of the light-emitting elements in the volume addressed by 11111111 11111111 11100000 00000000 00000000 00000000 00000000 00011010. The nineteen '1's at the start of the address indicate the level. As shown in Table 3 above, 2 bits (that is, 2^2 = 4 segments) are available to encode the range in each of the x, y and z directions. The last 6 bits of the address (01, 10, 10) indicate the x, y, z coordinates of the volume.
This addresses all of the voxels in the following ranges:

2^19 ≤ x < 2^20 (position 01, resolution 2^19 voxels)
2^20 ≤ y < 2^20 + 2^19 (position 10, resolution 2^19 voxels)
2^20 ≤ z < 2^20 + 2^19 (position 10, resolution 2^19 voxels)
Examining these expressions in more detail, it should be noted that the nineteen leading '1's indicate that the addressed volume is 2^19 times the basic voxel width. The encoded x coordinate is binary 01, and therefore refers to the region with x coordinates between 1 × 2^19 and 2 × 2^19, that is, from 010000000000000000000 to 011111111111111111111.
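The worked example above can be decoded mechanically. The sketch below is an illustration, not the patent's code; the placement of the coordinate bits at the low end of the suffix (with the unused middle bits zero) follows the worked example, and the x-then-y-then-z ordering is assumed from it.

```python
MAX_LEVEL = 21

def decode_octree_address(suffix):
    """Decode a 64-bit octree host suffix: the run of leading 1s gives
    the level, and the low-order bits give the x, y, z segment numbers
    at that level's resolution."""
    m = 0
    while m < MAX_LEVEL and suffix & (1 << (63 - m)):  # count leading 1s
        m += 1
    bits = MAX_LEVEL - m          # bits per coordinate at this level
    mask = (1 << bits) - 1
    z = suffix & mask
    y = (suffix >> bits) & mask
    x = (suffix >> (2 * bits)) & mask
    return m, (x, y, z)

# The example address: nineteen 1s, a 0, zero padding, then x=01 y=10 z=10.
addr = (((1 << 19) - 1) << 45) | (0b01 << 4) | (0b10 << 2) | 0b10
level, (x, y, z) = decode_octree_address(addr)  # level 19, segments (1, 2, 2)
```

Segment x = 1 at level 19 is exactly the range 2^19 ≤ x < 2^20 given above, and segments y = z = 2 give 2^20 ≤ y, z < 2^20 + 2^19.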
Compared with individually addressing each separate voxel within a range, far less data needs to be transmitted when the octree is used.
An alternative mapping (still using the octree data structure) keeps the initial bits of the x, y, z coordinates in fixed positions, and uses the final bits to determine the level. This may have advantages for bounding-box filtering at routers. For example, the x, y, z position described above would instead be encoded as 010000000000000000000 100000000000000000000 1001111111111111111111.
These compact mappings leave sufficient "free" bits at low resolutions to allow a variety of other shapes, rotations or offset regions to be included within the same address range.
The foregoing description relates to the addressing of regions of space. A message sent to such a spatial address will usually carry some payload. For example, a message may take a form such as "turn on all lamps in this region" or "all lamps in this region should turn blue".
It will be appreciated that the present invention is applicable to signal sources spanning a relatively broad range of sizes, allowing devices of the invention to be reduced to the micron or nanometre scale. Such small-scale devices create the ability to develop, deploy, calibrate and control large arrays of micron- or nanometre-scale signal sources using the present invention. For example, displays comparable to cathode ray tubes, liquid crystal displays and plasma screens could be constructed from such small-scale signal sources. It will be appreciated that, using such miniaturised signal sources, display apparatus of this kind can be deployed in a self-organising manner. For example, studies have shown that miniaturised signal sources can be sprayed from a canister onto a supporting structure (for example a wall) and then calibrated using the techniques of the present invention. It will be appreciated that, in such self-organising applications, the small signal sources can draw power from a substrate deposited before, or together with, the signal sources. This substrate may itself be connected to a power supply.
Various embodiments of the present invention have been described above by way of example. It will be appreciated that features of the various described embodiments can be combined in many different ways; such combinations will be apparent to those skilled in the art. It should be noted that the description given above should not be considered limiting but merely illustrative: modifications will be apparent to those skilled in the art, and such modifications fall within the spirit and scope of the invention. In particular, it will be appreciated that, although features of the invention have been described with reference to light-emitting elements, some such features are equally applicable to any suitable device. For example, although schemes for addressing light-emitting elements have been described, it will be appreciated that such addressing methods can similarly be used with other devices.

Claims (38)

1. A method of locating and identifying a plurality of signal sources, the plurality of signal sources being located within a predetermined space, the method comprising:
receiving a respective positioning signal from each of said signal sources, each of said positioning signals comprising pulses in a plurality of time intervals and indicating an identification code, said identification code uniquely identifying one signal source among said plurality of signal sources;
generating identification data for each signal source from the positioning signal received from the corresponding signal source, based on the pulses in said plurality of time intervals;
generating, based on said positioning signals, position data indicating the positions of said plurality of signal sources; and
associating said position data with said identification data.
2. The method of claim 1, wherein each of said positioning signals is a modulated form of the identification code of the corresponding signal source.
3. The method of claim 2, wherein each of said positioning signals is a binary phase shift keyed or non-return-to-zero modulated form of the identification code of the corresponding signal source.
4. The method of any one of claims 1 to 3, wherein each of said signal sources has an associated address, and the identification data of each of said signal sources has a predetermined relationship with the corresponding address.
5. The method of claim 4, wherein the identification data of each signal source is the address of said signal source.
6. The method of claim 1, wherein receiving each said positioning signal comprises: receiving electromagnetic radiation in a plurality of time intervals.
7. The method of claim 6, wherein said electromagnetic radiation is visible light, infrared radiation or ultraviolet radiation.
8. The method of claim 1,
wherein receiving a positioning signal from each signal source comprises:
receiving, at a signal receiver, the positioning signal transmitted by each said signal source, said signal receiver being configured to generate two-dimensional position data locating said signal source within a detection frame;
and wherein generating said position data comprises:
generating position data based on said two-dimensional position data.
9. The method of claim 8, wherein said detection frame defines a pixel array, and said signal receiver generates data indicating at least one pixel in said pixel array.
10. The method of claim 8, wherein receiving the positioning signal transmitted by each said signal source comprises:
receiving said positioning signal with a video camera,
wherein said positioning signal comprises an emission of electromagnetic radiation detectable by the video camera.
11. The method of claim 10, wherein receiving said positioning signal with a video camera comprises:
receiving said positioning signal using a charge coupled device (CCD) sensitive to electromagnetic radiation.
12. The method of claim 10 or claim 11, wherein generating said position data further comprises: grouping the frames generated by said video camera in time to generate said identification data.
13. The method of claim 12, wherein grouping a plurality of said frames in time to generate said identification data comprises: processing regions within said frames that lie within a predetermined distance of one another.
14. The method of claim 8, wherein receiving said positioning signals further comprises:
receiving, at a plurality of signal receivers, the positioning signal transmitted by each said signal source, each of said signal receivers being configured to generate two-dimensional position data within a corresponding detection frame, said two-dimensional position data locating said signal source.
15. The method of claim 14, wherein generating said position data further comprises: combining the two-dimensional position data generated by said plurality of signal receivers to generate said position data.
16. The method of claim 15, wherein combining said two-dimensional position data comprises: combining said two-dimensional position data by triangulation or trilateration.
17. the method for claim 1 also comprises with described a plurality of signal sources and comes the presentation information signal, wherein the presentation information signal comprises:
Based on described information signal and described position data, be each the generation output data in described a plurality of signal sources; And
Send described output data to described signal source, to present described information signal.
18. method as claimed in claim 17, wherein, each described signal source is to be configured to cause that the emission of electromagnetic radiation presents the electromagnetic component of described information signal.
19. method as claimed in claim 18 wherein, sends described output data to described signal source and comprises to present described information signal:
Send instruction to cause some electromagnetic radiation-emittings in the described electromagnetic component.
20. method as claimed in claim 19, wherein, described electromagnetic component is light-emitting component, and described instruction causes described light-emitting component emission visible light.
21. method as claimed in claim 20, wherein, described light-emitting component can be lighted with a plurality of predetermined strengths, and the intensity of each light-emitting component that will light is specified in described instruction.
22. method as claimed in claim 20, wherein, the intensity modulated of the described electromagnetic radiation of being launched by each light-emitting component represents each described framing signal, to present described information signal.
23. method as claimed in claim 20 wherein, is lighted described light-emitting component, to show any in a plurality of predetermined colors, described instruction is each light-emitting component designated color.
24. method as claimed in claim 23, wherein, the tone of the described light of being launched by each light-emitting component modulates to represent each described framing signal, to present described information signal.
25. the method for claim 1, wherein each in the described signal source is the ELECTROMAGNETIC RADIATION REFLECTION device.
26. method as claimed in claim 25, wherein, each in the described signal source is to have the ELECTROMAGNETIC RADIATION REFLECTION device that can control reflectivity.
27. method as claimed in claim 26, wherein, each in the described signal source comprises reflecting surface and variable opacity element, and described variable opacity element is configured to control the reflectivity of described signal source.
28. method as claimed in claim 17, wherein, each in the described signal source comprises sound source,
Sending described output data to described signal source comprises to present described information signal: send instruction to cause some the output sound data in the described sound source, produce predetermined sound scape.
29. the method for claim 1, wherein receiving described framing signal comprises: receive voice signal from described a plurality of signal sources.
30. the method for claim 1, wherein receiving described framing signal comprises:
At least some signal sources in described a plurality of signal sources send voice signal;
From described signal source receive data, the voice signal that described data indication described at least some signal source places in described a plurality of signal sources receive.
31. method as claimed in claim 30, wherein, at least some signal sources in described a plurality of signal sources send voice signals and comprise: each in described at least some signal sources in described a plurality of signal sources sends a plurality of voice signals, and described a plurality of voice signals are to send from different locus.
32. method as claimed in claim 31, wherein, each voice signal in described a plurality of voice signals is different.
33. method as claimed in claim 32 wherein, produces described position data and comprises:
The data of the voice signal that indication described at least some signal source places in described a plurality of signal sources are received are processed, to produce described position data.
34. method as claimed in claim 33, wherein, processing said data comprises:
The data that receive are carried out filtering, to produce the component that from the described a plurality of alternative sounds signals that are sent to described signal source, obtains.
35. method as claimed in claim 34, wherein, processing said data also comprises:
Based on the relative intensity of described component, produce described position data.
36. method as claimed in claim 33 wherein, sends described a plurality of voice signal at predetermined instant,
Processing said data comprises: the time difference between the moment of determining to send the moment of each voice signal and receive described voice signal in signal source.
37. An apparatus for locating and identifying a plurality of signal sources, the plurality of signal sources being located within a predetermined space, the apparatus comprising:
a receiver configured to receive a respective positioning signal from each of said signal sources, each of said positioning signals comprising pulses in a plurality of time intervals and indicating an identification code, said identification code uniquely identifying one signal source among said plurality of signal sources; and
a processor configured to generate identification data for each signal source from the positioning signal received from the corresponding signal source, based on the pulses in said plurality of time intervals, to generate position data indicating the positions of said plurality of signal sources based on said positioning signals, and to associate said position data with said identification data.
38. The apparatus of claim 37, wherein said apparatus is arranged to present an information signal using said plurality of signal sources,
said processor is further configured to generate output data for each of said plurality of signal sources based on said information signal and said position data, and
said apparatus further comprises: a transmitter configured to transmit said output data to said signal sources to present said information signal.
CN200780015852.6A 2006-03-01 2007-03-01 Method and apparatus for signal presentation Active CN101485233B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
GB0604076A GB0604076D0 (en) 2006-03-01 2006-03-01 Method and apparatus for signal presentation
GB0604076.0 2006-03-01
US78112206P 2006-03-09 2006-03-09
US60/781,122 2006-03-09
PCT/GB2007/000708 WO2007099318A1 (en) 2006-03-01 2007-03-01 Method and apparatus for signal presentation

Publications (2)

Publication Number Publication Date
CN101485233A CN101485233A (en) 2009-07-15
CN101485233B true CN101485233B (en) 2013-01-16

Family

ID=36218902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200780015852.6A Active CN101485233B (en) 2006-03-01 2007-03-01 Method and apparatus for signal presentation

Country Status (2)

Country Link
CN (1) CN101485233B (en)
GB (1) GB0604076D0 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014522A (en) * 2009-09-04 2011-04-13 李志海 Network monitoring system and method and corresponding location label thereof
PL2647222T3 (en) 2010-12-03 2015-04-30 Fraunhofer Ges Forschung Sound acquisition via the extraction of geometrical information from direction of arrival estimates
PL3045017T3 (en) 2013-09-10 2017-09-29 Philips Lighting Holding B.V. External control lighting systems based on third party content
US9795015B2 (en) * 2015-06-11 2017-10-17 Harman International Industries, Incorporated Automatic identification and localization of wireless light emitting elements
CN108780602B (en) 2017-08-21 2021-09-07 庄铁铮 Electronic device control method and system with intelligent identification function
FR3097045B1 (en) * 2019-06-06 2021-05-14 Safran Electronics & Defense Method and device for resetting an inertial unit of a means of transport from information delivered by a viewfinder of the means of transport

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1729727A (en) * 2002-12-19 2006-02-01 皇家飞利浦电子股份有限公司 Method of configuration a wireless-controlled lighting system

Also Published As

Publication number Publication date
CN101485233A (en) 2009-07-15
GB0604076D0 (en) 2006-04-12

Similar Documents

Publication Publication Date Title
EP1989926B1 (en) Method and apparatus for signal presentation
US10952296B2 (en) Lighting system and method
CN101485233B (en) Method and apparatus for signal presentation
US10230466B2 (en) System and method for communication with a mobile device via a positioning system including RF communication devices and modulated beacon light sources
US10218440B2 (en) Method for visible light communication using display colors and pattern types of display
US20170368459A1 (en) Ambient Light Control and Calibration via Console
CN106464361A (en) Light-based communication transmission protocol
CN106574959A (en) Light based positioning
CN109076680A (en) Control lighting system
CN103168505A (en) A method and a user interaction system for controlling a lighting system, a portable electronic device and a computer program product
CN110011731A (en) System and method for Free Space Optics transmission of tiling
CN106255284A (en) Automatically identifying and localization of wireless luminous element
CN106443585A (en) Accelerometer combined LED indoor 3D positioning method
CN109964321A (en) Method and apparatus for indoor positioning
WO2019214643A1 (en) Method for guiding autonomously movable machine by means of optical communication device
WO2019214642A1 (en) System and method for guiding autonomous machine
CN107707898B (en) The image distortion correcting method and laser-projector of laser-projector
CN106301555A (en) A kind of signal transmitting method for light projection and transmitter
US11132832B2 (en) Augmented reality (AR) mat with light, touch sensing mat with infrared trackable surface
JP4922853B2 (en) Viewing environment control device, viewing environment control system, and viewing environment control method
CN108120435A (en) A kind of plant area&#39;s alignment system and localization method based on visible ray
CA2873657A1 (en) Feedback-based lightpainting, user-interface, data visualization, sensing, or interactive system, means, and apparatus
US8073155B2 (en) System for reproducing audio information corresponding to a position within a displayed image
US10609365B2 (en) Light ray based calibration system and method
JP6370733B2 (en) Information transmission device and information acquisition device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant