EP3479055A1 - Method for identifying and locating a mobile object - Google Patents

Method for identifying and locating a mobile object

Info

Publication number
EP3479055A1
EP3479055A1 (application EP17819423.9A)
Authority
EP
European Patent Office
Prior art keywords
transmitter
receiver
pixel
bit sequence
signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP17819423.9A
Other languages
German (de)
English (en)
Other versions
EP3479055A4 (fr)
Inventor
László MARCZY
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Magicom Kft
Original Assignee
Magicom Kft
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from HU1700189A external-priority patent/HUP1700189A1/hu
Application filed by Magicom Kft filed Critical Magicom Kft
Publication of EP3479055A1 publication Critical patent/EP3479055A1/fr
Publication of EP3479055A4 publication Critical patent/EP3479055A4/fr


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S1/00 Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith
    • G01S1/70 Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith using electromagnetic waves other than radio waves
    • G01S1/703 Details
    • G01S1/7032 Transmitters
    • G01S1/7038 Signal details
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782 Systems for determining direction or deviation from predetermined direction
    • G01S3/783 Systems for determining direction or deviation from predetermined direction using amplitude comparison of signals derived from static detectors or detector systems
    • G01S3/784 Systems for determining direction or deviation from predetermined direction using amplitude comparison of signals derived from static detectors or detector systems using a mosaic of detectors
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves

Definitions

  • the present invention relates to a method for identifying and locating one or more movable objects in accordance with the preamble of claim 1, in particular a method comprising the steps of arranging at least one transmitter and at least one receiver in the defined space by mounting one of the transmitter and receiver on the movable object and the other one of the transmitter and receiver in the environment of said movable object in a fixed manner, emitting, by the transmitter, signals assigned to the object uniquely identifying said object, receiving in a wireless manner, by the receiver, the signals emitted by the transmitter, determining from the received signals at least one element of a group comprising: identity of the movable object; location, position, direction of displacement, and velocity of displacement of the transmitter and receiver relative to each other; wherein said determination comprises signal processing, and providing at least one element of said group as a result.
  • GB 2 475 077 A1 describes a locating system wherein laser beams scan the entire space in order to locate objects.
  • This solution has major drawbacks, i.e. laser beams cannot be used in all settings; the approach is costly; and it does not identify the objects at all.
  • JP 2002 165230 A describes a method for determining the distance between objects and a receiver wherein the transmitter and the receiver are interconnected and the object is not identified.
  • CN 1710378 A discloses a solution for determining the location of the centre of a light beam that falls on the receiver and is unfit either for determining the distance of an object or for identifying it.
  • motion tracking systems which use different unique visual patterns as transmitters, and cameras as receivers to follow the patterns.
  • the different unique visual patterns also serve as the identification code of an object.
  • Another type of motion tracking system uses visual patterns generated by several light sources, mostly LEDs, in different geometrical arrangements, e.g. in a matrix form. In such cases no light pulses are used to generate a unique ID code of an object.
  • US 8,892,252 B1 proposes, on the one hand, a method for scanning across a surface of a part within a capture volume, comprising scanning the surface of the part using an inspection unit, acquiring measurement data representing one or more motion characteristics of the inspection unit using a motion capture system operatively disposed with respect to the capture volume, the one or more motion characteristics being measured using a plurality of retro-reflective markers attached to the inspection unit in a known pattern, deriving position data and orientation data from the measurement data, said position data and orientation data representing positions and orientations of the inspection unit in a coordinate system of the part being scanned, acquiring inspection data, and combining the position data with the inspection data.
  • US 8,892,252 B1 proposes an additional method comprising moving an inspection unit along a desired path on the surface of the part, measuring positions of retro-reflective markers attached to the inspection unit with respect to a motion capture coordinate system, converting measurements of the positions of the retro-reflective markers with respect to the motion capture coordinate system into first position data and orientation data representing the positions and orientations of the inspection unit with respect to the coordinate system of the part, encoding the first position data into simulated encoder pulses which indicate the positions of the inspection unit with respect to the coordinate system of the part, acquiring inspection data during movement of the inspection unit along the desired path, sending the simulated encoder pulses and the acquired inspection data to a processor, decoding the simulated encoder pulses into second position data representing positions of the inspection unit with respect to the coordinate system of the part, associating the second position data with the inspection data, and displaying the inspection data in accordance with the associations made.
  • US 6,324,296 B1 describes a motion capture method for tracking individually modulated light points, comprising imaging a plurality of light point devices attached to an object to be tracked in a motion capture environment, each being operable to provide a unique plural bit digital identity (ID) of the light point device, capturing a sequence of images of said pulses corresponding to substantially all of said plurality of light point devices; and recognizing the identities of, and tracking the positions of, substantially all of said plurality of light point devices based upon said light pulses appearing within said sequence of images, respectively.
  • ID: digital identity
  • Our invention is based on the recognition that a bit sequence usable for a unique identification of selected objects and an algorithm suitable for the identification can also be combined with a locating algorithm using sensors arranged in a matrix, thus the same optical signal can be used both to identify and to locate the object.
  • the invention relates to a method according to claim 1 for identifying and locating movable objects in a defined space, comprising the steps of arranging at least one transmitter and at least one receiver in the defined space by mounting one of the transmitter and receiver on the movable object and the other one of the transmitter and receiver in the environment of said movable object in a fixed manner, emitting, by the transmitter, signals assigned to the object uniquely identifying said object, receiving in a wireless manner, by the receiver, the signals emitted by the transmitter, determining from the received signals at least one element of a group comprising: identity of the movable object; location, position, direction of displacement, and velocity of displacement of the transmitter and receiver relative to each other; wherein said determination comprises signal processing, and providing at least one element of said group as a result.
  • optical signals as signals emitted by the transmitter and assigned to the object are used, said optical signals are emitted as a bit sequence consisting of bits in an unoriented manner, the emitted signals are received and buffered in receiving cycles by the receiver, during which the emitted optical signals are captured during a receiving cycle by a CCD, CMOS or other pixel-based imaging sensor of the receiver, and the data represented by the bits are read from the imaging sensor pixel by pixel.
  • the transmitter emitting the bit sequence is identified based on evaluation of a bit sequence captured by a single pixel of the optical sensor, while in the case of a moving object a pixel area of the optical sensor including a pixel capturing an initial bit of the bit sequence will be designated, and it will be determined whether one or more subsequent bits of the bit sequence emitted by the transmitter can be captured by the same pixel of the sensor of the receiver or by another pixel of the sensor located toward an edge of said pixel area.
  • said pixel area of the optical sensor shall be redesignated so that the subsequent bits fall on a pixel in the centre region of said pixel area as much as possible, but at least, they should fall within said pixel area, so they can be captured by a pixel within said pixel area.
  • for each optical signal emitted by the transmitters and received by the optical sensor it will be determined whether the received and buffered bits comprise a full bit sequence that is necessary and sufficient for identification; if they do, the object will be identified, while if they do not, receiving the optical signals of the transmitters by the optical sensor will be continued until they make up a full bit sequence, after which the object can and will be identified.
  • each bit sequence comprises at least one trigger bit for marking the beginning of the bit sequence for indicating unambiguously the beginning of a new bit sequence to a signal processing unit connected to the receiver.
  • each bit sequence can be preferably started with three bits of zero value constituting said trigger bits.
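The trigger framing described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation, and it assumes (which the text does not state) that the payload encoding never itself contains three consecutive zero bits:

```python
TRIGGER = [0, 0, 0]  # three dark frames mark the start of a new bit sequence

def frame_bits(payload):
    """Prepend the trigger bits to a transmitter's payload bits."""
    return TRIGGER + list(payload)

def find_sequence_start(stream):
    """Return the index just past the first trigger in a received bit
    stream, or -1 if no trigger is present.  Assumes the payload never
    contains three consecutive zeros."""
    for i in range(len(stream) - 2):
        if stream[i:i + 3] == TRIGGER:
            return i + 3
    return -1
```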
  • an area of the defined space monitored by the sensor of the receiver is associated with a matrix wherein the rows and columns of said matrix correspond to 2D points in the space monitored by the sensor of the receiver and the individual identified transmitters are assigned to this matrix that is provided for further processing in an output buffer of the signal processing unit.
  • Figures 1a-1c show, in functional block terms, alternative scenarios for assigning transmitters and receivers to the objects;
  • Figure 2 shows an exemplary block diagram of a configuration implementing the method according to the invention
  • Figures 3a-3c show an outline of the pixel image of the light beam of the transmitter on the imaging sensor of the receiver; and Figures 4a-4b show a flowchart of an implementation of the method in accordance with the present invention.
  • An exemplary implementation of the method according to the invention for identifying and locating movable objects within a specific area is explained in detail below.
  • a tracked object 1 can essentially be any object, e.g. one already stored or to be stored in the warehouse, one that is processed on the shop floor, or even a receptacle, vessel or crate holding the objects to track; and the object 1 can be either stationary or moving, as we shall see.
  • Objects 1 may also be persons or animals, though this requires further consideration.
  • each object 1 requires at least a transmitter 2, a receiver 3, and a signal processing unit 4 connected to the latter.
  • the process can be implemented by mounting the transmitters 2 on the objects 1 and fixing the receivers 3 in the indoor space where the objects 1 are, as shown in Figure 1a, but also alternatively, by mounting the receivers 3 on the objects 1 and fixing the transmitters 2, as shown in Figure 1b.
  • the two scenarios can also be combined, i.e. each object 1 is assigned a transmitter 2 and a receiver 3, and also, transmitters 2 and receivers 3 are fixed in the space where the objects 1 are, as shown in Figure 1c.
  • a key feature of the method according to the invention is that all communications between the transmitter 2 and receiver 3 rely solely on light, which obviously means that ambient light should not interfere with the light used for communication purposes, and that the latter itself should not disturb, or interfere with, the objects or persons in the space.
  • the transmitters 2 use infrared light, because it does not disturb people in the premises and offers some interference protection; and the light of the transmitter is received by at least one receiver 3.
  • the method according to the invention can thus be implemented under ordinary lighting conditions, and it should be noted, that though for sake of simplicity, the exemplary implementation includes a single transmitter 2, it is both possible and advantageous to have a number of transmitters 2 in the defined space, each with its own unique identification, ID.
  • the transmitter 2 of the exemplary implementation obviously contains an energy source 5, such as a battery or a rechargeable battery, a signal generator stage 6 and a light emitting diode, LED, 7.
  • the exemplary implementation includes an ATMEGA88PA microcontroller as the signal generator stage 6, which drives a 950 nm wide-angle undirected IR LED 7, e.g. SFH4240, through a serial resistor R1.
  • the transmitter 2 has an integrated energy source 5, i.e. a rechargeable 3V battery in the exemplary configuration.
  • the rechargeable battery can actually be a known solar cell charger unit plus a battery, or power can simply be supplied by a non-rechargeable battery.
  • the microcontroller acting as a signal generator stage 6 ensures that the light signals of the LED 7 are unique to the transmitter 2 and thus uniquely identify the object 1 to which the transmitter 2 is attached.
  • the unique signals in the exemplary configuration are flashes of light that make up a bit sequence ID, with the microcontroller flashing the LED 7.
  • the ID sequence in this exemplary implementation consists of 10 bits.
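The transmitter-side flashing can be illustrated with a short sketch; `set_led` is a hypothetical callable driving the IR LED, and the 1/30 s bit period matches the 30 fps sensor described below:

```python
import time

FRAME_RATE = 30.0              # sampling rate of the receiver's sensor, fps
BIT_PERIOD = 1.0 / FRAME_RATE  # one bit per captured frame

def emit_id(bits, set_led):
    """Flash one ID bit sequence; set_led is a hypothetical callable
    driving the IR LED (True = light on = logical 1)."""
    for b in bits:
        set_led(bool(b))
        time.sleep(BIT_PERIOD)
    set_led(False)  # leave the LED dark between sequences
```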
  • the position and location, respectively, of the tracked object 1 within the space, e.g. on the shop floor, can be read from the frames recorded by the appropriately mounted and positioned receiver 3.
  • the matrix sensor 8 of the receiver 3 is a CCD IP camera, such as the AirCam by Ubiquiti Networks, with a resolution of 1024x768 pixels and a refresh rate of 30 fps (frames per second).
  • the matrix sensor 8 is symbolised by a square grid in the drawing.
  • the factory IR filter of the camera in front of the matrix sensor 8 was replaced with a colour filter that has a very steep pass-through curve, which only allows the 950 nm signal of the transmitter 2 to pass.
  • the digital output of the camera of the receiver 3, i.e. the frames captured by the sensor, is transmitted to a buffer stage combined with a signal processing unit, to be presented later on.
  • the frequency of the bit sequence emitted by the transmitter 2 matches the frame reading frequency of the sensor, i.e. the CCD of the receiver 3 in the present case. This means that in a 10-bit sequence, the duration of one bit is 1/30 s.
  • the transmitter 2 and the receiver 3 must have an unimpeded optical connection, i.e. they must be visible to each other. Should this not be feasible, the optical connection must be replaced by some other wireless connection which, though it should not be a problem for a person skilled in the art, is outside the scope of the present invention.
  • the CCD sensor of the receiver 3 can also be replaced with a CMOS sensor, but this requires additional technical arrangements to function properly and without errors.
  • the transmitter 2 can transmit its unique bit sequence ID on an ongoing basis.
  • the term "ongoing" in this case means repeating the 10-bit sequence with an interval that allows the receiver 3 and the connected signal processing unit 4 to process the signal of the transmitter 2 successfully and to identify the transmitter 2.
  • An energy-saving alternative is to have the transmitter 2 transmit the bit sequence ID non-continuously, linking the transmission mode, using a known solution, to the state of the object 1, i.e. moving or stationary.
  • while the object 1 is moving, the transmitter 2 keeps transmitting the ID, as the receiver 3 needs current bit sequences in order to locate the transmitter 2.
  • when the transmitter 2 is stationary, ongoing transmission is unnecessary; it suffices for the transmitter 2 to transmit the ID every now and then, e.g. every 10-30 seconds, partly to signal its location and partly to confirm that it is still operational.
  • This mode of operation can be altered by a person skilled in the art by adding a motion detector to the transmitter 2 and using the motion detector's output to vary the transmission schedule of the transmitter 2.
  • the bit sequence of the transmitter 2 is received by a receiver 3.
  • Light from the LEDs 7 of the individual transmitters 2 falls on a single pixel or a group of pixels of the CCD or CMOS sensors 8 which have visual contact with the transmitter 2. If the beam falls on several pixels, the centre pixel is selected using a known mathematical averaging method. The location of said pixel or pixel group in the matrix depends on where the transmitter 2 is in relation to the receiver 3, so the position of the transmitter 2 can be determined mathematically.
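The centre-pixel selection mentioned above (a "known mathematical averaging method") can be sketched under the assumption that simple coordinate averaging is meant:

```python
def centre_pixel(lit_pixels):
    """Pick the centre pixel of a light spot spanning several pixels by
    arithmetic averaging of the lit pixel coordinates."""
    xs = [x for x, _ in lit_pixels]
    ys = [y for _, y in lit_pixels]
    return round(sum(xs) / len(xs)), round(sum(ys) / len(ys))
```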
  • a signal processing unit 4 connected to the receiver 3 first determines whether the bit sequence transmitted by the transmitter 2 is a full and integral sequence that allows the receiver 3 to identify the transmitter 2. To that end, the receiver 3 receives the signal in so-called reception cycles, wherein each cycle corresponds to the duration of the bit sequence transmitted by the transmitter 2.
  • in each reception cycle we test the bit sequence received on a given pixel, and first ascertain whether the signals received in that cycle constitute a full sequence that allows for the clear and unique identification of the transmitter 2.
  • the signal received on a given pixel or pixel group of the CCD sensor allows for determining where the bit sequence is coming from, so the position or location of the transmitter 2 can be computed simply and precisely, using the known triangulation method. This ensures that the transmitter 2 is identified and its location is determined, too.
  • an auxiliary algorithm is used to make sure the target pixel is, to the extent possible, always at the centre of a predefined area that is smaller than the area of the sensor 8, so we don't lose sight of the moving target mid-cycle, and so that we can determine the ID of the transmitter 2. If it is determined that the target pixel is moving, it is tracked using an observation field that comprises additional pixels surrounding the target pixel and is shifted in the direction the target pixel is moving on the sensor 8.
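The window-shifting step of the auxiliary algorithm might look like this sketch; the window geometry and the clamping policy are assumptions, not taken from the patent:

```python
def recentre_window(window, target, sensor_size):
    """Shift an observation window (x0, y0, width, height) so that the
    moving target pixel returns to the window centre, clamped so the
    window stays on the sensor."""
    x0, y0, w, h = window
    tx, ty = target
    sw, sh = sensor_size
    nx = min(max(tx - w // 2, 0), sw - w)
    ny = min(max(ty - h // 2, 0), sh - h)
    return (nx, ny, w, h)
```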
  • Another version of the method according to the invention allows for determining not only the location of the transmitter 2 but also its velocity, based on the distance the target pixel travels during a reception cycle and the size of the space observed.
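Velocity estimation from the pixel displacement per reception cycle could be sketched as follows; the metres-per-pixel scale is a calibration value that would follow from the size of the observed space, which the patent does not specify:

```python
def estimate_velocity(p_prev, p_curr, metres_per_pixel, cycle_seconds):
    """Speed of the transmitter estimated from how far its target pixel
    moved on the sensor during one reception cycle."""
    dx = (p_curr[0] - p_prev[0]) * metres_per_pixel
    dy = (p_curr[1] - p_prev[1]) * metres_per_pixel
    return (dx * dx + dy * dy) ** 0.5 / cycle_seconds
```

With the exemplary 10-bit sequence at 30 fps, one reception cycle lasts 10/30 s.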
  • the signals of the transmitters 2 may be mixed in theory and also in practice by the time they reach the sensor of the receiver 3. This is, however, not a problem for the proposed process, because the integrity and validity of the sequence will be checked in each cycle and should this test fail, the sequence is ignored for that cycle, until full sequences are received again from the transmitters 2 assigned to the objects 1.
  • the transmitters 2 assigned to the individual objects 1 are not synchronised with each other, i.e. there is no mechanism to ensure that each transmitter 2 starts transmitting its own ID sequence at the same time.
  • transmitters 2 can also be identified by storing each frame temporarily, then arranging the frame pixel values, for instance, in a matrix, to check which sequence the bit on each element of the matrix belongs to, and also whether it is the initial, interim or final bit of the sequence. If it is the final bit in a sequence, the transmitter 2 can be identified by evaluating the 10 last stored bits, i.e. the length of the sequence. If the bit turns out to be other than the last bit of a sequence, we will wait until the frames of the receiver 3 yield a full sequence, and then identify the related transmitter 2 on that basis.
  • the method according to the invention may use special bits to mark clearly the beginning of a new sequence for the signal processing unit 4 connected to the receiver 3.
  • the initial marker bits can take the form of three bits with zero value, for example, i.e. the transmitter 2 does not emit any light pulses for the duration of these bits. As should be obvious to a person skilled in the art, this acts as a trigger that clearly marks the beginning of a new bit sequence.
  • Another optional scenario is to associate the area surveyed by the sensor 8 of the receiver 3 with a matrix whose lines and columns correspond to 2D points in the space surveyed by the sensor 8 of the receiver 3, to identify and locate the individual transmitters 2 in this matrix, and to make this output matrix available in the output buffer of the signal processing unit 4 for any further processing.
  • the signal processing unit 4 is a computer module that processes the bit streams coming from the receivers 3.
  • the signal in the exemplary implementation is an MPEG4 stream due to the specific camera, but other formats can also be used.
  • the central signal processing unit 4 determines the exact location and ID of each transmitter 2 and offers connectivity to other systems, such as industrial process control or management systems.
  • the tracked objects 1 are identified based on the unique light signals of the transmitters 2.
  • the flashing LED light represents binary signals, e.g. an active light-on condition stands for logical 1, and a light-off condition stands for logical 0.
  • the bit sequences of the transmitters 2 are different from each other; they uniquely identify each transmitter 2 and thus each object 1.
  • the signal rate of the transmitter 2, i.e. the transmission and repeat rate of individual bit sequences, must be the same as the sampling rate of the sensor 8 of the receiver 3, and they must also be synchronised for proper reception. This can be ensured by setting the signal generator stage 6 of the transmitter 2 to the known sampling rate of the sensor 8 of the receiver 3.
  • the position of the transmitter 2 can be read from the frame captured by the sensor 8 of the receiver 3 and the signals in several consecutive frames make up the bit sequence that serves as the ID.
  • the transmitters 2 emitting infrared bit sequences are mounted on the objects 1, and the goal is to determine their precise location at all times in a defined indoor space.
  • the CCD sensors 8 of the receiver 3 receive the IR signals of the transmitters 2 through a lens 9 and a colour filter 10. Objects 1 are located and identified on the basis of the signals received. A possible identification implementation is shown in Figures 3a-3c.
  • the light from the transmitter 2 mounted on the object 1 falls on the pixel matrix of the CCD sensor 8 of the receiver 3, symbolised by black dots in the Figures.
  • the ID of the object 1 - based on three exemplary frames and the corresponding points of light read from the lower left corner of the CCD sensor 8 - is 1-0-1.
  • the ID of another object 1 - based on the points of light in the frames in the top right corner of the CCD sensor 8 - is 1-1-0.
  • Every single pixel of each frame is processed by an appropriate software utility in the signal processing unit 4.
  • the pixels that correspond to the signal from the transmitters 2 visible to the sensor 8 of the receiver 3 are recognised.
  • the ID is the bit sequence that consists of light (binary 1) and dark (binary 0) pixels at the same position in consecutive frames.
  • Each transmitter 2 sends a unique ID, i.e. a unique bit sequence converted into optical signals. According to the method, alternation of dark and light pixels in the consecutive frames will be found and detected. If the alternating dark/light pixels show the same pattern as the preprogrammed flash sequence of any of the transmitters 2, the match is recognised and the transmitter 2 is identified.
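The matching step described above can be sketched as a lookup against the preprogrammed flash sequences; the ID table and object names below are hypothetical:

```python
KNOWN_IDS = {                 # hypothetical table: flash sequence -> object
    (1, 0, 1): "object-A",
    (1, 1, 0): "object-B",
}

def identify(pixel_history):
    """Match the dark/light bits seen at one pixel position across
    consecutive frames against the known transmitter flash sequences."""
    return KNOWN_IDS.get(tuple(pixel_history))
```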
  • each transmitter 2 emits its unique flash sequence. Since the flash sequence of each transmitter 2 is known, each transmitter 2 can be identified unambiguously.
  • the present implementation uses 10-bit IDs, which enables up to 1023 transmitters 2 in the system. This number is sufficient for covering large indoor areas.
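The 1023 figure follows from the 10-bit ID length, presumably excluding the all-zero code (a transmitter that never flashes cannot be detected):

```python
ID_BITS = 10
# 2^10 codes exist; the all-dark code is unusable as an ID, which
# presumably accounts for 1023 rather than 1024 transmitters.
MAX_TRANSMITTERS = 2 ** ID_BITS - 1
```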
  • the interval between bit sequences is at least as long as the time required for identification.
  • the signal can thus be precisely distinguished from optical noise and other interference.
  • Besides identification, the invention is also aimed at determining the current location of individual objects when certain conditions, e.g. duplicated transmitters, are met. This requires the mounting position of each receiver 3 to be recorded precisely in the signal processing unit 4. Position data are then used for computing the location with the mathematical triangulation method. Light falls on a specific X, Y pixel of the sensor 8 of the receiver 3. The X and Y pixel parameters determine the angle at which the transmitter 2 is visible from the perspective of the receiver 3. Distance data combined with angles of visibility clearly determine the location of the transmitter 2 on a 2D plane, using the known triangulation method. Using two receivers 3 also enables locating the transmitter 2 in 3D, again with the triangulation method.
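The 2D triangulation step can be sketched as intersecting two bearing rays, one per receiver; measuring angles from the +x axis in radians is a convention assumed here for illustration:

```python
import math

def triangulate_2d(p1, theta1, p2, theta2):
    """Locate the transmitter on a 2D plane from two receiver positions
    and the bearing angles at which each receiver sees it."""
    (x1, y1), (x2, y2) = p1, p2
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel bearings give no unique fix
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (Cramer's rule).
    t1 = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])
```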
  • Figures 4a-4b show a flowchart of an exemplary method of implementation of the signal processing in accordance with the present invention.
  • the size of the frame captured for processing by the camera of the receiver 3 is Xpic, Ypic. Each variable is set to the default value of 0 at the start of the operation.
  • Operation of the exemplary method starts in step 102, where image input data 101 of individual frames are stored pixel by pixel in Buffer0. A light pixel represents a logical one and a dark pixel a logical zero in the process presented. Operation proceeds from start step 102 to step 103, where it is tested whether the image input data of a frame has been received in full. If not, the process returns to step 102 for reading more image data and storing said data in Buffer0.
  • When it is determined in step 103 that image data of a full frame is stored in Buffer0, the pixels are digitised, i.e. converted into logical 0/1 (dark/light signals), in step 104 and are used in this digital form for further processing in step 105.
  • the digitised dark/light signals of the frame are then stored in Buffer1.
  • the evaluation of individual pixels starts with the pixel in row 0 of column 0 in the matrix coming from the sensor 8 and proceeds column by column, then moves on to the next row. The process continues with each and every pixel of the frame until the last column of the last row in the matrix is reached. Pixel evaluation yields an X, Y coordinate pair in Buffer0, provided that the sensor 8 of the receiver 3 detected and captured the signal from the transmitter 2. Then the bit value at the mentioned coordinates in Buffer0 is compared with the bit having the same coordinates in Buffer1.
  • If in step 106 it is determined that the bit with the specific coordinates in Buffer0 is identical with the bit at the same position in Buffer1, and if in step 107 it is determined that a code sequence is already being compiled for that pixel, the bit stored in Buffer0 is added to the code sequence in step 108. If in step 106 it is determined that the bit in Buffer0 is not identical with the bit in Buffer1, and if in step 109 it is determined that no code sequence is being compiled for that pixel either, then in step 110 the bit in Buffer1 is regarded as the initial bit and a new code sequence is started with it.
  • After comparing the Buffer0 bit with the Buffer1 bit having the same X, Y coordinates, in step 111 it is determined whether the code sequence is complete, i.e. whether it has reached the full predefined length in number of bits. If not, the next frame(s) is/are read and processed. When the sequence has reached the predefined length, in step 112 a time stamp is added to it and the sequence thus completed, i.e. the code and time stamp for the pixel at the X and Y coordinates, is released for subsequent processing.
  • Before release, the sequence is also checked for CRC/checksum in step 113, and if the test returns an error, the code sequence is discarded in step 114.
  • The ID and location of the transmitter 2 then remain unknown for a short period, i.e. a few seconds, as the corresponding code has been discarded. If the CRC/checksum result is correct, the X, Y coordinates serve as input for locating the transmitter 2, while the code sequence represents its ID.
  • If it is determined in step 107 that no sequence has been started for the specific pixel, assembly of the code is continued in step 115.
  • In step 116 it is checked whether the end of the pixel row has been reached; if so, reading of the next row is started in step 117; if not, the process returns to step 106 and the operation is repeated from there.
  • In step 118 it is checked whether the end of a pixel column has been reached. If not, the process returns to step 106 and the operation is repeated from there; if so, then after analysing the pixels of a frame in BufferO, Bufferl is flushed in step 119, the frame data are copied to Bufferl, and BufferO is flushed to make room for the next frame.
  • any beam that can be focused on the receiver 3 side can be used for locating and identification purposes.
  • their signal generator stage 6 must be capable of controlling the transmitter 2.
  • while the exemplary implementation uses the IR range, which is recommended indoors, the process can also be implemented using signals of a different wavelength.
  • the receiver 3 must be capable of detecting, focusing and digitising the optical/light signals from the transmitters 2 and of passing them on to the signal processing unit 4.
  • Another optional implementation may include multiple transmitters 2 operating on different wavelengths.
  • the receivers 3 receive the signals through colour filters 10 of the appropriate wavelengths.
  • the RGB colour filters 10 of the receivers 3 are replaced with three different IR filters. This speeds up processing, because three bits of information can be encoded in a single frame as opposed to just one bit, i.e. data density is tripled.
  • the sensor 8 of the receiver 3 used in this implementation has a resolution of 1024x768 pixels, but the resolution can obviously be lower or higher as well. As should be obvious to a person skilled in the art, resolution determines locating precision.
  • the sensor 8 of the receiver 3 can be that of any digital camera wherein the sensor pixels are arranged so as to allow for locating the transmitter 2 of the light signals using a known algorithm.
  • the interval between the bit sequences of the transmitters 2 is set to equal to, or greater than, the time required for identification.
  • This process can be enhanced with optional error-checking algorithms, such as CRC or parity check, in order to better distinguish the genuine signal from optical noise and interference.
  • the signal processing rate is set higher than the speed at which the objects 1 and the transmitters 2 mounted on them move, so that transmitter 2 displacement can be tracked using a mathematical method and objects can be identified on the move.
  • any method or system can be used; the method according to the invention imposes no special requirements.
  • the process can be implemented in a system wherein the receivers 3 contain a microcomputer 3a.
  • Components may include, for example, a Raspberry Pi 3 module with 4 GB of RAM, an SD card as buffer, and an IP camera serving as the sensor 8 of the receiver 3, with its CSI port connected to the microcomputer's CSI interface.
  • the microcomputer 3a processes the signals coming from the sensor 8 of the receiver 3, which results in the unique IDs of the transmitters 2, and then sends the appropriate X, Y coordinates to a signal processing unit, i.e. in this case the central signal processing unit.
  • Transmitters 2 and receivers 3 may be positioned in a number of different ways.
  • transmitters 2 are mounted in fixed positions, e.g. on the ceiling, and receivers 3 are mounted on the objects 1 whose paths are to be determined through location.
  • the receiver 3 CCD sensors 8 receive the transmitters' 2 IR signals through the lens 9 and the colour filter 10.
  • the objects 1 are located and identified based on the signals received, so if the goal is to move an object 1 to a different location, the difference between the current and target positions can be used to manipulate its controls so that it reaches its destination.
  • Figure 1c shows an implementation wherein transmitters 2 and receivers 3 are mounted in fixed positions and also on the objects 1 to be identified. The resulting two-way communication allows for locating and identifying the objects 1, and for directing them to the desired positions.
  • Locating precision is significantly higher than in known solutions.
  • the same technology - light beams - can be used both to locate and to identify objects in a simple way.
  • the distance between transmitters and receivers can be greater than in the known solutions.
  • as the transmitters and receivers are of optical type, they cause no radio frequency interference.
  • the transmitters and receivers are not sensitive to radio frequency noise or interference.
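The per-pixel decoding walkthrough above (steps 104-119) can be sketched in Python. This is a loose illustration, not the patented implementation: `SEQ_LEN`, `THRESHOLD` and all function names are assumptions, and `crc_ok` is a simple even-parity stand-in for the CRC/checksum test of step 113.

```python
SEQ_LEN = 16       # assumed full bit-sequence length, incl. check bit
THRESHOLD = 128    # assumed dark/light cut-off for 8-bit pixel values

def digitise(frame):
    """Step 104: convert raw pixel intensities into logical 0/1 bits."""
    return [[1 if px >= THRESHOLD else 0 for px in row] for row in frame]

def crc_ok(bits):
    """Step 113: even-parity stand-in for the CRC/checksum test."""
    return sum(bits[:-1]) % 2 == bits[-1]

def process_frame(raw_frame, buffer1, sequences, t):
    """Steps 105-119: compare the digitised frame (BufferO) against the
    previous one (Bufferl) pixel by pixel, extending or starting the
    per-pixel code sequences; returns the new Bufferl content and any
    completed (coordinates, code, time stamp) triples."""
    buffer0 = digitise(raw_frame)
    completed = []
    for y, row in enumerate(buffer0):            # row by row
        for x, bit in enumerate(row):            # column by column
            same = bit == buffer1[y][x]          # step 106
            seq = sequences.get((x, y))          # steps 107/109
            if same and seq is not None:
                seq.append(bit)                  # step 108: extend sequence
            elif not same and seq is None:
                sequences[(x, y)] = [bit]        # step 110: start new sequence
            seq = sequences.get((x, y))
            if seq is not None and len(seq) == SEQ_LEN:   # step 111
                del sequences[(x, y)]
                if crc_ok(seq):                  # step 113
                    completed.append(((x, y), seq[:-1], t))  # step 112
                # step 114: a failing sequence is simply discarded
    return buffer0, completed                    # step 119: BufferO becomes Bufferl
```

Calling `process_frame` once per captured frame accumulates bits only at pixels whose state differs from, then tracks, the previous frame, which is how a blinking transmitter stands out from a static background in this sketch.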
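The gain from the three-IR-filter variant mentioned above can be illustrated with a toy calculation: with one filter each frame carries one bit, with three filters it carries three, so an n-bit ID needs ceil(n / channels) frames. The function name and the 24-bit ID length are assumptions for illustration only.

```python
import math

def frames_needed(id_bits, channels):
    """Frames required to transmit an ID of id_bits bits when each
    frame carries one bit per filter channel."""
    return math.ceil(id_bits / channels)

# With a hypothetical 24-bit ID:
#   one filter   -> frames_needed(24, 1) == 24
#   three filters -> frames_needed(24, 3) == 8  (data density tripled)
```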
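The remark that sensor resolution determines locating precision can be made concrete with a small sketch mapping an illuminated pixel to a bearing. The 1024x768 resolution comes from the text; the 60-degree horizontal field of view, the square-pixel assumption and the function name are illustrative assumptions.

```python
import math

W, H = 1024, 768               # sensor resolution from the text
HFOV = math.radians(60.0)      # assumed horizontal field of view
VFOV = HFOV * H / W            # assumes square pixels

def pixel_to_angles(x, y):
    """Return (azimuth, elevation) offsets from the optical axis, in
    radians, for the pixel at column x, row y (pinhole/small-angle sketch)."""
    az = (x + 0.5 - W / 2) / W * HFOV
    el = (y + 0.5 - H / 2) / H * VFOV
    return az, el

# One pixel subtends HFOV / W, about 0.06 degrees with these figures,
# which is why a higher-resolution sensor locates the transmitter more
# precisely and a lower-resolution one less so.
```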

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention relates to a method for identifying and locating mobile, or potentially mobile, objects (1) using optical signals only, wherein optical transmitters (2) emit unique optical signals by interrupting the light signal (i.e. blinking) so as to send a bit sequence to the receivers (3). The identifier encoded in the bit sequence represents the ID of the transmitter (2). The signals are received by the optical receivers (3) such that they appear on at least one pixel of the CCD, CMOS or similar matrix sensor (8) of the receiver (3). The location of the transmitter (2), and hence of the object (1), can be determined from the illuminated pixels using a mathematical method. The signal processing rate is set higher than the speed at which the transmitters (2) move, so that transmitter (2) displacement can be tracked using a mathematical method and the objects (1) can be identified on the move. As a novel feature of the present invention, the transmitter (2) - receiver (3) pair implements locating and identification using a single technology. The transmitters (2) may be mounted on the mobile objects (1) with the receivers (3) fixed; or, alternatively, the movement of several objects (1) can be controlled independently of one another by mounting the receivers (3) on the objects (1) and fixing the transmitters (2).
EP17819423.9A 2016-06-30 2017-06-29 Procédé d'identification et de localisation d'un objet mobile Withdrawn EP3479055A4 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
HUP1600400 2016-06-30
HU1700189A HUP1700189A1 (hu) 2017-05-03 2017-05-03 Eljárás mozgatható objektum azonosítására és helyének meghatározására
PCT/HU2017/050025 WO2018002679A1 (fr) 2016-06-30 2017-06-29 Procédé d'identification et de localisation d'un objet mobile

Publications (2)

Publication Number Publication Date
EP3479055A1 true EP3479055A1 (fr) 2019-05-08
EP3479055A4 EP3479055A4 (fr) 2020-02-26

Family

ID=89992433

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17819423.9A Withdrawn EP3479055A4 (fr) 2016-06-30 2017-06-29 Procédé d'identification et de localisation d'un objet mobile

Country Status (2)

Country Link
EP (1) EP3479055A4 (fr)
WO (1) WO2018002679A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210209791A1 (en) * 2018-05-20 2021-07-08 Avular B.V. Estimating a pose of a spatially movable platform
CN111474515B (zh) * 2020-03-23 2023-05-05 惠州拓邦电气技术有限公司 光导航方法和装置以及伸缩门

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6324296B1 (en) * 1997-12-04 2001-11-27 Phasespace, Inc. Distributed-processing motion tracking system for tracking individually modulated light points
JP4470102B2 (ja) * 2004-04-26 2010-06-02 ソニー株式会社 情報処理装置、情報処理方法、データ出力制御プログラム及びデータ送受信システム
JP5669212B2 (ja) * 2009-06-08 2015-02-12 国立大学法人 名古屋工業大学 3次元情報提示装置
WO2011154949A2 (fr) * 2010-06-10 2011-12-15 Audhumbla Ltd. Système et procédé de suivi optique pour gestion de troupeau à l'aide de ceux-ci
US8892252B1 (en) * 2011-08-16 2014-11-18 The Boeing Company Motion capture tracking for nondestructive inspection

Also Published As

Publication number Publication date
EP3479055A4 (fr) 2020-02-26
WO2018002679A1 (fr) 2018-01-04

Similar Documents

Publication Publication Date Title
US9175956B2 (en) Construction laser system having a rotation laser and a laser receiver, with functionality for automatic determination of the laser receiver direction
KR100996180B1 (ko) Led 광원을 이용한 이동체의 위치인식시스템 및 위치인식방법
CN105203046B (zh) 多线阵列激光三维扫描系统及多线阵列激光三维扫描方法
JP6069281B2 (ja) 回転軸を中心に運動する走査ユニットを備えるセンサ
JP3779308B2 (ja) カメラ校正システム及び三次元計測システム
US20140198206A1 (en) System and Method for Estimating the Position and Orientation of an Object using Optical Beacons
US10488550B2 (en) Optoelectronic detection of objects by measurement of angular position to determine relative movement of a deflection unit
CN105357511B (zh) 深度数据检测系统
RU2699177C2 (ru) Система динамического ведения и способ автоматического управления с применением трехмерных времяпролетных камер
KR101834124B1 (ko) 다중 라이다 시스템 및 그 구동방법
US20100141740A1 (en) Device and Method for Non-Contact Recording of Spatial Coordinates of a Surface
JP2007310382A (ja) 光学パターンを投影する装置
JP2008140370A (ja) ステレオカメラ侵入検知システム
IL155921A (en) Tracking system using optical tags
WO2020086698A1 (fr) Procédés et systèmes utilisés pour mesurer des bandes de roulement de pneu
EP1771778B1 (fr) Systeme de poursuite utilisant des elements rayonnants optiques
JP2021521573A (ja) 光通信装置を用いた自律移動可能な機器の案内方法
EP3479055A1 (fr) Procédé d'identification et de localisation d'un objet mobile
KR20220140849A (ko) 물체 상의 위치를 측정하기 위한 장치, 방법 및 시스템
US10038895B2 (en) Image capture device calibration
CN111596259A (zh) 一种红外定位系统、定位方法及其应用
JP2010217093A (ja) 測位システム
JP4637066B2 (ja) 情報取得装置、情報取得方法、情報取得プログラム、及び、距離情報取得システム
US20170371035A1 (en) Protection and guidance gear or equipment with identity code and ip address
CN108345000A (zh) 一种具有面阵光电传感器的探测方法

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20190125

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20200128

RIC1 Information provided on ipc code assigned before grant

Ipc: G01S 1/70 20060101ALI20200122BHEP

Ipc: G01S 5/16 20060101ALI20200122BHEP

Ipc: G01S 3/784 20060101ALI20200122BHEP

Ipc: G01B 11/00 20060101AFI20200122BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20200825