WO2006110803A2 - Wireless communications with proximal targets identified visually, aurally, or positionally - Google Patents


Info

Publication number
WO2006110803A2
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
message
wireless
sending
user
Application number
PCT/US2006/013633
Other languages
French (fr)
Other versions
WO2006110803A3 (en)
Inventor
Charles Martin Hymes
Original Assignee
Charles Martin Hymes
Application filed by Charles Martin Hymes
Publication of WO2006110803A2 (en)
Publication of WO2006110803A3 (en)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. SMS or e-mail
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72457 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to geographic location
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions

Definitions

  • any description of the invention including descriptions of specific order of steps, necessary or required components, critical steps, and other such descriptions do not limit the invention as a whole, but rather describe only certain specific embodiments among the various example embodiments of the invention presented herein. Further, terms may take on various definitions and meanings in different example embodiments of the invention. Any definition of a term used herein is inclusive, and does not limit the meaning that a term may take in other example embodiments or in the claims.
  • the primary purpose of some embodiments of this invention is to facilitate communication between people that are within "perceptual proximity" of each other, i.e. they are physically close enough to each other that one person can perceive the other, either visually or aurally.
  • This invention is partially embodied in the form of a small mobile device, either its own dedicated device, or as enhanced functionality to other mobile devices such as, for example, PDA's (personal digital assistants) or cellular telephones.
  • the device may include a small digital camera which can record both single still images as well as video images, the means to enter text and record audio, the ability to transfer information (text, audio, image, or video) from a computer to the device as an alternative to entering information directly into the device, the ability to display text or images and playback audio or video, a programmable microprocessor, and memory storage and retrieval functions — all commonly available features on today's cellular telephones and PDA's.
  • the device may have additional hardware capabilities.
  • This invention works by providing a user of the invention with the ability to communicate electronically (text, voice, image, or video) to specific other individuals (or vehicles - automobiles, motorcycles, etc.) in his or her physical environment that have been identified visually or aurally, but for whom contact information (telephone number, email address, etc.) may not be known. It is expected that the primary application of the invention will be to facilitate romantic relationships, although other social, business, civic or military applications can be foreseen.
  • Perceptual Addressing is the ability for one person, User #1, to establish an electronic communications channel with another person, User #2, that User #1 can perceive in his or her physical environment but for whom no contact information is known.
  • John is driving from Seattle to San Francisco. It is getting late, and he needs to find a hotel for the night. He is driving behind a pickup truck and thinks that perhaps the driver can recommend a place to stay nearby. He takes out his mobile device and presses a button. He sees two license plate numbers displayed. He selects the one that corresponds to the truck in front of him. He then holds his device to his ear. Just then, Pete, in the pickup truck in front of John, picks up his mobile device and sees on its display, "call from license plate #AYK-334". Pete then presses a button and says "how's it going?" They proceed to have a brief cellular telephone conversation in which John asks about hotels in the area and Pete makes a couple of recommendations.
  • These communications can be in the form of individual messages sent from one person to another or can be in the form of live interactive audio and/or video communications; and the content of these communications can consist of any form of media including text, images, video, and voice.
  • Devices in the same vicinity can communicate with each other via direct device-to-device transmission, or alternatively, devices can communicate via a wireless connection to a network — the internet, cellular telephone network, or some other network.
  • the communication may also be mediated by a remote data processing system (DPS).
  • This invention is compatible with a wide variety of methods of transmitting information and, depending upon the method of Perceptual Addressing used, may include more than one mode of transmission.
  • the modes of wireless transmission could include various technologies, frequencies and protocols — for example, radio frequency (RF), infrared (IR), Bluetooth, Ultra Wide Band (UWB), WiFi (802.11) or any other suitable wireless transmission technology currently known or yet to be invented.
  • non-wireless means may also be used if practical.
  • the user of a device embodying the present invention specifies one target person or target vehicle, out of potentially many possible target persons/vehicles in the user's perceptual proximity, by expressing one or more of the target's distinguishing characteristic(s).
  • Perceptual proximity is here defined as a range of physical distances such that one person is in the perceptual proximity of another person if he or she can distinguish that person from another person using either the sense of sight or the sense of hearing.
  • a distinguishing characteristic is a characteristic of the target person or target vehicle, as experienced by the user, that distinguishes the target person or target vehicle from other people or vehicles in the user's perceptual proximity.
  • the user of this invention specifies the target by expressing his or her perception of the distinguishing characteristic in one of two ways: (1) Direct expression of a distinguishing characteristic of the target person/vehicle, or (2) Selection from presented descriptions of distinguishing characteristics of people/vehicles in the user's perceptual proximity.
  • Examples of Direct Expression are: (a) the user expresses the target's relative position by pointing the camera on his or her device and capturing an image of the target; or (b) the user expresses the appearance of a license plate number by writing that number.
  • Examples of Selection are: (a) the user selects one representation of position, out of several representations of position that are presented, that is most similar to the way the user perceives the target's position; (b) the user selects one image out of several presented that is most similar to the appearance of the target; (c) the user selects one voice sample out of several presented that is most similar to the sound of the target's voice.
  • the selection of a target person based upon distinguishing characteristics can occur in one or more stages, each stage possibly using a different distinguishing characteristic. Each stage will usually reduce the pool of potential target people/vehicles until there is only one person/vehicle left - the target person/vehicle.
  • the user compares his or her visual experience of the target person (distinguishing characteristic) with his or her visual experience of each of the ten images displayed on his or her device, and then expresses his or her experience of the visual appearance of the target by choosing the image that produces the most similar visual experience. Because the ten images were already associated with ten telecommunication addresses, by selecting the image of the target, an association can immediately be made to the target's address. (c) A user points a camera at a target person and takes a picture, thus associating the experienced relative position of the target (distinguishing characteristic) with the captured image.
  • the user circles the portion of the image that produces a visual experience that is most similar to the experience of viewing the face of the target person (distinguishing characteristic).
  • the image or the target person's face is subjected to a biometric analysis to produce a biometric profile. This profile is then found to be associated with the target person's telecommunications address in a database.
  • This associative process may occur on the user's terminal, on the terminals of other users, on a data processing system, or any combination. Once the correct address of the intended recipient has been determined, the Perceptual Addressing task has been completed. There are no restrictions on the varieties of subsequent communication between terminals.
  • a non-directional signal is broadcast to all devices in perceptual proximity.
  • the signal contains the device ID# and network address of the transmitting device (Device #1) as well as a request for all receiving devices to send their own device ID#'s and addresses as well as a thumbnail image (or voice sample) of their user to the requesting device.
  • User #1's device (Device #1) then broadcasts a non-directional unaddressed transmission to all other devices within range.
  • the transmission includes User #1's device ID# and network address, as well as a request that images (or voice samples) be sent to Device #1.
  • Each device receiving the request responds automatically (without user awareness) with a transmission addressed to Device #1, sending its own device ID# and network address, as well as an image (or voice sample) of its user. (Only Device #1 will receive these transmissions as other devices will ignore an addressed transmission if it is not addressed to them.)
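  • As a rough illustration of this request/response exchange, the following Python sketch stands a plain object list in for the wireless medium; all class and function names are hypothetical, not taken from the patent.

```python
# Illustrative sketch of the Method #1 exchange. A Python list stands in for
# the wireless medium; all names here are hypothetical.

class Device:
    def __init__(self, device_id, address, user_image):
        self.device_id = device_id    # device ID#
        self.address = address        # network address
        self.user_image = user_image  # thumbnail image (or voice sample)

    def on_request(self, requester):
        # Automatic response (without user awareness): reply to the requesting
        # device with our own device ID#, network address, and user image.
        requester.responses.append((self.device_id, self.address, self.user_image))

def broadcast_request(device1, devices_in_range):
    """Non-directional, unaddressed broadcast; returns the collected replies."""
    device1.responses = []
    for dev in devices_in_range:
        dev.on_request(device1)
    # User #1 now reviews the returned images and picks the one matching the
    # person she sees; the chosen tuple carries the target's address.
    return device1.responses
```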
  • Method #2 Non-Directional Transmission to Devices within a Limited Radius
  • This method is identical to Method #1 with the modification that the initial request for user images (or voice samples) is made with a more limited signal strength, requiring User #1 to be within a few feet of the target person (User #2), thus limiting the number of other devices that will be contacted, and in turn limiting the number of images (or voice samples) that will be received.
  • the signal strength then starts at a very low level and increases until the maximum number of people have been contacted (as measured by the number of responses received).
  • the initial transmission requests device ID#'s, network addresses, and associated images (or voice samples).
  • User #1 selects the image (or voice sample) corresponding to the intended recipient, thus selecting the device ID# and network address of the correct target. This method consists of the following steps:
  • User #1 sees (or hears) someone, User #2, with whom she wants to communicate. She walks to within five feet of User #2 and instructs her device (Device #1) via its user interface (she presses a button, for example) to contact all other devices within five feet, and obtain device ID#'s, network addresses, and images (or voice samples) from each device.
  • User #1 could have controlled the signal strength of the initial broadcasted request by specifying the maximum number of people that should be contacted, so that her device gradually increased signal strength until the maximum number of responses was received. If she had set the maximum equal to one person, the result would be that only the person closest to her would be contacted.
  • Device #1 broadcasts a non-directional transmission to other devices with enough signal strength to effectively reach approximately 5 feet, under "normal" conditions.
  • the transmission includes Device #1's device ID# and network address, as well as a request for the device ID#, network address, and image (or voice sample) from other devices.
  • Each device receiving the request responds with a transmission addressed to the device making the request, sending its own device ID# and network address as well as an image (or voice sample) of its user.
  • Device #1 receives the device ID#'s, network addresses, and images (or voice samples) from all other users in the area.
  • User #1 selects the image (or voice sample) of User #2, thereby selecting User #2's device ID# and network address.
  • Device #1 can now initiate communications with User #2 using Device #2's network address.
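  • The signal-strength ramp described above can be sketched as follows, assuming a hypothetical radio primitive send_at_power that broadcasts the request at a given power level and returns the responses received; the power levels themselves are illustrative.

```python
# Sketch of the Method #2 power ramp: start at a very low signal strength and
# increase until at most `max_contacts` responses arrive.

def ramped_request(send_at_power, max_contacts, power_levels=(0.1, 0.25, 0.5, 1.0)):
    responses = []
    for power in power_levels:
        responses = send_at_power(power)   # broadcast and collect replies
        if len(responses) >= max_contacts:
            break                          # enough people contacted; stop ramping
    return responses[:max_contacts]        # max_contacts=1 -> nearest person only
```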
  • Method #3 is identical to Method #2 with the modification that the number of other users contacted is not governed by modulating signal strength, but rather by measuring the distance between users and requiring that the recipient of the initial communication is within a specified distance from the requesting device.
  • This feature allows the user (User #1 using Device #1) to limit the number of other devices that are contacted, and therefore limit the number of images (or voice samples) that will be received.
  • There are two different user options for how to regulate the radius of contact, i.e. the distance between terminals: the distance can be measured by either the instigating terminal or the receiving terminals. If measured by the receiving terminals, then they will respond only if the distance is within the communicated radius of contact.
  • the particular method of measuring the distance between terminals is not central to this method, but one that will suffice is for each terminal to determine its own position (via GPS or some other means) and then to compare with the reported position of the other terminal.
  • the initial broadcasted transmission reports Device #1's ID, address, position, and radius of contact, and requests from receiving terminals within the radius of contact their device ID#'s, network addresses, and associated images (or voice samples). Receiving devices respond with the requested information if within the radius of contact. After receiving the requested images (or voice samples), User #1 then selects the image (or voice sample) corresponding to the intended target person or vehicle, thus selecting the device ID# and network address of the correct target.
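  • A minimal sketch of the receiver-side distance gate, assuming each terminal can obtain its own planar position (via GPS or some other means); the function name is illustrative.

```python
# Sketch of the Method #3 receiver-side gate: a terminal answers the broadcast
# only if it lies within the requester's radius of contact.
import math

def should_respond(own_position, requester_position, radius_of_contact):
    dx = own_position[0] - requester_position[0]
    dy = own_position[1] - requester_position[1]
    return math.hypot(dx, dy) <= radius_of_contact  # stay silent otherwise
```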
  • Method #4 Directional Transmission to Other Users' Devices
  • This method is identical to Method #1 except that instead of Device #1 making an initial transmission in all directions, the transmission is focused in a relatively narrow beam toward the target person (User #2), thus reducing the number of other users contacted by the transmission, while at the same time allowing User #1 to be at a relative distance from User #2.
  • the transmission uses frequencies in the range of 100 GHz to sub-infrared in order to balance the dual needs of creating a highly directional transmission from a small handheld device with the need to penetrate barriers (such as clothing and bodies) between the transmitting device and receiving device.
  • This method consists of the following steps: (1) User #1 sees someone, User #2, with whom she wants to communicate.
  • She aims her device (Device #1) at User #2.
  • User #1 instructs her device via its user interface (she presses a button, for example) to contact all other devices in the direction of User #2 and obtain device ID#'s, network addresses, and images of those users.
  • User #1's device (Device #1) sends a directional transmission to all other devices in the target user's direction. The transmission includes Device #1's device ID# and network address, as well as a request that images be sent to User #1.
  • Each device receiving the transmission responds with a transmission addressed to Device #1, sending its own device ID# and network address, as well as an image of its user.
  • Device #1 receives device ID#'s, network addresses, and images from all other local users in the direction of User #2.
  • User #1 selects the image of User #2, thereby selecting the device ID# and network address of User #2's device, Device #2.
  • Device #1 can now initiate communications with User #2 using Device #2's network address.
  • In Method #5, the emphasis is placed on a high-frequency, highly directional beam (infrared, for example) without regard for its penetration properties. It involves the use of one or more tiny Radio Frequency Identification (RFID) tags clipped onto the outside of the clothing of each user which, when scanned by the devices of other users, transmit the device ID# of the target user's own device to the interrogating device.
  • In order to scan the RFID tag(s) of a target user, devices have highly directional scanning capability using a high-frequency signal (infrared, for example).
  • User #1 points her device (Device #1) toward the person of interest (User #2).
  • the beam will contact the RFID tags of one or more individuals, including User #2, which will then transmit device ID#(s) back to Device #1.
  • Device #1 then sends a non-directional transmission addressed to each of the devices contacted.
  • the transmission contains User #1's device ID# and network address, and also a request for an image of the other users.
  • User #1 selects the image of the intended recipient, User #2, thus addressing a communication to only that individual.
  • a line of sight is required between User #1's device and the RFID tags of other users, and there is a range limitation as to how far passive RFID tags can transmit back to the scanning device.
  • This method consists of the following steps: (1) User #1 sees someone, User #2, with whom she wants to communicate. She aims her device (Device #1) at User #2. User #1 instructs her device via its user interface (she presses a button, for example) to contact all other devices in the direction of User #2. (2) Device #1 transmits a high-frequency (infrared, for example) directional signal in the direction of User #2. This signal, containing Device #1's device ID#, makes contact with the RFID tags of one or more users.
  • Each RFID tag which receives the transmission from Device #1 then makes a transmission addressed to Device #1's device ID# and containing the device ID# of its user.
  • Device #1 receives the device ID#'s from all RFID tags contacted and then sends a non-directional transmission addressed to each of those device ID#'s. These transmissions include Device #1's device ID# and network address as well as a request for an image of the user. If any of the other devices cannot be contacted with a direct transmission because they are now out of the immediate area, or for some other reason, then a transmission is made to the device's network address.
  • Each device receiving a request for an image then transmits a user's image to Device #1.
  • Device #1 receives all user images and displays them. User #1 selects the image of the user she intended to contact, User #2, thereby selecting Device #2's device ID# and network address.
  • Device #1 can now initiate communications with User #2 using Device #2's network address.
  • Method #6 Non-Directional Transmission to RFID tags
  • This method is identical to the previous method (Method #5) with two important differences: (a) The transmission to scan the target person's RFID tag is non-directional; (b) Because the scanning is non-directional, scanning must be very short range. In order to select the person of interest, User #1 must stand very close to User #2 when activating the scanning transmission. It is also important that User #1 makes sure that there are not any other users within scanning distance.
  • Method #7 Directional Transmission to Intermediate RFID tags
  • Similar to Method #5, RFID tags are worn by users who receive highly directional high-frequency transmissions from User #1's device (Device #1). But instead of transmitting a high-frequency signal back to Device #1, the RFID tag converts the incoming signal to a relatively low-frequency radio frequency (RF) signal (which easily penetrates clothing and bodies) and then transmits this RF signal to its owner's device (at most only two or three feet away) by addressing it with the device's device ID#.
  • this signal carries Device #1's device ID, network address, and a request for a user image.
  • after receiving the signal, the target device makes a non-directional transmission addressed to Device #1, sending its own device ID#, network address, and an image of its user.
  • User #1 then needs only select the image of the person she intended to contact, User #2, in order to address subsequent transmissions to that person.
  • This solution does not have the range limitations of the previous method, although it still requires a line of sight between the device of the sender and the RFID tag of the receiver.
  • This method consists of the following steps: (1) User #1 sees someone, User #2, with whom she wants to communicate.
  • User #1 aims her device (Device #1) at User #2 and instructs her device via its user interface (she presses a button, for example) to contact all other devices in the direction of User #2.
  • Device #1 transmits a high-frequency (infrared, for example) directional signal in the direction of User #2.
  • This signal, containing Device #1's device ID#, makes contact with the RFID tags of one or more users.
  • Each RFID tag contacted then transforms the signal to a much lower RF frequency and then transmits the same information, addressed to its user's device ID#.
  • a low power transmission is adequate as the signal has to travel only a few feet (for example, from the RFID tag on the target person's lapel to the device in the target person's pocket).
  • After receiving the transmission, the receiving device makes a transmission addressed to Device #1's device ID# which includes the recipient device's device ID# as well as an image of the recipient.
  • Device #1 will receive and display one image for every device contacted. User #1 selects the image of the user she intended to contact, User #2, thereby selecting Device #2's device ID# and network address.
  • Device #1 can now initiate communications with User #2 using Device #2's network address.
  • Method #8 is similar to Method #1 with the exception that the images of nearby users, instead of being sent from the nearby devices themselves, are sent from a data processing system (DPS) which also mediates communication between devices.
  • the DPS of this application has access to location information of all users (using GPS or some other means) as well as a database of all users containing their addresses, device ID#'s, and facial images. Upon request the DPS is able to send images of proximal users within a pre-defined distance to a requesting device.
  • This method consists of the following steps:
  • User #1 sees someone, User #2, with whom she wants to communicate, and instructs her device, using the device interface (she presses a button, for example), to contact the DPS and request images of other users currently in her proximity.
  • User #1's device (Device #1) then transmits a request to the DPS.
  • the transmission includes User #1's device ID# and network address, as well as a request that images be sent to Device #1.
  • the DPS retrieves the necessary location information and determines which other users are within viewing distance of User #1. The DPS then transmits the images of those other users along with their associated device ID#'s to Device #1.
  • User #1 reviews the images received and selects the image of User #2, thereby selecting Device #2's device ID#.
  • Device #1 initiates an addressed communication to Device #2 via the DPS by specifying Device #2's device ID#.
  • Method #9 Location Identification & Spatial Mapping
  • each user's device determines its own location coordinates periodically (at least once per second is recommended), and broadcasts periodically (at least once per second is recommended) those coordinates, along with the device's device ID#, to other devices sharing this application in the perceptual proximity.
  • It would also be an acceptable solution for a centralized system to track the location of all devices and transmit to all devices the locations, device ID#'s, and network addresses of all devices local to each device, updating that information periodically.
  • Each device is therefore aware of the positions of all other devices nearby - both their device ID#'s and location coordinates. Devices then have the information necessary to display a two dimensional self-updating map of all other users in the perceptual proximity in which each user is represented by a small symbol.
  • a device ID# and network address is associated with each symbol so that a user need only select the symbol associated with a particular person to address a transmission to that person.
  • To contact a person of interest, User #1 first views the map on her device and compares the configuration of symbols on the map with the configuration of people before her. She then selects the symbol on the map which she believes corresponds to the intended recipient. Her device (Device #1) then makes a transmission to User #2's device (Device #2) containing Device #1's device ID# and network address and a request for an image of User #2. Device #2 then transmits an image of User #2 to Device #1. User #1 then compares the image received to the actual appearance of the person she intended to contact. If she determines that she has contacted the correct person, then she instructs her device via the user interface to initiate communications with User #2. If, on the other hand, the image that User #1 received does not correspond to the person that User #1 intended to contact, then User #1 may select another symbol which could possibly represent the person she wants to contact.
  • the advantage of this alternate method is that it would save energy and bandwidth for devices not to be determining and broadcasting position when it is not needed or used.
  • the disadvantage is that there is a short delay between the time User #1 initiates the positioning process and the time all users' positions are displayed on her device.
  • Yet another alternate version entails the above alternate method with the following changes: All devices maintain time synchronization to one-second accuracy by means of periodic time broadcasts via a network from a DPS. All devices constantly update their position (at least once per second) and record what position they are at each point in time. This data is saved for a trailing time period, 10 seconds for example. Then, when a device makes a request of other devices for positions and network addresses, the request specifies the precise time for which position information is sought.
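  • A possible shape for the trailing position history just described, assuming clocks synchronized to about one second as in the text; the class and method names are illustrative, not specified by the patent.

```python
# Sketch of a trailing position history: keep roughly ten seconds of
# timestamped positions so the device can answer "where were you at time t?"
# when a request names a precise moment.
from collections import deque

class PositionHistory:
    def __init__(self, window_s=10.0):
        self.window_s = window_s
        self.samples = deque()                     # (timestamp, (x, y)) pairs

    def record(self, timestamp, position):
        self.samples.append((timestamp, position))
        while self.samples and timestamp - self.samples[0][0] > self.window_s:
            self.samples.popleft()                 # drop samples past the window

    def position_at(self, t):
        # Return the sample closest in time to t, or None if none recorded.
        return min(self.samples, key=lambda s: abs(s[0] - t), default=None)
```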
  • All devices periodically determine their own position coordinates and broadcast those coordinates along with their device ID#'s to other devices in the perceptual proximity.
  • User #1 instructs her device via its user interface (presses a button, for example) to display a 2-dimensional map of the locations of all other devices in the perceptual proximity in relation to itself.
  • Each of the other devices is represented on the display by a small symbol (which can potentially represent useful distinctions such as the sex of the user, or whether the user is participating in the same "application", such as "dating" or "business networking", etc.).
  • Device #1 makes a transmission addressed to the target device that includes its own device ID#, network address, and a request for an image of the target user.
  • Device #2 responds by sending a transmission addressed to Device #1 that includes its own device ID#, network address, and an image of User #2.
  • User #1 views the image of User #2 on her display to confirm that it is the person she intended to contact.
  • If the image received corresponds to the person she intended to contact, then she instructs her device (by pressing the "send" button, for example) to initiate an addressed communication to the target device.
  • Device #1 also sends an image of User #1, Device #1's device ID#, and Device #1's network address to Device #2.
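  • One plausible data structure behind the self-updating map of Method #9 is a table of the latest position beacon heard from each nearby device; the staleness window and all names below are illustrative assumptions.

```python
# Sketch of a neighbor table fed by the periodic position broadcasts
# (at least once per second) described above.
import time

class NeighborMap:
    def __init__(self, stale_after_s=3.0):
        self.neighbors = {}        # device_id -> (address, (x, y), heard_at)
        self.stale_after_s = stale_after_s

    def on_beacon(self, device_id, address, position):
        # Called for each periodic broadcast received from a nearby device.
        self.neighbors[device_id] = (address, position, time.time())

    def map_symbols(self):
        # Devices heard from recently become the symbols drawn on the display;
        # selecting a symbol yields the device ID# and network address.
        now = time.time()
        return {dev_id: (address, position)
                for dev_id, (address, position, heard_at) in self.neighbors.items()
                if now - heard_at < self.stale_after_s}
```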
  • Method #10 is similar to Method #9 except that it employs a different user interface, "virtual beaming", for selecting which devices will be contacted.
  • It also incorporates direction technology such as, for example, a digital flux-gate compass and/or a gyroscopic compass.
  • With the direction technology incorporated into the device, in combination with the position technology already discussed, it can be determined with simple geometry which target individuals are positioned within a narrow wedge (in either two or three dimensions, depending on the sophistication of the positioning information) extending out from the user's position in the direction she is pointing her device:
  • User #1's device (Device #1) has already received information as to her own position and also the device ID#'s and position coordinates of all other devices in the perceptual proximity.
  • the direction that User #1's device was pointing when she targeted the person of interest can be represented as the "target vector", which begins at User #1's position and extends in the direction determined by the direction technology in her device.
  • a target volume can then be defined as the volume between four vectors, all extending from User #1's position — two lying in a horizontal plane and the other two lying in a vertical plane.
  • in the horizontal plane, one vector lies X degrees counterclockwise to the target vector, and the other vector X degrees clockwise to the target vector, where X is a small value (5 degrees is recommended) which can be adjusted by the user.
  • in the vertical plane, one vector extends in a direction X degrees above the target vector, and the other vector X degrees below the target vector.
  • When User #1 points her device and "presses the button", Device #1 then makes an addressed transmission to all other users within the target area (or volume).
  • the transmission includes Device #1's device ID# and network address, and a request for an image of the recipient. After the images are received, the user then selects the image of the person (and the corresponding device ID# and network address) she is interested in. Further communication is addressed solely to the selected device.
  • One advantage of this method is that the user is not required to read a map on her device, trying to make an accurate correspondence between the person she is interested in and the corresponding symbol on her display. This is of particular value when the target individual is moving. Another advantage is that obstructions between the user and the target person are not an issue when targeting: a user may hold the device within a coat pocket or bag when targeting an individual.
  • The only disadvantage in comparison with Method #9 is that the initial request for an image may be made to more than one target device.
  • This method consists of the following steps: (1) All devices periodically (at least once per second is recommended) determine their own position coordinates and broadcast those coordinates along with their device ID#'s to other devices in the perceptual proximity.
  • User #1 sees someone, User #2, with whom she wants to communicate. She aims her device (Device #1) at User #2. User #1 instructs her device via its user interface (she presses a button, for example) to contact all other devices in the direction of User #2 and obtain images of those users.
  • Device #1 determines which of the positions reported by other devices lie in the target area defined by its own position and the direction it was pointing when User #1 instructed her device to initiate contact. If there was only one device in the target area, then Device #1 is now able to communicate with that device using its network address.
  • If more than one device lies in the target area, Device #1 must determine which of those devices is the intended target. User #1 can either repeat the same process, hoping that there will be only one person in the target area the second time, or hoping that only one person will appear in both the first and second attempts. Alternatively, User #1 can use a different distinguishing factor - appearance - to determine which of the addresses obtained belongs to the intended target. Following is the latter procedure:
  • Device #1 makes a transmission addressed to all devices in the target area as defined above.
  • the transmission includes Device #1's device ID# and network address, and a request that user images be sent to Device #1.
  • Each device receiving the transmission responds with a transmission addressed to Device #1, sending its own device ID# and network address, as well as an image of its user.
  • Device #1 receives images from all users in the target area. (8) From the images received, User #1 selects the image corresponding to User #2, thereby selecting Device #2's device ID# and network address.
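  • The wedge test behind "virtual beaming" can be sketched in the horizontal plane as follows; the 5-degree half-angle follows the recommendation above, and all function names are illustrative.

```python
# Sketch of the Method #10 wedge test: a nearby device is a candidate target
# when the bearing from Device #1 to it lies within X degrees of the direction
# the device was pointed.
import math

def in_target_wedge(own_pos, pointing_deg, neighbor_pos, half_angle_deg=5.0):
    bearing = math.degrees(math.atan2(neighbor_pos[1] - own_pos[1],
                                      neighbor_pos[0] - own_pos[0]))
    offset = (bearing - pointing_deg + 180.0) % 360.0 - 180.0  # wrap to +/-180
    return abs(offset) <= half_angle_deg

def candidate_targets(own_pos, pointing_deg, neighbors):
    # neighbors: {device_id: (x, y)} gathered from the periodic broadcasts
    return [dev_id for dev_id, pos in neighbors.items()
            if in_target_wedge(own_pos, pointing_deg, pos)]
```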
  • Method #11 Addressing with Spatial Position via a DPS
  • User #1 notices the person to whom she wants to send a message, User #2, and with her device, Device #1, determines the precise distance and direction that User #2 is from her own position. This can be accomplished with any compass and distance-measuring capabilities (for example, a flux-gate compass and an ultrasonic or laser distance sensor) built into Device #1.
  • Device #1 transmits a message, along with the relative position of the intended target, to a DPS with instructions to forward the message to whatever device is at the specified position.
  • the DPS has access to absolute positions of all users (via GPS or some other means) and can easily calculate the absolute position indicated by adding the submitted relative position to Device #1's absolute position. The DPS then determines which user is nearest to the calculated position of the target and forwards the message to that user.
  • Device #1 has access to its own absolute position (via GPS or some other means), and with the known relative position of the target person, is then able to calculate the absolute position of the target person. This being the case, Device #1 submits to the DPS the target's absolute position, rather than the target's position relative to itself.
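  • The position arithmetic of Method #11 reduces to adding the measured offset to the device's own absolute position. A sketch follows, assuming a planar coordinate system with the bearing measured counterclockwise from the positive x-axis; all names are illustrative.

```python
# Sketch of the Method #11 position arithmetic: the target's absolute position
# is the device's own absolute position plus the measured offset.
import math

def target_absolute_position(own_xy, distance, bearing_deg):
    x, y = own_xy
    rad = math.radians(bearing_deg)
    return (x + distance * math.cos(rad), y + distance * math.sin(rad))

# The DPS then forwards the message to whichever registered user is nearest
# this computed point.
```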
  • Method #12 generally involves capturing an image of the target person's face, analyzing the image to produce a unique biometric profile, and then associating the biometric profile with a similar biometric profile and address in a database.
  • the image analysis can be performed on either (1) the user's device or (2) on a data processing system (DPS).
  • the user's device (Device #1) would send its own ID/address, any message, and the biometric profile to the DPS, where the biometric profile would be matched with a biometric profile stored in a database along with an associated address; the DPS would then facilitate communication with that address (forward a message or report the address to Device #1, for example).
  • the user's device would send its own ID/address, any message, and the captured image to the DPS.
  • the DPS would then analyze the image; match the resulting biometric profile to a biometric profile and address stored in its database; and facilitate communication with that address.
  • There are several types of biometric profiles to which this method could be applied: facial recognition, outer (external) ear recognition, and retinal patterns, for example.
  • the retinal analysis would require a specialized camera for that purpose to be integrated into users' devices.
  • this invention is agnostic as to the specifics of what kind of biometric analysis is used, whether it is current or future biometric technology.
  • the method of using a visually obtained biometric "signature" to address a message remains the same. In all of the above variations, the user selects the intended target person by aiming the user's device at the target person and capturing an image.
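  • A DPS-side lookup for this method might look like the following sketch, which treats a biometric profile as a fixed-length numeric feature vector and performs a nearest-neighbor search; the patent leaves the biometric technology itself open, so the vector form, the threshold, and all names are illustrative assumptions.

```python
# Sketch of a DPS-side biometric match: nearest neighbor in feature space.
import math

def match_profile(query_profile, database, max_distance=0.5):
    """database maps network address -> stored profile vector."""
    best_address, best_distance = None, float("inf")
    for address, profile in database.items():
        d = math.dist(query_profile, profile)
        if d < best_distance:
            best_address, best_distance = address, d
    # Only accept sufficiently close matches; otherwise report no match.
    return best_address if best_distance <= max_distance else None
```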
  • Method #13 is analogous to Method #12, but instead of using an image of a person's face to address a communication, it uses a person's distinct vocal characteristics as a means of determining the target person's address.
  • a voice sample needs to be collected. This can be done by the user moving close to the intended target and recording a voice sample while the target is speaking. Sound recording and editing features can easily be incorporated into small devices; this is existing technology.
  • a directional microphone integrated into the user's device could be aimed at the target person for recording their speech. (It may be easier for a blind person to aim a microphone than to maneuver close to the target.)
  • After the voice sample is collected, it can be analyzed either on the user's device or on a DPS.
  • if the voice sample is analyzed on the user's device, the message along with the biometric profile can be sent to the DPS, where the biometric profile will be matched with a biometric profile that is stored in a database along with an address. Once the association is made to the address, the message is then forwarded to the target person.
  • if the voice sample is analyzed on the DPS, then the user's device sends the message along with the voice sample itself to the DPS.
  • the DPS then converts the voice sample to a biometric profile, finds a match for the biometric profile using a database, associates the biometric profile with an address, and then forwards the communication to that address.
  • Method #14 Addressing Directly to Target Terminals Using Image, Voice Quality, or Position
  • This method is analogous to the three previous methods (Method #'s 11, 12, and 13) in which the information describing the distinguishing characteristic was sent to a DPS where it was associated with an address, and then forwarded to that address.
  • the information describing the distinguishing factor is not sent to a DPS, but rather, it is broadcast to all proximal terminals.
  • Each terminal receiving the broadcast compares the expression of the distinguishing characteristic with the distinguishing characteristics of its user. If there is a match, then the terminal accepts the communication and responds if appropriate.
  • User #1 using Device #1 expresses a distinguishing characteristic of a target person (captures an image of the target's face and transforms this image into a biometric profile) and broadcasts this information together with Device #1's ID/address and a brief message.
  • Device #2, along with several other devices, receives the broadcast from Device #1.
  • Device #2 has stored in its memory the biometric profile of the image of its user's (User #2) face. It compares the two biometric profiles. If they do not match then it ignores the communication from Device #1. If they do match, then it responds according to User #2's wishes.
  • This method has three main variations — one for each type of distinguishing characteristic which is used to specify the target person or vehicle.
  • the distinguishing characteristics of targets may be expressed by User #1 as described in Method #'s 11, 12, and 13. This method consists of the following steps:
  • User #1 using Device #1 captures an image or a voice sample of the target person/vehicle, or else determines the position of the target using techniques described in Method #'s 11, 12, and 13.
  • Device #1 broadcasts the message, Device #1's ID/address, and the captured image of the target (or a biometric abstraction thereof), a voice sample of the target (or a biometric abstraction thereof), or the position of the target.
  • if raw images or voice samples are received, receiving devices analyze them to create a biometric profile. Receiving devices then compare the features of the received biometric profile or position with the features of their user's biometric profile or position. A device with a close enough match receives the communication. It knows the address of the sender and can respond if appropriate.
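  • Receiver-side matching for this method can be sketched as follows, again treating profiles as numeric vectors with an illustrative match threshold; the class and names are assumptions, not from the patent.

```python
# Sketch of the Method #14 receiver: each terminal holds a stored profile of
# its own user and accepts a broadcast only on a sufficiently close match.
import math

class ReceivingTerminal:
    def __init__(self, own_profile, threshold=0.5):
        self.own_profile = own_profile   # e.g. biometric vector of this user
        self.threshold = threshold

    def on_broadcast(self, sender_address, target_profile, message):
        if math.dist(target_profile, self.own_profile) <= self.threshold:
            return (sender_address, message)  # accept; may respond if desired
        return None                           # no match: ignore the broadcast
```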
  • VARIATION Another distinguishing characteristic related to appearance of the target person/vehicle may be used. Because it is only necessary to distinguish the target from other people or vehicles in the perceptual proximity of User #1, the level of specificity required in expressing the characteristics of the target is less stringent than if the target was to be distinguished from millions of other people in a database.
  • the profiles stored on each terminal describing their user may be updated frequently - possibly daily, ensuring a higher degree of similarity than if the information was kept on a DPS and updated less frequently.
  • these two preceding factors allow for another type or category of visual profile of the target — one that is descriptive of the visual quality of their clothing.
  • a color profile, or pattern profile, or contrast profile could be created which would allow for adequate specificity, could be obtained from any angle, and would not require facial information to be obtained.
  • ADVANTAGE: It is easier to frequently update the image of oneself stored on one's own device, so captured images can be compared even against temporary features such as the color of a shirt, jacket, and tie.
  • A biometric profile need not be unique among a large database of users, but need only be unique among a relatively small number of proximal users. This approach would not require that users of such a communication system submit information about their voice or their appearance to a database or to other users.
  • Methods #15 & #16 Data to Image Mapping
  • Methods #15 and #16 do not depend on the user's device receiving images of other users from a DPS or other users' devices. Instead, it is the user's own device which generates any necessary images of other users.
  • each image generated in Methods #15 and #16 by the user's own device may contain more than one person. Following is a description of the user's experience using these methods. Afterward, more technical descriptions will be given.
  • the user points the camera on her device at the person she would like to communicate with (see Figure 1). She instructs her device (by pressing a button, for example) to either capture a still image, or to begin displaying live video.
  • the camera generates an image of a person (or a group of people) from the user's point of view.
  • the user views either a still image or a live video image on her device's display.
  • Superimposed over the image of each person is a small graphic shape, a circle for example, which represents the location of that person's device. The user selects the person with whom she wants to communicate by tapping with a stylus the circle superimposed over that person's image.
  • Each circle is associated with the device ID# and network address of the device belonging to the user whose image lies underneath the circle.
  • the user's device then initiates communication with the device of the selected person — either by sending a regular or Discreet message, or by initiating some other form of communication such as, for example, an instant messaging session, a telephone call, or a videophone call.
  • there are two alternative techniques for accomplishing this task: (1) mapping position data onto an image, and (2) focusing both light radiation from the target person and also data-carrying radiation from the target person's device onto the same imaging sensor (or onto two different imaging sensors, and then overlaying the data captured on each sensor).
  • the means of associating a graphic symbol (a circle, for example) that is linked to data (device ID# and network address, for example) with a particular portion of an image (likeness of a target person, for example) is accomplished by mapping position data received from another person's device onto the display of the user's device.
  • the process of mapping of objects that exist in 3-dimensional space onto the two-dimensional display of a user's device requires the following factors: (a) the position of the user's device, (b) the position of the target device(s), (c) the orientation of the user's device, (d) the focal length of the device's camera lens, (e) the size of the camera's image sensor, and (f) the pixel density of the sensor.
  • the last three factors (d, e, and f) are properties of the user's camera and are either fixed quantities, or at least, in the case of the lens's focal length, known quantities easily output from the camera.
  • an infrastructure is required (1) to determine the precise location of each device with location coordinates which are valid at least locally, and (2) to provide time-synchronization to all devices (at least locally) to sufficient accuracy.
  • Time synchronization is necessary in order to take into account movement by either the user or potential target persons. If the location history of each device is stored for a trailing period of about 5 seconds (or a similar period of time short enough so that only a manageable amount of memory is required, yet long enough so that all devices are able to respond to a request for information within that time period), then the locations of all users may be determined for the moment an image is captured.
  • Each device stores its own location data, or alternatively, the location data for all local devices is stored by a single third-party DPS. If a user targets a person by capturing a still image, then when the user presses a button to capture the image, his device broadcasts [to other devices within a specific domain, where "specific domain" can be defined in any one of a variety of ways, for example, (a) any user which receives the broadcast, (b) any user with location coordinates within a designated quadrant relative to the user, etc.] its own device ID and network address accompanied by a request for other devices to transmit their position coordinates for a specified moment within the past five seconds (or other pre-determined trailing period).
  • potential target devices When potential target devices receive this broadcasted request, they respond by transmitting to the network address of the requesting device (a) their device ID#, (b) their network address, and (c) their position coordinates for the time specified in the request.
  • the request for position information is instead directed to the third-party DPS.
  • the DPS then provides the requested position information of all eligible devices along with the associated device ID's and network addresses.
  • this technique requires that each device have the capability of accurately determining its own orientation in three-dimensional space, factor (c). Specifically, the information required is the orientation of the device's camera: horizontally (direction as it is projected onto a horizontal plane), vertically (the degree to which its orientation deviates from the horizontal), and "roll" (the degree to which the device is rotated about the axis defined by the direction that the device's camera is pointing).
  • the technology for a device to determine its own orientation currently exists, and it is irrelevant to this invention which technology is employed as long as it delivers the required output.
  • One adequate form of the required output describes the camera orientation with three angles: (θ, φ, ψ), where θ is the degree that the camera is rotated to the left in a horizontal plane from a reference direction; φ is the degree that the camera is tilted up or down from the horizontal; and ψ is the degree that the camera is rotated in a clockwise direction about the axis defined by the direction it is pointing.
  • Figure 2 illustrates two users in 3-dimensional space described by an x,y,z coordinate system in which the z-dimension represents the vertical dimension and the x and y coordinates describe the user's location with respect to the horizontal plane.
  • the locations of Device #1 and Device #2 are represented by the coordinates (x1, y1, z1) and (x2, y2, z2), respectively. (More precisely, the location coordinates represent the location of each device's image sensor.)
  • User #1 points his device in the general direction of User #2 and captures an image at a particular moment in time, t.
  • his device broadcasts its own device ID and network address and a request to nearby devices to send their position coordinates at time t along with their device ID's and network addresses.
  • User #2's device (Device #2, in User #2's bag) responds to this request by transmitting the requested position coordinates (x2, y2, z2), device ID#, and network address to Device #1.
  • In order for Device #1 to represent on its display the location of Device #2 superimposed over the image of User #2, it must also have (in addition to the location coordinates of Device #2) its own location coordinates (x1, y1, z1) and the orientation of its camera in space (θ, φ, ψ). These values are returned by the location system employed and the device orientation system employed, respectively.
  • Figure 3 illustrates the same two users represented from an overhead viewpoint projected against the horizontal plane. The direction in which the camera is pointed in the horizontal plane is specified by a vector which is rotated θ degrees counterclockwise from the direction of the positive x-axis.
  • in the vertical plane, the z-axis represents the vertical dimension and the horizontal axis represents the vector from Device #1 to Device #2 projected onto the x-y plane.
  • Figure 5 illustrates the display of Device #1.
  • the camera has been rotated ψ degrees in a clockwise direction about the axis defined by the direction the camera is pointing. This results in the rotation of the image in the display ψ degrees in a counterclockwise direction.
  • Writing (x0, y0, z0) = (x2 - x1, y2 - y1, z2 - z1) for the position of Device #2 relative to Device #1, the coordinates of Device #2 in the camera's frame of reference are:
  • x' = cos φ (x0 cos θ + y0 sin θ) + z0 sin φ
  • y' = cos ψ (-x0 sin θ + y0 cos θ) + sin ψ [z0 cos φ - (x0 cos θ + y0 sin θ) sin φ]
  • z' = -sin ψ (-x0 sin θ + y0 cos θ) + cos ψ [z0 cos φ - (x0 cos θ + y0 sin θ) sin φ]
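  • Combining the rotation above with a pinhole projection gives the display position for the superimposed symbol. The following sketch assumes illustrative parameter names for the camera factors (d) focal length, (e) sensor size, and (f) pixel density listed earlier; it is a sketch under those assumptions, not a definitive implementation.

```python
# Sketch of the Method #15 display mapping: rotate the target's relative
# position into the camera frame (the equations above), then project it
# through a pinhole model onto the sensor to find the symbol's pixel.
import math

def to_camera_frame(rel, theta, phi, psi):
    x0, y0, z0 = rel                                  # target minus own position
    forward_h = x0 * math.cos(theta) + y0 * math.sin(theta)
    lateral = -x0 * math.sin(theta) + y0 * math.cos(theta)
    forward = math.cos(phi) * forward_h + z0 * math.sin(phi)
    vertical = z0 * math.cos(phi) - forward_h * math.sin(phi)
    y_cam = math.cos(psi) * lateral + math.sin(psi) * vertical   # after roll
    z_cam = -math.sin(psi) * lateral + math.cos(psi) * vertical
    return forward, y_cam, z_cam

def symbol_pixel(rel, theta, phi, psi, focal_mm, pixel_pitch_mm, width_px, height_px):
    forward, y_cam, z_cam = to_camera_frame(rel, theta, phi, psi)
    if forward <= 0:
        return None                                   # target behind the camera
    u = width_px / 2 - (focal_mm * y_cam / forward) / pixel_pitch_mm
    v = height_px / 2 - (focal_mm * z_cam / forward) / pixel_pitch_mm
    return u, v            # draw the circle here if inside the display bounds
```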
  • Method #16 is the same as Method #15 with the exception that it uses a different technique for associating a graphic symbol (a circle, for example), which is linked to data (device ID and network address, for example), with a particular portion of an image (likeness of a target person, for example).
  • the technique used here is that each device broadcasts a signal which is directional and has a limited ability to penetrate solid objects (clothing, for example) — the best frequencies being in the gigahertz to sub-infrared range.
  • the lens of the camera focuses this data-carrying radiation together with the visible light-frequency radiation onto the same image sensor.
  • Intermingled with elements of the image sensor which are sensitive to light radiation are other elements which are sensitive to the data-transmitting wavelengths. These other elements are able to receive and decode data and also tag each signal with the place on the sensor in which it is received.
  • each of these elements in the image sensor is required to be able to receive and channel data from independent data streams as there may be more than one device "appearing" on the sensor which is transmitting its data.
  • Each data stream is indexed and stored with the pixel number which receives the data. Because the data to be transmitted is very small - one device ID or network address - the time of transmission from the onset of the signal to the end of the signal is too short to result in any significant "blurring" across pixels.
  • a variation of this method is to focus the light radiation and the data-transmitting radiation onto two separate sensors. Using this variation it is necessary to associate the relative positions on each of the sensors so that for any given pixel on the data sensor, the corresponding location on the image sensor can be calculated, and thus a geometric shape can be displayed at that position superimposed over the image.
  • Method #17 Determine Exact Direction to Target by Touching Target in Image
  • This method involves a two-stage method of expressing distinguishing factors of the target person/vehicle, and combines some of the techniques introduced in Methods 10, 11 and 16.
  • a user (User #1 using Device #1) expresses position by pointing a camera.
  • User #1 expresses a combination of visual appearance and position by touching the image of the target within the image displayed on his or her terminal.
  • User #1 points the camera in his or her terminal at the target to acquire an image - either a captured still image or a live video image.
  • the direction the camera is pointing can be determined.
  • the object in the center of the image on the viewing screen, assuming accurate calibration, lies precisely in the direction that the camera is pointing.
  • objects not in the center of the image will lie in a different direction corresponding to the degree of displacement from the center.
  • using the technique of Method #15, the precise deviation in direction from the camera direction can be calculated for each point on the image.
  • the terminal will sample the direction and position of the camera at the same moment the screen is touched to use in its calculation of the target's direction.
  • the terminal will sample and store with the image the direction and position of the camera at the time the image was captured. In that way, the orientation of the camera may be changed after the image is captured but before the target is selected. Assuming that the target has not moved, even if the user has moved, the determination of the direction vector from the user's previous position to the target will still be valid.
  • This method has the additional capability of determining the position of the target in the following way: assuming the target does not move, if User #1 moves even a small amount and repeats the procedure of defining a vector to the same target from a different position, the position of the target can be determined as the intersection of the two vectors.
  • the determination of position could also be accomplished by combining this method with a distance measuring technology (a type of radar, for example).
  • the position of the target would simply be the distance of the nearest object in the specified direction from the position of the user.
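The triangulation variant described above (two direction vectors taken from two different positions) can be sketched in Python. A 2-D setting and the example coordinates are assumptions for illustration:

    import numpy as np

    def intersect_rays(p1, d1, p2, d2):
        """Intersect two 2-D rays p1 + t*d1 and p2 + s*d2.

        Solves p1 + t*d1 = p2 + s*d2 for t and s; returns the intersection
        point, or None if the rays are (nearly) parallel.
        """
        A = np.column_stack((d1, -np.asarray(d2)))
        if abs(np.linalg.det(A)) < 1e-9:
            return None  # parallel bearings: no usable position fix
        t, s = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
        if t < 0 or s < 0:
            return None  # intersection lies behind one of the observers
        return np.asarray(p1) + t * np.asarray(d1)

    # Two sightings of the same (stationary) target from slightly different spots:
    fix = intersect_rays(p1=(0.0, 0.0), d1=(1.0, 1.0),
                         p2=(2.0, 0.0), d2=(-1.0, 1.0))
    print(fix)  # -> [1. 1.]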
  • - Device #1 forwards the target vector (the vector pointing from its own position toward the target's position) to a DPS.
  • the DPS independently determines the positions (using GPS or some other means) of all proximal users, and then determines which of those positions lies along the position vector specified by Device #1 and is the closest to Device #1. Knowing the ID's and network addresses of all devices in the network, the DPS then provides Device #1 with the means to communicate with the target by providing either the target's ID, or address, or temporarily assigned ID, or alternatively, by simply forwarding Device #1's initial communication (which could include Device #1's ID and address) to the target device.
  • - Device #1 broadcasts its address, its position, and the direction vector of its intended target. Each terminal receiving the broadcast determines its own position and responds if it lies on the specified vector.
  • All proximal devices send their positions and addresses to Device #1 in response to a broadcasted request.
  • Device #1 determines the address of the nearest device that is positioned along the direction vector to the intended target.
  • Device #1 forwards the target's position to a DPS, which associates the position with the same independently determined position of a device whose address is known to the DPS.
  • the DPS then provides Device #1 with the target's ID, or address, or temporarily assigned ID, or alternatively, simply forwards Device #1's initial communication (which could include Device #1's ID and address) to the target device.
  • Device #1 broadcasts the position of its intended target. Each receiving terminal determines its own position and responds if its position is the same as the specified target position.
  • All proximal devices send their positions and addresses to Device #1 in response to a broadcasted request. Device #1 then determines the address of the device that reports a position that is the same as the determined target position.
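As a sketch of how Device #1 (or a DPS) might pick, from the reported positions, the device nearest the stated direction vector, consider the following Python fragment. The tolerance for how far a device may sit off the bearing line is an invented parameter, and 2-D coordinates are assumed:

    import numpy as np

    def nearest_on_bearing(origin, direction, candidates, max_offset=1.0):
        """Pick the candidate device nearest the line of sight.

        origin:     observer position (2-D)
        direction:  bearing vector toward the target
        candidates: {address: position} of responding proximal devices
        max_offset: how far a device may sit off the bearing line and
                    still count (a tolerance chosen purely for illustration)
        Returns the address of the matching device, or None.
        """
        o = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d = d / np.linalg.norm(d)
        best, best_range = None, float("inf")
        for addr, pos in candidates.items():
            rel = np.asarray(pos, dtype=float) - o
            along = rel @ d                           # distance along the bearing
            offset = np.linalg.norm(rel - along * d)  # distance off the bearing
            if along > 0 and offset <= max_offset and along < best_range:
                best, best_range = addr, along
        return best

    devices = {"addr-a": (4.0, 0.2), "addr-b": (9.0, -0.1), "addr-c": (3.0, 5.0)}
    print(nearest_on_bearing((0, 0), (1, 0), devices))  # -> "addr-a"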
  • Method #18: This method is similar to Method #1 with one important difference: instead of a user's device (Device #1) receiving images of other users in perceptual proximity from their devices, only device ID's and/or network addresses are received from other users' devices. The images of those other users are received from a data processing system (DPS).
  • Device #1 broadcasts a request for device ID's and/or network addresses with a signal strength sufficient to reach all devices within perceptual proximity. If this request includes Device #1's device ID and/or network address, then the devices receiving this request may either send the requested information in an addressed transmission to Device #1, or alternatively, devices may respond by simply broadcasting the requested information.
  • all devices constantly, or intermittently (for example, once per second), broadcast their device ID and/or network address with signal strength necessary to reach other devices within perceptual proximity.
  • Device #1 would then obtain the device ID's and/or network addresses of other devices in perceptual proximity simply by "listening".
  • the device ID's and/or network addresses obtained by Device #1 are then transmitted to a data processing system with a request for an image(s) of each of the associated users.
  • the data processing system then transmits to Device #1 the requested images paired with their respective device ID's and/or network addresses.
  • the user (User #1) of Device #1 views the images received and selects the image which corresponds to the intended target person or target vehicle, thus selecting the target's device ID and/or network address.
  • Device #1 can then initiate addressed communication with the target person/vehicle.
  • the means of transmission can be either direct (device to device) or indirect (via a network).
  • the communication may or may not be mediated by a DPS.
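The overall flow of Method #18 can be simulated with plain data structures. All message shapes, IDs, addresses, and the in-memory stand-in for the DPS database below are illustrative assumptions:

    # Hypothetical message flow for Method #18; names are invented for illustration.

    def collect_proximal_ids(broadcast_responses):
        """IDs gathered by broadcasting a request (or simply by "listening")."""
        return [r["device_id"] for r in broadcast_responses]

    def request_images_from_dps(dps_db, device_ids):
        """DPS returns each requested user's image paired with the device ID."""
        return {dev_id: dps_db[dev_id]["image"]
                for dev_id in device_ids if dev_id in dps_db}

    def pick_target(images, chosen_device_id):
        """User #1 views the images and selects one, yielding the target's ID."""
        assert chosen_device_id in images
        return chosen_device_id

    # Simulated run:
    dps_db = {"dev-2": {"image": "<jpeg bytes>"}, "dev-3": {"image": "<jpeg bytes>"}}
    responses = [{"device_id": "dev-2"}, {"device_id": "dev-3"}]
    ids = collect_proximal_ids(responses)
    images = request_images_from_dps(dps_db, ids)
    target = pick_target(images, "dev-2")   # the user taps the matching image
    print("initiate addressed communication with", target)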
  • ALTERNATIVE 1 Identical method with one exception: a different distinguishing characteristic of the target is used - voice quality - instead of appearance. Instead of User #1 viewing a series of images sent from the DPS (each linked to an ID/address), User #1 listens to a series of voice samples sent from the DPS (each linked to an ID/address). User #1 selects the voice sample that is most similar to the sound of the target's voice, thus at the same time selecting the ID/address of the target.
  • ALTERNATIVE 2 Identical method with one exception: a different distinguishing characteristic of the target is used - relative position - instead of appearance.
  • User #1 views a 2-dimensional floor map sent from the DPS.
  • the map displays the positions of all other users in the perceptual proximity such that each user is represented by a small symbol.
  • An ID/address is associated with each symbol.
  • User #1 selects the symbol that corresponds to the position of the target, thus at the same time selecting the ID/address of the target.
  • This method is a variation of the previous method (Method #18), differing only in the manner in which the user's device (Device #1) obtains the device ID's and network addresses of other devices in perceptual proximity of the user (User #1).
  • the ID/addresses are obtained from RFID tags (active or passive) that represent other users' devices and that may or may not be physically incorporated within the devices they represent.
  • Device #1 transmits a non-directional signal interrogating all RFID tags within perceptual proximity of User #1. In response to this interrogation, all RFID tags transmit (broadcast) the device ID and/or network address of the devices they represent. Device #1 thus receives the RFID transmissions carrying the device ID and/or network addresses of all devices in perceptual proximity. From this point on, Method #19 is identical with Method #18.
  • Method #20: Proximal Devices Transmit ID's and/or Addresses Directly to DPS Instead of to the Proximal Requesting Device. This method is identical to Method #18, the only difference being the manner in which the DPS obtains the ID/addresses of the devices proximal to Device #1.
  • In this method, as in Method #18, Device #1 broadcasts a request for device ID's and/or network addresses with a signal strength sufficient to reach all devices within perceptual proximity. In this broadcasted request, Device #1 includes its own device ID/address and a "Request Event ID", a number which uniquely identifies this particular attempt at a Perceptually Addressed communication from this particular user. When proximal devices receive this broadcasted request, instead of sending their ID/addresses to Device #1 as they did in Method #18, they send their ID/addresses to the DPS along with Device #1's ID/address and the Request Event ID.
  • For each of the ID/addresses received, the DPS sends to Device #1 a representation of a distinguishing characteristic (image, voice sample, or position, depending upon the configuration of the system) of the user of that device, paired with that device's ID/address. From this point on, this method is identical to Method #18.
  • Method #21: This method is similar to Method #12 in that the user (User #1) of a terminal (Device #1) captures an image of the target with a camera on his or her terminal and then transmits that image, or the portion of the image that includes only the specific target of interest, to a DPS. But in Method #12 the image needs to contain enough information about the target's visual features, and the analysis of the image needs to be sufficiently thorough, that the person in the image can be distinguished among the many (possibly thousands or millions of) other users in the DPS's database. In contrast, the current method assumes knowledge of which other people/vehicles are in the user's perceptual proximity.
  • the image of the target submitted by Device #1 need not contain as much information about the target's visual features and the analysis of the submitted image need not be as thorough because the result of the analysis need only discriminate among the relatively few people/vehicles present.
  • In a variation, Device #1, instead of the DPS, would analyze the captured image of the target to produce a biometric profile, and then transmit to the DPS the biometric profile instead of the image on which it is based.
  • the DPS uses GPS or some other method to determine the locations of users, and determines which users are within a predetermined radius of User #1.
  • Device #1 scans the RFID tags of proximal devices to obtain the ID/addresses of their associated terminals. Then Device #1 forwards the ID/addresses of proximal users to the DPS.
  • This method consists of the following steps:
  1. User #1 captures an image of a target that he or she wants to communicate with.
  2. If necessary, User #1 crops the image to include only the target.
  3. Device #1 either produces a biometric profile of the target image and transmits this profile to a DPS, or transmits the target image itself to the DPS.
  4. The DPS acquires the ID/addresses of all other users in Device #1's perceptual proximity using one of the methods outlined above.
  5. The DPS compares the image (or its biometric profile) of the target received from Device #1 to the images (or their biometric profiles) of the other users present, which the DPS has stored in a database along with their ID/addresses.
  6. The DPS determines which proximal user has an image (or biometric profile of an image) that is most similar to the image (or biometric profile of an image) submitted by Device #1.
  7. The DPS facilitates communication between Device #1 and its target (for example, by forwarding a communication attached to the submitted image, by transmitting to Device #1 the ID/address of its target, by communicating the ID/address of Device #1 to the target, or by some other means).
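The matching in steps 5 and 6 can be illustrated with a small Python sketch. Representing biometric profiles as plain feature vectors and comparing them by cosine similarity are assumptions made purely for illustration; the description does not prescribe a profile format or similarity measure:

    import numpy as np

    def cosine_similarity(a, b):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def best_match(target_profile, proximal_profiles):
        """Return the ID/address whose stored profile best matches the target.

        proximal_profiles: {id_address: feature_vector} for users known to be
        in Device #1's perceptual proximity (so only a handful of candidates).
        """
        return max(proximal_profiles,
                   key=lambda ida: cosine_similarity(target_profile,
                                                     proximal_profiles[ida]))

    stored = {"id-22": [0.9, 0.1, 0.3], "id-23": [0.2, 0.8, 0.5]}
    print(best_match([0.88, 0.15, 0.28], stored))  # -> "id-22"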
  • ADVANTAGES - The DPS does not need to positively identify the submitted image, but only to find the greatest similarity among the other users present. - Protects the confidentiality of users in that it does not require them to allow strangers to access images of themselves. In addition, depending upon the method used for the DPS to know the ID/addresses of proximal users, it is possible for the instigating user (Device #1) to not have access to any images of other users.
  • This method is identical to the previous method (Method #21) with the only exception being that a different distinguishing characteristic of the target is used - voice quality - instead of visual appearance.
  • the target's voice is recorded by a microphone on the user's Device (Device #1).
  • Device #1 transmits the captured voice sample (or biometric profile of the voice sample created on Device #1) to a DPS.
  • the DPS determines which other users are in the perceptual proximity to Device #1, and compares their voice samples to the sample submitted by Device #1.
  • the DPS facilitates communication between the two devices.
  • the target's relative position is determined by the user's device (Device #1). Any of the previously described techniques for expressing the relative position of a target will suffice: determining the direction vector of the target from Device #1's position, determining both the direction and distance of the target from Device #1's position, or determining the absolute position of the target.
  • Device #1 transmits the relative position of the target to a DPS.
  • the DPS determines which other users are in the perceptual proximity to Device #1 by using one of the following techniques:
  • Methods #18, #19 or #20, in which the ID's or addresses of proximal users are reported to the DPS:
  a. Device #1 broadcasts its ID/address along with a request to other devices in the perceptual proximity that their ID/addresses be sent to Device #1. Then Device #1 forwards the ID/addresses of proximal users to the DPS.
  b. Device #1 receives the broadcasted ID/addresses of other devices in its perceptual proximity, then forwards those ID/addresses to the DPS.
  c. Device #1 broadcasts its ID/address, a Request Event ID, and a request to other devices in the perceptual proximity that their ID/addresses be sent directly to the DPS, attached to Device #1's ID/address and the Request Event ID.
  d. Device #1 scans the RFID tags of proximal devices to obtain the ID/addresses of their associated terminals. Then Device #1 forwards the ID/addresses of proximal users to the DPS.
  • the DPS then independently determines the exact positions of each of the users which were reported to be in Device #1's perceptual proximity. It compares each of those positions to the position (or range of positions) reported by Device #1 as the location of the target. The DPS determines which user is closest to the target position submitted by Device #1 and then facilitates communication between those two devices.
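A minimal sketch of this DPS-side position matching, assuming a local 2-D coordinate frame and Euclidean distance (the description leaves the coordinate system and distance measure open):

    import math

    def closest_to_target(target_pos, reported_positions):
        """DPS-side matching: pick the proximal user nearest the target position.

        target_pos:         (x, y) reported by Device #1 for its target
        reported_positions: {id_address: (x, y)} independently determined by
                            the DPS for each user in Device #1's proximity
        """
        return min(reported_positions,
                   key=lambda ida: math.dist(target_pos, reported_positions[ida]))

    positions = {"id-5": (12.0, 3.1), "id-6": (9.4, 7.7)}
    print(closest_to_target((12.2, 3.0), positions))  # -> "id-5"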
  • Method #24: Identification on DPS in which Alternatives Limited to Proximal Users, but Image, Voice Samples, or Position Submitted by Each Proximal Device
  • This method description applies to all three types of distinguishing characteristics: image, voice quality, and position.
  • This method is the same as the previous three methods in that the instigating user, User #1, submits to a DPS the distinguishing factor of the target, and the DPS also acquires the ID/addresses of all devices in User #1's perceptual proximity to facilitate the comparison and matching process.
  • In the previous three methods, if the distinguishing characteristic is an image or a voice sample, that information was stored with the device's ID/address in a database on the DPS; and if the distinguishing characteristic was a position, that information was independently determined by the DPS for every device in the perceptual proximity.
  • In this method, by contrast, the distinguishing characteristic is submitted independently by each device that is in the perceptual proximity of User #1.
  • Method #25: Directional Broadcast to Determine Sub-group of Users in Perceptual Proximity
  • This method is a variation of all methods in which a DPS determines the identity of a target by comparing a sample of the target submitted by the instigating user (User #1) with the distinguishing characteristics of all users determined to be in User #1's perceptual proximity.
  • This variation concerns the method by which it is determined which users are to be included in this comparison process. It is advantageous for this group to be as small as possible to reduce the network bandwidth used, to reduce the DPS processing time, and to increase the accuracy of identification. More specifically, this method is a variation of the following methods:
  a. Device #1 broadcasts its ID/address along with a request to other devices in the perceptual proximity that their ID/addresses be sent to Device #1. Then Device #1 forwards the ID/addresses of proximal users to the DPS.
  b. Device #1 broadcasts its ID/address, a Request Event ID, and a request to other devices in the perceptual proximity that their ID/addresses be sent directly to the DPS, attached to Device #1's ID/address and the Request Event ID.
  c. Device #1 scans the RFID tags of proximal devices to obtain the ID/addresses of their associated terminals. Then Device #1 forwards the ID/addresses of proximal users to the DPS.
  • Method #26: Identification on User's Device in which Image, Voice Samples, or Position Submitted by Each Proximal Device
  • This method is identical to Method #24, with the important exception that all of the functions in that method that were performed by a DPS are here performed by the instigating device, Device #1.
  • In this method the user, User #1, either captures an image or a voice sample of the target, or makes some determination of the target's position, using a previously described method.
  • Device #1 broadcasts a request to other devices in User #1's perceptual proximity requesting that either an image (or a biometric profile thereof), a voice sample (or a biometric profile thereof), or a position be forwarded, with an accompanying ID/address, to Device #1's address.
  • After receiving user images (or voice samples, or positions) from each device in the perceptual proximity, Device #1 compares those images (or voice samples, or positions) with the captured image (or voice sample, or position) to determine the best match. After determining the best match, Device #1 has identified the ID/address associated with the target device.
  • Method #27: This method is novel in the manner in which it allows the user to express two distinguishing characteristics of the target simultaneously - image and position.
  • the user's device constructs a virtual landscape by placing images of other proximal users together in a composite image according to where they would appear in relation to each other if viewing them in reality.
  • In response to a request from the user's device, Device #1, the user's device receives (from either a DPS, or each proximal device, or a combination of both sources) the ID/address, user image, and position of each device in Device #1's perceptual proximity. Device #1 then arranges each of the images on the display in a position that approximates where the corresponding person would appear in the user's field of vision. For example, if person #2 is slightly to the right (from User #1's point of view) of person #1, and person #3 is much further to the right and further in the distance, then the display of Device #1 would show the image received from the device of person #2 displayed slightly to the right of the image of person #1.
  • the image of person #3 would be much further to the right on the display and also much smaller, indicating distance.
  • the user is given the ability to scroll the display to the right or the left via the user interface (for example, pressing one button to scroll to the right and a different button to scroll to the left) to allow User #1 to view the images of all other proximal users through a full 360 degrees.
  • User #1 selects the target recipient of a communication by selecting the image of that target (by tapping the image, or toggling from image to image, etc.).
  • the image is associated with that user's ID/address, and thus Device #1 has the capability of initiating communications with the target.
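The layout logic of this "virtual landscape" can be sketched as follows. The linear mapping from bearing to horizontal screen position, the 1/distance image scaling, and the 90-degree field of view are all illustrative choices, not part of the description:

    import math

    def layout_virtual_landscape(observer, heading_deg, others, fov_deg=90.0):
        """Place proximal users' images as they would appear from the observer.

        observer:    (x, y) position of User #1
        heading_deg: direction the display is currently "facing"
        others:      {id_address: (x, y)} positions of proximal users
        Returns {id_address: (screen_x, scale)} where screen_x is in [0, 1]
        across the display and scale shrinks with distance.
        """
        placements = {}
        for ida, (x, y) in others.items():
            dx, dy = x - observer[0], y - observer[1]
            bearing = math.degrees(math.atan2(dx, dy))       # 0 deg = straight ahead
            rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0
            if abs(rel) <= fov_deg / 2:                      # visible without scrolling
                screen_x = 0.5 + rel / fov_deg               # left..right on the display
                distance = math.hypot(dx, dy)
                placements[ida] = (screen_x, 1.0 / max(distance, 1.0))
        return placements

    # person #2 slightly right of person #1; person #3 further right and distant:
    others = {"p1": (0.0, 5.0), "p2": (1.0, 5.0), "p3": (8.0, 9.0)}
    print(layout_virtual_landscape((0, 0), 0.0, others))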
  • Method #28: User Selection of Both Image and Voice Quality Simultaneously
  • This method is similar to the previous method in that it allows the user to express two distinguishing characteristics of the target simultaneously; but in this case the two distinguishing characteristics are image and voice quality.
  • the user's device displays a series of images, and associated with each image is a sample of that person's voice.
  • the user, User #1, is able to hear the associated voice via the user interface by, for example, selecting an image and pressing a "play voice" button.
  • This method is identical to the previous method in the manner in which the distinguishing characteristics are collected, from either a DPS or from other proximal devices, and in the way communication is initiated.
  • Method #29: This method is similar to the previous method in that it allows the user to express two distinguishing characteristics of the target simultaneously; but in this case the two distinguishing characteristics are voice quality and position.
  • the user's device constructs a virtual "soundscape" by placing voice samples of other proximal users together in a composite according to what direction they would appear to come from in relation to each other if hearing them in reality.
  • the user's device (Device #1) plays a series of voice samples, the order changing according to whether the user is scrolling to the right or to the left.
  • the user, User #1, is able to hear the associated voice via the user interface by, for example, pressing a "move left" button or a "move right" button.
  • This method is identical to the previous method in the manner in which the distinguishing characteristics are collected, from either a DPS or from other proximal devices, and in the way communication is initiated. If User #1 wishes to contact a particular target, he or she selects the voice sample of the target (by playing the voice sample, for example) and communicates via the user interface (by pressing a button, for example) that communications should be initiated. The voice sample is associated with that user's ID/address, and thus Device #1 has the capability of initiating communications with the target.
  • Method #30: Addressing with a Visible Alphanumeric String
  • the most obvious examples of visibly displayed strings of alphanumeric characters which are associated with people are sports jerseys and license plates.
  • the alphanumeric string is associated with an address using a database stored either on a DPS (in which case a user's device sends the message, along with the alphanumeric string of the intended recipient, to the DPS, which looks up the alphanumeric string in its database and forwards the communication to the associated address) or on the user's device (in which case the user's device associates the alphanumeric string with an address and initiates communication directly with the target person's address).
  • There are several ways in which a user can express an alphanumeric string associated with a target person:
  • the user can enter the alphanumeric string directly into her device. Some examples of this are: typing on a keyboard; writing freehand with a stylus, with the writing translated (on the user's device or on a DPS) into a digital representation by handwriting recognition software; or verbally pronouncing each character in the string while voice recognition software (on the user's device or on a DPS) translates the speech into a digital representation.
  • the user can capture an image of the alphanumeric string with a camera on her device, and then use optical character recognition (OCR) to translate it into a digital representation of the alphanumeric string.
  • OCR can be performed either on the user's device or on a DPS.
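As a sketch of the DPS-side lookup described above, the following Python fragment maps a visible alphanumeric string to a network address and forwards a message. The table contents and function names are invented for illustration (the plate number is borrowed from Example #2):

    # Illustrative DPS-side directory for Method #30.
    ALPHANUMERIC_DIRECTORY = {
        "AYK-334": "net-addr-081",    # a license plate
        "JERSEY-23": "net-addr-417",  # a sports jersey
    }

    def forward_message(target_string, message):
        """Look up the visible string and forward the message to its address."""
        address = ALPHANUMERIC_DIRECTORY.get(target_string.upper())
        if address is None:
            return "no user registered for string " + target_string
        return "delivered to " + address  # stand-in for the actual send

    print(forward_message("ayk-334", "How are the hotels around here?"))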
  • A method analogous to Method #1 may also be used, in which, in response to a broadcasted request, all proximal terminals send to the user's terminal their ID/address paired with the alphanumeric string displayed by their user. The user then selects the presented alphanumeric string which is the same as the alphanumeric string seen on the intended target.
  • SUGGESTED SECURITY FEATURES
  • A user has the ability to ban all future communications from any particular user. This consists of a permanent non-response to all transmissions from that user's device.
  • User #1 has the ability to instruct her device (Device #1) to issue an "erase" command to any other device (Device #2) at any time, as long as Device #1 has Device #2's device ID# and network address. This erase command causes the erasure of User #1's image, device ID# and network address from Device #2. But at the same time, Device #2's information is also erased from Device #1.
  • There is no capability of exporting from a device any information about other users.
  • All communications between devices are encrypted with a common system-wide key to prevent non-system devices from eavesdropping. This key is periodically changed, and the new key is automatically propagated and installed from device to device whenever devices communicate. Devices retain previous keys in order to be able to communicate with other devices that have not yet been updated. The ability to automatically install a new encryption key is guarded by a system-wide password stored in the firmware of all devices, and invoked in all legitimate encryption key updates.
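The "retain previous keys" behavior can be sketched with the third-party Python cryptography package, whose MultiFernet type tries a list of keys in order. This is only an illustration of the idea; the key-propagation and firmware-password mechanisms described above are not modeled:

    # pip install cryptography
    from cryptography.fernet import Fernet, MultiFernet

    old_key = Fernet.generate_key()
    new_key = Fernet.generate_key()

    # A device that has not yet been updated still encrypts with the old key:
    stale_sender = Fernet(old_key)
    token = stale_sender.encrypt(b"hello from a not-yet-updated device")

    # An updated device tries the newest key first but retains older ones:
    receiver = MultiFernet([Fernet(new_key), Fernet(old_key)])
    print(receiver.decrypt(token))  # -> b"hello from a not-yet-updated device"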
  • Perceptual Addressing gives people a socially discreet way of communicating with another person they don't know, regardless of the appropriateness of the social situation or of the specific other people who may also be present.
  • Perceptual Addressing gives one person a convenient way of communicating with an unknown person in situations in which other means of communication may not be possible such as, for example, when the other person is in another car on the road, or sitting several seats away during a lecture or performance.
  • Perceptual Addressing allows team members to communicate with each other based upon spatial position without the need to know which person is in which position, or without the need to broadcast messages. This may be useful in either military or civilian operations.
  • Perceptual Addressing would facilitate communication in public situations in which an individual is responsible for dealing with the public and needs to communicate with specific other people without knowing anything about them. For example, a police officer often needs to tell specific individuals to slow their vehicles, turn their headlights on, step back from the street, etc.

Abstract

A method of sending a message from a first wireless electronic device to a second wireless electronic device is provided. One or more distinguishing characteristics of a user of the second wireless electronic device, within the perceptual proximity of the first wireless electronic device, are specified. A message is sent from the first electronic device and is viewable in the second electronic device based on the second electronic device's matching the distinguishing characteristic identified in the first electronic device. In a further example, the message is viewable in the second electronic device only upon an expression of interest from a user of the second electronic device in communicating with the user of the first electronic device.

Description

WIRELESS COMMUNICATIONS WITH PROXIMAL TARGETS
IDENTIFIED VISUALLY, AURALLY, OR POSITIONALLY
CLAIM OF PRIORITY
The present application claims priority to previous patent applications by Charles Martin Hymes, including provisional US patent application "WIRELESS COMMUNICATIONS WITH PROXIMAL TARGETS
IDENTIFIED VISUALLY, AURALLY, OR POSITIONALLY", filed April 12, 2005, PCT Patent Application, "WIRELESS COMMUNICATIONS WITH VISUALLY-IDENTIFIED TARGETS", deposited 2/28/2005; US Patent Application, "WIRELESS COMMUNICATIONS WITH VISUALLY- IDENTIFIED TARGETS", deposited 2/19/2005; US provisional Patent Application, "WIRELESS COMMUNICATIONS WITH VISUALLY- IDENTIFIED TARGETS", deposited 2/19/2005; and US provisional Patent Application, "DEVICE AND SYSTEM FOR WIRELESS COMMUNICATIONS WITH VISUALLY IDENTIFIED TARGETS", deposited 9/24/2004, which applications are incorporated by reference.
SPECIFICATION
In the following detailed description of example embodiments of the invention, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific sample embodiments in which the invention may be practiced. These example embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, and other changes may be made without departing from the substance or scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the invention is defined only by the appended claims. More specifically, any description of the invention, including descriptions of specific order of steps, necessary or required components, critical steps, and other such descriptions do not limit the invention as a whole, but rather describe only certain specific embodiments among the various example embodiments of the invention presented herein. Further, terms may take on various definitions and meanings in different example embodiments of the invention. Any definition of a term used herein is inclusive, and does not limit the meaning that a term may take in other example embodiments or in the claims.
OVERVIEW
The primary purpose of some embodiments of this invention is to facilitate communication between people that are within "perceptual proximity" of each other, i.e. they are physically close enough to each other that one person can perceive the other, either visually or aurally. (The term, "recognition proximity", has been used in previous descriptions of perceptual proximity.) This invention is partially embodied in the form of a small mobile device, either its own dedicated device, or as enhanced functionality to other mobile devices such as, for example, PDA's (personal digital assistants) or cellular telephones. The device may include a small digital camera which can record both single still images as well as video images, the means to enter text and record audio, the ability to transfer information (text, audio, image, or video) from a computer to the device as an alternative to entering information directly into the device, the ability to display text or images and playback audio or video, a programmable microprocessor, and memory storage and retrieval functions — all commonly available features on today's cellular telephones and PDA's. In addition, the device may have additional hardware capabilities.
This invention works by providing a user of the invention with the ability to communicate electronically (text, voice, image, or video) to specific other individuals (or vehicles - automobiles, motorcycles, etc.) in his or her physical environment that have been identified visually or aurally, but for whom contact information (telephone number, email address, etc.) may not be known. It is expected that the primary application of the invention will be to facilitate romantic relationships, although other social, business, civic or military applications can be foreseen.
Because this invention allows one person to contact another electronically with only the knowledge of the other person given by the sight or sound of that person in the environment, this capability is referred to as "Perceptual Addressing". In previous patent applications this capability has also been referred to as "Spatial Addressing" or "Visual Addressing".
Definition: Perceptual Addressing
Perceptual Addressing is the ability for one person, User #1, to establish an electronic communications channel with another person, User #2, whom User #1 can perceive in his or her physical environment but for whom no contact information is known.
Example #1 of Perceptual Addressing:
Bob is riding a crowded city bus. At the next stop he notices an attractive woman, Sarah, get on the bus and make eye contact with him before taking a seat a few rows in front of him. He knows he will never see her again unless he acts immediately. So using his handheld device, he sends her a brief message which includes a photo of himself as well as his contact information, the telephone number of his mobile phone. He does this by first taking a picture of her with his mobile phone; then, viewing the image of her on the display of his phone, he circles her face and presses the send button. He immediately receives a confirmation message saying "message delivered".
He hopes she will respond to his message.
Example #2 of Perceptual Addressing:
John is driving from Seattle to San Francisco. It is getting late, and he needs to find a hotel for the night. He is driving behind a pickup truck and thinks that perhaps the driver can recommend a place to stay nearby. He takes out his mobile device and presses a button. He sees two license plate numbers displayed. He selects the one that corresponds to the truck in front of him. He then holds his device to his ear. Just then, Pete, in the pickup truck in front of John, picks up his mobile device and sees on its display, "call from license plate # AYK-334" . Pete then presses a button and says "how's it going?" They proceed to have a brief cellular telephone conversation in which John asks about hotels in the area and Pete makes a couple of recommendations.
Modes of Communication
Once a communications channel between terminals is established, Perceptual Addressing is agnostic with respect to the form of the subsequent communications. These communications can be in the form of individual messages sent from one person to another or can be in the form of live interactive audio and/or video communications; and the content of these communications can consist of any form of media including text, images, video, and voice. Devices in the same vicinity can communicate with each other via direct device-to-device transmission, or alternatively, devices can communicate via a wireless connection to a network — the internet, cellular telephone network, or some other network. The communication may also be mediated by a remote data processing system (DPS).
This invention is compatible with a wide variety of methods of transmitting information and, depending upon the method of Perceptual Addressing used, may include more than one mode of transmission. The modes of wireless transmission could include various technologies, frequencies and protocols — for example, radio frequency (RF), infrared (IR), Bluetooth, Ultra Wide Band (UWB), WiFi (802.11) or any other suitable wireless transmission technology currently known or yet to be invented. In addition to wireless means of transmission, non-wireless means may also be used if practical.
Note Regarding Methods of Determining Spatial Position: Several methods of Perceptual Addressing depend upon the ability to determine the spatial position of users and their potential targets. These methods sometimes require the use of a spatial position measurement technology that allows the determination of position to an accuracy of several centimeters. The particular method of determining position is not central to this invention, and any method currently known or yet to be invented that meets the basic criteria would suffice. (As an example, the method described in the paper "A High Performance Privacy-Oriented Location System" in Proceedings of the First IEEE International Conference on Pervasive Computing and Communications (PerCom2003), pages 216-223, would be adequate.) Other positioning systems, for example, incorporating GPS, WiFi, UWB, RF triangulation, infrared, ultrasound, RFID, or any other technologies which would allow the position of each device to be accurately determined within several centimeters would also be adequate.
THEORY OF PERCEPTUAL ADDRESSING
There are two essential, non-sequential tasks that are central to Perceptual Addressing.
1. The user of a device embodying the present invention specifies one target person or target vehicle, out of potentially many possible target persons/vehicles in the user's perceptual proximity, by expressing one or more of the target's distinguishing characteristic(s). Perceptual proximity is here defined as a range of physical distances such that one person is in the perceptual proximity of another person if he or she can distinguish that person from another person using either the sense of sight or the sense of hearing. A distinguishing characteristic is a characteristic of the target person or target vehicle, as experienced by the user, that distinguishes the target person or target vehicle from other people or vehicles in the user's perceptual proximity. There are three types of distinguishing characteristics of a target person or target vehicle: visual appearance, spatial position relative to the user and other objects in the observer's perceptual field, and voice quality.
The user of this invention specifies the target by expressing his or her perception of the distinguishing characteristic in one of two ways: (1) Direct expression of a distinguishing characteristic of the target person/vehicle, or (2) Selection from presented descriptions of distinguishing characteristics of people/vehicles in the user's perceptual proximity. Examples of Direct Expression are: (a) the user expresses the target's relative position by pointing the camera on his or her device and capturing an image of the target; or (b) the user expresses the appearance of a license plate number by writing that number. Examples of Selection are: (a) the user selects one representation of position, out of several representations of position that are presented, that is most similar to the way the user perceives the target's position; (b) the user selects one image out of several presented that is most similar to the appearance of the target; (c) the user selects one voice sample out of several presented that is most similar to the sound of the target's voice.
The selection of a target person based upon distinguishing characteristics can occur in one or more stages, each stage possibly using a different distinguishing characteristic. Each stage will usually reduce the pool of potential target people/vehicles until there is only one person/vehicle left - the target person/vehicle.
2. An association is made between the expression of the distinguishing characteristic(s) of the target person/vehicle and the address of the target's telecommunications terminal.
Examples of this association: (a) The act of pointing a camera (integrated in a user's device) at a target person (to capture biometric data) associates the relative position of the target person (distinguishing characteristic) as expressed by the user with the biometric profile of the target person. Then, using a database, the biometric profile is found to be associated with the address of the target's terminal. (b) A data processing system sends to the user's device ten images linked with the ten addresses of ten people in a user's perceptual proximity. The user compares his or her visual experience of the target person (distinguishing characteristic) with his or her visual experience of each of the ten images displayed on his or her device, and then expresses his or her experience of the visual appearance of the target by choosing the image that produces the most similar visual experience. Because the ten images were already associated with ten telecommunication addresses, by selecting the image of the target, an association can immediately be made to the target's address. (c) A user points a camera at a target person and takes a picture, thus associating the experienced relative position of the target (distinguishing characteristic) with the captured image. But because there are several people in the image just captured, the user circles the portion of the image that produces a visual experience that is most similar to the experience of viewing the face of the target person (distinguishing characteristic). The image of the target person's face is subjected to a biometric analysis to produce a biometric profile. This profile is then found to be associated with the target person's telecommunications address in a database.
This associative process may occur on the user's terminal, on the terminals of other users, on a data processing system, or any combination. Once the correct address of the intended recipient has been determined, the Perceptual Addressing task has been completed. There are no restrictions on the varieties of subsequent communication between terminals.
METHODS OF PERCEPTUAL ADDRESSING
Following are descriptions of several different methods of Perceptual Addressing. These methods may be used alone or in combination.
Method #1: Non-Directional Transmission
A non-directional signal is broadcast to all devices in perceptual proximity. The signal contains the device ID# and network address of the transmitting device (Device #1) as well as a request for all receiving devices to send their own device ID#'s and addresses as well as a thumbnail image (or voice sample) of their user to the requesting device. The user initiating the request
(User #1) reviews all the images (or voice samples) received, and then by selecting the image (or voice sample) of the person that she is trying to contact (User #2), the user is actually selecting the address of User #2's device (Device #2). With this method a user will receive as many images (or voice samples) as there are users in the area. The advantages of this method are: a) it doesn't require that the user be particularly close to the target; and b) it is currently viable everywhere because it doesn't require the existence of a location technology infrastructure. This method consists of the following steps:
(1) User #1 sees (or hears) someone, User #2, to whom she wants to communicate, and instructs her device using the device interface (she presses a button, for example) to contact all other devices in the perceptual proximity and obtain images (or voice samples) of their users.
(2) User #1's device (Device #1) then broadcasts a non-directional unaddressed transmission to all other devices within range. The transmission includes User #1's device ID# and network address, as well as a request that images (or voice samples) be sent to Device #1.
(3) Each device receiving the request responds automatically (without user awareness) with a transmission addressed to Device #1, sending its own device ID# and network address, as well as an image (or voice sample) of its user. (Only Device #1 will receive these transmissions as other devices will ignore an addressed transmission if it is not addressed to them.)
(4) User #1 reviews the images (or voice samples) received from all devices and selects the image (or voice sample) of User #2, thereby selecting Device #2's device ID# and network address.
(5) Device #1 can now initiate communications with User #2 using
Device #2's network address.
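A toy simulation of this request/response exchange may make the flow concrete. All message fields, IDs, and addresses below are invented for illustration; a real implementation would ride on an actual wireless broadcast:

    # A toy simulation of Method #1's exchange; names are illustrative only.

    def broadcast_request(sender_id, sender_addr):
        """Step 2: the unaddressed broadcast sent to all devices in range."""
        return {"type": "image_request", "from_id": sender_id, "from_addr": sender_addr}

    def respond(request, my_id, my_addr, my_image):
        """Step 3: each receiving device answers with its ID, address, and image."""
        return {"to": request["from_addr"], "id": my_id,
                "addr": my_addr, "image": my_image}

    request = broadcast_request("dev-1", "addr-1")
    responses = [respond(request, "dev-2", "addr-2", "<image of user 2>"),
                 respond(request, "dev-3", "addr-3", "<image of user 3>")]

    # Step 4: User #1 reviews the images and selects User #2's, thereby
    # selecting Device #2's ID and address.
    chosen = next(r for r in responses if r["id"] == "dev-2")
    print("initiate communications with", chosen["addr"])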
Method #2: Non-Directional Transmission to Devices within a Limited Radius
This method is identical to Method #1 with the modification that the initial request for user images (or voice samples) is made with a more limited signal strength, requiring User #1 to be within a few feet of the target person (User #2), thus limiting the number of other devices that will be contacted, and in turn limiting the number of images (or voice samples) that will be received. There are two different user options for how to control signal strength: (a) the user specifies a desired radius of effectiveness (selection may be made in terms of a unit of distance, "5 feet", for example, or in terms of general ranges, "far", "medium", and "close", for example) which then determines the signal strength; or (b) the user specifies the maximum number of people
(selection may be made in terms of specific numbers of people, "3 people", for example, or in terms of general numbers of people, "very few", "some", or "many", for example) that should be contacted: the signal strength then starts at a very low level and increases until the maximum number of people have been contacted (as measured by the number of responses received). The initial transmission requests device ID#'s, network addresses, and associated images (or voice samples). User #1 then selects the image (or voice sample) corresponding to the intended recipient, thus selecting the device ID# and network address of the correct target. This method consists of the following steps:
(1) User #1 sets the default transmission distance on her device ("5 feet", for example).
(2) User #1 sees (or hears) someone, User #2, to whom she wants to communicate. She walks to within five feet of User #2 and instructs her device (Device #1) via its user interface (she presses a button, for example) to contact all other devices within five feet, and obtain device ID#'s, network addresses, and images (or voice samples) from each device. (Alternatively, User #1 could have controlled the signal strength of the initial broadcasted request by specifying the maximum number of people that should be contacted so that her device gradually increased signal strength until the maximum number of responses is received. If she had set the maximum to one person, the result would be that only the person closest to her would be contacted.)
(3) Device #1 broadcasts a non-directional transmission to other devices with enough signal strength to effectively reach approximately 5 feet, under "normal" conditions. The transmission includes Device #1's device ID# and network address, as well as a request for the device ID#, network address, and image (or voice sample) from other devices. (4) Each device receiving the request responds with a transmission addressed to the device making the request, sending its own device ID# and network address as well as an image (or voice sample) of its user.
(5) Device #1 receives the device ID#'s, network addresses, and images (or voice samples) from all other users in the area.
(6) User #1 selects the image (or voice sample) of User #2, thereby selecting User #2's device ID# and network address.
(7) Device #1 can now initiate communications with User #2 using Device #2's network address.
Method #3: Non-Directional Transmission to Devices within a Specified Radius
This method is identical to Method #2 with the modification that the number of other users contacted is not governed by modulating signal strength, but rather by measuring the distance between users and requiring that the recipient of the initial communication is within a specified distance from the requesting device. This feature allows the user (User #1 using Device #1) to limit the number of other devices that are contacted, and therefore limit the number of images (or voice samples) that will be received. There are two different user options for how to regulate the radius of contact (i.e. the distance from the user beyond which another person will not be contacted): (a) User #1 specifies a desired radius of effectiveness (selection may be made in terms of a unit of distance, "5 feet", for example, or in terms of general ranges, "far", "medium", and "close", for example); or (b) User #1 specifies the maximum number of people (selection may be made in terms of specific numbers of people, "3 people", for example, or in terms of general numbers of people, "few", "some", or "many", for example) that should be contacted: the radius of contact then starts at a small distance and increases until the specified number of people have been contacted (as measured by the number of responses received), or until a maximum distance has been reached
(approximately corresponding to the limit of the user's perceptual proximity). In configuring this system, the distance between terminals can be measured by either the instigating terminal or the receiving terminals. If measured by the receiving terminals, then they will respond only if the distance is within the communicated radius of contact. The particular method of measuring the distance between terminals is not central to this method, but one that will suffice is for each terminal to determine its own position (via GPS or some other means) and then to compare with the reported position of the other terminal. The initial broadcasted transmission reports Device #1's ID, address, position, and radius of contact; and requests of receiving terminals within the radius of contact their device ID#'s, network addresses, and associated images (or voice samples). Receiving devices respond with the requested information if within the radius of contact. After receiving the requested images (or voice samples), User #1 then selects the image (or voice sample) corresponding to the intended target person or vehicle, thus selecting the device ID# and network address of the correct target.
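The receiving-terminal check that distinguishes Method #3 can be sketched as follows, assuming each terminal knows its own position in a shared local 2-D frame and that distance is Euclidean:

    import math

    def within_radius(my_position, requester_position, radius_of_contact):
        """Receiving-terminal check for Method #3: respond only if inside
        the requester's stated radius of contact."""
        return math.dist(my_position, requester_position) <= radius_of_contact

    request = {"id": "dev-1", "addr": "addr-1",
               "position": (0.0, 0.0), "radius_of_contact": 1.5}  # ~5 feet in meters
    print(within_radius((1.0, 0.5), request["position"], request["radius_of_contact"]))
    # -> True: this terminal would answer with its ID#, address, and image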
Method #4: Directional Transmission to Other Users' Devices
This method is identical to Method #1 except that instead of Device #1 making an initial transmission in all directions, the transmission is focused in a relatively narrow beam toward the target person (User #2), thus reducing the number of other users contacted by the transmission, while at the same time allowing User #1 to be at a relative distance from User #2. The transmission uses frequencies in the range of 100 GHz to sub-infrared in order to balance the dual needs of creating a highly directional transmission from a small handheld device with the need to penetrate barriers (such as clothing and bodies) between the transmitting device and receiving device. This method consists of the following steps: (1) User #1 sees someone, User #2, to whom she wants to communicate.
She aims her device (Device #1) at User #2. User #1 instructs her device via its user interface (she presses a button, for example) to contact all other devices in the direction of User #2 and obtain device ID#'s, network addresses, and images of those users. (2) User #1's device (Device #1) sends a directional transmission to all other devices in the target user's direction. The transmission includes Device #1's device ID# and network address, as well as a request that images be sent to User #1. (3) Each device receiving the transmission responds with a transmission addressed to Device #1, sending its own device ID# and network address, as well as an image of its user.
(4) Device #1 receives device ID#'s, network addresses, and images from all other local users in the direction of User #2.
(5) From the images received, User #1 selects the image of User #2, thereby selecting the device ID# and network address of User #2's device, Device #2.
(6) Device #1 can now initiate communications with User #2 using Device #2's network address.
Method #5: Directional Transmission to RFID tags
As an alternative to configuring a directional transmission that will penetrate obstructions, the emphasis is placed on a high frequency highly directional beam (infrared, for example) without regard for its penetration properties. It involves the use of one or more tiny Radio Frequency Identification (RFID) tags clipped onto the outside of clothing of each user which, when scanned by the devices of other users, transmit the device ID# of the target user's own device to the interrogating device. In order to scan the RFID tag(s) of a target user, devices have highly directional scanning capability using a high-frequency signal (infrared, for example). User #1 points her device (Device #1) toward the person of interest (User #2). Then, depending on how highly focused the scan and how accurate the aim of User #1, the beam will contact the RFID tags of one or more individuals, including User #2, which will then transmit device ID#(s) back to Device #1. Device #1 then sends a non-directional transmission addressed to each of the devices contacted. The transmission contains User #1's device ID# and network address, and also a request for an image of the other users. After images are received from all the devices contacted, User #1 selects the image of the intended recipient, User #2, thus addressing a communication to only that individual. With this method a line of sight is required between User #1's device and the RFID tags of other users, and there is a range limitation as to how far passive RFID tags can transmit back to the scanning device. This method consists of the following steps: (1) User #1 sees someone, User #2, to whom she wants to communicate. She aims her device (Device #1) at User #2. User #1 instructs her device via its user interface (she presses a button, for example) to contact all other devices in the direction of User #2. (2) Device #1 transmits a high-frequency (infrared, for example) directional signal in the direction of User #2. This signal, containing Device #1's device ID#, makes contact with the RFID tags of one or more users.
(3) Each RFID tag which receives the transmission from Device #1 then makes a transmission addressed to Device #1's device ID# and containing the device ID# of its user.
(4) Device #1 receives the device ID#'s from all RFID tags contacted and then sends a non-directional transmission addressed to each of those device ID#'s. These transmissions include Device #1's device ID# and network address as well as a request for an image of the user. If any of the other devices cannot be contacted with a direct transmission because they are now out of the immediate area, or for some other reason, then a transmission is made to the device's network address.
(5) Each device receiving a request for an image then transmits a user's image to Device #1.
(6) Device #1 receives all user images and displays them. User #1 selects the image of the user she intended to contact, User #2, thereby selecting Device #2's device ID# and network address.
(7) Device #1 can now initiate communications with User #2 using Device #2's network address.
Method #6: Non-Directional Transmission to RFID tags
This method is identical to the previous method (Method #5) with two important differences: (a) the transmission to scan the target person's RFID tag is non-directional; (b) because the scanning is non-directional, scanning must be very short range. In order to select the person of interest, User #1 must stand very close to User #2 when activating the scanning transmission. It is also important that User #1 makes sure that there are not any other users within scanning distance.
Method #7: Directional Transmission to Intermediate RFID tags
Similar to Method #5, RFID tags are worn by users who receive highly directional high-frequency transmissions from User #1's device (Device #1). But instead of transmitting a high frequency signal back to Device #1, the
RFID tag converts the incoming signal to a relatively low frequency radio frequency (RF) signal (that easily penetrates clothing and bodies) and then transmits this RF signal to its owner's device (at most only two or three feet away) by addressing it with the device's device ID#. As this signal carries Device #1's device ID#, network address, and a request for a user image, after receiving the signal the target device makes a non-directional transmission addressed to Device #1, sending its own device ID#, network address, and an image of its user. User #1 then needs only select the image of the person she intended to contact, User #2, in order to address subsequent transmissions to that person. Because the RFID tags do not transmit back to the initiating device, this solution does not have the range limitations of the previous method, although it still requires a line of sight between the device of the sender and the RFID tag of the receiver. This method consists of the following steps: (1) User #1 sees someone, User #2, to whom she wants to communicate.
User #1 aims her device (Device #1) at User #2 and instructs her device via its user interface (she presses a button, for example) to contact all other devices in the direction of User #2.
(2) Device #1 transmits a high-frequency (infrared, for example) directional signal in the direction of User #2. This signal, containing Device #1's device ID#, makes contact with the RFID tags of one or more users.
(3) Each RFID tag contacted then transforms the signal to a much lower RF frequency and then transmits the same information, addressed to its user's device ID#. A low power transmission is adequate as the signal has to travel only a few feet (for example, from the RFID tag on the target person's lapel to the device in the target person's pocket). (4) After receiving the transmission, the receiving device makes a transmission addressed to Device #1's device ID# which includes the recipient device's device ID# as well as an image of the recipient.
(5) Device #1 will receive and display one image for every device contacted. User #1 selects the image of the user she intended to contact, User #2, thereby selecting Device #2's device ID# and network address.
(6) Device #1 can now initiate communications with User #2 using Device #2's network address.
Method #8: DPS Managed Image Identification
This method is similar to Method #1 with the exception that the images of nearby users, instead of being sent from the nearby devices themselves, are sent from a data processing system (DPS) which also mediates communication between devices. The DPS of this application has access to location information of all users (using GPS or some other means) as well as a database of all users containing their addresses, device ID#'s, and facial images. Upon request the DPS is able to send images of proximal users within a pre-defined distance to a requesting device. This method consists of the following steps:
(1) User #1 sees someone, User #2, with whom she wants to communicate, and instructs her device, using the device interface (she presses a button, for example), to contact the DPS and request images of other users currently in her proximity. (2) User #1's device (Device #1) then transmits a request to the DPS. The transmission includes User #1's device ID# and network address, as well as a request that images be sent to Device #1.
(3) The DPS retrieves the necessary location information and determines which other users are within viewing distance of User #1. The DPS then transmits the images of those other users along with their associated device ID#'s to Device #1.
(4) User #1 reviews the images received and selects the image of User #2, thereby selecting Device #2's device ID#. (5) Device #1 initiates an addressed communication to Device #2 via the DPS by specifying Device #2's device ID#.
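On the DPS side, step (3) reduces to a proximity query over stored user records. A minimal sketch, assuming a hypothetical in-memory table mapping device ID#'s to locally valid coordinates and a stored facial image; the 50-meter default stands in for whatever pre-defined viewing distance the system is configured with.

```python
import math

def users_within_viewing_distance(db, requester_id, radius_m=50.0):
    # db maps device ID# -> (x, y, image); coordinates are locally valid
    # and expressed in meters.
    rx, ry, _ = db[requester_id]
    results = []
    for device_id, (x, y, image) in db.items():
        if device_id == requester_id:
            continue
        # Keep any user within the pre-defined viewing distance.
        if math.hypot(x - rx, y - ry) <= radius_m:
            results.append((device_id, image))
    return results
```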
Method #9: Location Identification & Spatial Mapping
In this method each user's device determines its own location coordinates periodically (at least once per second is recommended), and periodically broadcasts those coordinates, along with the device's device ID#, to other devices sharing this application in the perceptual proximity. (It would also be an acceptable solution for a centralized system to track the location of all devices and transmit to all devices the locations, device ID#'s, and network addresses of all devices local to each device, updating that information periodically.) It is necessary for devices to have periodically updated position information about all local devices in order to take into account the motion of users. It should be noted that location coordinates need not be globally valid; locally valid coordinates are sufficient. Each device is therefore aware of the positions of all other devices nearby, both their device ID#'s and location coordinates. Devices then have the information necessary to display a two-dimensional self-updating map of all other users in the perceptual proximity in which each user is represented by a small symbol. A device ID# and network address are associated with each symbol so that a user need only select the symbol associated with a particular person to address a transmission to that person.
To contact a person of interest, User #1 first views the map on her device and compares the configuration of symbols on the map with the configuration of people before her. She then selects the symbol on the map which she believes corresponds to the intended recipient. Her device (Device #1) then makes a transmission to User #2's device (Device #2) containing Device #1's device ID# and network address and a request for an image of User #2. Device #2 then transmits an image of User #2 to Device #1. User #1 then compares the image received to the actual appearance of the person she intended to contact. If she determines that she has contacted the correct person, then she instructs her device via the user interface to initiate communications with User #2. If, on the other hand, the image that User #1 received does not correspond to the person that User #1 intended to contact, then User #1 may select another symbol which could possibly represent the person she wants to contact.
[An alternate version of this method would not require the constant periodic updating of position information during periods in which there are no users in a local area performing perceptual addressing functions. Instead, this same process would operate only when initiated by a user via the user interface of his or her device (pressing a button, for example). Upon initiation, Device #1 would determine its own position (periodically for the next several minutes) and also broadcast a request for positions and addresses of all other devices in the vicinity. Upon receiving this request, each device would determine its own position (periodically for the next several minutes) and also broadcast (periodically for the next several minutes) its position and address. The rest of this alternate method is the same as the original method. The advantage of this alternate method is that it would save energy and bandwidth for devices not to be determining and broadcasting position when it is not needed or used. The disadvantage is that there is a short delay between the time User #1 initiates the positioning process and the time all users' positions are displayed on her device.
Yet another alternate version entails the above alternate method with the following changes: All devices maintain time synchronization to one-second accuracy by means of periodic time broadcasts via a network from a DPS. All devices constantly update their position (at least once per second) and record their position at each point in time. This data is saved for a trailing time period, 10 seconds for example (a sketch of such a trailing position log follows the numbered steps below). Then, when a device makes a request of other devices for positions and network addresses, the request specifies the precise time for which position information is sought. Using this second alternative method then, devices only transmit their positions when there is a request for position information, yet there is no inaccuracy in position information introduced as a result of potential movement of each user between the time the request for position is made and the time each device assesses its own position.] The advantages of this method are (a) it doesn't require a user to draw attention to himself or herself by aiming his or her device at another person; (b) it can precisely target just one person at a time; (c) it doesn't depend on making a "line-of-sight" connection; and (d) there are no range limitations other than that the target person must be in the same general vicinity. This method consists of the following steps:
(1) All devices periodically (at least once per second is recommended) determine their own position coordinates and broadcast those coordinates along with their device ID#'s to other devices in the perceptual proximity.
(2) User #1's device (Device #1) receives frequently updated location information from all other devices in its perceptual proximity.
(3) User #1 sees someone, User #2, with whom she wants to communicate.
(4) User #1 instructs her device via its user interface (presses a button, for example) to display a 2-dimensional map of the locations of all other devices in the perceptual proximity in relation to itself. Each of the other devices is represented on the display by a small symbol (which can potentially represent useful distinctions such as the sex of the user, or whether the user is participating in the same "application" such as "dating" or "business networking", etc.).
(5) The user selects the symbol on the display of her device which she believes corresponds to User #2, thereby selecting the device ID# of User #2's device (Device #2). If the user is not operating her device in a "confirmation mode", then at this point addressed communications are initiated with User #2, which include the transmission of an image of User #1, Device #1's device ID#, and Device #1's network address.
(6) If User #1 does wish to operate her device in a "confirmation mode", then Device #1 makes a transmission addressed to the target device that includes its own device ID#, network address, and a request for an image of the target user.
(7) Device #2 responds by sending a transmission addressed to Device #1 that includes its own device ID#, network address, and an image of User #2. (8) User #1 views the image of User #2 on her display to confirm that it is the person she intended to contact.
(9) If the image received corresponds to the person she intended to contact, then she instructs her device (by pressing the "send" button, for example) to initiate an addressed communication to the target device. Device #1 also sends an image of User #1, Device #1's device ID#, and Device #1's network address to Device #2.
(10) If the image received from Device #2 does not correspond to the target user, then User #1 has the option of selecting a different symbol which could potentially belong to the target individual. If there is no symbol that corresponds to the target individual, then that individual either does not have a device which shares the same application, or that device is disabled, or that device is set in an "invisible mode" in which either it is not accepting communications at all, or it is not accepting communications from that particular sender.
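The bracketed second alternative above depends on each device keeping a short, time-stamped log of its own positions so that it can answer a request for its position at a specified past moment. A minimal sketch of such a trailing log, assuming the once-per-second sampling is driven elsewhere and that clocks are synchronized to roughly one second as described:

```python
import time
from collections import deque

class PositionHistory:
    """Trailing position log: keeps roughly the last window_s seconds of
    (timestamp, position) samples so a past position can be reported."""
    def __init__(self, window_s=10.0):
        self.window_s = window_s
        self.samples = deque()  # (timestamp, (x, y)) pairs, oldest first

    def record(self, position, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, position))
        # Discard samples older than the trailing window.
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def position_at(self, t):
        # Report the recorded position closest in time to the requested t.
        if not self.samples:
            return None
        return min(self.samples, key=lambda s: abs(s[0] - t))[1]
```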
Method #10: Virtual Beaming
This method is similar to Method #9 except that it employs a different user interface, "virtual beaming", for selecting which devices will be contacted. In addition to incorporating the location technology of Method #9 (with the additional provision that absolute direction must be incorporated into the position coordinates returned by the positioning system; for example, given two position coordinates it must be possible to determine which position is further North and which position is further West), it also incorporates direction technology such as, for example, a digital flux-gate compass and/or a gyroscopic compass. Instead of a user targeting a person of interest by selecting a symbol on her display which she thinks corresponds to that person, she targets the person of interest by pointing her device at him and instructing her device via the user interface (pressing a button, for example) to contact that person.
Using the direction technology incorporated into the device in combination with the position technology already discussed, it can be determined with simple geometry which target individuals are positioned within a narrow wedge (in either two or three dimensions, depending on the sophistication of the positioning information) extending out from the user's position in the direction she is pointing her device: User #1's device (Device #1) has already received information as to her own position and also the device ID#'s and position coordinates of all other devices in the perceptual proximity. The direction that User #1's device was pointing when she targeted the person of interest can be represented as the "target vector", which begins at User #1's position and extends in the direction determined by the direction technology in her device. For position information in 3 dimensions, a target volume can then be defined as the volume between four vectors, all extending from User #1's position: two lying in a horizontal plane and the other two lying in a vertical plane. In the horizontal plane, one vector lies X degrees counterclockwise to the target vector, and the other vector X degrees clockwise to the target vector, where X is a small value (5 degrees is recommended) which can be adjusted by the user. In the vertical plane, one vector extends in a direction X degrees above the target vector, and the other vector X degrees below the target vector.
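The wedge test above is, in the horizontal plane, a comparison of two angles. The following sketch assumes locally valid (x, y) coordinates and a heading already converted to degrees counterclockwise from the positive x-axis; the five-degree half-angle follows the recommendation above.

```python
import math

def in_target_wedge(my_pos, heading_deg, candidate_pos, half_angle_deg=5.0):
    # Positions are (x, y); heading_deg and the computed bearing share the
    # same convention (degrees counterclockwise from the positive x-axis).
    dx = candidate_pos[0] - my_pos[0]
    dy = candidate_pos[1] - my_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angular difference between bearing and heading.
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= half_angle_deg
```

Applying this test to each (device ID#, position) pair already received yields the set of devices to which the transmission in the next paragraph is addressed.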
When User #1 points her device and "presses the button", Device #1 then makes an addressed transmission to all other users within the target area (or volume). The transmission includes Device #1's device ID# and network address, and a request for an image of the recipient. After the images are received, the user then selects the image of the person (and the corresponding device ID# and network address) she is interested in. Further communication is addressed solely to the selected device.
One advantage of this method is that the user is not required to read a map on her device, trying to make an accurate correspondence between the person she is interested in and the corresponding symbol on her display. This is of particular value when the target individual is moving. Another advantage is that obstructions between the user and the target person are not an issue when targeting: a user may hold the device within a coat pocket or bag when targeting an individual. The only disadvantage in comparison with Method #9 is that the initial request for an image may be made to more than one target device.
This method consists of the following steps: (1) All devices periodically (at least once per second is recommended) determine their own position coordinates and broadcast those coordinates along with their device ID#'s to other devices in the perceptual proximity.
(2) User #1's device (Device #1) receives frequently updated location information from all other devices in its perceptual proximity.
(3) User #1 sees someone, User #2, with whom she wants to communicate. She aims her device (Device #1) at User #2. User #1 instructs her device via its user interface (she presses a button, for example) to contact all other devices in the direction of User #2 and obtain images of those users.
(4) Device #1 determines which of the positions reported by other devices lie in the target area defined by its own position and the direction it was pointing when User #1 instructed her device to initiate contact. If there was only one device in the target area, then Device #1 is now able to communicate with that device using its network address.
If more than one device is in the target area, then Device #1 must determine which of those devices is the intended target. User #1 can either repeat the same process, hoping that there will be only one person in the target area the second time, or hoping that only one person will appear in both the first and second attempts. Alternatively, User #1 can use a different distinguishing factor, appearance, to determine which of the addresses obtained belongs to the intended target. Following is the latter procedure:
(5) Device #1 makes a transmission addressed to all devices in the target area as defined above. The transmission includes Device #1's device ID# and network address, and a request that user images be sent to Device #1. (6) Each device receiving the transmission responds with a transmission addressed to Device #1, sending its own device ID# and network address, as well as an image of its user.
(7) Device #1 receives images from all users in the target area. (8) From the images received, User #1 selects the image corresponding to
User #2, thereby selecting the device ID# and network address of User #2's device, Device #2.
Method #11: Addressing with Spatial Position via a DPS
In this method, User #1 notices the person to whom she wants to send a message, User #2, and with her device, Device #1, determines the precise distance and direction of User #2 from her own position. This can be accomplished with any compass and distance-measuring capabilities (for example, a flux-gate compass and an ultrasonic or laser distance sensor) built into Device #1. Device #1 then transmits a message, along with the relative position of the intended target, to a DPS with instructions to forward the message to whatever device is at the specified position. The DPS has access to the absolute positions of all users (via GPS or some other means) and can easily calculate the absolute position indicated by adding the submitted relative position to Device #1's absolute position. The DPS then determines which user is nearest to the calculated position of the target and forwards the message to that user.
[Variation: Device #1 has access to its own absolute position (via GPS or some other means), and with the known relative position of the target person, is then able to calculate the absolute position of the target person. This being the case, Device #1 submits to the DPS the target's absolute position, rather than the target's position relative to itself.]
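The arithmetic in this method and its bracketed variation is vector addition followed by a nearest-neighbor search on the DPS. A sketch under assumed simplifications (2-dimensional, locally valid coordinates in meters, angles measured counterclockwise from the positive x-axis):

```python
import math

def absolute_target_position(device_pos, bearing_deg, distance_m):
    # Combine Device #1's own absolute position with the measured direction
    # and distance to obtain the target's absolute position.
    theta = math.radians(bearing_deg)
    return (device_pos[0] + distance_m * math.cos(theta),
            device_pos[1] + distance_m * math.sin(theta))

def nearest_user(target_pos, user_positions):
    # DPS side: forward the message to whichever user is nearest the
    # calculated target position. user_positions maps address -> (x, y).
    return min(user_positions,
               key=lambda addr: math.dist(user_positions[addr], target_pos))
```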
Method #12: Visual Biometric Addressing via a DPS
This method generally involves capturing an image of the target person's face, analyzing the image to produce a unique biometric profile, and then associating the biometric profile with a similar biometric profile and address in a database. The image analysis can be performed on either (1) the user's device or (2) a data processing system (DPS). In the first case, the user's device (Device #1) would send its own ID/address, any message, and the biometric profile to the DPS, where the biometric profile would be matched with a biometric profile stored in a database along with an associated address, and then facilitate communication with that address (forward a message or report the address to Device #1, for example). In case (2), the user's device would send its own ID/address, any message, and the captured image to the DPS. The DPS would then analyze the image; match the resulting biometric profile to a biometric profile and address stored in its database; and facilitate communication with that address.
There are several types of biometric profiles that this method could be applied to: facial recognition, outer (external) ear recognition, and retinal pattern, for example. The retinal analysis would require a specialized camera for that purpose to be integrated into users' devices. However this invention is agnostic as to the specifics of what kind of biometric analysis is used, whether it is current or future biometric technology. The method of using a visually obtained biometric "signature" to address a message remains the same. In all of the above variations, the user selects the intended target person by aiming the user's device at the target person and capturing an image.
Method #13: Auditory Biometric Addressing via a DPS
This method is analogous to Method #12, but instead of using an image of a person's face to address a communication, it uses a person's distinct vocal characteristics as a means of determining the target person's address. First, a voice sample needs to be collected. This can be done by the user moving close to the intended target and recording a voice sample when the target is speaking. Sound recording and editing features can easily be incorporated into small devices and this is existing technology. Alternatively, a directional microphone integrated into the user's device could be aimed at the target person for recording their speech. (It may be easier for a blind person to aim a microphone than to maneuver close to the target.) After the voice sample is collected it can be analyzed either on the user's device or on a DPS. If analyzed on the user's device, the message along with the biometric profile can be sent to the DPS, where the biometric profile will be matched with a biometric profile that is stored in a database along with an address. Once the association is made to the address, the message is then forwarded to the target person. Alternatively, if the voice sample is analyzed on the DPS, then the user's device sends the message along with the voice sample itself to the DPS. The DPS then converts the voice sample to a biometric profile, finds a match for the biometric profile using a database, associates the biometric profile with an address, and then forwards the communication to that address.
Method #14: Addressing Directly to Target Terminals Using Image, Voice Quality, or Position
This method is analogous to the three previous methods (Method #'s 11, 12, and 13) in which the information describing the distinguishing characteristic was sent to a DPS where it was associated with an address, and then forwarded to that address. However in this method, the information describing the distinguishing factor is not sent to a DPS, but rather, it is broadcast to all proximal terminals. Each terminal receiving the broadcast compares the expression of the distinguishing characteristic with the distinguishing characteristics of its user. If there is a match, then the terminal accepts the communication and responds if appropriate. For example, User #1 using Device #1 expresses a distinguishing characteristic of a target person (captures an image of the target's face and transforms this image into a biometric profile) and broadcasts this information together with Device #1's ID/address and a brief message. Device #2, along with several other devices, receives the broadcast from Device #1. Device #2 has stored in its memory the biometric profile of the image of its user's (User #2) face. It compares the two biometric profiles. If they do not match then it ignores the communication from Device #1. If they do match, then it responds according to User #2's wishes.
This method has three main variations — one for each type of distinguishing characteristic which is used to specify the target person or vehicle. The distinguishing characteristics of targets may be expressed by User #1 as described in Method #'s 11, 12, and 13. This method consists of the following steps:
1. User #1 using Device #1 captures an image or a voice sample of the target person/vehicle, or else determines the position of the target using techniques described in Method #'s 11, 12, and 13.
2. Device #1 broadcasts the message, Device #1's ID/address, and the captured image of the target (or a biometric abstraction thereof), or the voice sample of the target (or a biometric abstraction thereof), or the position of the target.
3. If a raw image or voice sample is broadcast, then receiving devices analyze it to create a biometric profile. Receiving devices then compare the features of the received biometric profile or position with the features of their user's biometric profile or position. A device with a close enough match accepts the communication. It knows the address of the sender and can respond if appropriate. VARIATION: Another distinguishing characteristic related to the appearance of the target person/vehicle may be used. Because it is only necessary to distinguish the target from other people or vehicles in the perceptual proximity of User #1, the level of specificity required in expressing the characteristics of the target is less stringent than if the target were to be distinguished from millions of other people in a database. In addition, the profiles stored on each terminal describing their user may be updated frequently, possibly daily, ensuring a higher degree of similarity than if the information were kept on a DPS and updated less frequently. These two preceding factors allow for another type or category of visual profile of the target, one that is descriptive of the visual quality of their clothing. For example, a color profile, pattern profile, or contrast profile could be created which would allow for adequate specificity, could be obtained from any angle, and would not require facial information to be obtained. ADVANTAGES: It is easier to frequently update the image of oneself stored on one's own device, so captured images can be compared even on temporary features such as the color of a shirt, jacket, or tie. The biometric profile need not be unique among a large database of users, but need only be unique among the relatively small number of proximal users. This method also would not require users of such a communication system to submit information about their voice or their appearance to a database or to other users.
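On the receiving side, the core of Method #14 is the self-comparison each terminal performs on an incoming broadcast. The sketch below leaves the profile-comparison function abstract, since the invention is agnostic as to the biometric technology used; the field names and the 0.8 acceptance threshold are illustrative assumptions.

```python
def handle_broadcast(my_profile, broadcast, similarity, threshold=0.8):
    # Compare the biometric profile carried by the broadcast against the
    # profile of this device's own user; accept only on a close match.
    if similarity(broadcast["profile"], my_profile) >= threshold:
        return {"accept": True, "reply_to": broadcast["sender_address"]}
    return {"accept": False, "reply_to": None}
```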
Methods #15 & #16: Data to Image Mapping
In contrast with some previous methods involving images for the selection of the target person, Methods #15 and #16 do not depend on the user's device receiving images of other users from a DPS or other users' devices. Instead, it is the user's own device which generates any necessary images of other users. In addition, in contrast with these previous methods, each image generated in Methods #15 and #16 by the user's own device may contain more than one person. Following is a description of the user's experience using these methods. Afterward, more technical descriptions will be given.
In order to use these methods, the user points the camera on her device at the person she would like to communicate with (see Figure 1). She instructs her device (by pressing a button, for example) to either capture a still image, or to begin displaying live video. The camera generates an image of a person (or a group of people) from the user's point of view. The user views either a still image or a live video image on her device's display. Superimposed over the image of each person (only if that person is a user of the application) is a small graphic shape, a circle for example, which represents the location of that person's device. The user selects the person with whom she wants to communicate by tapping with a stylus the circle superimposed over that person's image. (Other user interfaces are compatible with this invention: for example, the user could select the desired circle by toggling from circle to circle by turning a dial on her device.) Each circle is associated with the device ID# and network address of the device belonging to the user whose image lies underneath the circle. The user's device then initiates communication with the device of the selected person, either by sending a regular or Discreet message, or by initiating some other form of communication such as, for example, an instant messaging session, a telephone call, or a videophone call. In order to achieve this operation, it must be possible to associate the device ID# and/or network address of a target person's device with the image of that person as represented on the display of a user's device. There are two alternative techniques for accomplishing this task: (1) mapping position data onto an image, and (2) focusing both light radiation from the target person and also data-carrying radiation from the target person's device onto the same imaging sensor (or onto two different imaging sensors, overlaying the data captured on each sensor).
Method #15: Data to Image Mapping — Mapping Position Data onto an Image
The means of associating a graphic symbol (a circle, for example) that is linked to data (device ID# and network address, for example) with a particular portion of an image (likeness of a target person, for example) is accomplished by mapping position data received from another person's device onto the display of the user's device.
The process of mapping objects that exist in 3-dimensional space onto the two-dimensional display of a user's device requires the following factors: (a) the position of the user's device, (b) the position of the target device(s), (c) the orientation of the user's device, (d) the focal length of the device's camera lens, (e) the size of the camera's image sensor, and (f) the pixel density of the sensor. The last three factors (d, e, and f) are properties of the user's camera and are either fixed quantities, or at least, in the case of the lens's focal length, known quantities easily output from the camera.
In order to acquire the position data, factors (a) and (b), an infrastructure is required (1) to determine the precise location of each device with location coordinates which are valid at least locally, and (2) to provide time-synchronization to all devices (at least locally) to sufficient accuracy (approximately 1/10 second accuracy is recommended for most situations). Time synchronization is necessary in order to take into account movement by either the user or potential target persons. If the location history of each device is stored for a trailing period of about 5 seconds (or a similar period of time short enough so that only a manageable amount of memory is required, yet long enough so that all devices are able to respond to a request for information within that time period), then the locations of all users may be determined for the moment an image is captured.
Each device stores its own location data, or alternatively, the location data for all local devices is stored by a single third-party DPS. If a user targets a person by capturing a still image, then when the user presses a button to capture the image, his device broadcasts [to other devices within a specific domain, where "specific domain" can be defined in any one of a variety of ways, for example, (a) any user which receives the broadcast, (b) any user with location coordinates within a designated quadrant relative to the user, etc.] its own device ID and network address accompanied by a request for other devices to transmit their position coordinates for a specified moment within the past five seconds (or other pre-determined trailing period). When potential target devices receive this broadcasted request, they respond by transmitting to the network address of the requesting device (a) their device ID#, (b) their network address, and (c) their position coordinates for the time specified in the request. Alternatively, if the position data is stored on a third-party DPS, when the user captures an image, the request for position information is instead directed to the third-party DPS. The DPS then provides the requested position information of all eligible devices along with the associated device ID's and network addresses. The technology to accomplish both position and synchronization functions currently exists, and it is irrelevant to this invention which location and synchronization technologies are used as long as they deliver the required information.
Additionally, this technique requires that each device have the capability of accurately determining its own orientation in three-dimensional space, factor (c). Specifically, the information required is the orientation of the device's camera: horizontally (direction as it is projected onto a horizontal plane), vertically (the degree to which its orientation deviates from the horizontal), and "roll" (the degree to which the device is rotated about the axis defined by the direction that the device's camera is pointing). The technology for a device to determine its own orientation currently exists, and it is irrelevant to this invention which technology is employed as long as it delivers the required output. One adequate form of the required output describes the camera orientation with three angles: (φ, θ, ψ), where φ is the degree that the camera is rotated to the left in a horizontal plane from a reference direction; θ is the degree that the camera is tilted up or down from the horizontal; and ψ is the degree that the camera is rotated in a clockwise direction about the axis defined by the direction it is pointing.
Following is a description of how the position of a target person's device may be mapped onto the display of a user's device.
Figure 2 illustrates two users in 3-dimensional space described by an x,y,z coordinate system in which the z-dimension represents the vertical dimension and the x and y coordinates describe the user's location with respect to the horizontal plane. The locations of Device #1 and Device #2 are represented by the coordinates (x_1, y_1, z_1) and (x_2, y_2, z_2), respectively. (More precisely, the location coordinates represent the location of each device's image sensor.) User #1 points his device in the general direction of User #2 and captures an image at a particular moment in time, t. Simultaneously, his device broadcasts its own device ID and network address and a request to nearby devices to send their position coordinates at time t along with their device ID's and network addresses. User #2's device (Device #2, in User #2's bag) responds to this request by transmitting the requested position coordinates (x_2, y_2, z_2), its device ID#, and its network address to Device #1.
In order for Device #1 to represent on its display the location of Device #2 superimposed over the image of User #2, it must also have (in addition to the location coordinates of Device #2) its own location coordinates (x_1, y_1, z_1) and the orientation of its camera in space (φ, θ, ψ). These values are returned by the location system employed and the device orientation system employed, respectively. Figure 3 illustrates the same two users represented from an overhead viewpoint projected against the horizontal plane. The direction in which the camera is pointed in the horizontal plane is specified by a vector which is rotated φ degrees counterclockwise from the direction of the positive x-axis. In Figure 4, the z-axis represents the vertical dimension, and the horizontal axis represents the vector from Device #1 to Device #2 projected onto the x-y plane. The degree to which the camera orientation deviates from the horizontal is represented by the angle θ. Figure 5 illustrates the display of Device #1. The camera has been rotated ψ degrees in a clockwise direction about the axis defined by the direction the camera is pointing. This results in the rotation of the image in the display ψ degrees in a counterclockwise direction.
In the device display in Figure 5 is shown the image of User #2 as well as a circle indicating the location of User #2's device. The position coordinates x_p and y_p (given in units of pixels from the center point of the display) specify the placement of the circle in the display and are determined by the standard pinhole projection:

x_p = f (y'_0 / x'_0)(P_h / S_h)

and

y_p = f (z'_0 / x'_0)(P_v / S_v)

where

P_h = total number of horizontal pixels on the image sensor
P_v = total number of vertical pixels on the image sensor
S_h = width of the image sensor
S_v = height of the image sensor
f = focal length of the camera lens

and

x'_0 = cos θ (x_0 cos φ + y_0 sin φ) + z_0 sin θ
y'_0 = cos ψ (-x_0 sin φ + y_0 cos φ) + sin ψ [z_0 cos θ - (x_0 cos φ + y_0 sin φ) sin θ]
z'_0 = -sin ψ (-x_0 sin φ + y_0 cos φ) + cos ψ [z_0 cos θ - (x_0 cos φ + y_0 sin φ) sin θ]

where

x_0 = x_2 - x_1
y_0 = y_2 - y_1
z_0 = z_2 - z_1
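For concreteness, the rotation and projection above can be collected into a single routine. The following sketch assumes angles in radians and consistent length units for the focal length and sensor dimensions; the sign conventions follow the equations as reconstructed here and may need adjustment for a particular camera and display.

```python
import math

def project_to_display(p1, p2, phi, theta, psi, f, s_h, s_v, p_h, p_v):
    # p1, p2: (x, y, z) locations of Device #1 and Device #2.
    # (phi, theta, psi): camera orientation; f, s_h, s_v: focal length and
    # sensor dimensions; p_h, p_v: sensor pixel counts.
    x0, y0, z0 = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    # Rotate the offset vector into the camera's frame of reference.
    a = x0 * math.cos(phi) + y0 * math.sin(phi)
    pan = -x0 * math.sin(phi) + y0 * math.cos(phi)
    x0p = math.cos(theta) * a + z0 * math.sin(theta)
    tilt = z0 * math.cos(theta) - a * math.sin(theta)
    y0p = math.cos(psi) * pan + math.sin(psi) * tilt
    z0p = -math.sin(psi) * pan + math.cos(psi) * tilt
    if x0p <= 0:
        return None  # Device #2 is not in front of the camera
    # Pinhole projection, converted from sensor units to pixels.
    x_p = f * (y0p / x0p) * (p_h / s_h)
    y_p = f * (z0p / x0p) * (p_v / s_v)
    return (x_p, y_p)
```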
Note that a simpler version of this technique is possible which uses 2-dimensional rather than 3-dimensional position analysis. In this simpler version, the user's device does not have information as to the elevation of the other user's device. It only knows its location in the horizontal plane. Thus, instead of a geometric shape appearing on the user's display at a point which corresponds to the point where the other user's device would appear if it were visible, a narrow vertical bar appears on the display which intersects the same point. The system is the same in all other respects. This simpler level of complexity comes at little cost. The only situation that would confound a 2-dimensional system is when two potential targets are in the same horizontal direction from the user's perspective, but one target is directly above or below the other.
Method #16: Data to Image Mapping — Focusing Data Signals onto an Image Sensor
This method is the same as Method #15 with the exception that it uses a different technique for associating a graphic symbol (a circle, for example), which is linked to data (device ID and network address, for example), with a particular portion of an image (likeness of a target person, for example). The technique used here is that each device broadcasts a signal which is directional and has a limited ability to penetrate solid objects (clothing, for example), the best frequencies being in the gigahertz to sub-infrared range.
The lens of the camera focuses this data-carrying radiation together with the visible light-frequency radiation onto the same image sensor. [There are several lens materials that have the same index of refraction for both light radiation and other wavelengths in the range under discussion.] Intermingled with elements of the image sensor which are sensitive to light radiation are other elements which are sensitive to the data-transmitting wavelengths. These other elements are able to receive and decode data and also tag each signal with the place on the sensor in which it is received.
Because it is not important to determine shape from incoming sub-infrared radiation, but merely position, lower resolution, and hence lower pixel density is required for elements that are sensitive to these data-transmitting wavelengths. However, each of these elements in the image sensor is required to be able to receive and channel data from independent data streams as there may be more than one device "appearing" on the sensor which is transmitting its data. Each data stream is indexed and stored with the pixel number which receives the data. Because the data to be transmitted is very small - one device ID or network address - the time of transmission from the onset of the signal to the end of the signal is too short to result in any significant "blurring" across pixels.
[A variation of this method is to focus the light radiation and the data-transmitting radiation onto two separate sensors. Using this variation it is necessary to associate the relative positions on each of the sensors so that for any given pixel on the data sensor, the corresponding location on the image sensor can be calculated, and thus a geometric shape can be displayed at that position superimposed over the image.]
Method #17: Determine Exact Direction to Target by Touching Target in Image
This method involves a two-stage process of expressing distinguishing factors of the target person/vehicle, and combines some of the techniques introduced in Method #'s 10, 11, and 16. In the first stage a user (User #1 using Device #1) expresses position by pointing a camera. In the second stage User #1 expresses a combination of visual appearance and position by touching the image of the target within the image displayed on his or her terminal.
Initially, User #1 points the camera in his or her terminal at the target to acquire an image - either a captured still image or a live video image. Using any type of accurate compass technology the direction the camera is pointing can be determined. Thus the object in the center of the image on the viewing screen, assuming accurate calibration, lies precisely in the direction that the camera is pointing. But objects not in the center of the image will lie in a different direction corresponding to the degree of displacement from the center. Using the same known mathematical methods described in Method #15, the precise deviation in direction from the camera direction can be calculated for each point on the image. Thus for any given image displayed on the user's terminal, assuming the precise direction the camera was pointing
(when it produced that image) is known, it can be determined what direction vector from the user corresponds to every point in the image. (This assumes that certain properties of the camera are known such as the size and pixel density of the image sensor and the focal length of the lens.) Thus, if a user touches a target person in an image displayed on his or her terminal, then the terminal can determine the precise direction of the target from the user's position.
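Run the other way around, the same camera geometry converts a touched pixel back into a direction vector, which is the operation this method needs. The sketch below inverts the Method #15 projection under the same assumptions (angles in radians, pixel offsets measured from the center of the display); the function and variable names are illustrative.

```python
import math

def pixel_to_direction(x_p, y_p, phi, theta, psi, f, s_h, s_v, p_h, p_v):
    # Ray in the camera frame: one focal length forward, offset by the
    # touched pixel converted back to sensor units.
    cx = 1.0
    cy = (x_p * s_h) / (p_h * f)
    cz = (y_p * s_v) / (p_v * f)
    # Undo the roll (psi), tilt (theta), and pan (phi) rotations in turn.
    pan = math.cos(psi) * cy - math.sin(psi) * cz
    tilt = math.sin(psi) * cy + math.cos(psi) * cz
    a = math.cos(theta) * cx - math.sin(theta) * tilt
    up = math.sin(theta) * cx + math.cos(theta) * tilt
    wx = math.cos(phi) * a - math.sin(phi) * pan
    wy = math.sin(phi) * a + math.cos(phi) * pan
    # Normalize to a unit direction vector in world coordinates.
    n = math.sqrt(wx * wx + wy * wy + up * up)
    return (wx / n, wy / n, up / n)
```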
If the user is viewing a live image when he or she designates a target by touching the screen, then the terminal will sample the direction and position of the camera at the same moment the screen is touched to use in its calculation of the target's direction. However, if the user is viewing a captured and stored image of the target, then it will be necessary for the terminal to sample and store with the image the direction and position of the camera at the time the image was captured. In that way, the orientation of the camera may be changed after the image is captured but before the target is selected. Assuming that the target has not moved, even if the user has moved, the determination of the direction vector from the user's previous position to the target will still be valid.
This method has the additional capability of determining the position of the target in the following way: assuming the target does not move, if User #1 moves even a small amount and repeats the procedure of defining a vector to the same target from a different position, the position of the target can be determined as the intersection of the two vectors.
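In the 2-dimensional case, the intersection of the two bearings is a small linear-algebra exercise. A sketch, assuming locally valid (x, y) coordinates and non-parallel bearings:

```python
def intersect_bearings(p1, d1, p2, d2, eps=1e-9):
    # p1, p2: the two observation points; d1, d2: direction vectors toward
    # the target. Solve p1 + t*d1 = p2 + u*d2 for t by Cramer's rule.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < eps:
        return None  # bearings are (nearly) parallel; no reliable fix
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```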
The determination of position could also be accomplished by combining this method with a distance measuring technology (a type of radar, for example). The position of the target would simply be the distance of the nearest object in the specified direction from the position of the user.
Given this method for expressing the target's direction from a specified position, or alternately the target's position, there are a number of methods that can be used to associate these known quantities with the address of the target's terminal.
Following are a sampling of methods by which the direction from the user's position could be associated with a target's address:
- Device #1 forwards the target vector (the vector pointing from its own position toward the target's position) to a DPS. The DPS independently determines the positions (using GPS or some other means) of all proximal users, and then determines which of those positions lies along the vector specified by Device #1 and is closest to Device #1. Knowing the ID's and network addresses of all devices in the network, the DPS then provides Device #1 with the means to communicate with the target by providing either the target's ID, or address, or temporarily assigned ID, or alternatively, by simply forwarding Device #1's initial communication (which could include Device #1's ID and address) to the target device.
- Device #1 broadcasts its address, its position, and the direction vector of its intended target. Each terminal receiving the broadcast determines its own position and responds if it lies on the specified vector.
- All proximal devices send their positions and addresses to Device #1 in response to a broadcasted request. Device #1 then determines the address of the nearest device that is positioned along the direction vector to the intended target.
Following are a sampling of methods by which the position of the target could be associated with a target's address:
- Device #1 forwards the target's position to a DPS, which associates the position with the same independently determined position of a device whose address is known to the DPS. The DPS then provides Device #1 with the target's ID, or address, or temporarily assigned ID, or alternatively, simply forwards Device #1's initial communication (which could include Device #1's ID and address) to the target device.
- Device #1 broadcasts the position of its intended target. Each receiving terminal determines its own position and responds if its position is the same as the specified target position. - All proximal devices send their positions and addresses to Device #1 in response to a broadcasted request. Device #1 then determines the address of the device that reports a position that is the same as the determined target position.
Method #18: Determine Which Devices Are in Perceptual Proximity by Sending or Receiving Broadcast
This method is similar to Method #1 with one important difference: Instead of a user's device (Device #1) receiving images of other users in perceptual proximity from their devices, only device ID's and/or network addresses are received from other users' devices. The images of those other users are received from a data processing system (DPS).
There are two main variations of this method. In the first variation, Device #1 broadcasts a request for device ID's and/or network addresses with a signal strength sufficient to reach all devices within perceptual proximity. If this request includes Device #1's device ID and/or network address, then the devices receiving this request may either send the requested information in an addressed transmission to Device #1, or alternatively, devices may respond by simply broadcasting the requested information.
In the second variation, all devices constantly, or intermittently (for example, once per second), broadcast their device ID and/or network address with signal strength necessary to reach other devices within perceptual proximity. Device #1 would then obtain the device ID's and/or network addresses of other devices in perceptual proximity simply by "listening".
The device ID's and/or network addresses obtained by Device #1 are then transmitted to a data processing system with a request for an image of each of the associated users. The data processing system then transmits to Device
#1 the requested images paired with their respective device ID's and/or network addresses. The user (User #1) of Device #1 views the images received and selects the image which corresponds to the intended target person or target vehicle, thus selecting the target's device ID and/or network address.
Device #1 can then initiate addressed communication with the target person/vehicle. The means of transmission can be either direct (device to device) or indirect (via a network). The communication may or may not be mediated by a DPS.
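The second variation above amounts to passively maintaining a roster of recently heard broadcasts. A minimal sketch, in which the three-second expiry for once-per-second broadcasts is an illustrative assumption:

```python
import time

class ProximityListener:
    """Builds the set of devices in perceptual proximity purely by
    listening; entries lapse if not re-heard within ttl_s seconds."""
    def __init__(self, ttl_s=3.0):
        self.ttl_s = ttl_s
        self.heard = {}  # device ID and/or network address -> last heard

    def on_broadcast(self, device_id, now=None):
        self.heard[device_id] = time.time() if now is None else now

    def proximal_ids(self, now=None):
        now = time.time() if now is None else now
        return [d for d, t in self.heard.items() if now - t <= self.ttl_s]
```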
ALTERNATIVE 1: Identical method with one exception: a different distinguishing characteristic of the target is used - voice quality - instead of appearance. Instead of User #1 viewing a series of images sent from the DPS (each linked to an ID/address), User #1 listens to a series of voice samples sent from the DPS (each linked to an ID/address). User #1 selects the voice sample that is most similar to the sound of the target's voice, thus at the same time selecting the ID/address of the target. ALTERNATIVE 2: Identical method with one exception: a different distinguishing characteristic of the target is used - relative position - instead of appearance. Instead of User #1 viewing a series of images sent from the DPS (each linked to an ID/address), User #1 views a 2-dimensional floor map sent from the DPS. The map displays the positions of all other users in the perceptual proximity such that each user is represented by a small symbol. An ID/address is associated with each symbol. User #1 selects the symbol that corresponds to the position of the target, thus at the same time selecting the ID/address of the target.
Method #19: Determine Which Devices Are in Perceptual Proximity by Scanning RFID Tags
This method is a variation of the previous method (Method #18), differing only in the manner in which the user's device (Device #1) obtains the device ID's and network addresses of other devices in perceptual proximity of the user (User #1). In this method the ID/addresses are obtained from RFID tags (active or passive) that represent other users' devices and that may or may not be physically incorporated within the devices they represent. In order to obtain other devices' device ID's and network addresses, Device #1 transmits a non-directional signal interrogating all RFID tags within perceptual proximity of User #1. In response to this interrogation, all RFID tags transmit (broadcast) the device ID and/or network address of the devices they represent. Device #1 thus receives the RFID transmissions carrying the device ID's and/or network addresses of all devices in perceptual proximity. From this point on, Method #19 is identical with Method #18.
Method #20: Proximal Devices Transmit ID's and/or Addresses Directly to DPS Instead of to the Proximal Requesting Device
This method is identical to Method #18 with the only difference being the manner in which the DPS obtains the ID/addresses of the devices proximal to
Device #1. In this method, as in Method #18, Device #1 broadcasts a request for device ID's and/or network addresses with a signal strength sufficient to reach all devices within perceptual proximity. In this broadcasted request, Device #1 includes its own device ID/address and a "Request Event ID", a number which uniquely identifies this particular attempt at a Perceptually Addressed communication from this particular user. When proximal devices receive this broadcasted request, instead of sending their ID/addresses to Device #1 as they did in Method #18, they send their ID/addresses to the DPS along with Device #1's ID/address and the Request Event ID. For each of the
ID/addresses reported to the DPS by Device #1's proximal devices, the DPS sends to Device #1 a representation of a distinguishing characteristic (image, voice sample, or position, depending upon the configuration of the system) of the user of that device paired with that device's ID/address. From this point on, this method is identical to Method #18.
Method #21: Visual Identification on DPS in which Alternatives Limited to Proximal Users
This method is similar to Method #12 in that the user (User #1) of a terminal (Device #1) captures an image of the target with a camera on his or her terminal and then transmits that image, or the portion of the image that includes only the specific target of interest, to a DPS. But in Method #12 the image needs to contain enough information about the target's visual features, and the analysis of the image needs to be sufficiently thorough, that the person in the image can be distinguished among the many (possibly thousands or millions) other users in the DPS's database. In contrast, the current method assumes knowledge of which other people/vehicles are in the user's perceptual proximity. Consequently, the image of the target submitted by Device #1 need not contain as much information about the target's visual features and the analysis of the submitted image need not be as thorough because the result of the analysis need only discriminate among the relatively few people/vehicles present. (A variation on this method is that Device #1, instead of the DPS, would analyze the captured image of the target to produce a biometric profile, and then transmit to the DPS the biometric profile instead of the image on which it is based.)
There are two general methods for determining the identities of the people/vehicles in User #1's perceptual proximity: (1) The DPS uses GPS or some other method to determine the location of users, and determines which users are within a predetermined radius of User #1.
(2) Techniques applied in Methods #18, #19 or #20 in which the ID's or addresses of proximal users are reported to the DPS: a. Device #1 broadcasts its ID/address along with a request to other devices in the perceptual proximity that their ID/addresses be sent to Device #1. Then Device #1 forwards the ID/addresses of proximal users to the DPS. b. Device #1 receives the broadcasted ID/addresses of other devices in its perceptual proximity, then forwards those ID/addresses to the DPS. c. Device #1 broadcasts its ID/address, a Request Event ID, and a request to other devices in the perceptual proximity that their ID/addresses be sent directly to the DPS attached to Device #1's ID/address and the Request Event ID. d. Device #1 scans the RFID tags of proximal devices to obtain the ID/addresses of their associated terminals. Then Device #1 forwards the ID/addresses of proximal users to the DPS.
There are several types of visual biometric profiles that this method could be applied to: facial recognition, outer (external) ear recognition, and retinal pattern, for example. The retinal analysis would require a specialized camera for that purpose to be integrated into users' devices. However this invention is agnostic as to the specifics of what kind of biometric analysis is used, whether it is current or future biometric technology. The method of using a visually obtained biometric "signature" to address a message remains the same. In all of the above variations, the user selects the intended target person by aiming the user's device at the target person and capturing an image.
This method consists of the following steps:
1. User #1 captures an image of a target that he or she wants to communicate with. 2. If necessary, User #1 crops the image to include only the target.
3. Device #1 either produces a biometric profile of the target image and transmits this profile to a DPS, or Device #1 transmits the target image itself to the DPS. 4. The DPS acquires the ID/addresses of all other users in Device #1's perceptual proximity using one of the methods outlined above.
5. The DPS compares the image (or its biometric profile) of the target received from Device #1 to the images (or their biometric profiles) of the other users present, which the DPS has stored in a database along with their ID/addresses. The DPS determines which proximal user has the image (or biometric profile of an image) most similar to the image (or biometric profile of an image) submitted by Device #1.
6. The DPS facilitates communication between Device #1 and its target (for example, by forwarding a communication attached to the submitted image, by transmitting to Device #1 the ID/address of its target, or by communicating the ID/address of Device #1 to the target, or by some other means).
ADVANTAGES: - The DPS does not need to positively identify the submitted image, but only to find the greatest similarity among the other users present. - Protects the confidentiality of users in that it does not require them to allow strangers to access images of themselves. In addition, depending upon the method used for the DPS to learn the ID/addresses of proximal users, it is possible for the instigating user (Device #1) not to have access to any ID/addresses.
Method #22: Voice Quality Identification on DPS in which Alternatives Limited to Proximal Users
This method is identical to the previous method (Method #21) with the only exception being that a different distinguishing characteristic of the target is used - voice quality - instead of visual appearance. Instead of the target's image being captured by a camera, the target's voice is recorded by a microphone on the user's Device (Device #1). Device #1 transmits the captured voice sample (or biometric profile of the voice sample created on Device #1) to a DPS. In the same way as the previous method, the DPS determines which other users are in the perceptual proximity to Device #1, and compares their voice samples to the sample submitted by Device #1.
Once the best match is determined, the DPS facilitates communication between the two devices.
Method #23: Position Identification on DPS in which Alternatives Limited to Proximal Users
This method is identical to the previous two methods (Methods #21 and #22) with the only exception being that a different distinguishing characteristic of the target is used - position - instead of visual appearance or voice quality. Instead of the target's image being captured by a camera, the target's relative position is determined by the user's device (Device #1). Any of the previously described techniques for expressing the relative position of a target will suffice: determining the direction vector of the target from Device #1's position, determining both the direction and distance of the target from Device #1's position, or determining the absolute position of the target.
Device #1 transmits the relative position of the target to a DPS. The DPS determines which other users are in the perceptual proximity to Device #1 by using one of the following techniques:
Techniques applied in Methods #18, #19 or #20 in which the ID's or addresses of proximal users are reported to the DPS: a. Device #1 broadcasts its ID/address along with a request to other devices in the perceptual proximity that their ID/addresses be sent to Device #1. Then Device #1 forwards the ID/addresses of proximal users to the DPS. b. Device #1 receives the broadcasted ID/addresses of other devices in its perceptual proximity, then forwards those ID/addresses to the DPS. c. Device #1 broadcasts its ID/address, a Request Event ID, and a request to other devices in the perceptual proximity that their ID/addresses be sent directly to the DPS attached to Device #1's ID/address and the Request Event ID. d. Device #1 scans the RFID tags of proximal devices to obtain the ID/addresses of their associated terminals. Then Device #1 forwards the ID/addresses of proximal users to the DPS.
The DPS then independently determines the exact positions of each of the users reported to be in Device #1's perceptual proximity. It compares each of those positions to the position (or range of positions) reported by Device #1 as being the location of the target. The DPS determines which user is closest to the target position submitted by Device #1 and then facilitates communication between those two devices.
Method #24: Identification on DPS in which Alternatives Limited to Proximal Users, but Image, Voice Samples, or Position Submitted by Each Proximal Device

This method description applies to all three types of distinguishing characteristics: image, voice quality, and position. This method is the same as the previous three methods in that the instigating user, User #1, submits to a DPS the distinguishing characteristic of the target, and the DPS also acquires the ID/addresses of all devices in User #1's perceptual proximity to facilitate the comparison and matching process. But in those previous methods, if the distinguishing characteristic was an image or a voice sample, that information was stored with the device's ID/address in a database on the DPS; and if the distinguishing characteristic was a position, that information was independently determined by the DPS for every device in the perceptual proximity. In contrast, with this method, the distinguishing characteristic is submitted independently by each device that is in the perceptual proximity of Device #1. This is done either at the same time each device reports with its ID/address that it is in User #1's perceptual proximity or, alternately, after the DPS or Device #1 determines which devices are in the proximity and then prompts them (transmits a request) to submit the appropriate distinguishing characteristic to the DPS. All other aspects of this method are identical with the previous three methods (Methods #21, #22, and #23).
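The prompt variant of this method might be sketched as below; the message fields and the send transport callback are illustrative assumptions, not part of the specification.

    def prompt_for_characteristics(proximal_addresses, kind, send):
        """After the DPS (or Device #1) learns which devices are proximal,
        prompt each one to submit the requested distinguishing
        characteristic ('image', 'voice', or 'position') with its ID/address.
        `send(address, payload)` is an assumed transport callback."""
        for addr in proximal_addresses:
            send(addr, {"type": "characteristic_request", "kind": kind})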
Method #25: Directional Broadcast to Determine Sub-group of Users in Perceptual Proximity
This method is a variation of all methods in which a DPS determines the identity of a target by comparing a sample of the target submitted by the instigating user (User #1) with the distinguishing characteristics of all users determined to be in User #1's perceptual proximity. This variation concerns how it is determined which users are to be included in this comparison process. It is advantageous for this group to be as small as possible: it reduces the network bandwidth used, reduces DPS processing time, and increases the accuracy of identification. More specifically, this method is a variation of the following methods:
a. Device #1 broadcasts its ID/address along with a request to other devices in the perceptual proximity that their ID/addresses be sent to Device #1. Then Device #1 forwards the ID/addresses of proximal users to the DPS.
b. Device #1 broadcasts its ID/address, a Request Event ID, and a request to other devices in the perceptual proximity that their ID/addresses be sent directly to the DPS attached to Device #1's ID/address and the Request Event ID.
c. Device #1 scans the RFID tags of proximal devices to obtain the ID/addresses of their associated terminals. Then Device #1 forwards the ID/addresses of proximal users to the DPS.
Instead of broadcasting or scanning omnidirectionally, in this method Device #1 broadcasts or scans directionally, by any method, in the general direction of the target, thus eliminating from consideration those devices that are in the opposite direction. In this way fewer potential targets are considered by the DPS.

Method #26: Identification on User's Device in which Image, Voice Samples, or Position Submitted by Each Proximal Device

This method is identical to Method #24 with the important exception that all of the functions performed in that method by a DPS are here performed by the instigating device, Device #1. Whether this method functions with image, voice sample, or position is primarily a function of the system employed, but it is possible that some combination of all of those distinguishing characteristics could be used in the same application and that a user may have the option of choosing which distinguishing characteristic to express.
In this method the user, User #1, either captures an image or a voice sample, or makes some determination of the position of the target, using a previously described method. At the same time, Device #1 broadcasts a request to other devices in User #1's perceptual proximity requesting that either an image (or biometric profile thereof), a voice sample (or biometric profile thereof), or a position be forwarded, with an accompanying ID/address, to Device #1's ID/address. After receiving user images (or voice samples, or positions) from each device in the perceptual proximity, Device #1 compares those images (or voice samples, or positions) with the captured image (or voice sample, or position) to determine the best match. Having determined the best match, Device #1 has identified the ID/address associated with the target device.
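The local comparison step could be sketched as follows; score stands in for whatever image, voice, or position comparison the system employs, and all names are illustrative.

    def identify_target_on_device(captured, responses, score):
        """Device #1 compares its captured sample against the samples
        returned by each proximal device and keeps the ID/address of the
        best match.  `responses` maps ID/address -> submitted sample;
        `score(a, b)` is an assumed comparison, higher meaning better."""
        best_addr, _best_sample = max(responses.items(),
                                      key=lambda item: score(captured, item[1]))
        return best_addr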
Method #27: User Selection of Both Image and Position Simultaneously From a Composite Image
This method is novel in the manner in which it allows the user to express two distinguishing characteristics of the target simultaneously - image and position. The user's device constructs a virtual landscape by placing images of other proximal users together in a composite image according to where they would appear in relation to each other if viewed in reality.
In response to a request from the user's device, Device #1, the user's device receives (from a DPS, from each proximal device, or from a combination of both sources) the ID/address, user image, and position of each device in Device #1's perceptual proximity. Device #1 then arranges each of the images on the display in a position that approximates where it would appear in the user's field of vision. For example, if person #2 is slightly to the right (from User #1's point of view) of person #1, and person #3 is much further to the right and further in the distance, then the display of Device #1 would show the image received from the device of person #2 slightly to the right of the image of person #1. The image of person #3 would be much further to the right on the display and also much smaller, indicating distance. The user is given the ability to scroll the display to the right or the left via the user interface (for example, pressing one button to scroll right and a different button to scroll left), allowing User #1 to view the images of all other proximal users through a full 360 degrees. User #1 selects the target recipient of a communication by selecting the image of that target (by tapping the image, toggling from image to image, etc.). The image is associated with that user's ID/address, and thus Device #1 has the capability of initiating communications with the target.
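One way the composite layout could be computed is sketched below; the field of view, the scaling rule, and the display geometry are illustrative assumptions rather than requirements of the method.

    def layout_composite(entries, heading_deg, display_width=320, fov_deg=60.0):
        """Place each proximal user's image horizontally according to bearing
        relative to the viewer's heading, scaled inversely with distance to
        suggest depth.  `entries` maps ID/address -> (bearing_deg, distance_m,
        image); scrolling changes `heading_deg` to cover the full 360 degrees."""
        placed = []
        for addr, (bearing, distance, image) in entries.items():
            offset = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # -180..180
            if abs(offset) <= fov_deg / 2:            # currently on screen
                x = display_width / 2 * (1 + offset / (fov_deg / 2))
                scale = 1.0 / max(distance, 1.0)      # farther -> smaller image
                placed.append((addr, int(x), scale, image))
        return placed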
Method #28: User Selection of Both Image and Voice Quality Simultaneously

This method is similar to the previous method in that it allows the user to express two distinguishing characteristics of the target simultaneously; but in this case the two distinguishing characteristics are image and voice quality. The user's device displays a series of images, and associated with each image is a sample of that person's voice. The user, User #1, is able to hear the associated voice via the user interface by, for example, selecting an image and pressing a "play voice" button. This method is identical to the previous method in the manner in which the distinguishing characteristics are collected from either a DPS or from other proximal devices, and in the way communication is initiated. If User #1 wishes to communicate with a particular target, he or she selects the image/voice sample of the target (by tapping the image, for example) and indicates via the user interface (by pressing a button, for example) that communications should be initiated. The image/voice sample is associated with that user's ID/address, and thus Device #1 has the capability of initiating communications with the target.

Method #29: User Selection of Both Voice Quality and Position Simultaneously
This method is similar to the previous method in that it allows the user to express two distinguishing characteristics of the target simultaneously; but in this case the two distinguishing characteristics are voice quality and position. The user's device constructs a virtual "soundscape" by placing voice samples of other proximal users together in a composite according to the direction from which each would appear to come if heard in reality. The user's device (Device #1) plays a series of voice samples, the order changing according to whether the user is scrolling to the right or to the left. The user, User #1, is able to hear the associated voices via the user interface by, for example, pressing a "move left" button or a "move right" button. This method is identical to the previous method in the manner in which the distinguishing characteristics are collected from either a DPS or from other proximal devices, and in the way communication is initiated. If User #1 wishes to communicate with a particular target, he or she selects the voice sample of the target (by playing the voice sample, for example) and indicates via the user interface (by pressing a button, for example) that communications should be initiated. The voice sample is associated with that user's ID/address, and thus Device #1 has the capability of initiating communications with the target.
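Ordering the soundscape for left/right scrolling reduces to sorting voice samples by bearing relative to the listener, as in this illustrative sketch:

    def soundscape_order(samples, heading_deg=0.0):
        """Order voice samples left-to-right by the bearing of their source
        relative to the listener's heading, so that 'move right' steps
        through them in spatial order.  `samples` maps ID/address ->
        (bearing_deg, voice_clip); all names are illustrative."""
        def offset(bearing):
            return (bearing - heading_deg + 180.0) % 360.0 - 180.0
        return sorted(samples.items(), key=lambda item: offset(item[1][0]))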
Method #30 (previously Method #13): Addressing with a Visible Alphanumeric String

The most obvious examples of visibly displayed strings of alphanumeric characters associated with people are sports jerseys and license plates. Using this method, the alphanumeric string is associated with an address using a database stored either on a DPS (in which case a user's device sends the message, along with the alphanumeric string of the intended recipient, to the DPS, which looks up the alphanumeric string in its database and forwards the communication to the associated address - see the sketch after this list) or on the user's device (in which case the user's device associates the alphanumeric string with an address and initiates communication directly with the target person's address). There are several distinct ways that a user (User #1 using Device #1) can express an alphanumeric string associated with a target person:
(a) The user can enter the alphanumeric string directly into her device - for example, by using a keyboard; by writing freehand with a stylus, with handwriting recognition software (on the user's device or on a DPS) translating the strokes into a digital representation; or by pronouncing each character in the string aloud, with voice recognition software (on the user's device or on a DPS) translating the speech into a digital representation.
(b) The user can capture an image of the alphanumeric string with a camera on her device, and then use optical character recognition (OCR) to translate it into a digital representation of the alphanumeric string. OCR can be performed either on the user's device or on a DPS.
(c) A method analogous to Method #1 in which, in response to a broadcasted request, all proximal terminals send to the user's terminal their ID/address paired with the alphanumeric string displayed by their user. The user then selects the presented alphanumeric string that matches the string seen on the intended target.
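In the DPS-stored variant, forwarding reduces to a table lookup, roughly as sketched here; db and deliver are assumed interfaces, not part of the specification.

    def forward_by_string(db, displayed_string, message, deliver):
        """Look up the visible alphanumeric string (jersey number, license
        plate) in the DPS database and forward the message to the associated
        address.  `db` maps normalized strings -> addresses; `deliver` is an
        assumed delivery callback.  Returns the address, or None if unknown."""
        address = db.get(displayed_string.strip().upper())
        if address is not None:
            deliver(address, message)
        return address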
SUGGESTED SECURITY FEATURES
• A user has the ability to ban all future communications from any particular user. This consists of a permanent non-response to all transmissions from that user's device.
• Users don't have direct access to the device ID#s or network addresses of other users. They address communications to other users by selecting the image of the user with whom they wish to communicate.
• Images, device ID#s, and network addresses of other users self-delete within a short time (measured in seconds or minutes) if not used to send a message.
• User #1 has the ability to instruct her device (Device #1) to issue an "erase" command to any other device (Device #2) at any time, as long as Device #1 has Device #2's device ID# and network address. This erase command causes the erasure of User #1's image, device ID#, and network address from Device #2. At the same time, Device #2's information is also erased from Device #1.
• There is no capability of exporting from a device any information about other users.
• All communications between devices are encrypted with a common system-wide key to prevent non-system devices from eavesdropping. This key is periodically changed, and the new key is automatically propagated and installed from device to device whenever devices communicate. Devices retain previous keys in order to be able to communicate with other devices that have not yet been updated. The ability to automatically install a new encryption key is guarded by a system-wide password stored in the firmware of all devices and invoked in all legitimate encryption key updates. (A minimal sketch of this key-retention scheme follows the list.)
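The sketch below models the key-retention behavior described in the last item; class and method names are illustrative assumptions, and the firmware-password check is simplified to an equality test for clarity.

    class KeyRing:
        """Keep the current system-wide key plus older keys so a device can
        still decrypt traffic from peers that have not yet been updated."""
        def __init__(self, firmware_password: str, initial_key: bytes):
            self._password = firmware_password
            self.keys = [initial_key]          # newest first

        def install_key(self, new_key: bytes, password: str) -> None:
            # Only legitimate updates, which present the system-wide
            # firmware password, may install a new key.
            if password != self._password:
                raise PermissionError("key update rejected")
            if new_key not in self.keys:
                self.keys.insert(0, new_key)   # retain previous keys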
ADVANTAGES OF THIS SYSTEM OVER PRIOR ART
1. It allows a user to conveniently send an electronic message (text, voice, image, or video) to a specific person without knowing that person's name or contact information. There is no other technology that offers this capability.
2. Perceptual Addressing gives people a socially discreet way of communicating with another person they don't know, unencumbered by the appropriateness of the social situation or by the specific other people who may also be present.
3. Perceptual Addressing gives one person a convenient way of communicating with an unknown person in situations in which other means of communication may not be possible - for example, when the other person is in another car on the road, or sitting several seats away during a lecture or performance.
4. Perceptual Addressing allows team members to communicate with each other based upon spatial position without the need to know which person is in which position, or without the need to broadcast messages. This may be useful in either military or civilian operations.
5. Perceptual Addressing would facilitate communication in public situations in which an individual is responsible for dealing with the public and needs to communicate with specific other people without knowing anything about them. For example, a police officer often needs to tell specific individuals to slow their vehicles, turn their headlights on, step back from the street, etc.
SUMMARY
Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that a variety of arrangements which are calculated to achieve the same purpose may be substituted for the specific embodiments shown. This application is intended to cover any adaptations or variations of the invention. It is intended that this invention be limited only by the claims, and the full scope of equivalents thereof.

Claims
1. A method of sending a message from a first wireless electronic device to a second wireless electronic device, comprising: identifying a distinguishing characteristic of a user of a second wireless electronic device; estimating a distance between the first wireless device and the second wireless device within the perceptual proximity of the first wireless device; and sending a message from the first electronic device viewable in the second electronic device based on the second electronic device's matching the estimated distance from the first wireless device and matching the distinguishing characteristic identified in the first electronic device.
2. The method of sending a message of claim 1, wherein the distinguishing characteristic comprises at least one of voice, physical appearance, and location.
3. The method of sending a message of claim 1, wherein estimating a distance is performed by a first mobile device user.
4. The method of sending a message of claim 1, wherein estimating a distance is performed electronically.
5. The method of sending a message of claim 1, wherein estimating a distance electronically comprises using at least one of ultrasonic, infrared, laser, and optical ranging.
6. The method of sending a message of claim 1, wherein estimating a distance comprises estimating a number of people within the range from the first wireless electronic device and the second wireless electronic device.
7. The method of sending a message of claim 1, wherein sending a message comprises sending a conditional message viewable in the second electronic device only upon an expression of interest in the second electronic device in communicating with the first electronic device.
8. A method of sending a message from a first wireless electronic device to a second wireless electronic device, comprising: identifying a distinguishing characteristic of a user of a second wireless electronic device; estimating a distance and direction from the first wireless device to the second wireless device within the perceptual proximity of the first wireless device; and sending a message from the first electronic device viewable in the second electronic device based on the second electronic device's matching the estimated distance and direction from the first wireless electronic device and matching the distinguishing characteristic identified in the first electronic device.
9. The method of sending a message of claim 8, wherein viewability in the second electronic device based on the second electronic device's matching the estimated distance and direction from the first wireless electronic device comprises comparing absolute positions of the first and second wireless electronic devices.
10. The method of sending a message of claim 9, wherein absolute position of at least one of the first and second wireless electronic devices is derived from at least one of Global Positioning System (GPS), triangulation, and by position relative to a device with a known absolute position.
11. The method of sending a message of claim 8, wherein the distinguishing characteristic comprises at least one of voice, physical appearance, and location.
12. The method of sending a message of claim 8, wherein estimating at least one of the distance and direction is performed by a first mobile device user.
13. The method of sending a message of claim 8, wherein estimating at least one of the distance and direction is performed electronically.
14. The method of sending a message of claim 13, wherein estimating a distance electronically comprises using at least one of ultrasonic, infrared, laser, and optical ranging.
15. The method of sending a message of claim 13, wherein estimating a direction electronically comprises using a flux gate compass.
16. The method of sending a message of claim 8, wherein sending a message comprises sending a conditional message viewable in the second electronic device only upon an expression of interest in the second electronic device in communicating with the first electronic device.
17. A method of sending a message from a first wireless electronic device to a second wireless electronic device, comprising: identifying at least one distinguishing characteristic of a user of a second wireless electronic device; and sending a message from the first electronic device viewable in the second electronic device based on the second electronic device's matching the at least one distinguishing characteristic identified in the first electronic device, wherein matching the at least one distinguishing characteristic is determined in the second electronic device.
18. The method of sending a message of claim 17, wherein the message sent from the first electronic device to the second electronic device comprises the identified distinguishing characteristic information.
19. The method of sending a message of claim 17, wherein the sent message comprises a conditional component viewable in the second electronic device only upon receiving an expression of interest in the second electronic device in communicating with the first electronic device.
20. The method of sending a message of claim 17, wherein the distinguishing characteristic comprises at least one of voice, physical appearance, and position.
21. A method of receiving a message from a first wireless electronic device in a second electronic device, comprising: receiving a message from the first electronic device in the second electronic device, the message comprising information identifying at least one distinguishing characteristic of a user of a second wireless electronic device; and making the message from the first electronic device viewable in the second electronic device based on the second electronic device's matching the at least one distinguishing characteristic identified in the first electronic device, wherein matching the at least one distinguishing characteristic is determined in the second electronic device.
22. The method of receiving a message of claim 21, wherein the received message comprises a conditional component viewable in the second electronic device only upon receiving an expression of interest in the second electronic device in communicating with the first electronic device.
23. The method of receiving a message of claim 21, wherein the distinguishing characteristic comprises at least one of voice, physical appearance, and position.
24. A method of sending a message from a first wireless electronic device to a second wireless electronic device, comprising: photographing a user of the second electronic device within the perceptual proximity of the first electronic device using the first electronic device; identifying a user of the second electronic device in the photograph; and sending a message from the first electronic device viewable in the second electronic device based on the second electronic device's matching at least one of the photograph of the user of the second electronic device and the physical location of the user of the second electronic device.
25. The method of sending a message of claim 24, further comprising estimating at least one of a direction and distance to the photographed user of the second electronic device, and further using at least one of the estimated direction and distance data to determine the physical location of the user of the second electronic device.
26. The method of sending a message of claim 25, wherein estimating a direction to the photographed user of the second electronic device comprises using a flux gate compass.
27. The method of sending a message of claim 26, wherein estimating a direction to the photographed user of the second electronic device comprises using the flux gate compass in combination with the image of the user of the second electronic device in the photograph.
28. The method of sending a message of claim 25, wherein estimating a distance to the photographed user of the second electronic device comprises using at least one of ultrasonic, laser, infrared, and optical ranging.
29. The method of sending a message of claim 24, wherein the user identifies the user of the second electronic device in the photograph using at least one of a touchscreen, cursor controls, a joystick, a keypad, and a switch.
30. The method of sending a message of claim 24, wherein the physical location of the user of the second device is determined via a vector indicating the direction from the first electronic device to the second electronic device.
31. The method of sending a message of claim 30, wherein at least one of a server and the second electronic device determines whether the second electronic device lies within the vector relative to the first electronic device.
32. The method of sending a message of claim 30, wherein a server determines whether the second electronic device lies within the vector relative to the first electronic device based on the vector data and relative position data of one or more electronic devices within the first electronic device's perceptual proximity.
33. The method of sending a message of claim 32, wherein the relative position data of one or more electronic devices within the first electronic device's perceptual proximity is determined via a Global Positioning System (GPS).
34. The method of sending a message of claim 30, wherein at least one of the first and second wireless electronic devices determines whether the second electronic device lies along the vector relative to the first electronic device.
35. A method of sending a message from a first wireless electronic device to a second wireless electronic device, comprising: identifying a distinguishing characteristic of a user of a second wireless electronic device; receiving a device identifier of the second wireless electronic device; and sending a message from the first electronic device viewable in the second electronic device based on the second electronic device's device identifier and matching the distinguishing characteristic identified in the first electronic device.
36. The method of sending a message of claim 35, further comprising sending a request for device identifiers from the first wireless electronic device to other wireless electronic devices within the first wireless electronic device's perceptual proximity.
37. The method of sending a message of claim 35, wherein the received device identifier of the second wireless device is a broadcast device identifier.
38. The method of sending a message of claim 35, wherein the received device identifier of the second wireless device is addressed to the first wireless electronic device.
39. The method of sending a message of claim 35, further comprising periodically broadcasting a device identifier from at least one of the first and second wireless electronic devices to other wireless electronic devices within the broadcaster's perceptual proximity.
40. The method of sending a message of claim 35, wherein the device identifier is a radio frequency identification (RFID) identifier.
41. The method of sending a message of claim 35, wherein the distinguishing characteristic is a voice sample.
42. The method of sending a message of claim 35, wherein the distinguishing characteristic is a physical characteristic.
43. The method of sending a message of claim 42, wherein the physical characteristic is compared only to physical characteristics of users of other wireless electronic devices in the first electronic wireless device's perceptual proximity.
44. The method of sending a message of claim 35, wherein the distinguishing characteristic is a photograph.
45. The method of sending a message of claim 44, wherein the photograph is compared only to photographs of users of other wireless electronic devices in the first electronic wireless device's perceptual proximity.
46. The method of sending a message of claim 45, wherein the second electronic wireless device is within the first wireless electronic device's perceptual proximity.
47. The method of sending a message of claim 46, wherein perceptual proximity is determined based on receipt of a signal in one of the first and second wireless electronic devices sent only to other wireless devices; the signal strength limiting receipt to other wireless devices within the signal sender's perceptual proximity.
48. The method of sending a message of claim 46, wherein perceptual proximity is determined based on Global Positioning System (GPS) coordinates.
49. The method of sending a message of claim 46, further comprising reporting to a server the devices within the perceptual proximity of a wireless electronic device.
50. A method of sending a message from a first wireless electronic device to a second wireless electronic device, comprising: identifying a distinguishing characteristic of a user of a second wireless electronic device; determining one or more wireless electronic devices within the first wireless electronic device's perceptual proximity; sending information relating to the identified distinguishing characteristic and the one or more wireless devices within the first wireless device's perceptual proximity to a server; and sending a message to the server to be delivered to the second electronic wireless device upon identification in the server of the second electronic wireless device based on the sent information relating to the identified distinguishing characteristic and the one or more wireless devices within the first wireless device's perceptual proximity to a server, and further based on a database of distinguishing characteristic data of wireless device users.
51. The method of sending a message of claim 50, wherein the distinguishing characteristic is a voice sample.
52. The method of sending a message of claim 50, wherein the distinguishing characteristic is a physical characteristic.
53. The method of sending a message of claim 50, wherein the distinguishing characteristic is a photograph.
54. The method of sending a message of claim 50, wherein the sent distinguishing characteristic information is compared only to distinguishing characteristic information of users of other wireless electronic devices in the first electronic wireless device's perceptual proximity from the database.
55. The method of sending a message of claim 50, wherein the distinguishing characteristic is physical position.
56. The method of sending a message of claim 55, wherein physical position comprises an absolute position comprising a Global Positioning System (GPS) position or other absolute position data.
57. The method of sending a message of claim 55, wherein the physical position comprises a vector.
58. A method of sending a message from a first wireless electronic device to a second wireless electronic device, comprising: identifying a distinguishing characteristic of a user of a second wireless electronic device; determining one or more wireless electronic devices within the first wireless electronic device's perceptual proximity; sending information relating to the identified distinguishing characteristic and the one or more wireless devices within the first wireless device's perceptual proximity to a server; and sending a message to the server to be delivered to the second electronic wireless device upon identification in the server of the second electronic wireless device based on the sent information relating to the identified distinguishing characteristic and the one or more wireless devices within the first wireless device's perceptual proximity to a server, and further based on a comparison of the distinguishing characteristic data to distinguishing characteristic data submitted by other wireless electronic devices within the first wireless electronic device's perceptual proximity.
59. The method of sending a message of claim 58, wherein the distinguishing characteristic data submitted by other electronic wireless devices comprises data submitted directly from the other wireless electronic devices to the server.
60. The method of sending a message of claim 58, wherein the first wireless device's perceptual proximity is limited by an estimated distance from the first wireless electronic device to the second wireless electronic device.
61. The method of sending a message of claim 58, wherein the first wireless device's perceptual proximity is limited by an estimated direction from the first wireless electronic device to the second wireless electronic device.
62. A method of forwarding a message in a server, comprising: receiving a message from a first wireless electronic device; receiving information identifying other wireless devices within the first wireless device's perceptual proximity; receiving a distinguishing characteristic identifying an intended message recipient, the intended recipient comprising a user of a second wireless electronic device within the first wireless device's perceptual proximity; determining based on the distinguishing characteristic the second wireless electronic device's identity; and forwarding the received message to the identified second wireless electronic device.
63. The method of forwarding a message in a server of claim 62, wherein the message received from the first wireless device is a conditional message viewable in the second wireless electronic device only upon an expression of interest in the second wireless electronic device in communicating with the first wireless electronic device.
64. The method of forwarding a message in a server of claim 62, wherein the information identifying other wireless devices within the first wireless device's perceptual proximity comprises physical position data received from the wireless devices.
65. The method of forwarding a message in a server of claim 62, wherein the distinguishing characteristic is a voice sample.
66. The method of forwarding a message in a server of claim 62, wherein the distinguishing characteristic is a physical characteristic.
67. The method of forwarding a message in a server of claim 62, wherein the distinguishing characteristic is a photograph.
68. The method of forwarding a message in a server of claim 62, wherein determining based on the distinguishing characteristic comprises comparing the distinguishing characteristic data to distinguishing characteristic data for wireless electronic device users stored in a database.
69. The method of forwarding a message in a server of claim 68, wherein the sent distinguishing characteristic information is compared only to distinguishing characteristic information of users of other wireless electronic devices in the first electronic wireless device's perceptual proximity from the database.
70. The method of forwarding a message in a server of claim 62, wherein the distinguishing characteristic is physical position.
71. The method of forwarding a message in a server of claim 62, wherein the first wireless device's perceptual proximity is limited by an estimated distance from the first wireless electronic device to the second wireless electronic device.
72. The method of forwarding a message in a server of claim 62, wherein the first wireless device's perceptual proximity is limited by an estimated direction from the first wireless electronic device to the second wireless electronic device.
73. A method of sending a message from a first wireless electronic device to a second wireless electronic device, comprising: receiving in the first wireless electronic device images of a plurality of wireless device users having wireless electronic devices in the first wireless device's perceptual proximity; displaying the received images of a plurality of wireless device users arranged to represent the physical relative positions of the plurality of wireless electronic devices; receiving in the first wireless electronic device a user selection of a second electronic wireless device from among the wireless electronic devices represented by the displayed images.
74. The method of sending a message of claim 73, further comprising sending a message from the first electronic device viewable in the second electronic device selected by the user.
75. The method of sending a message of claim 73, wherein the message comprises a conditional message visible in the second wireless electronic device only upon an expression of interest in the second wireless electronic device in communicating with the first electronic device.
76. The method of sending a message of claim 73, wherein the first wireless electronic device receives the images of a plurality of wireless device users from the plurality of other wireless electronic devices.
77. The method of sending a message of claim 73, wherein the first wireless electronic device receives the images of a plurality of wireless device users from a server.
78. The method of sending a message of claim 73, wherein receiving a user selection of a second electronic wireless device comprises receiving a touchscreen actuation in the region of a displayed user image.
79. The method of sending a message of claim 73, wherein receiving a user selection comprises receiving user input identifying one of the displayed images via at least one of a keypad, a button, and a switch.
PCT/US2006/013633 2005-04-12 2006-04-12 Wireless communications with proximal targets identified visually, aurally, or positionally WO2006110803A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US67076205P 2005-04-12 2005-04-12
US60/670,762 2005-04-12

Publications (2)

Publication Number Publication Date
WO2006110803A2 true WO2006110803A2 (en) 2006-10-19
WO2006110803A3 WO2006110803A3 (en) 2007-12-13

Family

ID=37087663

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/013633 WO2006110803A2 (en) 2005-04-12 2006-04-12 Wireless communications with proximal targets identified visually, aurally, or positionally

Country Status (1)

Country Link
WO (1) WO2006110803A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2116072A4 (en) * 2007-02-23 2015-07-29 Motorola Mobility Llc Method and system for context based communication in communication networks
US9591133B2 (en) 2009-12-30 2017-03-07 Motorola Solutions, Inc. Method and apparatus for determining a communication target and facilitating communications based on an object descriptor

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5191613A (en) * 1990-11-16 1993-03-02 Graziano James M Knowledge based system for document authentication
US5748780A (en) * 1994-04-07 1998-05-05 Stolfo; Salvatore J. Method and apparatus for imaging, image processing and data compression
US7103344B2 (en) * 2000-06-08 2006-09-05 Menard Raymond J Device with passive receiver

Also Published As

Publication number Publication date
WO2006110803A3 (en) 2007-12-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: RU)
122 Ep: pct application non-entry in european phase (Ref document number: 06740895; Country of ref document: EP; Kind code of ref document: A2)