WO2017155740A1 - System and method for automated recognition of a transportation customer - Google Patents

System and method for automated recognition of a transportation customer

Info

Publication number
WO2017155740A1
WO2017155740A1 (PCT/US2017/019959)
Authority
WO
WIPO (PCT)
Prior art keywords
user
gesture
vehicle
autonomous vehicle
data
Prior art date
Application number
PCT/US2017/019959
Other languages
English (en)
Inventor
Jussi Ronkainen
Jani Mantyjarvi
Mikko Tarkiainen
Marko Palviainen
Original Assignee
Pcms Holdings, Inc.
Priority date
Filing date
Publication date
Application filed by Pcms Holdings, Inc. filed Critical Pcms Holdings, Inc.
Publication of WO2017155740A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R25/00 - Fittings or systems for preventing or indicating unauthorised use or theft of vehicles
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/083 - Shipping
    • G06Q10/0833 - Tracking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/06 - Buying, selling or leasing transactions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40 - Business processes related to the transportation industry
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • recognizing the car and the customer may be difficult for several reasons.
  • cars often look fairly similar, and current methods of recognition require close visual range: the customer attempts to identify the vehicle from a generic vehicle description and license plate, or the driver from a photo, while the driver attempts to recognize the customer, typically based on a photograph (e.g., Uber, Lyft, etc.).
  • as a result, the recognition is uncoordinated and error prone.
  • One embodiment takes the form of a method comprising an autonomous vehicle, while in a pickup zone for a passenger, sending a gesture-performance request to a user device of the passenger; the autonomous vehicle capturing sensor data in the pickup zone using at least one sensor of the autonomous vehicle; and the autonomous vehicle detecting, in the captured sensor data, that a particular gesture is being performed at a particular location within the pickup zone, and responsively proceeding to and stopping at the particular location.
  • One embodiment takes the form of a method comprising: receiving information regarding an intended pickup location of the passenger; causing the vehicle to move toward the intended pickup location; sending, to a user device of the passenger, a gesture-performance request, the gesture-performance request indicating a specified gesture to be performed by the passenger; capturing sensor data by at least one sensor of the autonomous vehicle in proximity to the intended pickup location; detecting, in the captured sensor data, the specified gesture; determining an actual location of the passenger as the location at which the specified gesture was detected; and causing the autonomous vehicle to stop at the actual location of the passenger.
  • the method contains two phases.
  • the first phase is based on the activation of a GPS-based approach of the AV towards a user.
  • in the second phase, the AV approaches the user by using more detailed GPS and proximity measures (and the user's ID), and at the same time identifies the user by operating a fusion of camera- and/or radar-based (or other) gesture recognition performed by the AV, gesture recognition performed by the user's device(s) (a smartphone, a smartwatch, or both, and shared with the AV), and identification of the correct user by combining the gesture models of both the vehicle and the user device in a synchronized manner.
  • user recognition is based on matchmaking (e.g., correlation, similarity, dissimilarity) between inertial sensor data from the user's device (e.g., accelerometer and/or magnetometer data, or the like), and a 3D view (stereo camera, LiDAR, laser scanning, etc.) of the vehicle.
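  • As a non-authoritative sketch of how such matchmaking could be computed, the following fragment correlates the acceleration reported by the user's device with the acceleration implied by a hand trajectory the vehicle tracks in its 3D view; the function names, the sampling assumptions, and the use of plain normalized correlation are illustrative only and are not taken from the disclosure.

```python
import numpy as np

def trajectory_to_acceleration(points_xyz: np.ndarray, dt: float) -> np.ndarray:
    """Estimate acceleration magnitude from a camera/LiDAR-tracked 3D hand path."""
    velocity = np.gradient(points_xyz, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return np.linalg.norm(acceleration, axis=1)

def matchmaking_score(device_accel_xyz: np.ndarray,
                      tracked_points_xyz: np.ndarray,
                      dt: float) -> float:
    """Normalized correlation (roughly -1..1) between device-reported and
    vehicle-observed motion; a value near 1 suggests the same person."""
    a = np.linalg.norm(device_accel_xyz, axis=1)            # |a| from the phone/watch IMU
    b = trajectory_to_acceleration(tracked_points_xyz, dt)  # |a| implied by the tracked hand path
    n = min(len(a), len(b))
    a, b = a[:n] - a[:n].mean(), b[:n] - b[:n].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0
```

  • A score close to 1 would indicate that the tracked person is likely the user; the disclosure leaves the exact similarity measure open (correlation, similarity, dissimilarity).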
  • FIG. 1 depicts an embodiment of an overall architecture of an exemplary embodiment.
  • FIG. 2 depicts a sequence diagram for an embodiment of a user recognition process based on a predefined gesture.
  • FIG. 3 depicts a sequence diagram for an embodiment of a user recognition process based on inertial information provided by the user and 3D imaging performed by the vehicle.
  • FIG. 4A depicts an exemplary combination of user devices and possible gesture directions.
  • FIG. 4B depicts an exemplary freeform 3D hand gesture path of the exemplary combination of devices of FIG. 4A.
  • FIG. 5 depicts an overhead view of a traffic intersection at a first phase, in accordance with an embodiment.
  • FIG. 6 depicts an overhead view of a traffic intersection at a second phase, in accordance with an embodiment.
  • FIG. 7 depicts an overhead view of a traffic intersection at a third phase, in accordance with an embodiment.
  • FIG. 8 depicts a method, in accordance with an embodiment.
  • FIG. 9 illustrates an exemplary wireless transmit/receive unit (WTRU) that may be employed in some embodiments.
  • FIG. 10 illustrates an exemplary network entity that may be employed in some embodiments.
  • the identification and recognition burden is on the passenger.
  • the passenger attempts to recognize/identify the car based on its make, color, and registration plate, a picture of the car, and/or the like.
  • a user may additionally honk the car's horn remotely by using a mobile app, which may be useful when a user is searching for a car within a large parking lot.
  • the present disclosure provides methods and systems for an autonomous vehicle (AV) to recognize, identify and approach a correct passenger among multiple other passengers waiting for a pickup.
  • the method contains two phases.
  • the first phase is based on the activation of a GPS-based approach of the AV towards a customer by recognition of a user- specific hand gesture.
  • in the second phase, the AV approaches the user by using more detailed GPS and proximity measures (and in some embodiments a user's ID), including identifying the user through a combination of AV camera- and/or radar-based gesture recognition (done by the AV), gesture recognition based on the user's device(s) (done by a user's device(s), such as, in one example, a smartphone, a smart watch, or both, and shared with the AV), and identification of the correct user by combining gesture models of both AV and user device based gesture recognition in a synchronized manner.
  • Advantages of some of the embodiments disclosed herein may include, but are not limited to, the following.
  • Much of the burden of recognition is transferred from the user to the vehicle.
  • Systems and methods of the present disclosure work well in most weather conditions, for example in bright sunlight, where existing systems such as lighting-based cues on the vehicle would be difficult for a user to see, or in low light, when the vehicle's lights make it hard for the user to see the vehicle even though the vehicle can see the user.
  • Some embodiments can be entirely automated, without the need for a human driver.
  • the temporal and feature-wise combination of inertial data (or gesture models) from a user waving their phone and/or wrist device (or other combination of one or more devices), and device ID combined with 3D sensing by the vehicle for identification purposes is simple and accurate, and does not need a predefined pattern.
  • the present use of 3D vision to recognize the user also gives an accurate location and orientation of the user relative to the vehicle.
  • Exemplary embodiments operate to detect the correct user by distinguishing between the actual user performing the identification gesture and a person mimicking the identification gesture.
  • FIG. 1 depicts a system architecture, in accordance with some embodiments.
  • FIG. 1 depicts the system architecture 100, which is divided among an information level at the top, a service level in the middle, and a device level at the bottom.
  • the method is generally based on three main actors, as illustrated in FIG. 1 : a user and/or user devices 110, a vehicle 124, and a transportation service 114.
  • the user and their personal device(s) 110 may provide functionality, including but not limited to: a user interface, communications, sensors 112, and/or the like.
  • the sensors may include sensors capable of determining location, detecting speed, acceleration, and changes in magnetic levels, as some examples.
  • Different user devices 110 include smart phones, tablet computers, electronic accessories, such as smart watches, and the like.
  • the functionality of the user device may be divided among multiple user devices, such as a user having both a smart phone and a smart watch, with each device capable of performing some or all of the aspects disclosed herein.
  • the user device operates a transportation application 102.
  • the user interface may comprise, in one embodiment, a transportation application 102 ("App") for ordering transportation and providing related information such as location, number of passengers, etc.
  • the transportation application 102 processes notifications 104, match information associated with a provider 106, and various sensor services 108.
  • Information provided by the transportation application may include one or more of the following types of information: a personal recognition gesture to the transportation service; 3D inertial sensor data (or 3D gesture model data) for assistance to the vehicle in close-range recognition of the user; notification capability related to the recognition process; matchmaking information from the user to the vehicle; additional information that can help in recognition, such as magnetometer data to indicate rough compass orientation of the user; and/or the like.
  • providing matchmaking information may comprise the application aggregating sensor data (such as inertial data from an accelerometer in the user's device) into a 3D point vector (or spline format or gesture model feature data format and/or the like), and transmitting the timestamped data in fixed-length sequences (e.g., 0.5s) or in fixed gesture model format to the vehicle.
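  • The aggregation described above might look like the following sketch on the user-device side; the 100 Hz sampling rate, the JSON field names, and the send_to_vehicle transport are assumptions for illustration, not details from the disclosure.

```python
import json
import time

SEQUENCE_SECONDS = 0.5   # fixed-length sequence suggested in the text
SAMPLE_RATE_HZ = 100     # assumed IMU sampling rate

def stream_motion_sequences(read_accel_sample, send_to_vehicle):
    """Aggregate raw accelerometer samples into timestamped, fixed-length
    sequences and forward them to the vehicle (transport is left abstract)."""
    samples_per_sequence = int(SEQUENCE_SECONDS * SAMPLE_RATE_HZ)
    while True:
        start_time = time.time()
        # read_accel_sample() is assumed to block until the next (x, y, z) sample
        points = [read_accel_sample() for _ in range(samples_per_sequence)]
        send_to_vehicle(json.dumps({
            "start_time": start_time,
            "sequence_length_s": SEQUENCE_SECONDS,
            "motion_data": points,
        }))
```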
  • the communications functionality at the user end may provide for rapid transfer of accelerometer data to the vehicle.
  • short-range communications may be preferred to maintain low latency, such as the V2P or Bluetooth connection 140.
  • the user's device may comprise sensors such as an accelerometer, a gyroscope, a magnetometer, an image capture system, and/or the like to provide the raw data used in assisted recognition.
  • a vehicle 124 may be a self-driving vehicle or autonomous vehicle, which may provide functionality including but not limited to: a transportation interface, a person tracking module 128, a sensor data service 136, and/or the like.
  • a transportation interface may comprise a transportation application ("App") 126, which may provide a connection to the transportation service, with the ability to request user profile(s) for the user and other persons within a given area and time window.
  • the vehicle may further comprise a person tracking system 128 that is able to recognize and then track the customer, including but not limited to modules such as: a hand tracking module 130 able to determine hand movements of persons in the vehicle's detection range; a gesture matching module 132 capable of observing the hand movements of all detected persons, able to recognize hand gestures, correlate the movements with the user-provided gestures, and distinguish them from gestures in use by other persons in the same area; and a user-provided data matchmaking module 134 able to receive user-provided recognition data (e.g., inertial data or gesture model data with device ID and time) and correlate it (both temporally and in terms of gesture models) with the hand movements of all detected persons.
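  • One illustrative way to compose the person tracking system 128 from the hand tracking, gesture matching, and matchmaking modules is sketched below; the interfaces are hypothetical and only meant to show how the modules described above could cooperate.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class TrackedPerson:
    person_id: int
    hand_path: List[Tuple[float, float, float]]   # 3D hand positions over time

class PersonTrackingSystem:
    """Illustrative composition of the hand tracking, gesture matching and
    user-provided data matchmaking modules described above."""

    def __init__(self, hand_tracker, gesture_matcher, data_matchmaker):
        self.hand_tracker = hand_tracker          # yields TrackedPerson objects per frame
        self.gesture_matcher = gesture_matcher    # compares hand paths to a known gesture
        self.data_matchmaker = data_matchmaker    # correlates hand paths with device-provided data

    def find_user(self, frame, user_gesture=None, device_data=None):
        """Return the id and score of the person best matching the user."""
        people: Dict[int, TrackedPerson] = self.hand_tracker.update(frame)
        best_id, best_score = None, 0.0
        for person_id, person in people.items():
            if user_gesture is not None:
                score = self.gesture_matcher.score(person.hand_path, user_gesture)
            else:
                score = self.data_matchmaker.score(person.hand_path, device_data)
            if score > best_score:
                best_id, best_score = person_id, score
        return best_id, best_score
```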
  • a sensor data service 136 may provide the vehicle with sensor data for person tracking.
  • the sensor data service 136 may utilize 3D surroundings data, such as a 3D model of the vehicle's surroundings including streets, buildings, cars, and persons, generated using stereo cameras, LiDAR, laser scanning, other imaging or modeling sensors or systems, and/or the like.
  • Exemplary technologies that can be used for gesture recognition include: 1) Inertial sensor based gesture recognition, using 3D accelerometers, magnetometers and/or gyroscopes; 2) Camera-based gesture recognition such as MS Kinect, Leap Motion 1&2, Singlecue by eyeSight Technologies; 3) Worn device (such as wrist worn devices) based gesture control of drones and other devices/systems; and/or the like.
  • one or more processing functions of the vehicle may occur at a remote server in communication with the vehicle. Such embodiments may serve to reduce the required components of the vehicle by offloading intensive processes from the physical vehicle.
  • one or more functions of the vehicle may be enabled by one or more "cloud" services in communication with the vehicle, such as the wide area communication 138 that is further in communication with the transportation application 102 on the user device 110 and the transportation application 116 of the transportation service 114.
  • a transportation service 114 may comprise a backend service which provides regular transportation ordering and dispatching services.
  • the service may include, but is not limited to: Information storage, such as user profile information and outstanding orders information; and functionality such as user profile management, user profile provisioning, provision of user gestures in an area, and regular order and dispatch services.
  • user profile information may include, but not be limited to, one or more personal vehicle calling gestures, where a personal vehicle calling gesture is a user-defined gesture for identifying himself among other customers waiting for transport at the same location.
  • the information storage may also contain outstanding orders information, with, at least, links to user profiles and pickup locations.
  • the transportation service 114 may utilize user profile management to handle user profile parameters, in particular providing the ability to record a user-defined gesture.
  • the transportation service may utilize user profile provisioning to provide the profiles of individual users, including, in particular, the user-defined gesture, to vehicles picking up customers.
  • the transportation service may include provisioning of user gestures in an area, such as enabling a search capability for possible gestures for customers awaiting transportation within a specified area and time window. This functionality enables an autonomous vehicle that has already been ordered by a user to recognize and exclude gestures being made by other users and thus reduces the likelihood of the vehicle stopping at the wrong user.
  • the transportation service may utilize regular order and dispatch functionality 118.
  • Entities may include, but are not limited to: one or more personal devices, the vehicle, apps in personal device(s) and in the vehicle, services (e.g., backend transportation service, etc.).
  • the following classes of messages are non-limiting exemplary classes of messages that may be communicated between entities at various stages. Additional content may be included in the listed messages, depending on the particular embodiment. Further, not all classes may be utilized in a particular embodiment, and/or one or more additional classes of messages may be included.
  • the transportation request message contains recognition information, which may comprise a recognition method or other method used for identifying the customer.
  • recognition methods include but are not limited to: gesture, for gesture-based recognition; matchmaking, for recognition based on a combination of user-provided and vehicle-observed data; and/or the like.
  • the Transportation Service 114 may further add data from a user profile to the transportation request prior to transmission to the vehicle, including gesture data, such as the user-defined gesture (e.g., as timestamped 3D vector points, or a spline or gesture model, and/or the like).
  • the vehicle may request a list of potentially used gestures at its designated pickup point.
  • the pickup point may be defined by one or more of: a defined area (e.g., coordinates for a pickup area, such as a rectangle), and a time window (e.g., an estimated time when the user will be picked up, such as a time range before and after the estimated pickup time).
  • the Transportation Service 114 may in response to the gesture request communicate to the vehicle a list of gestures for the requested area, time window, and/or the like, providing gesture data (personal or assigned gestures for known passengers awaiting pickup in the area), time windows if known (time ranges for the other known passengers' pickups), and/or the like.
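  • The message classes above could be represented, for example, as simple structured records; the field names below are illustrative assumptions rather than the disclosure's wire format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TransportationRequest:
    user_id: str
    pickup_location: Tuple[float, float]              # latitude, longitude
    recognition_method: str                           # e.g., "gesture" or "matchmaking"
    gesture_data: Optional[list] = None               # e.g., timestamped 3D vector points or a spline

@dataclass
class GestureRequest:                                 # vehicle -> transportation service
    pickup_area: Tuple[float, float, float, float]    # bounding rectangle of the pickup point
    time_window: Tuple[float, float]                  # range around the estimated pickup time

@dataclass
class GestureResponse:                                # transportation service -> vehicle
    gestures_in_area: List[dict] = field(default_factory=list)   # other passengers' gestures and time windows
```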
  • Proximity Notification: When the vehicle determines it is close to the user, it communicates with the user to provide: Direction of Arrival (e.g., the street and direction from where the vehicle is approaching the user); ETA (estimated time until the gesture recognition process is started); and/or the like.
  • the user's device may send a short-range signal, detection of which tells the vehicle that it is close to the user.
  • Said short-range signal may indicate a user's device ID, a proximity measure, and/or the like.
  • the vehicle may send a request for the user to start performing her personal or assigned gesture.
  • the request may in some embodiments include a reminder of the personal or assigned gesture (e.g., as a 3D representation, or user-specified graphic or text).
  • the reminder may comprise an audio clip, a video clip, and/or the like, either user provided or system generated.
  • the gesture perform request may include an option for the user to request a reminder as to the personal or assigned gesture.
  • Gesture recognized indication: The vehicle sends this message to the user upon recognizing the gesture.
  • the indication may contain, in some embodiments, a notification message displayed to the user indicating that the gesture has been recognized by the vehicle, and that the user may stop making the gesture.
  • Vehicle Location Notification: The vehicle may generate a further notification of its position with respect to the user's position, once it has recognized the user.
  • the message may contain, depending on the capabilities of the end device: a schematic, text, a map, a bearing, and/or the like.
  • a schematic may comprise an illustration of the vehicle's position, and in some embodiments may show the vehicle's position with respect to nearby cars, such as showing nearby cars as simplified drawings of a style of vehicle (e.g., truck, sedan, coupe and the like) with their real colors.
  • Text may comprise a text based indication of the vehicle's location, such as "third car in the row," or the like.
  • a map may comprise a top view of the vehicle's surroundings, derived from the vehicle's 3D view, with the positions of the user and the vehicle indicated, and/or the like.
  • a bearing may comprise a compass bearing or other directional bearing to the vehicle, with respect to the user's current position (e.g., compass points to the vehicle as the user approaches the vehicle).
  • Device Motion Request: The vehicle may request the user to start waving her device. On receiving the message, the user's device starts sending motion or gesture model data, and/or other relevant captured sensor data capable of representing a user's gesture.
  • the user's device may send motion or gesture data via the motion indication message.
  • the message may contain, but is not limited to: a timestamp, a motion sample, a sequence length, and/or the like.
  • a timestamp may comprise an accurate (e.g., GPS) timestamp of the start of a motion sequence.
  • a motion sample may comprise a description of the path of the user's device for the duration of the sequence, and may comprise a list of 3D points (at a predefined sampling rate or with timestamps), or one or more splines, and/or the like.
  • the data may be gesture model data in a feature parameter format (which is suitable directly for gesture recognition and matchmaking).
  • the motion indication may further comprise a sequence length parameter, where the parameter denotes the length of the motion sequence, such as in embodiments where the length of the motion sequence is not predefined.
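  • A hedged sketch of how the vehicle might decode a motion indication message carrying the fields described above (timestamp, motion sample, optional sequence length); the JSON encoding and field names are assumptions.

```python
import json
from typing import List, Tuple

def parse_motion_indication(payload: bytes) -> Tuple[float, List[Tuple[float, float, float]], float]:
    """Decode a motion indication message into (start timestamp, 3D path points,
    sequence length in seconds).  Field names and JSON encoding are illustrative."""
    msg = json.loads(payload)
    timestamp = float(msg["timestamp"])                        # accurate (e.g., GPS) sequence start time
    points = [tuple(p) for p in msg["motion_sample"]]          # sampled 3D path of the user's device
    sequence_length = float(msg.get("sequence_length", 0.5))   # optional if the length is predefined
    return timestamp, points, sequence_length
```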
  • FIG. 2 depicts an example process, in accordance with an embodiment.
  • FIG. 2 depicts the process 200 that includes communications between and actions by a user device transportation application 202, a transportation service 204, and a vehicle transportation application 206.
  • a vehicle identifies the user by recognition of a predefined hand gesture.
  • the embodiment of FIG. 2 is generally based on the vehicle recognizing a "secret code" given by the user, which may be a hand gesture.
  • the "code" could be one or more of a body gesture, a stance, an arm motion, a movement of one or more user devices, and/or the like.
  • the gesture has previously been recorded by the user and is stored as a part of her user profile. In other embodiments, the user may provide the gesture when making the transportation request.
  • Prior to initiating the request for service, the user has installed the transportation service application 202 on her personal device.
  • the personal device may comprise a smartphone, smartwatch, PDA, and/or the like, wherein the personal device may further comprise inertial sensors, positioning capabilities, (short range) communication (proximity), and/or similar capabilities as known in the art.
  • the vehicle may comprise positioning capabilities, imaging system(s) (including but not limited to a camera, radar, a depth camera, video systems, image capture devices, and/or the like), (short range) communication capabilities, and/or similar capabilities as known in the art.
  • the user orders transportation by sending a transportation request 208, indicating an intended pickup location, to the transportation service 204, which forwards the transportation request 210 to the vehicle transportation application 206 associated with the vehicle that is to pick up the user.
  • the transportation requests 208 and 210 include, but are not limited to: a recognition method and gesture data.
  • the transportation service 204 converts the transportation request 208, which includes user identification data, into the transportation request 210 that includes user profile data obtained by the transportation service 204.
  • the recognition method may include a user-defined gesture and an identifier (such as a device or user ID).
  • gesture data which may be based on the user's stored data, is communicated to a vehicle that accepts the transportation such that said vehicle receives the gesture.
  • the gesture data may comprise a 3D vector including timed sequence points, or a spline, or a gesture model in feature format, or similar data formats as known in the art. After such information has been provided to the vehicle, the vehicle may begin traveling towards the user's intended pickup location at 212.
  • Gesture separation data: When the estimated time to pickup is within a predefined window (e.g., one minute), the vehicle transmits a gesture request 214 to query the transportation service for any other gestures that may be used by other customers in the pickup area.
  • the gesture response 216 provides information used by the vehicle to increase the hit rate for recognizing the correct gesture.
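  • One possible way to use the gesture response 216 for separation is sketched below: observed hand paths that match a gesture registered by another waiting customer are discounted before matching against the requesting user's gesture. The similarity function and the 0.9 threshold are placeholders, not values from the disclosure.

```python
def is_probably_other_customer(observed_path, other_gestures, similarity, threshold=0.9):
    """True if an observed hand path matches a gesture registered by another
    customer awaiting pickup in the same area and time window."""
    return any(similarity(observed_path, g) >= threshold for g in other_gestures)

def rank_candidates(tracked_paths, target_gesture, other_gestures, similarity):
    """Score every tracked person against the requesting user's gesture,
    skipping people whose motion matches another customer's registered gesture."""
    scores = {
        person_id: similarity(path, target_gesture)
        for person_id, path in tracked_paths.items()
        if not is_probably_other_customer(path, other_gestures, similarity)
    }
    return max(scores, key=scores.get) if scores else None
```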
  • the vehicle sends a proximity notification 220 to the user, indicating a direction of arrival and/or an estimated time of arrival.
  • the direction of arrival may indicate the direction from which the vehicle is approaching to assist the user in orienting herself towards the arriving vehicle (e.g., side of the street to be on).
  • the estimated time of arrival may be calculated by GPS methods, and in some embodiments may factor in traffic conditions, travel conditions, and/or the like.
  • This proximity information may be transmitted via a short-range proximity signal, for example by Bluetooth.
  • the user's device provides close-range proximity data 222 that is received (at 224) and used by the vehicle to gain finer-grained location information as to the proximity of the user.
  • Gesture recognition start: As the vehicle determines a sufficiently close proximity (as determined by known location determination methods or by receipt of short-range proximity signals from the passenger's user device; in various embodiments, 30 meters, 40 meters, 50 meters, and/or similar distances), the vehicle sends the user's device a gesture-performance request 226 to perform the pre-set, specified gesture.
  • the request may include gesture information (e.g., as a user-given picture, text, or vector image, video clip, audio data, and/or the like) as a reminder of the expected gesture.
  • the reminder may comprise an audio clip, a video clip, and/or the like, either user provided or system generated.
  • the user receives the perform-gesture request 226 and begins to perform the pre-set specified gesture at 228.
  • the vehicle begins to scan all persons within the pickup zone or pickup area, and continuously performs: 1) hand gesture recognition conducted by using the vehicle's own recognition method suitable for the available sensors, such as camera/radar based sensors, laser scanners, 3D scanners, LiDAR scanners, stereo camera systems, or the like; 2) receiving of the user gesture model; and 3) matchmaking of its own gesture recognition result against the received user gesture model. Once the matchmaking score for the observed gesture is sufficiently high, identification of the correct user has been made at 232, and the exact and actual location of the passenger pickup and the orientation of the user are known.
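  • The continuous scan-and-match loop described above could be organized roughly as follows; the helper functions and the 0.9 threshold are assumptions (the text only requires the matchmaking score to be high).

```python
MATCH_THRESHOLD = 0.9   # illustrative; the text only requires the score to be "high"

def scan_pickup_zone(sensors, comms, recognize_hand_paths, matchmake):
    """Continuously scan persons in the pickup zone until one observed hand
    path matches the user-provided gesture model well enough."""
    user_model = None
    while True:
        user_model = comms.poll_gesture_model() or user_model      # 2) receive the user gesture model
        hand_paths = recognize_hand_paths(sensors.capture())       # 1) vehicle-side gesture recognition
        if user_model is None:
            continue                                                # nothing to match against yet
        for location, path in hand_paths.items():                  # location -> observed hand path
            if matchmake(path, user_model) >= MATCH_THRESHOLD:      # 3) matchmaking step
                return location                                     # actual pickup location found
```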
  • Notification by vehicle to user of user recognition: Once the user has been found by the matchmaking result, the vehicle sends a user-recognition indication or notification 234 to the user that it has recognized the user and that she can stop performing the gesture at 236. At this point, generally, the vehicle will arrive and stop near the user's actual location at 238.
  • the vehicle may send a vehicle location notification 240 which may include a vehicle location description.
  • if the vehicle determines there is a chance of not being unambiguously recognized (e.g., the vehicle has to stop in a queue of other cars for pickup, instead of stopping right at the user), the vehicle can provide additional information, such as, but not limited to: the location of the vehicle in a queue, a map, directions from the user to the vehicle, and/or the like.
  • the user enters the vehicle.
  • the vehicle remains locked until the user has completed the motion recognition phase, or has provided authorization through a communication channel to prohibit unauthorized persons from entering the vehicle.
  • the location of the vehicle in a queue may be established via vehicle-to-vehicle ("V2V") communication and/or 3D vision / LiDAR data (or other systems as known in the art).
  • the vehicle can determine information such as the number (and/or color, etc.) of cars between the user and the correct vehicle, and present it to the user as a text message or a schematic picture.
  • a schematic picture may graphically depict numerical, color, and/or other identifying information of cars between the user and the vehicle, and/or the like.
  • the vehicle may generate and communicate a simplified map with markers for user location and the vehicle, such as based on the vehicle's internal 3D map of the area, including other cars, people, and/or the like.
  • the vehicle carries out the process 300 to perform identification by a combination of motion tracking/gesture recognition using the vehicle's 3D sensing capabilities (e.g., stereo camera, LiDAR, and/or the like) and data provided by a 3D accelerometer sensor (or the like) from the user's device(s), optionally augmented with additional sensor data such as from a gyroscope, a magnetometer, a compass, or the like, and in some embodiments a device ID.
  • the process 300 depicts communication between and actions associated with the user transportation application 302, a transportation service application 304, and a vehicle transportation application 306, that may be similar to the components of the process 200 of FIG. 2.
  • the recognition is based on matchmaking (e.g., correlation, similarity, dissimilarity, etc.) between inertial sensor data from the user's device (e.g., accelerometer and/or magnetometer data), and a 3D view (stereo camera, LiDAR, and/or the like) of the vehicle.
  • because the recognition is based purely on matchmaking of vehicle-detected and user-provided movement data (e.g., via cross-correlation), there is no need for a predefined or repeating gesture.
  • the process, illustrated in FIG. 3, may generally comprise the following steps. Prior to initiating the request for service, the user has installed the transportation service application 302 on her personal device.
  • the personal device may comprise a smartphone, smartwatch, PDA, and/or the like, wherein the personal device further comprises inertial sensors, positioning capabilities, (short range) communication (proximity), and/or similar capabilities as known in the art.
  • the vehicle may comprise the vehicle transportation application 306 that supports positioning capabilities, imaging system(s) (including but not limited to a camera, radar, a depth camera, video systems, image capture devices, and/or the like), (short range) communication capabilities, and/or similar capabilities as known in the art.
  • the transportation order request 308 includes, but is not limited to, information identifying a recognition method.
  • the information identifying the recognition method may include an indicator that the recognition method is of a specific type, such as matchmaking.
  • the information identifying the recognition method may further comprise a designated subtype for the given type of recognition method, such as a subtype of "correlation" for the recognition method type matchmaking.
  • other recognition methods and/or subtypes may be utilized, as will be clear to those of ordinary skill in the art.
  • the transportation request 308 is received by the transportation service 304 for forwarding the transportation request 310 to the vehicle that will pick up the user associated with the user transportation application 302.
  • the vehicle sends a proximity notification 314 to the user, indicating a direction of arrival and/or an estimated time of arrival.
  • the direction of arrival may indicate the direction from which the vehicle is approaching to assist the user in orienting herself towards the arriving vehicle (e.g., side of the street to be on).
  • the estimated time of arrival may be calculated by GPS methods, and in some embodiments may factor in traffic conditions, travel conditions, and/or the like.
  • the approach notification further comprises a request for the user's device to start providing close-range proximity data 316, which is used by the vehicle to get finer-grained information on the proximity of the user.
  • the approach notification may also communicate the required settings for close- range communication with the vehicle by the user's device(s), such as a network address, a short-range communication method, a vehicle identifier (ID), and/or the like.
  • the approach notification may further comprise a user prompt 320 via the user's device to initiate a user recognition process, such as by moving the user's personal device at 322.
  • the prompt may comprise a request with instructions for display to the user, such as a message (and/or graphical prompt) to "wave your device-carrying hand as if stopping a taxi," and/or the like.
  • FIG. 4A One exemplary embodiment of a requested motion is illustrated in FIG. 4A.
  • FIG. 4A depicts the user device 402, which may be moved in the leftward direction (404), the rightward direction (406), or up and down (408). The sequence of these movements may be part of the particular gesture. Additionally, the user device 402 may be rotated as a portion of a particular gesture.
  • the approach notification may prompt the user to perform a particular gesture motion, for example moving the user's device (e.g., smart phone) through the air in one or more of a circle, a figure eight, a square, a rectangle, a triangle, a polygon, a cross, a symbol, and/or the like.
  • one example of a gesture motion, a figure eight 410, is illustrated in FIG. 4B.
  • the user's device, upon notification, may start sending continuous sensor data 324 as motion indications directly to the vehicle transportation application 306.
  • the device communicates continuous accelerometer data directly to the vehicle.
  • the sensor data may further comprise gyroscopic sensor data, image capture data, magnetometer sensor data, and/or the like.
  • the communicated sensor data may be transmitted in sequences of a known length (e.g., 0.5s, 0.25s, 1s, and/or the like).
  • the sensor data may be communicated from the user's device to the vehicle indirectly.
  • the data is communicated in a predefined format, such as but not limited to: a start time; motion data; and/or the like.
  • Start time may comprise a timestamp for a sequence start time, and/or the like.
  • Motion data may comprise captured 3D motion data, a list of (timestamped) vector motion points, a 3D spline, and/or the like.
  • the communicated data may additionally comprise magnetometer data from a magnetometer sensor, for example the communicated data may include a rough compass heading of the user's device at the beginning of the sequence, to aid the vehicle in determining the user's orientation.
  • the communicated data may further include data captured by other sensors of the user's device.
  • the vehicle tracks the hand motions or other gestures of all people visible within the pickup zone at motion matching 326.
  • Hand motion or gesture recognition may be conducted by using the vehicle's own recognition method suitable for the available sensors at 328, such as camera/radar based sensors or the like.
  • the vehicle also receives the inertial sensor or gesture model data sent by the user's device.
  • the data received by the vehicle may further comprise the user's ID and/or the user's device ID.
  • once the vehicle finds the person with hand-motion-to-accelerometer-data (and/or any other sensor data transmitted from the user's device) correlation above a given recognition accuracy threshold (e.g., >90% accuracy, >95% accuracy, etc.), identification of the correct user has been made, and the exact location and orientation of the user are known.
  • the vehicle compares the gestures of individuals in the monitored area captured by the vehicle's 3D sensing to the sensor data (and/or gesture model defined by the sensor data) received from the user's device.
  • the vehicle may buffer all detected hand motion or other gesture data long enough to cater for communication delays, accelerometer (and/or other sensor) data sequence length, and/or the like.
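  • A minimal sketch of such buffering, assuming a fixed per-person history length and a pluggable correlation function; the roughly 3-second buffer and the 0.90 threshold are illustrative values consistent with, but not specified by, the description above.

```python
from collections import deque

class MotionMatchBuffer:
    """Keep a short history of observed hand motion per person so that a
    device-reported sequence arriving late can still be correlated."""

    def __init__(self, max_samples=300):          # e.g., roughly 3 s of points (assumed)
        self.history = {}                         # person_id -> deque of 3D points
        self.max_samples = max_samples

    def add(self, person_id, point_xyz):
        self.history.setdefault(person_id, deque(maxlen=self.max_samples)).append(point_xyz)

    def best_match(self, device_sequence, correlate, threshold=0.90):
        """Return (person_id, score) for the person whose buffered motion
        correlates with the device-provided sequence above the threshold."""
        best_id, best_score = None, 0.0
        for person_id, points in self.history.items():
            score = correlate(list(points), device_sequence)
            if score > best_score:
                best_id, best_score = person_id, score
        return (best_id, best_score) if best_score >= threshold else (None, best_score)
```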
  • the vehicle sends a notification 330 to the user that it has recognized the user and that she can stop performing the gesture, which in some embodiments may comprise moving the user's personal device in a designated pattern. At this point, the user stops the motion at 332 and the vehicle arrives and stops near the user at 334.
  • the vehicle may send a car location notification which may include a vehicle location description.
  • if the vehicle determines there is a chance of not being unambiguously recognized (e.g., the vehicle has to stop in a queue of other cars for pickup, instead of stopping right at the user), the vehicle can provide additional information, such as, but not limited to: the location of the vehicle in a queue, a map, directions from the user to the vehicle, and/or the like.
  • the location of the vehicle in a queue may be established via vehicle-to-vehicle ("V2V") communication and/or 3D vision or LiDAR data (or other systems as known in the art).
  • the vehicle can determine information such as the number (and/or color, make, model, etc.) of cars between the user and the correct vehicle, and present it to the user as a text message or a schematic picture.
  • a schematic picture may graphically depict numerical, color, make, model, and/or other identifying information of cars between the user and the vehicle, and/or the like.
  • the vehicle may generate and communicate a simplified map with markers for user location and the vehicle, such as based on the vehicle's internal 3D map of the area, including other cars, people, and/or the like.
  • a user-specific gesture may be used to hail an AV.
  • a user may be online with the service, and the user's device may detect a need to call an AV.
  • a smartwatch may determine by GPS or other localization methodologies that the user is standing on a sidewalk and has stuck out their arm as if to hail a cab.
  • the user could simply perform a predefined gesture to initiate the ride request (e.g., user stands on sidewalk and waves phone in a figure eight pattern).
  • the service may communicate to nearby AVs an order request as well as motion data of the gesture being made by the user (e.g., inertial sensor based model and/or camera based model) to enable matchmaking.
  • the order request may be assigned to or accepted by an AV, which then carries out similar steps as discussed above to approach the user and conduct accurate matchmaking (e.g., arrive at the correct user).
  • the disclosed method permits an AV to better differentiate a user's predefined order action from common waving or other non-ride requesting gestures.
  • a person using the service and making a gesture may be prioritized for an AV order over a person not using the service.
  • the user may perform 3D freeform gestures with a smart device in her hand.
  • the user has a single smart device.
  • the user may have a plurality of smart devices (e.g., smartphone and smart watch, in/on same or different hands).
  • the combination of two or more smart devices in a single hand of the user may permit improved capture of the user's gestures by the vehicle (such as by improved accuracy, elimination of users with only a single data source, and/or the like).
  • one or more smart devices in or on each hand or wrist of the user may permit improved capture of a user's gestures, such as by enabling a wider selection of gestures for the user.
  • FIGS. 4A-B illustrate exemplary embodiments of user performed 3D freeform hand gestures.
  • in FIGS. 5-7, vehicle 504 represents an automated vehicle, and the human shapes represent individuals.
  • FIG. 5 depicts the scene 500 that is an overhead view of a traffic intersection at a first phase.
  • the automated vehicle (504) is not able to see the user 506A with user device 506B, but is approaching the user pickup location based on location information.
  • the user is notified to perform a gesture which is shared to the AV 504.
  • both AV 504 and the user 506A are in communication with a transportation service 502 via communication links 508 and 510, respectively, as an intermediary to message transmission.
  • both AV 504 and user 506A may be transmitting location signals (e.g., dashed circles 512 and 514 around AV 504 and user 506A, respectively).
  • the user 506A sends their location (such as by latitude and longitude in a message to the AV 504 by way of the user device 506B), and once the threshold range is entered, a message is sent to user 506A to start performing the gesture.
  • FIG. 6 depicts the scene 600 that is an overhead view of a traffic intersection at a second phase.
  • the AV 504 receives the gesture performed by the user. Also, as the AV 504 moves closer to the user 506A, the AV 504 changes to P2P and/or V2V proximity measure(s) via the communication link 602 to iteratively refine distance and direction and to scan for the actual user gesture with camera and/or radar. The user 506A then receives notification that the AV 504 is scanning for a gesture for identification, and continues making the gesture from the first phase.
  • the user 506A is performing a figure eight gesture 604 by moving the user device 506B in a figure eight pattern, and the AV 504 is scanning for the figure eight gesture.
  • the AV 504 is scanning both sides of the street for the gesture (e.g., as depicted by the dashed search cone 608), and eventually recognizes the figure eight gesture 604, thus finding the user 506A performing the figure eight.
  • FIG. 7 depicts the scene 700 that is an overhead view of a traffic intersection at a third phase.
  • the AV 504 has identified the user 506A and proceeds to a pick-up location (e.g., in front of the user, in a queue of cars outside a location, etc.).
  • the AV 504 may also transmit a notification to the user that recognition has been successful, and that the user may stop making the gesture.
  • the user 506A continues performing the figure eight gesture by moving the user device 506B in the figure eight motion until the stop gesture message is received.
  • the AV 504 has turned around and is on the correct side of the street to pick up the user 506A.
  • the AV 504 continues scanning both sides of the street until the successful identification message is sent.
  • particular steps of the process may be altered or supplemented.
  • additional user information could be used in functions such as targeting notifications or vehicle visual messaging to the user (head tracking / orientation, body orientation, and/or the like).
  • a particular situation could include a notification from the vehicle to the user to "look to your left," and/or the like.
  • the vehicle could provide a compass or other bearing direction to the user with respect to the user's location and the vehicle's location, and the user could have a directional arrow displayed on the user's smartwatch (or on the user's smartphone or other device), with the arrow pointing in the direction of the vehicle from the user.
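  • For the bearing case, the standard great-circle bearing formula is sufficient; the sketch below computes the compass bearing from the user's position to the vehicle's position, which a smartwatch or phone application could render as a directional arrow. The function name is illustrative, not from the disclosure.

```python
import math

def bearing_to_vehicle(user_lat, user_lon, vehicle_lat, vehicle_lon):
    """Compass bearing in degrees (0 = north, 90 = east) from the user to the vehicle."""
    phi1, phi2 = math.radians(user_lat), math.radians(vehicle_lat)
    dlon = math.radians(vehicle_lon - user_lon)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

# The arrow shown on the user's watch or phone would point at
# bearing_to_vehicle(...) minus the device's own compass heading.
```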
  • Chloe orders a ride from an autonomous vehicle taxi service to the train station where she's just arrived. As usual, she gets a rough estimate of when the car will arrive, and description of the car. She is, however, given the option to be recognized by gesture, which she accepts, and she records a gesture in the service. At her location, there are a number of people waiting for their cars from the same company. After a while, the taxi sends a notification to Chloe, indicating which direction the taxi is coming from, and that in a couple of minutes, she will be asked to wave her hand with her personal gesture at incoming taxis. Shortly thereafter, she is instructed to start waving the gesture.
  • the taxi recognizes Chloe's hand gesture but determines that it has to stop at the end of a queue of cars picking up passengers.
  • the taxi sends a notification to Chloe saying that the taxi has noticed her and she can stop waving her hand.
  • the notification contains a local map of the immediate area as seen by the taxi at the very moment, containing symbols for both the location of the taxi and Chloe.
  • Chloe decides to walk to the taxi, using the local map as a guide. Chloe reaches the taxi by following the map, and is driven to her destination.
  • a user requests transportation service with indication of their pickup location/time.
  • the user specifies the pickup identification method as a gesture.
  • the gesture is recorded in the user's profile as a 3D vector including timed sequence points or in a gesture model feature format.
  • This info is relayed to the vehicle, in this embodiment an automated vehicle ("AV").
  • the AV queries the service backend for any other gestures that may be used by customers in the same area.
  • the service informs the AV of other potential gestures in the area.
  • the AV sends a notification to the user indicating the direction of the AV's arrival (north vs south, east vs west, etc.) and the ETA.
  • the user's device (such as smartphone, wearable, etc.) provides short range proximity data (for example by Bluetooth, Wi-Fi, Dedicated Short Range Communications, and/or the like) to be seen and help the AV with fine granularity localization.
  • the AV sends the user's device a request to perform the gesture (and optionally a reminder of the gesture).
  • the AV then moves into a gesture scanning and matching mode for the pickup zone via its stereo camera, radar/Lidar, and/or the like.
  • the user receives the request and starts making the gesture.
  • the AV sends the user a notification of recognition, which may include a gesture stop indication.
  • the AV locates the exact location and orientation of the user, and optimally stops at a short distance from the user.
  • the AV indicates its stop location to the user, such as by processing the collected V2V information (e.g., number of cars between the AV and the user) and sending the user a text (e.g., "red car, third in line") or a simple map.
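  • Read end to end, the enumerated steps could be orchestrated on the vehicle side roughly as in the sketch below; every object and method name is a placeholder standing in for the corresponding step above, not an API defined by the disclosure.

```python
def run_pickup(av, service, user_device, order):
    """Illustrative end-to-end pickup flow following the steps listed above."""
    other_gestures = service.gestures_in_area(order.pickup_area)    # gesture separation data
    av.drive_towards(order.pickup_location)                         # GPS-based approach
    user_device.notify_approach(av.approach_direction(), av.eta())  # direction + ETA notification
    av.enable_short_range_localization(user_device)                 # Bluetooth / DSRC proximity data
    user_device.request_gesture(reminder=order.gesture)             # perform-gesture request
    location = av.scan_for_gesture(order.gesture, exclude=other_gestures)
    user_device.notify_recognized()                                  # user may stop gesturing
    av.stop_near(location)                                           # stop a short distance from user
    user_device.show_stop_location(av.describe_position())           # e.g., "red car, third in line"
```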
  • FIG. 8 depicts an example method, in accordance with an embodiment.
  • FIG. 8 depicts the method 800 that includes sending a gesture-performance request 802, capturing sensor data of a designated pickup zone 804, detecting a gesture at a particular location 806, and stopping at the particular location 808.
  • an autonomous vehicle while in a designated pickup zone for a passenger, sends a gesture-performance request to a user device of the passenger.
  • the gesture-performance request may indicate to the passenger that they are to perform a gesture.
  • the autonomous vehicle receives a passenger-pickup request indicating a location of the pickup zone and the autonomous vehicle moves toward the pickup zone.
  • the autonomous vehicle sends the gesture-performance request in response to determining that the autonomous vehicle is within a predetermined distance from the pickup zone. Determining that the autonomous vehicle is within the predetermined distance from the pickup zone may be based on the vehicle's location or on receipt of a short-range proximity signal from the passenger's user device.
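  • A small sketch of that trigger condition, assuming a haversine distance check against an illustrative 40 m threshold and a flag indicating whether the passenger's short-range signal has been heard; both values and names are assumptions for illustration.

```python
import math

TRIGGER_DISTANCE_M = 40.0   # illustrative, in line with the 30-50 m examples above

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_request_gesture(vehicle_pos, pickup_pos, short_range_signal_seen):
    """Trigger the gesture-performance request once the vehicle is near the
    pickup zone, by its own localization or by hearing the device's signal."""
    return short_range_signal_seen or haversine_m(*vehicle_pos, *pickup_pos) <= TRIGGER_DISTANCE_M
```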
  • the gesture-performance request may further comprise an indication of a particular gesture the passenger is to perform.
  • the indication may be an image, a text description, a vector image, an audio recording, or a video recording of the particular gesture the passenger is to perform.
  • the autonomous vehicle sends to the user device an indication of the direction of arrival and an estimated time of arrival at the pickup zone.
  • the autonomous vehicle captures sensor data in the designated pickup zone using at least one sensor of the autonomous vehicle, and at 806, the autonomous vehicle detects the particular gesture is being performed at a particular location within the designated pickup zone.
  • the at least one sensor is a video camera, the sensor data is video data, and detecting the particular gesture comprises detecting the particular gesture in the video data.
  • the at least one sensor is a 3D sensor, the sensor data is 3D data, and detecting the particular gesture comprises detecting the particular gesture in the 3D data.
  • the at least one 3D sensor may comprise one or more of a stereo camera system, a laser scanning system, a 3D scanner, or a LiDAR system.
  • detecting a particular gesture comprises detecting the user device moving in the particular gesture or detecting the passenger moving in the particular gesture.
  • the particular gesture may be extracted from a previously stored user profile.
  • the autonomous vehicle proceeds to and stops at or near the particular location.
  • various hardware elements of one or more of the described embodiments may be referred to as modules that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules.
  • a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation.
  • Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as commonly referred to as RAM, ROM, etc.
  • Exemplary embodiments disclosed herein are implemented using one or more wired and/or wireless network nodes or user devices, such as a wireless transmit/receive unit (WTRU) or other network entity.
  • FIG. 9 is a system diagram of an exemplary WTRU 902, which may be employed as a module in embodiments described herein.
  • the WTRU 902 may include a processor 918, a communication interface 919 including a transceiver 920, a transmit/receive element 922, a speaker/microphone 924, a keypad 926, a display/touchpad 928, a nonremovable memory 930, a removable memory 932, a power source 934, a global positioning system (GPS) chipset 936, and sensors 938.
  • the processor 918 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 918 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 902 to operate in a wireless environment.
  • the processor 918 may be coupled to the transceiver 920, which may be coupled to the transmit/receive element 922. While FIG. 9 depicts the processor 918 and the transceiver 920 as separate components, it will be appreciated that the processor 918 and the transceiver 920 may be integrated together in an electronic package or chip.
  • the transmit/receive element 922 may be configured to transmit signals to, or receive signals from, a base station over the air interface 916.
  • the transmit/receive element 922 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 922 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples.
  • the transmit/receive element 922 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 922 may be configured to transmit and/or receive any combination of wireless signals.
  • although the transmit/receive element 922 is depicted in FIG. 9 as a single element, the WTRU 902 may include any number of transmit/receive elements 922. More specifically, the WTRU 902 may employ MIMO technology. Thus, in one embodiment, the WTRU 902 may include two or more transmit/receive elements 922 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 916.
  • the WTRU 902 may include two or more transmit/receive elements 922 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 916.
  • the transceiver 920 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 922 and to demodulate the signals that are received by the transmit/receive element 922.
  • the WTRU 902 may have multi-mode capabilities.
  • the transceiver 920 may include multiple transceivers for enabling the WTRU 902 to communicate via multiple RATs, such as UTRA and IEEE 802.11, as examples.
  • the processor 918 of the WTRU 902 may be coupled to, and may receive user input data from, the speaker/microphone 924, the keypad 926, and/or the display/touchpad 928 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 918 may also output user data to the speaker/microphone 924, the keypad 926, and/or the display/touchpad 928.
  • the processor 918 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 930 and/or the removable memory 932.
  • the non-removable memory 930 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 932 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • the processor 918 may access information from, and store data in, memory that is not physically located on the WTRU 902, such as on a server or a home computer (not shown).
  • the processor 918 may receive power from the power source 934, and may be configured to distribute and/or control the power to the other components in the WTRU 902.
  • the power source 934 may be any suitable device for powering the WTRU 902.
  • the power source 934 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), and the like), solar cells, fuel cells, and the like.
  • the processor 918 may also be coupled to the GPS chipset 936, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 902.
  • the WTRU 902 may receive location information over the air interface 916 from a base station and/or determine its location based on the timing of the signals being received from two or more nearby base stations (an illustrative timing-based position-estimation sketch, not part of this disclosure, follows this list). It will be appreciated that the WTRU 902 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
  • the processor 918 may further be coupled to other peripherals 938, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 938 may include sensors such as an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
  • FIG. 10 depicts an exemplary network entity 1090 that may be used in embodiments of the present disclosure.
  • network entity 1090 includes a communication interface 1092, a processor 1094, and non-transitory data storage 1096, all of which are communicatively linked by a bus, network, or other communication path 1098.
  • Communication interface 1092 may include one or more wired communication interfaces and/or one or more wireless communication interfaces. With respect to wired communication, communication interface 1092 may include one or more interfaces such as Ethernet interfaces, as an example. With respect to wireless communication, communication interface 1092 may include components such as one or more antennae, one or more transceivers/chipsets designed and configured for one or more types of wireless (e.g., LTE) communication, and/or any other components deemed suitable by those of skill in the relevant art. And further with respect to wireless communication, communication interface 1092 may be equipped at a scale and with a configuration appropriate for acting on the network side, as opposed to the client side, of wireless communications (e.g., LTE communications, Wi-Fi communications, and the like). Thus, communication interface 1092 may include the appropriate equipment and circuitry (perhaps including multiple transceivers) for serving multiple mobile stations, UEs, or other access terminals in a coverage area.
  • Processor 1094 may include one or more processors of any type deemed suitable by those of skill in the relevant art, some examples including a general-purpose microprocessor and a dedicated DSP.
  • Data storage 1096 may take the form of any non-transitory computer-readable medium or combination of such media, some examples including flash memory, read-only memory (ROM), and random-access memory (RAM) to name but a few, as any one or more types of non-transitory data storage deemed suitable by those of skill in the relevant art could be used.
  • data storage 1096 contains program instructions 1097 executable by processor 1094 for carrying out various combinations of the various network-entity functions described herein.
  • Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
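
As a purely illustrative aside (not part of this disclosure), the timing-based location determination mentioned in the list above can be sketched as follows: signal timings from three or more base stations at known coordinates are converted to ranges, and a position is recovered with a linearized least-squares fit. The station coordinates, timings, and solver in this Python sketch are hypothetical assumptions, not the actual implementation of the WTRU 902.

```python
import numpy as np

C = 299_792_458.0  # speed of light, meters per second


def estimate_position(stations, arrival_times, tx_time):
    """Estimate a 2D position from one-way signal timings.

    stations      -- (N, 2) array-like of known base-station coordinates (m)
    arrival_times -- length-N array-like of reception times (s)
    tx_time       -- common transmission time (s)

    Each timing is converted to a range (time of flight * C); the circle
    equations are then linearized against the first station and solved
    with least squares. Requires N >= 3 stations for a 2D fix.
    """
    stations = np.asarray(stations, dtype=float)
    ranges = (np.asarray(arrival_times, dtype=float) - tx_time) * C

    x0, y0 = stations[0]
    # Subtracting station 0's circle equation from the others removes the
    # quadratic terms, leaving the linear system A @ [x, y] = b.
    A = 2.0 * (stations[1:] - stations[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(stations[1:] ** 2, axis=1)
         - (x0 ** 2 + y0 ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position


if __name__ == "__main__":
    stations = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
    true_position = np.array([400.0, 250.0])
    arrival_times = [np.linalg.norm(true_position - s) / C for s in stations]
    print(estimate_position(stations, arrival_times, tx_time=0.0))  # ~[400. 250.]
```

With the hypothetical three stations above, the example recovers the assumed position of roughly (400, 250) meters; a real deployment would also have to account for clock offsets and multipath, which this sketch ignores.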

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Finance (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Development Economics (AREA)
  • Mechanical Engineering (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Operations Research (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Primary Health Care (AREA)
  • Traffic Control Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Systems and methods for automated recognition of a transportation customer are disclosed. One embodiment takes the form of a method comprising: an autonomous vehicle, located in a pickup zone for a passenger, sending a gesture-performance request to a user device of the passenger; the autonomous vehicle capturing sensor data in the capture zone using at least one sensor of the autonomous vehicle; and the autonomous vehicle detecting, in the captured sensor data, that a particular gesture is being performed at a particular location within the pickup zone and, in response, stopping at the particular location.
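
As a rough, non-authoritative illustration of the request-capture-detect-stop flow described in the abstract (not taken from the claims or description), the following Python sketch requests a gesture from the passenger's device, polls captured sensor frames for that gesture inside the pickup zone, and stops at the detected location. Every name, the frame format, and the vehicle API (capture_sensor_frame, in_pickup_zone, stop_at) are hypothetical stand-ins.

```python
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class GestureDetection:
    gesture: str
    location: tuple  # (x, y) within the pickup zone, meters


def send_gesture_request(user_device_id: str, gesture: str) -> None:
    # Hypothetical stand-in for a network message to the passenger's device.
    print(f"Requesting device {user_device_id} to perform gesture '{gesture}'")


def detect_gesture(frame: dict, gesture: str) -> Optional[GestureDetection]:
    # Hypothetical detector: a real system might run a pose-estimation or
    # gesture-classification model over camera/LIDAR data here.
    if frame.get("gesture") == gesture:
        return GestureDetection(gesture, frame["location"])
    return None


def pickup_loop(vehicle, user_device_id: str, gesture: str = "wave",
                timeout_s: float = 60.0) -> bool:
    """Request a gesture, watch sensor data for it, and stop at its location."""
    send_gesture_request(user_device_id, gesture)
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        frame = vehicle.capture_sensor_frame()       # assumed vehicle API
        detection = detect_gesture(frame, gesture)
        if detection and vehicle.in_pickup_zone(detection.location):
            vehicle.stop_at(detection.location)      # assumed vehicle API
            return True
    return False


if __name__ == "__main__":
    class MockVehicle:
        def capture_sensor_frame(self):
            return {"gesture": "wave", "location": (3.0, 1.5)}
        def in_pickup_zone(self, location):
            return True
        def stop_at(self, location):
            print(f"Stopping at {location}")

    print(pickup_loop(MockVehicle(), user_device_id="device-123"))
```

In a deployed system the detect_gesture stub would be replaced by an actual gesture-recognition model operating on camera or LIDAR frames; the sketch only mirrors the order of operations recited in the abstract.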
PCT/US2017/019959 2016-03-08 2017-02-28 Système et procédé de reconnaissance automatisée d'un client de transport WO2017155740A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662305301P 2016-03-08 2016-03-08
US62/305,301 2016-03-08

Publications (1)

Publication Number Publication Date
WO2017155740A1 true WO2017155740A1 (fr) 2017-09-14

Family

ID=58358855

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/019959 WO2017155740A1 (fr) 2016-03-08 2017-02-28 Système et procédé de reconnaissance automatisée d'un client de transport

Country Status (1)

Country Link
WO (1) WO2017155740A1 (fr)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3480620A1 (fr) * 2017-11-02 2019-05-08 Aptiv Technologies Limited Procédé d'accès mains libres
EP3566022A4 (fr) * 2017-01-09 2020-01-08 nuTonomy Inc. Signalisation d'emplacement par rapport à un véhicule autonome et un passager
CN110789444A (zh) * 2018-08-03 2020-02-14 通用汽车环球科技运作有限责任公司 基于客户位置的外部人机界面显示调整
WO2020076536A1 (fr) * 2018-10-10 2020-04-16 Waymo Llc Panneaux intelligents pour véhicule autonome
EP3650295A1 (fr) * 2018-11-09 2020-05-13 Baidu Online Network Technology (Beijing) Co., Ltd. Procédé et appareil de commande de véhicule autonome
WO2020104647A1 (fr) * 2018-11-23 2020-05-28 Bayerische Motoren Werke Aktiengesellschaft Système d'aide à la conduite pour un véhicule à conduite automatisée et procédé pour guider un véhicule à conduite automatisée
US20200183415A1 (en) * 2018-12-10 2020-06-11 GM Global Technology Operations LLC System and method for control of an autonomous vehicle
US10740863B2 (en) 2017-01-09 2020-08-11 nuTonomy Inc. Location signaling with respect to an autonomous vehicle and a rider
USD894020S1 (en) 2018-12-13 2020-08-25 Waymo Llc Three-dimensional sign
WO2020193392A1 (fr) * 2019-03-28 2020-10-01 Volkswagen Aktiengesellschaft Procédé de guidage cible vers une personne cible, dispositif électronique de la personne cible et dispositif électronique du ramasseur ainsi que véhicule automobile
US11067986B2 (en) * 2017-04-14 2021-07-20 Panasonic Intellectual Property Corporation Of America Autonomous driving vehicle, method of stopping autonomous driving vehicle, and recording medium
EP3855406A4 (fr) * 2018-10-24 2021-12-01 Yamaha Hatsudoki Kabushiki Kaisha Véhicule à conduite autonome
US11365014B2 (en) * 2016-07-04 2022-06-21 SZ DJI Technology Co., Ltd. System and method for automated tracking and navigation
USD958245S1 (en) 2018-10-10 2022-07-19 Waymo Llc Three-dimensional sign
US20220250657A1 (en) * 2019-08-09 2022-08-11 Harman International Industries, Incorporated Autonomous vehicle interaction system
USD960722S1 (en) 2018-12-13 2022-08-16 Waymo Llc Three-dimensional sign
US11473923B2 (en) * 2017-10-26 2022-10-18 Toyota Jidosha Kabushiki Kaisha Vehicle dispatch system for autonomous driving vehicle and autonomous driving vehicle
US11496865B2 (en) 2017-05-05 2022-11-08 Pcms Holdings, Inc. Privacy-preserving location based services
US11543824B2 (en) 2018-10-09 2023-01-03 Waymo Llc Queueing into pickup and drop-off locations

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130155237A1 (en) * 2011-12-16 2013-06-20 Microsoft Corporation Interacting with a mobile device within a vehicle using gestures
KR20150006270A (ko) * 2013-07-08 2015-01-16 체이시로보틱스(주) 차량의 자동주차 장치
US20150160735A1 (en) * 2013-12-10 2015-06-11 Hyundai Motor Company System and method for recognizing user's gesture for carrying out operation of vehicle
US20150339928A1 (en) * 2015-08-12 2015-11-26 Madhusoodhan Ramanujam Using Autonomous Vehicles in a Taxi Service

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11365014B2 (en) * 2016-07-04 2022-06-21 SZ DJI Technology Co., Ltd. System and method for automated tracking and navigation
EP3566022A4 (fr) * 2017-01-09 2020-01-08 nuTonomy Inc. Signalisation d'emplacement par rapport à un véhicule autonome et un passager
US10740863B2 (en) 2017-01-09 2020-08-11 nuTonomy Inc. Location signaling with respect to an autonomous vehicle and a rider
US11067986B2 (en) * 2017-04-14 2021-07-20 Panasonic Intellectual Property Corporation Of America Autonomous driving vehicle, method of stopping autonomous driving vehicle, and recording medium
US11496865B2 (en) 2017-05-05 2022-11-08 Pcms Holdings, Inc. Privacy-preserving location based services
US11473923B2 (en) * 2017-10-26 2022-10-18 Toyota Jidosha Kabushiki Kaisha Vehicle dispatch system for autonomous driving vehicle and autonomous driving vehicle
EP3480620A1 (fr) * 2017-11-02 2019-05-08 Aptiv Technologies Limited Procédé d'accès mains libres
CN110789444A (zh) * 2018-08-03 2020-02-14 通用汽车环球科技运作有限责任公司 基于客户位置的外部人机界面显示调整
CN110789444B (zh) * 2018-08-03 2023-02-03 通用汽车环球科技运作有限责任公司 基于客户位置的外部人机界面显示调整
US11543824B2 (en) 2018-10-09 2023-01-03 Waymo Llc Queueing into pickup and drop-off locations
US11977387B2 (en) 2018-10-09 2024-05-07 Waymo Llc Queueing into pickup and drop-off locations
USD995631S1 (en) 2018-10-10 2023-08-15 Waymo Llc Three-dimensional sign
US10854085B2 (en) 2018-10-10 2020-12-01 Waymo Llc Smart signs for autonomous vehicles
USD958245S1 (en) 2018-10-10 2022-07-19 Waymo Llc Three-dimensional sign
CN112889082A (zh) * 2018-10-10 2021-06-01 伟摩有限责任公司 自主车辆的智能标志
USD958244S1 (en) 2018-10-10 2022-07-19 Waymo Llc Three-dimensional sign
US11443634B2 (en) 2018-10-10 2022-09-13 Waymo Llc Smart signs for autonomous vehicles
WO2020076536A1 (fr) * 2018-10-10 2020-04-16 Waymo Llc Panneaux intelligents pour véhicule autonome
US11983940B2 (en) 2018-10-24 2024-05-14 Yamaha Hatsudoki Kabushiki Kaisha Autonomous vehicle
EP3855406A4 (fr) * 2018-10-24 2021-12-01 Yamaha Hatsudoki Kabushiki Kaisha Véhicule à conduite autonome
US11269324B2 (en) * 2018-11-09 2022-03-08 Apollo Intelligent Driving Technology (Beijing) Co., Ltd. Method and apparatus for controlling autonomous vehicle
EP3650295A1 (fr) * 2018-11-09 2020-05-13 Baidu Online Network Technology (Beijing) Co., Ltd. Procédé et appareil de commande de véhicule autonome
CN112888614A (zh) * 2018-11-23 2021-06-01 宝马股份公司 用于自动驾驶车辆的驾驶辅助系统和用于引导自动驾驶车辆的方法
WO2020104647A1 (fr) * 2018-11-23 2020-05-28 Bayerische Motoren Werke Aktiengesellschaft Système d'aide à la conduite pour un véhicule à conduite automatisée et procédé pour guider un véhicule à conduite automatisée
CN111301413A (zh) * 2018-12-10 2020-06-19 通用汽车环球科技运作有限责任公司 用于控制自主车辆的系统和方法
US20200183415A1 (en) * 2018-12-10 2020-06-11 GM Global Technology Operations LLC System and method for control of an autonomous vehicle
USD960722S1 (en) 2018-12-13 2022-08-16 Waymo Llc Three-dimensional sign
USD958243S1 (en) 2018-12-13 2022-07-19 Waymo Llc Three-dimensional sign
USD894020S1 (en) 2018-12-13 2020-08-25 Waymo Llc Three-dimensional sign
WO2020193392A1 (fr) * 2019-03-28 2020-10-01 Volkswagen Aktiengesellschaft Procédé de guidage cible vers une personne cible, dispositif électronique de la personne cible et dispositif électronique du ramasseur ainsi que véhicule automobile
US20220250657A1 (en) * 2019-08-09 2022-08-11 Harman International Industries, Incorporated Autonomous vehicle interaction system

Similar Documents

Publication Publication Date Title
WO2017155740A1 (fr) Système et procédé de reconnaissance automatisée d'un client de transport
US11721098B2 (en) Augmented reality interface for facilitating identification of arriving vehicle
US11097690B2 (en) Identifying and authenticating autonomous vehicles and passengers
JP5871952B2 (ja) ナビゲーションのためのカメラ対応ヘッドセット
CN111524381B (zh) 一种停车位置的推送方法、装置、系统及电子设备
US20190228246A1 (en) Pickup Service Based on Recognition Between Vehicle and Passenger
CN110519555B (zh) 显示控制装置以及计算机可读存储介质
CN110686694A (zh) 导航方法、装置、可穿戴电子设备及计算机可读存储介质
US20200058220A1 (en) Electronic apparatus, roadside unit, and transport system
US20190147743A1 (en) Vehicle guidance based on location spatial model
CN110556022A (zh) 基于人工智能的停车场车位指示方法、装置及电子设备
CN112041862A (zh) 用于通过自主车辆进行乘客识别的方法和车辆系统
JP4059154B2 (ja) 情報送受信装置、情報送受信用プログラム
US11943566B2 (en) Communication system
US20210089983A1 (en) Vehicle ride-sharing assist system
CN110301133B (zh) 信息处理装置、信息处理方法和计算机可读记录介质
CN114755670A (zh) 用于协助驾驶员和乘客彼此定位的系统和方法
EP3591589A1 (fr) Identification et authentification de véhicules autonomes et de passagers
CN115484288B (zh) 智能寻车系统及寻车方法
WO2023005961A1 (fr) Procédé de positionnement de véhicules et dispositif associé
US20230171829A1 (en) Communication system
WO2023065810A1 (fr) Procédé et appareil d'acquisition d'images, terminal mobile et support d'enregistrement informatique
WO2019207749A1 (fr) Système informatique, procédé d'estimation de trajectoire de corps mobile et programme
CN113836963A (zh) 寻车方法、移动终端及寻车系统
TW202240475A (zh) 資訊處理裝置、資訊處理系統、資訊處理方法及記錄媒體

Legal Events

Date Code Title Description
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17711855

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17711855

Country of ref document: EP

Kind code of ref document: A1