CN115527021A - User positioning method and device - Google Patents

User positioning method and device

Info

Publication number
CN115527021A
Authority
CN
China
Prior art keywords
user
information
target user
target
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211335346.8A
Other languages
Chinese (zh)
Inventor
郭朝斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunkong Zhixing Technology Co Ltd
Original Assignee
Yunkong Zhixing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunkong Zhixing Technology Co Ltd filed Critical Yunkong Zhixing Technology Co Ltd
Priority to CN202211335346.8A priority Critical patent/CN115527021A/en
Publication of CN115527021A publication Critical patent/CN115527021A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/84 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks
    • G06V10/85 Markov-related models; Markov random fields
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of this specification disclose a user positioning method and device, comprising: in video collected by a roadside device, if user authorization information displayed by a target user is detected, identifying the target user according to the user authorization information and extracting user feature information of the target user; and tracking and positioning the target user in the video collected by the roadside device based on the user feature information to obtain real-time position information of the target user. The target user is thus identified, tracked, and positioned through roadside video on the basis of the user authorization information the target user displays. Because positioning based on a roadside device is generally more accurate than satellite positioning, the user positioning method disclosed in the embodiments of this specification can improve the accuracy and precision of user positioning.

Description

User positioning method and device
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a user positioning method and device.
Background
Online ride-hailing, short for the online ride-hailing operation service, refers to the business of building a service platform on Internet technology, admitting qualified vehicles and drivers, and providing non-cruising ride-booking services by integrating supply and demand information. After a passenger initiates a ride request, the passenger's position information and the origin and destination of the trip are sent to the ride-hailing platform. The trip origin is typically determined from the passenger's position information. Once the ride request is successfully matched with a vehicle, the driver navigates to the pickup point at the trip origin to meet the passenger.
In the prior art, the passenger's position information is usually satellite positioning information acquired through a mobile phone, with a positioning error of several meters to more than ten meters. As a result, the passenger's position shown in the ride-hailing app can deviate substantially from the actual position, and pickup guidance works poorly: passengers and drivers often have to phone each other to describe their exact locations and spend time searching for one another.
Therefore, a method for improving the accuracy and precision of user positioning is needed.
Disclosure of Invention
To solve the above technical problem, embodiments of this specification provide a user positioning method and device to improve the accuracy and precision of user positioning.
An embodiment of the present specification provides a user positioning method, including:
acquiring first video data of a video to be detected, collected by a roadside device;
determining user feature information of a target user who displays user authorization information in the first video data of the video to be detected, the user authorization information being information displayed when the target user needs to use a service related to user position information; and
tracking the target user based on the user feature information to obtain first position information of the target user.
An embodiment of the present specification provides a user positioning device, including:
at least one processor; and
a memory communicatively connected to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
the method includes the steps of obtaining first video data collected by road side equipment.
Determining user characteristic information of a target user showing user authorization information in the first video data; the user authorization information is information that is presented when the user needs to use a service related to user location information.
And tracking the target user based on the user characteristic information to obtain first position information of the target user.
At least one of the technical schemes adopted in the embodiments of this specification can achieve the following beneficial effects:
The embodiments of this specification disclose a user positioning method and device, comprising: in video collected by a roadside device, if user authorization information displayed by a target user is detected, identifying the target user according to the user authorization information and extracting user feature information of the target user; and tracking and positioning the target user in the video collected by the roadside device based on the user feature information to obtain real-time position information of the target user. The target user is thus identified, tracked, and positioned through roadside video on the basis of the user authorization information the target user displays. Because positioning based on a roadside device is generally more accurate than satellite positioning, the user positioning method disclosed in the embodiments of this specification achieves higher positioning accuracy than satellite positioning and can therefore improve the accuracy and precision of user positioning.
Drawings
To illustrate the technical solutions in the embodiments of this specification or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a user positioning method according to an embodiment of the present disclosure.
Fig. 2 is a schematic view of an application scenario of a user positioning method provided in an embodiment of the present specification.
Fig. 3 is a schematic view of an application scenario for guiding a target vehicle to interface with a user based on the user positioning method provided in the embodiment of the present specification.
Fig. 4 is a schematic structural diagram of a user positioning apparatus corresponding to fig. 1 according to an embodiment of the present disclosure.
Detailed Description
To help those skilled in the art better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification are described below clearly and completely with reference to the drawings in the embodiments. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this specification without creative effort fall within the scope of protection of the present application.
In practical applications, accurate user position information may be needed in various scenarios such as ride-hailing, navigation, and food or parcel delivery. In the prior art, user position information is usually satellite positioning information acquired through a mobile phone, but the satellite positioning error is generally several meters to more than ten meters, so the precision of the user position information cannot meet usage requirements.
In addition, when a user forgets to carry a handheld device, or when certain users find it inconvenient to operate one, the user can neither be positioned nor send a service request.
To remedy these defects of the prior art, the present solution provides the following embodiments:
fig. 1 is a schematic flowchart of a user positioning method provided in an embodiment of the present specification.
From the program perspective, the execution subject of the flow may be a roadside device or a server connected to the roadside device (for example, one or more cloud servers such as an edge cloud, a regional cloud, and a central cloud), or an application program installed on the roadside device or the server. As shown in fig. 1, the process may include the following steps:
step 101: the method includes the steps of obtaining first video data collected by road side equipment.
In the embodiments of this specification, the roadside device may be a roadside unit (RSU) having only roadside sensing capability, or a roadside computing unit (RCU) that also has roadside edge computing capability.
In an embodiment of this specification, the first video data is video data collected by a roadside device. The first video data can be used for extracting user characteristic information.
Step 103: determining user characteristic information of a target user showing user authorization information in the first video data; the user authorization information is information displayed when the target user needs to use the service related to the user position information.
In the embodiments of this specification, the user authorization information is used to identify the target user in the video data collected by the roadside device. The user authorization information may include at least one of: text displayed by the user through a portable device, images displayed by the user through a portable device, and actions or gestures made by the user. Specifically, user authorization information in text form may include text that can determine the user's identity, such as a user name, a unique user identifier (UserID), or an order number; user authorization information in image form may include a graphic code (barcode, two-dimensional code, etc.) generated from such information as the user name, unique user identifier (UserID), or order number. An action or gesture made by the user may include one or a set of designated actions or gestures; this is not specifically limited here, as long as it can distinguish whether the user intends to initiate a service request.
In this embodiment, the target user may be any user who needs to use a service related to user position information: for example, a user hailing a ride or waiting for a food-delivery or express courier, a pedestrian or rider of a non-motor vehicle (bicycle, electric bicycle) using navigation, or a group of users using team navigation; this is not limited here. The service may be any service that needs to acquire user position information during its flow, such as a ride-hailing, food-delivery, express-delivery, or navigation service.
In this embodiment of the present description, the user feature information may include color features, grayscale features, texture features, contours, optical flow features, corner features, and the like, and may also include face features, gait features, and the like, which is not specifically limited herein.
Step 105: and tracking the target user based on the user characteristic information to obtain first position information of the target user.
In this embodiment of the present specification, based on the user feature information, the target user is identified, tracked, and located from video data that may include the target user, and finally, first location information of the target user is obtained.
In the embodiments of this specification, in the video collected by the roadside device, if user authorization information displayed by a target user is detected, the target user is identified according to the user authorization information and the user feature information of the target user is extracted. The target user is then tracked and positioned in the video collected by the roadside device based on the user feature information, yielding real-time position information of the target user. The target user is thus identified, tracked, and positioned through roadside video on the basis of the user authorization information the target user displays. Since positioning based on a roadside device is generally more accurate than satellite positioning, the user positioning method disclosed in the embodiments of this specification achieves positioning accuracy higher than that of satellite positioning.
Based on the process in fig. 1, some specific embodiments of the process are also provided in the examples of this specification, which are described below.
Optionally, the user authorization information is graphic coding information carrying user identity information;
the determining the user characteristic information of the target user showing the user authorization information in the first video data specifically includes:
acquiring graphical coding information displayed by the target user in the first video data;
determining the user identity information of the target user by analyzing the graphic coding information;
and extracting user characteristic information of the target user from the first video data based on the graphical coding information and the user identity information.
In the embodiments of this specification, the graphic coding information may include a two-dimensional code or a barcode. It may be generated by the roadside device, a server connected to the roadside device, or a device of the service provider and sent to the target user's device, or it may be generated directly by the target user's device.
In this embodiment, the user identity information may include one or more of a user name, a unique user identifier, a number of the service, and the like.
In the embodiments of this specification, if graphic coding information is detected, it is parsed to obtain a recognition result. If the recognition result includes user identity information, the user displaying the graphic coding information is determined to be the user corresponding to that identity information (i.e., the target user). The image of the target user displaying the graphic coding information is then located in the first video data according to the user identity information, and the user feature information is extracted from that image.
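The parsing-and-verification step described above can be sketched as follows. This is an illustrative assumption, not a detail from the patent: the token format (JSON fields uid, order, ts), the shared secret, and the HMAC signature scheme are all hypothetical; a real deployment would define its own payload for the graphic code.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

# Assumed shared secret between the service platform and the roadside system.
SECRET = b"demo-shared-secret"

def encode_authorization(user_id: str, order_no: str, timestamp: int) -> str:
    """Build the string that would be rendered as a QR code or barcode."""
    body = json.dumps({"uid": user_id, "order": order_no, "ts": timestamp},
                      sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()[:16]
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def decode_authorization(token: str) -> Optional[dict]:
    """Parse a scanned token, verify its signature, and return the
    user identity fields, or None if the token is malformed or forged."""
    try:
        b64, sig = token.rsplit(".", 1)
        body = base64.urlsafe_b64decode(b64)
    except ValueError:
        return None
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(body)
```

In practice the token string would first be recovered from a video frame by a code reader (for example OpenCV's QR detector) before being handed to a verifier like `decode_authorization`.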
In this embodiment, if the target user is located within a video capture range of multiple roadside devices at the same time, the user feature information may be extracted from multiple pieces of first video data captured by the roadside devices.
In the embodiments of this specification, the process of extracting the user feature information may further include:
tracking the target user in the portion of the first video data preceding the moment the target user displays the graphic coding information, and extracting user feature information from the tracking result.
In this embodiment, the process of extracting the user feature information may further include: updating and refining the user feature information according to subsequent tracking results for the target user.
In this embodiment, the image of the target user is located in the first video data according to the image captured when the target user displays the graphic coding information, and the user feature information of the target user is then extracted. In this way, the user feature information of the target user is extracted from the video images collected by the roadside device.
Optionally, before acquiring the first video data collected by the roadside device, the method further includes:
acquiring second position information of the target user obtained based on target user equipment;
judging whether the target user is located within a video acquisition range of the road side equipment or not based on the second position information to obtain a first judgment result;
if the first judgment result shows that the target user is located within the video capture range of the roadside device, sending authorization prompt information to the target user's device; the authorization prompt information is used to prompt the user to display the user authorization information.
In this embodiment, the target user equipment may include a mobile phone, a wearable device, and other portable devices.
In this embodiment, the second location information refers to user location information obtained based on a target user equipment, and may include satellite positioning information and may also include UWB (Ultra Wide-Band) positioning information.
In this embodiment of the present specification, the authorization prompt information may carry user authorization information generated by the server, or carry an instruction that instructs the target user equipment to generate the user authorization information.
In the embodiments of this specification, after a service request is received from a user, whether the target user is located within the video capture range of the roadside device is judged according to the second position information obtained from the target user's device. If so, the authorization prompt information is sent to the target user's device, so that once the user displays the user authorization information as prompted, the target user can be identified and the user feature information extracted.
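The first judgment (is the device-reported fix inside the roadside device's video capture range?) can be sketched as below. The circular coverage model, the 80 m radius, and the coordinates are assumptions for illustration; the patent does not specify how coverage is modeled.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_capture_range(user_pos, rsu_pos, radius_m=80.0):
    """First judgment result: True if the satellite/UWB fix for the user
    falls inside the modeled coverage circle of the roadside device."""
    return haversine_m(*user_pos, *rsu_pos) <= radius_m
```

Only when this returns True would the server push the authorization prompt to the target user's device.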
Optionally, the obtaining a video tracking result for the target user based on the user feature information specifically includes:
according to a visual tracking algorithm, obtaining a video tracking result for the target user based on the user feature information and second video data; the second video data is video data that is estimated in advance to contain the target user.
The obtaining of the first position information of the target user based on the video tracking result specifically includes:
and obtaining first position information of the target user based on the video tracking result according to a visual ranging algorithm.
In the embodiments of this specification, the visual tracking algorithm is used to determine, according to the user feature information, the region or position of the target user's image in the video frames of the second video data. The visual tracking algorithm may specifically include the TLD (Tracking-Learning-Detection) algorithm, deep-learning-based algorithms, and the like.
In this embodiment, the video tracking result may include a region or a position of the image of the target user in the second video data.
In the embodiments of this specification, the visual ranging algorithm may be used to determine the first position information of the target user according to the video tracking result and the pose information of the camera in the roadside device. The visual ranging algorithm may include monocular, binocular, and multi-camera visual positioning algorithms.
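As a back-of-envelope illustration of monocular ranging, the sketch below projects the bottom pixel row of a tracked bounding box onto the ground plane using the camera's known height and pitch. A flat ground plane and an ideal pinhole camera are assumed; real roadside deployments would use per-device calibrated intrinsics and extrinsics, which the patent does not detail.

```python
import math

def ground_distance(v_pixel, img_h, fov_v_deg, cam_height, pitch_deg):
    """Horizontal distance (m) from the camera to the ground point seen
    at image row v_pixel (0 = top of image), for a camera mounted
    cam_height meters up and pitched down by pitch_deg degrees."""
    # Focal length in pixels from the vertical field of view.
    f = (img_h / 2) / math.tan(math.radians(fov_v_deg) / 2)
    # Angle of the viewing ray below horizontal: ray-vs-axis angle plus pitch.
    ray = math.atan((v_pixel - img_h / 2) / f) + math.radians(pitch_deg)
    return cam_height / math.tan(ray)
```

Combined with the device's surveyed position and heading, such a distance estimate yields the target user's first position information in world coordinates.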
In the embodiments of this specification, during the tracking of the target user, several candidate targets and their corresponding confidence scores may be identified in the second video data according to the user feature information. In that case, the position information corresponding to each candidate target can be calculated, and auxiliary judgment can be performed based on the target user's historical position information (previously determined first position information) and/or second position information (satellite positioning information, UWB positioning information, etc.). For example, based on the target user's last historical position and its positioning time, together with a candidate's position and the current time, candidates whose movement over that interval exceeds a normal movement distance can be screened out. The confidence scores may also be corrected based on the historical position information and/or the second position information.
In the embodiments of this specification, for candidate targets appearing in second video data collected by multiple roadside devices, the corresponding confidence scores may also be corrected based on the candidates' position information.
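The plausibility screen described above can be sketched as follows: discard candidate targets whose implied speed since the last confirmed fix exceeds what a pedestrian could cover. The 3 m/s walking-speed cap and the local metric coordinate frame are assumed parameters for the sketch, not values from the patent.

```python
def screen_candidates(candidates, last_pos, last_time, now, max_speed=3.0):
    """candidates: list of (x, y, confidence) tuples in a local metric frame.
    Returns only the candidates reachable from last_pos within the
    elapsed time at max_speed (m/s)."""
    dt = max(now - last_time, 1e-6)  # guard against a zero interval
    reachable = []
    for x, y, conf in candidates:
        dist = ((x - last_pos[0]) ** 2 + (y - last_pos[1]) ** 2) ** 0.5
        if dist / dt <= max_speed:
            reachable.append((x, y, conf))
    return reachable
```

The same distance-per-interval quantity could instead be used to down-weight a candidate's confidence score rather than drop it outright.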
Optionally, the tracking processing is performed on the target user based on the user characteristic information to obtain the first position information of the target user, and specifically includes:
determining position information of first road side equipment; the first road side equipment is road side equipment which acquires video data containing the target user;
determining second road side equipment based on the position information of the first road side equipment and the position relation between the road side equipment; the second road side equipment is predicted road side equipment capable of acquiring video data containing the target user;
acquiring second video data acquired by the second road side equipment;
based on a visual tracking algorithm, obtaining a video tracking result aiming at the target user according to the user characteristic information and the second video data;
and obtaining first position information of the target user based on the video tracking result according to a fusion algorithm.
In the embodiments of this specification, the tracking processing performed on the target user may be executed in a loop. After each cycle ends, the first roadside device, i.e., the device that most recently acquired video data containing the target user, is determined from the latest video tracking result. The first roadside device may also include the roadside device that acquired the first video data used to extract the user feature information.
In this embodiment, the second roadside devices may include the first roadside device and third roadside devices around it. Specifically, a third roadside device may be one whose video capture range the target user can enter directly from the video capture range of the first roadside device, without passing through the video capture range of any other roadside device. A third roadside device may also be one whose distance from a first roadside device is less than a preset distance.
In this embodiment, the third roadside devices may further be limited to only those roadside devices that satisfy conditions related to the video capture range, or other specific conditions. Specifically, roadside devices can be further screened according to the most recently determined first position information of the target user and the movement direction and speed determined from the video tracking result, yielding the third roadside devices.
In the embodiments of this specification, the step of tracking the target user may also be executed only by a roadside device, or by a computing device directly connected to the roadside device. In this case, the method may further include: after the second roadside devices are determined, sending the user feature information of the target user to those roadside devices, so that they perform the tracking processing based on the user feature information.
In the embodiments of this specification, the second roadside devices that are estimated to be able to capture video data containing the target user are determined from the position of the first roadside device and the positional relationships among roadside devices. On the one hand, determining the second roadside devices prevents the target user from being lost in the collected video data; on the other hand, the target user does not need to be searched for in the video data of all other roadside devices, which reduces the computation required for tracking.
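The handoff prediction above can be sketched as follows: given the first roadside device and a table of adjacent devices, pick the devices whose coverage the user could enter next, using the last position, heading, and speed. The neighbour graph, the one-second look-ahead, and the 100 m reach are assumptions for the sketch.

```python
def predict_second_devices(first_id, neighbours, device_pos, user_pos,
                           velocity, horizon=1.0, reach_m=100.0):
    """neighbours: {device_id: [ids of devices with adjoining video ranges]}.
    device_pos: {device_id: (x, y)} in a local metric frame.
    Returns the first device plus the adjacent devices within reach of
    the user's extrapolated position (second roadside devices)."""
    # Extrapolate the user's position over the look-ahead horizon.
    px = user_pos[0] + velocity[0] * horizon
    py = user_pos[1] + velocity[1] * horizon
    out = [first_id]  # the first device remains a candidate
    for dev in neighbours.get(first_id, []):
        dx, dy = device_pos[dev]
        if ((dx - px) ** 2 + (dy - py) ** 2) ** 0.5 <= reach_m:
            out.append(dev)
    return out
```

Only the returned devices would then be asked for second video data (or sent the user feature information), keeping the tracking workload bounded.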
Optionally, the method further includes:
acquiring third position information of a target user, which is obtained based on target user equipment;
the obtaining the first position information of the target user based on the video tracking result according to the fusion algorithm further comprises:
and obtaining first position information of the target user based on the video tracking result and the third position information according to a fusion algorithm.
In this embodiment, the third position information refers to user position information obtained from the target user equipment, and may include satellite positioning information and/or UWB positioning information.

In the embodiment of the present specification, the fusion algorithm may include a Kalman filter algorithm and a hidden Markov model.

In the embodiment of the present specification, the position information obtained from the video tracking result and the third position information are processed by the fusion algorithm to obtain the first position information of the target user.
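For intuition only, the simplest form of such a fusion, a static Kalman-style (inverse-variance) measurement update combining the video-derived position with the third position information, can be sketched as follows. The per-source variances are hypothetical tuning inputs; a full Kalman filter would additionally carry a motion model and a state covariance across time steps.

```python
def fuse_positions(video_pos, video_var, device_pos, device_var):
    """Inverse-variance (static Kalman update) fusion of two position fixes.

    video_pos:  (x, y) from the visual tracking result
    device_pos: (x, y) third position information (e.g. GNSS/UWB) from the
                target user equipment; the variances are per-source noise
                estimates, assumed known here for illustration.
    """
    w_video = 1.0 / video_var
    w_dev = 1.0 / device_var
    # Weighted average per axis; more trusted (lower-variance) source dominates.
    fused = tuple((w_video * v + w_dev * d) / (w_video + w_dev)
                  for v, d in zip(video_pos, device_pos))
    fused_var = 1.0 / (w_video + w_dev)  # fused estimate is tighter than either input
    return fused, fused_var
```

With equal variances the result is the midpoint; as one source's variance grows, the fused first position information converges to the other source's fix.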
Optionally, the user authorization information is target body posture information for initiating a service request;
before determining the user characteristic information of the target user who shows the user authorization information in the first video data, the method further includes:
judging whether the body posture information of the specified user in the first video data is the target body posture information or not to obtain a second judgment result;
if the second judgment result shows that the body posture information of the specified user in the first video data is the target body posture information, determining the specified user as a target user displaying the user authorization information;
determining the service request of the target user based on the target body posture information of the target user;
after the first position information of the target user is obtained based on the video tracking result, the method further includes:
responding to the service request of the target user based on the first position information.
In this embodiment, the target body posture information may include one designated action, posture, or gesture, or a group of them, and is not specifically limited here, as long as it can distinguish whether the user intends to initiate a service request.

In the embodiment of the present specification, a given piece of target body posture information may correspond to a specific service request. That is, if the user performs the target body posture, the user may be considered to have initiated the service request corresponding to that posture.

In this embodiment, the target body posture information may also be used in place of the graphical encoded information. Specifically, after learning the target body posture information, the target user performs the corresponding action or posture, and the user performing that action or posture is identified as the target user from the first video data.
In the embodiment of the present specification, the service request may be used to request a taxi, raise an alarm, request a rescue, and the like.

In this embodiment of this specification, responding to the service request of the target user may include providing, by the roadside device or a server connected to the roadside device, the service corresponding to the request, and may further include forwarding the request to a service provider corresponding to the service request.

In this embodiment of the present specification, if a user performing the target body posture is detected in the first video data, that user is determined to be a target user issuing the service request corresponding to the target body posture information, and the service request of the target user is responded to based on the video tracking result and the first position information. When a potential passenger forgets to carry a handheld device, or when certain users find it inconvenient to operate one, a specific gesture or body movement made toward the roadside device can initiate a service request.
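A minimal sketch of the posture-to-request mapping described above might look like the following; the posture labels and the upstream pose classifier that produces them are purely hypothetical, since the specification does not fix any concrete posture vocabulary.

```python
# Hypothetical posture labels produced by an upstream pose classifier.
POSTURE_TO_REQUEST = {
    "raise_one_arm": "taxi",
    "wave_both_arms": "rescue",
    "cross_arms_overhead": "alarm",
}

def dispatch_service_request(posture_label):
    """Map a detected target body posture to the service request it initiates.

    Returns None when the posture is not a registered target posture, i.e.
    the user is not treated as initiating any service request.
    """
    return POSTURE_TO_REQUEST.get(posture_label)
```

Ordinary postures fall through to None, which is how the mapping distinguishes users who intend to initiate a request from everyone else in the frame.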
Optionally, before determining the user characteristic information of the target user who shows the user authorization information in the first video data, the method further includes:
acquiring a service request sent by the target user;
the method further comprises the following steps:
determining user image information of the target user based on a video tracking result; the video tracking result is obtained by tracking the target user based on the user characteristic information; the user image information comprises image information collected by the roadside device and used for identifying the target user;
and sending the first position information and the user image information to a service provider of the service request.
In this embodiment, the user image information may refer to pictures or video used to identify or locate the target user, and may specifically include one or more pictures or videos showing the target user's face, clothing, or current location and surroundings.

In this embodiment of the present specification, the first location information may be the target user's real-time location information, and the user image information may be real-time pictures or video of the target user.

In this embodiment, the service provider of the service request may provide the service to the target user according to at least one of the first location information and the user image information. Specifically, the first location information may be used to plan a route from the service provider's location to the target user's location, and the user image information may be used to find the target user when the service provider is near the target user.
Further, the first location information and the user image information may be used to generate a Virtual Reality (VR) image, an Augmented Reality (AR) image or a Mixed Reality (MR) image to guide a service provider of the service request to find the target user.
For example, if the service request is a ride request, an Augmented Reality (AR) or Mixed Reality (MR) image may be generated based on the first location information and the user image information of the target user to guide the service provider (the driver) to meet the target user.
Fig. 2 is a schematic flowchart of a user positioning method provided in an embodiment of the present specification, in which 1 is the roadside device and 2 is the target user equipment. P0 is the position of the target user when the user authorization information is displayed; P1, P2, and P3 are the positions of the target user at times T1, T2, and T3, respectively, after the user characteristic information is acquired. Fig. 3 is a schematic view of a scene in which a target vehicle is guided based on the user positioning method provided in an embodiment of the present specification, comprising a roadside device 1, a target user device 2, a target user 3 who needs a transportation service, a server 4 connected to the roadside device, and a target vehicle 5 providing the transportation service.
The following is described in detail with reference to fig. 2 and 3:
The target user 3 initiates a ride request through the target user equipment 2 and uploads the second position information of the target user.

After receiving the ride request, the server determines, based on the second position information, whether the target user is located within the video capture range of the roadside device; if so, it sends graphical encoded information (namely, the authorization code in Fig. 3) to the target user device.

The target user 3 displays the graphical encoded information to the roadside device through the target user equipment 2.

The roadside device acquires the first video data at the moment the target user displays the graphical encoded information, parses the encoded information to determine the user identity information of the target user, and extracts the user characteristic information. The roadside device then tracks the target user based on the user characteristic information to obtain real-time tracking and positioning data. The tracking and positioning data may include first position information and/or user image information, and specifically may include the first position information (P1, P2, P3) of the target user at multiple times such as T1, T2, and T3, and the user image information at those times.

The roadside device transmits the tracking and positioning data of the target user to the target vehicle 5 through the server 4.

The target vehicle 5 picks up the target user 3 according to the tracking and positioning data and transports the target user to the destination.
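The roadside-device steps in this flow can be outlined as the following sketch, where the decoder, feature extractor, and tracker are injected as stand-in callables rather than concrete implementations, since the specification does not prescribe particular algorithms:

```python
def handle_authorization_frame(frame, decode_code, extract_features, start_tracking):
    """Sketch of the roadside-device flow of Fig. 2/3.

    The three callables are hypothetical stand-ins:
      decode_code(frame)         -> user identity information, or None
      extract_features(frame)    -> user characteristic information
      start_tracking(uid, feat)  -> real-time tracking and positioning data
    """
    # Step 1: parse the graphical encoded information shown in the frame.
    user_id = decode_code(frame)
    if user_id is None:
        return None  # no authorization information displayed in this frame
    # Step 2: extract the target user's appearance features from the same frame.
    features = extract_features(frame)
    # Step 3: track the target user, yielding tracking and positioning data.
    return start_tracking(user_id, features)
```

In practice, step 1 would be a barcode/QR decoder, step 2 an appearance-embedding network, and step 3 the visual tracker that produces the (P1, P2, P3) position sequence.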
Optionally, the service request includes a ride request; the method further includes:

determining vehicle position information of a target vehicle based on the roadside apparatus; wherein the target vehicle is to provide a transport service to the target user in response to the ride request;

judging, based on the vehicle position information, whether the target vehicle has reached the designated position for picking up the target user, to obtain a third judgment result;

if the third judgment result shows that the target vehicle has reached the designated position, generating confirmation information indicating that the target vehicle has reached the designated position;

and sending the confirmation information to the target user equipment.
In the embodiment of the specification, the ride request may be used to request an online ride-hailing service. The designated position may refer to the starting point of the transportation service journey, that is, the pickup point at which the target user is received.

In an embodiment of the present specification, the roadside device may determine the vehicle position information based on the acquired video data and the license plate information of the target vehicle; the vehicle position information may also be acquired by the roadside device through Vehicle-to-Everything (V2X) communication.

In this embodiment, whether the target vehicle has reached the designated position for picking up the target user may be determined from the vehicle position information and the designated position. In practice, the target vehicle may pick up the target user without stopping exactly at the designated position: when the designated position does not coincide with the target user's real-time position information (the first position information), the target vehicle may still be considered to have reached the pickup position if the vehicle position information is close to the first position information.

In an embodiment of this specification, the roadside device acquires the vehicle position information of the target vehicle and automatically confirms whether the target vehicle has reached the designated position for picking up the target user. If the target vehicle is an unmanned vehicle and there is no parking space at the designated position, the vehicle can stop at a position offset from the fixed pickup point, and arrival can still be confirmed automatically. If the target vehicle is an ordinary vehicle, automatic confirmation avoids arbitrary arrival-confirmation operations by the driver, thereby preventing user disputes arising from such arbitrariness.
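The arrival check described above, including the tolerance for vehicles that stop near rather than exactly at the pickup point, can be sketched as follows. Positions are planar coordinates in metres, and the tolerance value is an illustrative assumption, not taken from the specification.

```python
import math

ARRIVAL_TOLERANCE_M = 25.0  # assumed pickup tolerance; not specified by the source

def has_arrived(vehicle_pos, designated_pos, user_pos=None,
                tolerance=ARRIVAL_TOLERANCE_M):
    """Confirm arrival when the vehicle is within `tolerance` metres of the
    designated pickup point, or, when the user's real-time first position
    information is available, of the user's current position.
    """
    def close(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1]) <= tolerance
    if close(vehicle_pos, designated_pos):
        return True
    # Fall back to the user's real-time position (first position information),
    # covering the case where the vehicle stops offset from the fixed point.
    return user_pos is not None and close(vehicle_pos, user_pos)
```

Once this returns True, the confirmation information is generated and sent to the target user equipment, removing the driver's discretion over the arrival confirmation.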
Optionally, after generating the confirmation information indicating that the target vehicle arrives at the designated location, the method further includes:
acquiring vehicle image information of the target vehicle based on the roadside apparatus;
and sending the vehicle position information and the vehicle image information to the target user equipment.
In this embodiment, after it is confirmed that the target vehicle reaches a specified location, the vehicle location information and the vehicle image information may be sent to the target user device to guide the target user to board the target vehicle.
In this embodiment, the vehicle position information and the vehicle image information may be used to generate a Virtual Reality (VR) image, an Augmented Reality (AR) image, or a Mixed Reality (MR) image, so as to guide the target user to board the target vehicle.
Based on the same idea, the embodiment of the present specification further provides a device corresponding to the method.
Fig. 4 is a schematic structural diagram of a user positioning apparatus corresponding to fig. 1 according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus 400 may include:
at least one processor 410; and

a memory 430 communicatively coupled to the at least one processor; wherein

the memory 430 stores instructions 420 executable by the at least one processor 410 to cause the at least one processor 410 to:
the method includes the steps of obtaining first video data collected by road side equipment.
Determining user characteristic information of a target user showing user authorization information in the first video data; the user authorization information is information that is presented when the user needs to use a service related to user location information.
And tracking the target user based on the user characteristic information to obtain first position information of the target user.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus shown in fig. 4, since it is substantially similar to the method embodiment, the description is simple, and the relevant points can be referred to the partial description of the method embodiment.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement in a method flow). With the development of technology, however, many of today's method-flow improvements can be regarded as direct improvements in hardware circuit structure: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled is written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by briefly logic-programming the method flow in one of the hardware description languages described above and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for performing the various functions may also be regarded as structures within the hardware component. The means for performing the various functions may even be regarded as both software modules implementing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method for locating a user, comprising:
acquiring first video data acquired by roadside equipment;
determining user characteristic information of a target user showing user authorization information in the first video data; the user authorization information is information displayed when the target user needs to use the service related to the user position information;
and tracking the target user based on the user characteristic information to obtain first position information of the target user.
2. The method of claim 1, wherein the user authorization information is graphically encoded information carrying user identity information;
the determining the user characteristic information of the target user showing the user authorization information in the first video data specifically includes:
acquiring graphical coding information displayed by the target user in the first video data;
determining the user identity information of the target user by analyzing the graphic coding information;
and extracting user characteristic information of the target user from the first video data based on the graphical coding information and the user identity information.
3. The method of claim 2, wherein the obtaining the first video data collected by the roadside device further comprises, prior to:
acquiring second position information of the target user obtained based on target user equipment;
judging whether the target user is located within a video acquisition range of the road side equipment or not based on the second position information to obtain a first judgment result;
if the first judgment result shows that the target user is located within the video acquisition range of the road side equipment, sending authorization prompt information to the target user equipment; the authorization prompt information is used for prompting the user to display the user authorization information.
4. The method of claim 1, wherein the tracking the target user based on the user characteristic information to obtain the first position information of the target user specifically comprises:
determining position information of first road side equipment; the first road side equipment is road side equipment which acquires video data containing the target user;
determining second road side equipment based on the position information of the first road side equipment and the position relation between the road side equipment; the second road side equipment is predicted road side equipment capable of acquiring video data containing the target user;
acquiring second video data acquired by the second road side equipment;
based on a visual tracking algorithm, obtaining a video tracking result aiming at the target user according to the user characteristic information and the second video data;
and obtaining first position information of the target user based on the video tracking result according to a fusion algorithm.
5. The method of claim 4, wherein the method further comprises:
acquiring third position information of a target user obtained based on target user equipment;
the obtaining of the first position information of the target user based on the video tracking result according to the fusion algorithm further includes:
and obtaining first position information of the target user based on the video tracking result and the third position information according to a fusion algorithm.
6. The method of claim 1, wherein the user authorization information is target body posture information for initiating a service request;
before determining the user characteristic information of the target user who shows the user authorization information in the first video data, the method further includes:
judging whether the body posture information of the specified user in the first video data is the target body posture information or not to obtain a second judgment result;
if the second judgment result shows that the body posture information of the designated user in the first video data is the target body posture information, determining the designated user as a target user displaying the user authorization information;
determining the service request of the target user based on the target body posture information of the target user;
after the first position information of the target user is obtained based on the video tracking result, the method further includes:
responding to the service request of the target user based on the first position information.
7. The method of claim 1, wherein prior to determining the user characteristic information of the target user exhibiting the user authorization information in the first video data, further comprising:
acquiring a service request sent by the target user;
the method further comprises the following steps:
determining user image information of the target user based on a video tracking result; the video tracking result is obtained by tracking the target user based on the user characteristic information; the user image information comprises image information which is collected by the road side equipment and can be used for identifying the target user;
and sending the first position information and the user image information to a service provider of the service request.
8. The method of claim 7, wherein the service request comprises a ride request; the method further comprises:

determining vehicle position information of a target vehicle based on the roadside apparatus; wherein the target vehicle is to provide a transport service to the target user in response to the ride request;

judging, based on the vehicle position information, whether the target vehicle has reached the designated position for picking up the target user, to obtain a third judgment result;

if the third judgment result shows that the target vehicle has reached the designated position, generating confirmation information indicating that the target vehicle has reached the designated position;

and sending the confirmation information to the target user equipment.
9. The method of claim 8, wherein after generating the confirmation information indicating the arrival of the target vehicle at the designated location, further comprising:
acquiring vehicle image information of the target vehicle based on the roadside apparatus;
and sending the vehicle position information and the vehicle image information to the target user equipment.
10. A user positioning device, comprising:
at least one processor; and

a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring first video data acquired by roadside equipment;
determining user characteristic information of a target user showing user authorization information in the first video data; the user authorization information is information displayed when the user needs to use the service related to the user position information;
and tracking the target user based on the user characteristic information to obtain first position information of the target user.
CN202211335346.8A 2022-10-28 2022-10-28 User positioning method and device Pending CN115527021A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211335346.8A CN115527021A (en) 2022-10-28 2022-10-28 User positioning method and device


Publications (1)

Publication Number Publication Date
CN115527021A true CN115527021A (en) 2022-12-27

Family

ID=84703331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211335346.8A Pending CN115527021A (en) 2022-10-28 2022-10-28 User positioning method and device

Country Status (1)

Country Link
CN (1) CN115527021A (en)

Similar Documents

Publication Publication Date Title
US10088846B2 (en) System and method for intended passenger detection
US10839217B2 (en) Augmented reality assisted pickup
CN106952303B (en) Vehicle distance detection method, device and system
US10977497B2 (en) Mutual augmented reality experience for users in a network system
CN107643084B (en) Method and device for providing data object information and live-action navigation
US11880899B2 (en) Proximity-based shared transportation reservations
US11682297B2 (en) Real-time scene mapping to GPS coordinates in traffic sensing or monitoring systems and methods
CN111967664A (en) Tour route planning method, device and equipment
Choi et al. Methods to detect road features for video-based in-vehicle navigation systems
CN112650772A (en) Data processing method, data processing device, storage medium and computer equipment
Xie et al. Iterative Design and Prototyping of Computer Vision Mediated Remote Sighted Assistance
CN114096996A (en) Method and apparatus for using augmented reality in traffic
WO2022152081A1 (en) Navigation method and apparatus
CN105608921A (en) Method and equipment for prompting public transport line in electronic device
CN115527021A (en) User positioning method and device
CN110955243A (en) Travel control method, travel control device, travel control apparatus, readable storage medium, and mobile device
KR20200095057A (en) Amusement execution system
JP6606779B6 (en) Information providing apparatus, information providing method, and program
CN111797658A (en) Lane line recognition method and device, storage medium and electronic device
EP4270327A1 (en) Method for counting passengers of a public transportation system, control apparatus and computer program product
Jadhav et al. Devise Road Sign Alert Detection for Vehicular Systems Using Fog Computing
WO2020073268A1 (en) Snapshot image to train roadmodel
WO2020073271A1 (en) Snapshot image of traffic scenario
WO2020073270A1 (en) Snapshot image of traffic scenario
CN114626986A (en) Polar coordinate contact interaction method based on two-dimensional map and rotary zoom camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination