CN105975967B - Object localization method and system - Google Patents
Object localization method and system
- Publication number
- CN105975967B (grant) · CN201610279062.XA / CN201610279062A (application)
- Authority
- CN
- China
- Prior art keywords
- information
- target
- identity
- candidate
- candidate target
- Prior art date
- Legal status (assumed, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/02—Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
- G01C11/04—Interpretation of pictures
- G01C11/06—Interpretation of pictures by comparison of two or more pictures of the same area
- G01C11/12—Interpretation of pictures by comparison of two or more pictures of the same area, the pictures being supported in the same relative position as when they were taken
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation with correlation of navigation data from several sources, e.g. map or contour matching
Abstract
An object localization method and system. The positioning system of the target object first determines its moving-parameter information, from which the region in which the target object is located is derived. The image capture devices monitoring that target area are identified, computer-vision processing is applied to the imaging information of the region, and the candidate targets within the target area are determined. Each candidate target is tracked, and its motion state is determined from the tracking result, map information, the location information of the image capture devices and the direction information of the image capture devices. The candidate target whose parameter information best matches the target parameter information is then taken to be the target object, and the motion state of that candidate target is output as the positioning result. In this process, the motion state of the target object is determined from the tracking result, map information, and the camera locations and directions, so it is not disturbed by buildings and the like and therefore achieves higher precision.
Description
Technical field
The present invention relates to the field of tracking and positioning technology, and in particular to an object localization method and system that combine a wireless system with computer vision.
Background technique
Current satellite navigation techniques handle outdoor positioning and navigation well, but indoors the signal is blocked by buildings, so satellite navigation cannot effectively support indoor navigation applications, which limits practical uses such as indoor maps. Many indoor positioning schemes have been proposed, for example counting the strength and measured values of WiFi, BT (Bluetooth), ultra-wideband or ZigBee wireless signals (e.g. patent WO/2012/106075A1).
Although indoor positioning systems using WiFi or BT wireless signals have been developed, such wireless positioning systems are affected by indoor multipath effects and cannot provide highly accurate positioning results.
Summary of the invention
In view of this, embodiments of the present invention provide an object localization method and system to achieve accurate positioning of a target object under indoor conditions.
To achieve the above object, the embodiment of the present invention provides the following technical solutions:
An object localization method, comprising:
obtaining the target parameter information and moving-parameter information of the target object, the moving-parameter information being obtained by the target object through measurement with its own sensors and positioning system;
determining, from the three-dimensional coordinates and the precision parameter in the moving-parameter information, the target area in which the target object is located;
determining the target image capture devices required to cover the target area;
obtaining the imaging data of the target image capture devices;
performing computer-vision processing on the imaging data to determine the candidate targets in the imaging data;
tracking the candidate targets, and determining the motion state of each candidate target from the tracking result, map information, the location information of the image capture devices and the direction information of the image capture devices;
obtaining candidate target parameter information of the same type as the target parameter information;
taking, as the target object, the candidate target whose candidate target parameter information has the highest matching degree with the target parameter information;
sending the motion state that matches the target object to the target object.
Preferably, in the above object localization method, the target parameter information is the moving-parameter information of the target object, and the candidate target parameter information is the motion state of the candidate target.
Preferably, in the above object localization method, taking as the target object the candidate target corresponding to the candidate target parameter information with the highest matching degree comprises:
building, from the motion state of each candidate target, a motion-state model in which a motion-state template corresponding to each candidate target is stored;
matching the moving-parameter information against the motion-state templates of the candidate targets in the motion-state model, and taking the candidate target corresponding to the best-matching motion-state template as the target object.
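The template-matching variant above can be sketched as follows. Euclidean distance over a small motion-state vector is an assumption of this sketch; the text only requires picking the template with the highest matching degree:

```python
import math

def match_motion_template(moving_params, templates):
    """Return the id of the candidate whose stored motion-state template
    lies closest to the target's reported moving-parameter information.
    Smaller distance = higher matching degree (illustrative metric)."""
    def distance(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(templates, key=lambda cid: distance(moving_params, templates[cid]))
```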
Preferably, in the above object localization method, the target parameter information is a target motion profile or motion-data group formed from the moving-parameter information of the target object over a continuous preset time period;
the candidate target parameter information is the candidate motion profile or motion-data group of the candidate target generated over the same preset time period from the tracking result, map information, the location information of the image capture devices and the direction of the image capture devices.
Preferably, in the above object localization method, the target parameter information is a target time/moving-parameter set, corresponding to the target object, established from the moving-parameter information of the target object obtained at discrete instants within a preset time period; the target time/moving-parameter set contains the discrete instants in the preset time period and the corresponding moving-parameter information of the target object;
the candidate target parameter information is a candidate time/motion-state set established from the tracking result, map information, the location information of the image capture devices and the direction information of the image capture devices; the candidate time/motion-state set contains the discrete instants in the preset time period and the corresponding motion-state information of the candidate target.
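A hedged sketch of comparing such discrete time sets, assuming the matching degree is the negated average position distance over the instants both sets share (the patent does not specify the comparison function):

```python
def time_set_matching_degree(target_set, candidate_set):
    """Matching degree between a target time/moving-parameter set and a
    candidate time/motion-state set, both maps {instant: (x, y)}.
    Higher return value = better match (illustrative metric)."""
    shared = sorted(set(target_set) & set(candidate_set))
    if not shared:
        return float("-inf")  # no common instants, nothing to compare
    total = 0.0
    for t in shared:
        (x1, y1), (x2, y2) = target_set[t], candidate_set[t]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return -total / len(shared)
```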
Preferably, in the above object localization method, the target parameter information is the obtained type information, status information and visual-feature information of the target object;
the candidate target parameter information is the type information, status information and visual-feature information of the candidate target obtained by computer-vision processing of the imaging data.
Preferably, in the above object localization method:
the target parameter information is the obtained wireless-device identifier or identity identifier of the target object;
the candidate target parameter information is the obtained wireless-device identifier or identity identifier of the candidate target.
Preferably, the above object localization method further includes:
judging whether a calibration request uploaded by an image capture device has been received, and if so, generating calibration information that includes at least the number information and location information of the image capture devices requiring calibration.
Preferably, the above object localization method further includes:
predicting the moving route of the target object from the route information in the map information and the direction-of-motion information in the moving-parameter information, and predicting the target area corresponding to the next instant from the moving route and the velocity information in the moving-parameter information.
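The next-instant prediction can be illustrated by simple dead reckoning; moving the centre along the current velocity and keeping the precision radius constant is a simplifying assumption of this sketch:

```python
def predict_next_area(position, velocity, dt, precision):
    """Predict the target area at the next instant: shift the centre
    along the current velocity; the positioning precision stays the
    radius (illustrative growth model)."""
    x, y = position
    vx, vy = velocity
    return {"center": (x + vx * dt, y + vy * dt), "radius": precision}
```

Pre-computing the next area lets the system pre-select the image capture devices that will cover it, so the cameras are ready before the target arrives.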
Preferably, in the above object localization method, the moving-parameter information includes:
acceleration, velocity-direction and orientation data of the target object measured by sensor components carried by the target object.
Preferably, the above object localization method further includes:
obtaining the type information, status information, visual-feature information, wireless-device identifier and identity identifier of the target object;
obtaining the type information, status information and visual-feature information of the candidate target and the wireless-device identifier and identity identifier it reports;
after the wireless-device identifier or identity identifier reported by the target object or the candidate target is obtained, confirming the type information, status information and visual-feature information of the matched target object or candidate target, binding that type information, status information and visual-feature information to the wireless-device identifier or identity identifier, and storing the binding information.
Preferably, the above object localization method further includes:
obtaining the wireless-device identifier or identity identifier of the target object; obtaining, from the stored binding information, the type information, status information and visual-feature information that match the wireless-device identifier or identity identifier, and using them as the target parameter information; the candidate target parameter information being the type information, status information and visual-feature information of the candidate target;
or
obtaining the wireless-device identifier or identity identifier of the candidate target; obtaining, from the stored binding information, the type information, status information and visual-feature information that match that identifier, and using them as the candidate target parameter information; the target parameter information being type information, status information and visual-feature information;
or
performing computer-vision processing on the imaging data to obtain the type information, status information and visual-feature information of the candidate target; obtaining, from the stored binding information, the wireless-device identifier or identity identifier that matches them, and using that identifier as the candidate target parameter information; the target parameter information being a wireless-device identifier or identity identifier.
An object locating system, comprising:
a data acquisition unit, for obtaining the target parameter information and moving-parameter information of the target object, the moving-parameter information being obtained by the target object through measurement with its own sensors and positioning system;
a zone location unit, for determining, from the three-dimensional coordinates and precision parameter in the moving-parameter information, the target area in which the target object is located;
an imaging acquisition unit, for determining the target image capture devices required to cover the target area and obtaining their imaging data;
a visual processing unit, for identifying candidate target objects from the imaging data and obtaining the feature information of the candidate objects;
a motion-state computing unit, for tracking the candidate targets and determining the motion state of each candidate target from the tracking result, map information, the location information of the image capture devices and the direction information of the image capture devices;
a candidate parameter acquisition unit, for obtaining candidate target parameter information of the same type as the target parameter information;
a matching unit, for taking as the target object the candidate target corresponding to the candidate target parameter information with the highest matching degree with the target parameter information;
a transmission unit, for sending the motion state that matches the target object to the target object.
Preferably, in the above object locating system, the target parameter information is the moving-parameter information of the target object, and the candidate target parameter information is the motion state of the candidate target.
Preferably, in the above object locating system, the matching unit comprises:
a model-building unit, for building, from the motion state of each candidate target, a motion-state model in which a motion-state template corresponding to each candidate target is stored;
a first sub-matching unit, for matching the moving-parameter information against the motion-state templates of the candidate targets in the motion-state model and taking the candidate target corresponding to the best-matching motion-state template as the target object.
Preferably, the above object locating system further includes:
a target motion-profile unit, for forming the target motion profile or motion-data group from the moving-parameter information of the target object over a continuous preset time period;
a candidate motion-profile unit, for generating the candidate motion profile or motion-data group of the candidate target over the preset time period from the tracking result, map information, the location information of the image capture devices and the direction of the image capture devices;
the target parameter information being the target motion profile or motion-data group, and the candidate target parameter information being the candidate motion profile or motion-data group.
Preferably, the above object locating system further includes:
a target time/motion-information set unit, for establishing, from the moving-parameter information of the target object obtained at discrete instants within a preset time period, a target time/moving-parameter set corresponding to the target object; the target time/moving-parameter set contains the discrete instants in the preset time period and the corresponding moving-parameter information of the target object;
a candidate time/motion-state set unit, for establishing a discrete candidate time/motion-state set from the tracking result, map information, the location information of the image capture devices and the direction information of the image capture devices; the candidate time/motion-state set contains the discrete instants in the preset time period and the corresponding motion-state information of the candidate target;
the target parameter information being the target time/moving-parameter set, and the candidate target parameter information being the candidate time/motion-state set.
Preferably, in the above object locating system, the visual processing unit is also used to obtain, by computer-vision processing of the imaging data, the type information, status information and visual-feature information of the candidate target;
the target parameter information being the type information, status information and visual-feature information uploaded by the target object;
the candidate target parameter information being the type information, status information and visual-feature information of the candidate target.
Preferably, in the above object locating system, the target parameter information is the wireless-device identifier or identity identifier of the target object, and the candidate target parameter information is the wireless-device identifier or identity identifier of the candidate target.
Preferably, the above object locating system further includes:
a failure location unit, for judging whether a calibration request uploaded by an image capture device has been received, and if so, generating calibration information that includes at least the number information and location information of the image capture devices requiring calibration.
Preferably, the above object locating system further includes:
a target-area predicting unit, for predicting the moving route of the target object from the route information in the map information and the direction-of-motion information in the moving-parameter information, and predicting the target area corresponding to the next instant from the moving route and the velocity information in the moving-parameter information.
Preferably, in the above object locating system, the moving-parameter information includes:
acceleration, velocity-direction and orientation data of the target object measured by sensor components carried by the target object.
Preferably, the above object locating system further includes:
a secure information storage unit, for determining the type information, status information and visual-feature information of the candidate target, requesting the candidate target to report its wireless-device identifier and identity identifier, and, after the wireless-device identifier or identity identifier reported by the candidate target is obtained, binding the type information, status information and visual-feature information of the candidate target to the wireless-device identifier or identity identifier and storing the binding information in a preset database.
Preferably, in the above object locating system, the data acquisition unit is also used to obtain the wireless-device identifier and identity identifier of the target object, and the type information, status information, visual-feature information, wireless-device identifier and identity identifier of the candidate target.
The object locating system further includes a target parameter information acquisition unit: when the target parameter information is type information, status information and visual-feature information, it obtains, from the binding information in the preset database, the type information, status information and visual-feature information that match the wireless-device identifier or identity identifier of the target object, and uses them as the target parameter information; when the target parameter information is a wireless-device identifier or identity identifier, it obtains, from the binding information in the preset database, the wireless-device identifier or identity identifier that matches the type information, status information and visual-feature information of the target object, and uses that identifier as the target parameter information.
It also includes a candidate target parameter information acquisition unit: when the candidate target parameter information is type information, status information and visual-feature information, it obtains, from the binding information in the preset database, the type information, status information and visual-feature information that match the wireless-device identifier or identity identifier of the candidate target, and uses them as the candidate target parameter information; when the candidate target parameter information is a wireless-device identifier or identity identifier, it obtains, from the binding information in the preset database, the wireless-device identifier or identity identifier that matches the type information, status information and visual-feature information of the candidate target, and uses that identifier as the candidate target parameter information.
Based on the above technical solutions, the scheme provided by the embodiments of the present invention first determines the moving-parameter information of the target object through its positioning system, and determines from that information the region in which the target object is located. The image capture devices monitoring the target area are identified, and computer-vision processing is applied to the imaging information of that region to determine the candidate targets within the target area. Each candidate target is tracked, and its motion state is determined from the tracking result, map information, the location information of the image capture devices and the direction information of the image capture devices. The candidate target corresponding to the target object is then determined from the matching result between the target parameter information and the candidate target parameter information, and finally the motion state of that candidate target is output as this positioning result. It can be seen that in this process, when the motion state of the target object is determined from the tracking result, map information, the location information of the image capture devices and the direction information of the image capture devices, it is not disturbed by buildings and the like, and therefore has higher precision.
Brief description of the drawings
In order to explain the embodiments of the invention or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an object localization method disclosed in an embodiment of the present application;
Fig. 2 is a schematic diagram of an application scenario of the localization method disclosed in an embodiment of the present application;
Fig. 3 is a schematic diagram of the process of determining the motion state of a candidate target in the localization method disclosed in an embodiment of the present application;
Fig. 4 is a schematic diagram of the process of determining the target object by matching, disclosed in an embodiment of the present application;
Fig. 5 is a schematic diagram of an application scenario of the localization method disclosed in another embodiment of the present application;
Fig. 6 is a schematic structural diagram of an object locating system disclosed in an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the protection scope of the invention.
To solve the prior-art problem that positioning results have low precision when a target object is located, the present application discloses an object localization method and system. Referring to Fig. 1, the object localization method includes:
Step S101: obtaining the target parameter information uploaded by and matching the target object, together with its moving-parameter information; the moving-parameter information is obtained by the target object through measurement with its own sensors and positioning system.
The target object (a vehicle, a mobile phone, etc.) can, through its own sensor measurements and positioning system, use satellite, wireless or geomagnetic positioning, or initiate a location request to an external system (a network location server, etc.), to obtain its moving-parameter information. The moving-parameter information may include the motion state (three-dimensional coordinate information, velocity information, acceleration information, orientation information, etc.) and a motion-state precision parameter (referring to its measurement accuracy and error-range data); the estimate of the motion state is maintained by the positioning and navigation system of the target object itself. The target parameter information serves later as the reference for picking the target object out of the candidate targets; it contains a characteristic parameter of the target object, and this characteristic parameter is unique. Which characteristic parameter is chosen as the target parameter information can be decided according to user demand.
Step S102: determining, from the three-dimensional coordinate information and the precision parameter in the moving-parameter information, the target area in which the target object is located.
In this step, because the three-dimensional coordinate reported by the target object is rough, its error can be determined from the precision parameter, for example as lying within a circle with a radius of 10 metres; from this information the range of the target area in which the target object is located can be determined.
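A minimal sketch of this step, reducing the reported coordinate and precision parameter to a horizontal search disc (the helper names are hypothetical, not from the patent):

```python
def target_region(coord3d, precision_m):
    """Reduce the reported 3-D coordinate and its precision parameter to
    a horizontal search disc: centre (x, y), radius = error radius.
    A 10 m precision thus yields a 10 m disc, as in the example above."""
    x, y, _z = coord3d
    return {"center": (x, y), "radius": precision_m}

def region_contains(region, point):
    """True when a point lies inside the target region."""
    (cx, cy), (px, py) = region["center"], point
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 <= region["radius"]
```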
Step S103: determining the target image capture devices required to cover the target area.
Once the target area is determined, the image capture devices required to cover it are obtained by screening: the area is compared with the known monitoring areas of the image capture devices to confirm the list of devices required to cover the monitored area. For example, as shown in Fig. 2, there are two image capture devices 301 and 302 within the target area, so 301 and 302 can serve as the target image capture devices; analysing and processing the imaging data of 301 and 302 completes the identification, positioning and tracking of the target object in the subsequent steps of this example.
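The screening can be sketched with a circle-circle intersection test; circular coverage zones are a simplifying assumption, since real camera footprints are irregular polygons:

```python
def cameras_covering(region, cameras):
    """Screen known image capture devices for those whose monitoring
    areas intersect the target region (circle-circle intersection)."""
    cx, cy = region["center"]
    selected = []
    for cam in cameras:
        mx, my = cam["center"]
        gap = ((cx - mx) ** 2 + (cy - my) ** 2) ** 0.5
        if gap <= region["radius"] + cam["radius"]:
            selected.append(cam["id"])
    return selected
```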
Step S104: obtaining the imaging data produced by the target image capture devices imaging the target area.
Step S105: performing computer-vision processing on the imaging data to determine the candidate targets in the imaging data.
When computer-vision processing is applied to these imaging data, foreground objects are extracted after the background is separated out; the kinds of objects in the target area (vehicles, pedestrians, etc.) and their states (standing, walking, lying down, specific postures) are recognized; visual features of the objects (face contour, fixed shape contour, vehicle licence plate, etc.) can also be extracted by the computer-vision processing; and image-based tracking can further be carried out on the recognized objects.
Step S106: tracking the candidate targets, and determining the motion state of each candidate target from the tracking result, map information, the location information of the image capture devices and the direction information of the image capture devices.
In this step, combining the tracking result of each candidate target with the given map information and with the location and direction information of the image capture devices yields the motion state of the candidate target (three-dimensional coordinates, velocity, acceleration, orientation). As shown in Fig. 3, camera 301 yields the tracking results of two candidate targets after computer-vision processing; combining the location and direction information of camera 301 with the map information of the target area, the motion states of the two candidate targets can be obtained through a preset algorithm.
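As one possible "preset algorithm", the motion state can be derived from successive map-frame positions of a tracked candidate by finite differences. The projection from image coordinates into map coordinates is assumed already done using the camera's location and direction; both the projection and the differencing are assumptions of this sketch:

```python
import math

def motion_state(track, dt):
    """Derive position, speed and heading from the last two map-frame
    positions of a tracked candidate, sampled dt seconds apart."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return {"position": (x1, y1),
            "speed": math.hypot(vx, vy),
            "heading_deg": math.degrees(math.atan2(vy, vx)) % 360.0}
```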
Step S107: obtaining the candidate target parameter information of the same type as the target parameter information.
In this step, the information type of the candidate target parameter information is determined from the given target parameter information, which in turn determines which candidate target parameter information needs to be retrieved.
Step S108: taking the candidate target corresponding to the candidate target parameter information with the highest matching degree to the target parameter information as the target object;
In this step, by matching the target parameter information against each piece of candidate target parameter information one by one, the matching degree between the target parameter information and each piece of candidate target parameter information is obtained, and the candidate target corresponding to the highest matching degree is taken as the target object. Certainly, in order to guarantee the correctness of the matching result, a preset value can also be set during matching: only when the highest matching degree is greater than the preset value is the corresponding candidate target taken as the target object; otherwise, prompt information indicating that positioning is impossible is output to the user. Likewise, another preset value can be set during matching: once a matching degree exceeds this preset value, the corresponding candidate target can be regarded as the target object without carrying out subsequent matching, thereby improving positioning speed.
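The two-threshold selection described in this step can be sketched as follows (illustrative only; the similarity function and threshold values are assumptions, not parameters from the disclosure):

```python
def select_target(target_param, candidates, match,
                  reject_below=0.5, accept_above=0.9):
    """Pick the candidate whose parameters best match `target_param`.
    `match` returns a similarity in [0, 1]. Returns (candidate_id, score),
    or (None, best_score) when no candidate clears the rejection floor."""
    best_id, best_score = None, 0.0
    for cand_id, cand_param in candidates.items():
        score = match(target_param, cand_param)
        if score > accept_above:        # early accept: skip remaining candidates
            return cand_id, score
        if score > best_score:
            best_id, best_score = cand_id, score
    if best_score > reject_below:       # highest score must still clear the floor
        return best_id, best_score
    return None, best_score             # signals "cannot be positioned"
```

The early-accept threshold implements the speed-up described above; the rejection floor implements the correctness guard.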
Step S109: sending the motion state that matches the target object to the target object.
In the object localization method disclosed in the above embodiments of the present application, the moving parameter information of the target object is first determined by the positioning system of the target object; the regional scope in which the target object is located is determined according to the moving parameter information; computer vision processing is performed on the imaging information obtained from the image capture equipment monitoring the target area, so as to determine the candidate targets in the target area; each candidate target is tracked, and the motion state of the candidate target is determined according to the tracking result, the map information, the location information of the image acquisition equipment and the directional information of the image acquisition equipment; the candidate target serving as the target object is then determined according to the matching result between the target parameter information and the candidate target parameter information; finally, the motion state corresponding to that candidate target is output as the positioning result. It can be seen that in this process, when the motion state of the target object is determined according to the tracking result, the map information, the location information of the image acquisition equipment and the directional information of the image acquisition equipment, the result is not interfered with by buildings and the like, and therefore has higher precision.
In the method disclosed in the above embodiments of the present application, the target parameter information can be set according to user demand. For example, in the technical solution disclosed in one embodiment of the present application, the target parameter information can be the moving parameter information of the target object, in which case the candidate target parameter information is the motion state of the candidate target.
In this case, the matching process identifies, among the motion states of all candidate targets in the target zone, the candidate target whose motion state is most similar to the motion state reported in the target object's moving parameters; when the matching degree of the two reaches a given threshold, that candidate target is taken as the target. In Figure 4, the motion state reported by the target object includes a position, a non-zero speed and a direction. There are two candidate targets in the target zone: the motion state of one of them is static, so the speed matching condition is not satisfied, while the motion state of the other candidate target matches successfully with a matching degree greater than the given threshold, so that candidate target is taken as the target object.
Certainly, in order to further optimize the matching process, the method can also establish models for the motion states of different kinds of target objects (pedestrians, vehicles, etc.), including cadence, leg speed, maximum speed, maximum acceleration and the like. During matching, the moving parameter information reported by the target object is compared against the motion feature templates of the candidate targets in these models, and a match whose degree of conformity is higher than a threshold is regarded as an effective match. This improves the accuracy of the object identification and matching process and reduces erroneous matching.
That is, the above step S108 can specifically include:
based on the motion state of each candidate target, establishing a motion state model corresponding to the candidate target, the motion state model storing a motion state template corresponding to each candidate target;
matching the moving parameter information against the motion state templates of the candidate targets in the motion state model, and taking the candidate target corresponding to the motion state template with the highest matching degree as the target object.
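A minimal sketch of such template matching, for illustration only: the per-class templates below (and their numeric limits) are invented examples, not values from the disclosure. The score measures how well a reported speed/acceleration fits within a class's limits.

```python
import math

# Illustrative motion-feature templates per object class; all limits are
# assumptions introduced for this sketch.
TEMPLATES = {
    "pedestrian": {"max_speed": 3.0, "max_accel": 2.0},   # m/s, m/s^2
    "vehicle":    {"max_speed": 30.0, "max_accel": 5.0},
}

def template_score(reported, template):
    """Degree of conformity between reported motion and one template:
    1.0 when reported speed/acceleration fit within the template's limits,
    decreasing as they exceed them."""
    s = min(1.0, template["max_speed"] / max(reported["speed"], 1e-9))
    a = min(1.0, template["max_accel"] / max(reported["accel"], 1e-9))
    return math.sqrt(s * a)

def best_template(reported, threshold=0.8):
    """Return the best-conforming class, or None when no template clears
    the effective-match threshold."""
    cls, score = max(((c, template_score(reported, t))
                      for c, t in TEMPLATES.items()), key=lambda x: x[1])
    return (cls, score) if score >= threshold else (None, score)
```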
Certainly, besides using the above moving parameter information as the target parameter information, in order to avoid the problem that the target object cannot be effectively matched from a single report (one report of the target parameter information and moving parameter information), the target parameter information can also be: a target motion profile or motion data group formed from the moving parameter information of the target object over a continuous preset time period;
the candidate target parameter information is then: the candidate motion profile or motion data group, over the same preset time period, of the candidate target generated based on the tracking result, the map information, the location information of the image acquisition equipment and the direction of the image acquisition equipment. The elements of the motion data group can be: the sensor measurement data on the target object and the motion state features obtained after processing that measurement data.
In this case, the above method can record the motion state of the same candidate target at successive time points within a period, forming the candidate motion profile or motion data group of the candidate target within that time, while the target motion profile or motion data group of the target object is determined from the moving parameter information reported by the target object over the same time period. By matching the target motion profile or motion data group of the target object against the candidate motion profile or motion data group, the problem that the target object cannot be effectively matched from a single report can be solved.
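The profile matching above can be sketched as a distance between position sequences sampled at the same instants (illustrative only; the distance measure and the metre-valued gate are assumptions introduced for this sketch):

```python
import numpy as np

def profile_distance(target_profile, candidate_profile):
    """Mean point-wise distance between two motion profiles sampled at the
    same instants; each profile is an (N, 2) array of ground positions."""
    t = np.asarray(target_profile, dtype=float)
    c = np.asarray(candidate_profile, dtype=float)
    return float(np.mean(np.linalg.norm(t - c, axis=1)))

def match_profile(target_profile, candidate_profiles, max_distance=1.5):
    """Pick the candidate whose profile lies closest to the target's over
    the window; `max_distance` (metres) is an illustrative gate."""
    best = min(candidate_profiles,
               key=lambda cid: profile_distance(target_profile,
                                                candidate_profiles[cid]))
    d = profile_distance(target_profile, candidate_profiles[best])
    return (best, d) if d <= max_distance else (None, d)
```

Comparing whole windows rather than single samples is what makes two candidates with momentarily similar states distinguishable.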
It is understood that, besides the matching ways mentioned in the above embodiments, the present application can also match by comparing multiple groups of discrete motion information of the target object over a period of time: first obtain, from the images collected by multiple image acquisition equipments, the multiple or single motion state information sets belonging to the same candidate target, and then match the multiple or single motion state information sets corresponding to the candidate target against the multiple or single groups of motion state information reported by the target object, finding the best match whose matching degree is higher than a threshold. This can be used to solve the problem of a single match failing to be effective.
Specifically, the multiple or single motion state information sets can be motion-information/time sets of the object within a preset time. That is, in the method disclosed in the above embodiments of the present application, the target parameter information can be: an object time-moving parameter information set corresponding to the target object, established from the moving parameter information of the target object obtained at discrete moments within a preset time period; the object time-moving parameter information set includes the various discrete moments in the preset time period and the moving parameter information of the target object corresponding to each moment;
the candidate target parameter information can be: a candidate time-motion state set established based on the tracking result, the map information, the location information of the image acquisition equipment and the directional information of the image acquisition equipment; the candidate time-motion state set includes the various discrete moments in the preset time period and the motion state information of the candidate target corresponding to each moment.
Further, when matching, besides the above matching ways, matching can also be performed through comprehensive feature information consisting of type information, state information, motion features and visual feature information such as face, profile and vehicle license plate. The target object can report, through a wireless communication system, comprehensive feature information consisting of its type information, state information, motion features and visual feature information such as face, profile and vehicle license plate; certainly, this information can be reported actively by the target object, or acquired actively by the system applying this method. The method disclosed in the embodiments of the present application can identify, by means of computer vision processing, the type information, state information, motion state information and visual feature information of the candidate targets in the target area, and then compare the comprehensive features (type information, state information, motion features and visual features) reported by the target object against the type information, state information, motion state information and visual feature information of the candidate targets obtained through computer vision processing; when the matching degree is higher than a threshold, the match is regarded as effective. This improves the performance of the match decision, or allows positioning directly through the above information.
Specifically, the target parameter information in this embodiment can be: the obtained type information, state information and visual feature information of the target object;
the candidate target parameter information can be: the type information, state information and visual feature information of the candidate target obtained by performing computer vision processing on the imaging data.
Certainly, the target parameter information and candidate target parameter information can take data forms other than the above. The present application can also use, as the target parameter information and candidate target parameter information for matching, identification data with uniqueness possessed by the target object and the candidate target. Which kind of identification data is specifically selected as the target parameter information and candidate target parameter information can be set according to user demand, as long as the identification data has uniqueness. For example, in the technical solutions disclosed in the above embodiments of the present application, the target parameter information is the obtained wireless device identity or identity identifier of the target object, and the candidate target parameter information is the obtained wireless device identity or identity identifier of the candidate target.
As a further optimization, when the positioned object is a pedestrian, the pedestrian is likely to be observing the mobile phone screen while the terminal positioning function is in use. When reporting its own state, the target object (the mobile phone) can additionally report a judgement of whether it is in a standing state; during matching, this effectively distinguishes the pedestrian to be positioned from nearby pedestrians, as shown in Figure 5. The method includes:
detecting whether the map display interface software of the target object is currently running; if so, judging through the camera of the target object whether the user is looking at the screen; if so, judging whether the inertial motion parameters of the target object meet the angle required for a user to view the screen while standing; if so, determining that the state of the target object is the "standing" state.
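The chain of checks just described can be sketched as follows (illustrative only; the pitch-angle band typical of viewing a phone while standing is an assumption, not a value from the disclosure):

```python
def is_standing(map_app_running, user_watching_screen, pitch_deg,
                standing_pitch_range=(30.0, 70.0)):
    """Chain of checks from the method above: the map app is running, the
    user is looking at the screen (e.g. detected via the front camera),
    and the device pitch from the inertial sensors lies within the angle
    band typical of viewing a phone while standing. The band is an
    illustrative assumption."""
    if not map_app_running:
        return False
    if not user_watching_screen:
        return False
    lo, hi = standing_pitch_range
    return lo <= pitch_deg <= hi
```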
As a further optimization of the above method, in order to guarantee the accuracy of the pickup area of the image acquisition equipment, each image capture equipment disclosed in the above embodiments of the present application can be one capable of detecting changes in its own position and angle: an inertial measurement system (comprising a magnetic compass, an accelerometer and a gyroscope) is fixed on the image capture equipment and made to measure periodically or on trigger, and by judging and processing the changes in the measurement results, self-calibration is realized (for example, calibrating the alignment angle information of the imaging equipment); naturally, a calibration request can also be output.
The method disclosed in the above embodiments of the present application can also include:
judging whether a calibration request uploaded by an image capture equipment is received; if so, generating calibration information, the calibration information including at least the number information and location information of the image capture equipment requiring calibration.
As a further optimization of the above method, when tracking the target object, the motion trajectory of the target object can be constrained by the paths on the map information, so as to predict the moving route of the target object and select an image capture equipment within a proper range as the target image acquisition equipment for the next moment. In view of this, the method disclosed in the above embodiments of the present application can also include:
predicting the moving route of the target object from the path information on the map information and the motion direction information in the moving parameter information, and predicting the target area corresponding to the next moment according to the moving route and the speed information in the moving parameter information.
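A minimal sketch of this next-moment prediction, for illustration only: it dead-reckons from heading and speed and returns a circular search area. The radius margin is an invented value, and the map-path constraint described above (snapping the heading to the nearest path) is omitted here.

```python
import math

def predict_target_area(position, heading_deg, speed, dt, radius_margin=5.0):
    """Dead-reckon the next-moment position from the reported heading and
    speed, and return a circular target area around it. `radius_margin`
    (metres) is an illustrative allowance for prediction error."""
    rad = math.radians(heading_deg)
    nx = position[0] + speed * dt * math.cos(rad)
    ny = position[1] + speed * dt * math.sin(rad)
    return (nx, ny), speed * dt + radius_margin   # centre and search radius
```

Any camera whose coverage intersects the returned circle would be selected as a next-moment target image acquisition equipment.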
As a further optimization, the above method can obtain the motion trajectory or profile of the target object through the moving parameter data reported periodically by the target object. In order to reduce the power consumption and complexity of the target object, the sensor elements of the device carried by the target object can be used to measure its moving parameter information, such as acceleration, speed direction and orientation data. That is, the moving parameter information includes: the acceleration, speed direction and orientation data of the target object obtained from measurements by the sensor elements carried by the target object.
In order to position the target object still more accurately, when the motion state of the candidate target is computed in the above method, object identification and positioning can also be performed on the candidate target by means of acoustic ranging. This requires installing acoustic equipment on the image acquisition equipment, including a microphone or microphone array and a loudspeaker or sound generator. Since the coordinates of these image capture equipments are themselves known, the acoustic equipment installed on them can perform acoustic ranging and extract the voiceprint features of the measured object; the microphone array, together with an analysis system, then performs the object identification and positioning functions.
To further optimize the above method, when the determination of the motion state of the candidate target is completed, the comprehensive feature information of the measured object can be recorded, consisting of its type, state, motion features (including cadence, leg speed, maximum speed, maximum acceleration, etc.) and visual feature information such as face, profile and vehicle license plate; the candidate target equipment is then requested to report its wireless device identity (such as the IMSI, IMEI or MAC address of the wireless device) and identity identifier (such as the user's ID card number or driver licence number), completing the binding between the comprehensive features of the measured object and its wireless device identity or identity identifier. This binding relationship can be used to improve identification in subsequent positioning. Further, this binding relationship, together with the wireless device identity, identity identifier and visual features, can be saved and passed to other systems as needed. When determining a target object, the extracted comprehensive features of the target equipment can be compared with the binding relationships in a known feature database, completing the confirmation of the identity of the target equipment.
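The binding just described is, in essence, a two-way association between an identity and a feature record. A minimal in-memory sketch (illustrative only; the class and its storage layout are assumptions, not part of the disclosed system):

```python
class BindingStore:
    """Two-way store binding a wireless device identity / identity
    identifier to the comprehensive features (type, state, visual
    features) of the measured object."""

    def __init__(self):
        self.by_identity = {}    # identity -> features dict
        self.by_features = {}    # hashable feature key -> identity

    def bind(self, identity, features):
        """Record the binding in both directions."""
        self.by_identity[identity] = features
        self.by_features[tuple(sorted(features.items()))] = identity

    def features_for(self, identity):
        return self.by_identity.get(identity)

    def identity_for(self, features):
        return self.by_features.get(tuple(sorted(features.items())))
```

In practice the feature-to-identity direction would use similarity search rather than exact key lookup; the exact-key form here only illustrates the bidirectional structure.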
In the above scheme, the target parameter information or candidate target parameter information can be obtained via the bound content described above; that is, the above method can also include:
obtaining the wireless device identity or identity identifier of the target object;
obtaining, according to the stored binding information, the type information, state information and visual feature information matching the wireless device identity or identity identifier, and taking that type information, state information and visual feature information as the target parameter information;
the candidate target parameter information is then the type information, state information and visual feature information of the candidate target;
or
obtaining the wireless device identity or identity identifier of the candidate target;
obtaining, according to the stored binding information, the type information, state information and visual feature information matching the wireless device identity or identity identifier, and taking that type information, state information and visual feature information as the candidate target parameter information;
the target parameter information is then type information, state information and visual feature information;
or
performing computer vision processing on the imaging data to obtain the type information, state information and visual feature information of the candidate target;
obtaining, according to the stored binding information, the wireless device identity or identity identifier matching that type information, state information and visual feature information, and taking the wireless device identity or identity identifier as the candidate target parameter information;
the target parameter information is then a wireless device identity or identity identifier.
Specifically, the above method can also include:
obtaining the type information, state information, visual feature information, wireless device identity and identity identifier of the target object;
obtaining the type information, state information and visual feature information of the candidate target and its reported wireless device identity and identity identifier;
after the wireless device identity or identity identifier reported by the target object or the candidate target is received, confirming the type information, state information and visual feature information of the target object or candidate target matching that wireless device identity or identity identifier, and binding the confirmed type information, state information and visual feature information with the wireless device identity or identity identifier;
storing the binding information. Using the stored binding information, the user can obtain the corresponding type information, state information and visual feature information through the wireless device identity and identity identifier, and likewise obtain the wireless device identity and identity identifier information from the type information, state information and visual feature information.
Corresponding to the above method, disclosed herein is also an object locating system, referring to Figure 6, comprising:
a data acquisition unit 100, for obtaining the target parameter information and moving parameter information of the target object, the moving parameter information being obtained by the target object through measurement with its own sensor devices and through positioning with a positioning system;
a zone location unit 200, for determining, from the three-dimensional coordinates and the precision parameter in the moving parameter information, the target area in which the target object is located;
an imaging acquisition unit 300, for determining the target image acquisition equipment needed to cover the target area and obtaining the imaging data of the target image acquisition equipment;
a visual processing unit 400, for identifying candidate target objects according to the imaging data and obtaining the feature information of the candidate objects;
a motion state computing unit 500, for tracking the candidate target and determining the motion state of the candidate target according to the tracking result, the map information, the location information of the image acquisition equipment and the directional information of the image acquisition equipment;
a candidate parameter acquisition unit 600, for obtaining candidate target parameter information whose type is consistent with that of the target parameter information;
a matching unit 700, for taking the candidate target corresponding to the candidate target parameter information with the highest matching degree to the target parameter information as the target object;
a transmission unit 800, for sending the motion state that matches the target object to the target object.
The object locating system disclosed in the above embodiments of the present application first determines the moving parameter information of the target object by means of its positioning system; determines the regional scope in which the target object is located according to the moving parameter information; performs computer vision processing on the imaging information obtained from the image capture equipment monitoring the target area to determine the candidate targets in the target area; tracks each candidate target and determines the motion state of the candidate target according to the tracking result, the map information, the location information of the image acquisition equipment and the directional information of the image acquisition equipment; then determines the candidate target serving as the target object according to the matching result between the target parameter information and the candidate target parameter information; and finally outputs the motion state corresponding to that candidate target as the positioning result. It can be seen that in this process, when the motion state of the target object is determined according to the tracking result, the map information, the location information of the image acquisition equipment and the directional information of the image acquisition equipment, the result is not interfered with by buildings and the like, and therefore has higher precision.
Corresponding to the above method, the target parameter information is the moving parameter information of the target object, and the candidate target parameter information is the motion state of the candidate target.
Corresponding to the above method, the matching unit in the system disclosed in the above embodiments of the present application comprises:
a model establishing unit, for establishing, based on the motion state of each candidate target, a motion state model corresponding to the candidate target, the motion state model storing a motion state template corresponding to each candidate target;
a first sub-matching unit, for matching the moving parameter information against the motion state templates of the candidate targets in the motion state model and taking the candidate target corresponding to the motion state template with the highest matching degree as the target object.
Corresponding to the above method, the above system can also include a target motion profile establishing unit and a candidate motion profile establishing unit;
the target motion profile establishing unit, for forming the target motion profile or motion data group based on the moving parameter information of the target object over a continuous preset time period;
the candidate motion profile establishing unit, for forming the candidate motion profile or motion data group, over the preset time period, of the candidate target generated based on the tracking result, the map information, the location information of the image acquisition equipment and the direction of the image acquisition equipment;
in this embodiment, the target parameter information is the target motion profile or motion data group, and the candidate target parameter information is the candidate motion profile or motion data group.
Corresponding to the above method, the system disclosed in the above embodiments of the present application can also include an object time-moving parameter information set establishing unit and a candidate time-motion state set establishing unit;
the object time-moving parameter information set establishing unit, for establishing, based on the moving parameter information of the target object obtained at discrete moments within a preset time period, an object time-moving parameter information set corresponding to the target object, the object time-moving parameter information set including the various discrete moments in the preset time period and the moving parameter information of the target object corresponding to each moment;
the candidate time-motion state set establishing unit, for establishing a candidate time-motion state set based on the tracking result, the map information, the location information of the image acquisition equipment and the directional information of the image acquisition equipment, the candidate time-motion state set including the various discrete moments in the preset time period and the motion state information of the candidate target corresponding to each moment;
in this embodiment, the target parameter information is the object time-moving parameter information set, and the candidate target parameter information is the candidate time-motion state set.
Corresponding to the above method, in the above system of the present application, the visual processing unit is also used to perform computer vision processing on the imaging data to obtain the type information, state information and visual feature information of the candidate target;
in this embodiment, the target parameter information is the type information, state information and visual feature information uploaded by the target object; the candidate target parameter information is the type information, state information and visual feature information of the candidate target.
Corresponding to the above method, in the above system of the present application, the target parameter information and candidate target parameter information can take data forms other than the above. The present application can also use, as the target parameter information and candidate target parameter information for matching, identification data with uniqueness possessed by the target object and the candidate target; which kind of identification data is specifically selected as the target parameter information and candidate target parameter information can be set according to user demand, as long as the identification data has uniqueness. For example, in the technical solutions disclosed in the above embodiments of the present application, the target parameter information is the obtained wireless device identity or identity identifier of the target object, and the candidate target parameter information is the obtained wireless device identity or identity identifier of the candidate target.
Corresponding to the above method, the system disclosed in the above embodiments of the present application further includes:
a failure location unit, for judging whether a calibration request uploaded by an image capture equipment is received, and if so, generating calibration information, the calibration information including at least the number information and location information of the image capture equipment requiring calibration.
Corresponding to the above method, the system disclosed in the above embodiments of the present application further includes:
a target area predicting unit, for predicting the moving route of the target object from the path information on the map information and the motion direction information in the moving parameter information, and predicting the target area corresponding to the next moment according to the moving route and the speed information in the moving parameter information.
Corresponding to the above method, in the system disclosed in the above embodiments of the present application, the moving parameter information includes: the acceleration, speed direction and orientation data of the target object obtained from measurements by the sensor elements carried by the target object.
Corresponding to the above method, the system disclosed in the above embodiments of the present application further includes:
a secure information storage unit, for determining the type information, state information and visual feature information of the candidate target, requesting the candidate target to report its wireless device identity and identity identifier, binding the type information, state information and visual feature information of the candidate target with the wireless device identity or identity identifier after the wireless device identity or identity identifier reported by the candidate target is received, and storing the binding information to a preset database. Using the binding information, the type information, state information and visual feature information can be obtained from the wireless device identity and identity identifier, and likewise the wireless device identity and identity identifier information can be obtained from the type information, state information and visual feature information.
It corresponds to the above method, data acquisition unit described in above system is also used to: obtaining the wireless of target object
device identifier and identity identifier, type information, status information, and visual feature information of the target object, as well as the wireless device identifier and identity identifier of the candidate object;
the object localization system further includes: a target parameter information acquisition unit which, when the target parameter information is type information, status information, and visual feature information, obtains, according to the binding information in the preset database, the type information, status information, and visual feature information that match the wireless device identifier or identity identifier of the target object, and uses them as the target parameter information, and which, when the target parameter information is a wireless device identifier or an identity identifier, obtains, according to the binding information in the preset database, the wireless device identifier or identity identifier that matches the type information, status information, and visual feature information of the target object, and uses it as the target parameter information; and
a candidate object parameter information acquisition unit which, when the candidate object parameter information is type information, status information, and visual feature information, obtains, according to the binding information in the preset database, the type information, status information, and visual feature information that match the wireless device identifier or identity identifier of the candidate object, and uses them as the candidate object parameter information, and which, when the candidate object parameter information is a wireless device identifier or an identity identifier, obtains, according to the binding information in the preset database, the wireless device identifier or identity identifier that matches the type information, status information, and visual feature information of the candidate object, and uses it as the candidate object parameter information.
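The two-way lookup between identifiers and bound attributes described above can be sketched as a small bidirectional map. This is an illustrative sketch only, not the patented implementation; all names (`BindingDB`, `bind`, `attrs_for`, `ident_for`) and the data model are assumptions.

```python
# Illustrative sketch of the binding-table lookup described above.
# The patent does not specify a data model; this is one assumption.

class BindingDB:
    """Preset database mapping a wireless device / identity identifier
    to its bound (type, status, visual-feature) record, and back."""

    def __init__(self):
        self._by_id = {}     # identifier -> attribute record
        self._by_attrs = {}  # attribute record -> identifier

    def bind(self, ident, type_info, status_info, visual_features):
        record = (type_info, status_info, tuple(visual_features))
        self._by_id[ident] = record
        self._by_attrs[record] = ident

    def attrs_for(self, ident):
        # identifier -> (type, status, visual features), or None if unbound
        return self._by_id.get(ident)

    def ident_for(self, type_info, status_info, visual_features):
        # (type, status, visual features) -> identifier, or None if unbound
        return self._by_attrs.get((type_info, status_info, tuple(visual_features)))

db = BindingDB()
db.bind("MAC-00:11:22", "pedestrian", "walking", [0.1, 0.9])
assert db.attrs_for("MAC-00:11:22")[0] == "pedestrian"
assert db.ident_for("pedestrian", "walking", [0.1, 0.9]) == "MAC-00:11:22"
```

A dictionary pair suffices here because the claims treat the binding as an exact, one-to-one association in either direction.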
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be understood by reference to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (20)
1. An object localization method, characterized by comprising:
obtaining target parameter information and motion parameter information of a target object, the motion parameter information being obtained by the target object through measurement with its own sensors and positioning with a positioning system;
determining, from the three-dimensional coordinates and a precision parameter in the motion parameter information, a target area in which the target object is located;
determining the image acquisition devices required to cover the target area;
obtaining imaging data from the image acquisition devices;
performing computer vision processing on the imaging data to determine candidate objects in the imaging data;
tracking the candidate objects and determining, from the tracking result, map information, the location information of the image acquisition devices, and the orientation information of the image acquisition devices, the motion state of each candidate object, the motion state of a candidate object referring to the three-dimensional coordinates, speed, acceleration, and direction of the candidate object;
obtaining candidate object parameter information of the same type as the target parameter information;
taking the candidate object corresponding to the candidate object parameter information with the highest degree of match to the target parameter information as the target object; and
sending the motion state of the candidate object that matches the target object to the target object;
wherein the target parameter information is the motion parameter information of the target object;
the candidate object parameter information is the motion state of the candidate object; and
taking the candidate object corresponding to the candidate object parameter information with the highest degree of match to the target parameter information as the target object comprises:
establishing, based on the motion state of each candidate object, a motion state model corresponding to the candidate objects, the motion state model storing a motion state template corresponding to each candidate object; and
matching the motion parameter information against the motion state templates of the candidate objects in the motion state model, and taking the candidate object corresponding to the motion state template with the highest degree of match as the target object.
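The template-matching step of claim 1 can be sketched as a nearest-neighbor search over motion-state vectors. This is a minimal sketch under assumptions the patent does not fix: the state is modeled as a `(x, y, z, speed, acceleration, heading)` tuple, the match score is Euclidean distance, and the names `state_distance` and `match_target` are hypothetical.

```python
# Illustrative sketch of the template-matching step in claim 1.
# The candidate whose stored motion-state template is closest to the
# target's self-reported motion parameters is selected as the target.

import math

def state_distance(a, b):
    """Euclidean distance between two motion-state vectors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def match_target(motion_params, templates):
    """templates: dict mapping candidate ID -> motion-state template.
    Returns the ID of the best-matching (minimum-distance) candidate."""
    return min(templates, key=lambda cid: state_distance(motion_params, templates[cid]))

templates = {
    "cand_A": (10.0, 5.0, 0.0, 1.2, 0.1, 90.0),
    "cand_B": (42.0, 7.0, 0.0, 0.3, 0.0, 180.0),
}
sensed = (10.5, 5.2, 0.0, 1.1, 0.1, 92.0)  # from the target's own sensors
assert match_target(sensed, templates) == "cand_A"
```

In practice the distance metric would need per-component scaling (meters, m/s, and degrees are not commensurate), but the selection-by-highest-match structure is the same.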
2. The object localization method according to claim 1, characterized in that the target parameter information is: a target motion profile or motion data set formed from the motion parameter information of the target object over a continuous preset time period; and
the candidate object parameter information is: a candidate motion profile or motion data set of a candidate object generated over the preset time period based on the tracking result, the map information, the location information of the image acquisition devices, and the orientation of the image acquisition devices.
3. The object localization method according to claim 1, characterized in that the target parameter information is: a target time-motion parameter information set corresponding to the target object, established from discrete motion parameter information of the target object obtained within a preset time period, the target time-motion parameter information set comprising the discrete time instants within the preset time period and the corresponding motion parameter information of the target object; and
the candidate object parameter information is: a candidate time-motion state set established based on the tracking result, the map information, the location information of the image acquisition devices, and the orientation information of the image acquisition devices, the candidate time-motion state set comprising the discrete time instants within the preset time period and the corresponding motion state information of the candidate object.
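Matching the two discrete time-keyed sets of claim 3 amounts to comparing motion states sampled at the same instants. A minimal sketch, assuming the sets are dictionaries keyed by timestamp and the score is the average per-instant Euclidean distance (the patent does not fix the metric; `set_distance` is a hypothetical name):

```python
# Illustrative sketch of comparing the time-motion sets of claim 3.
# Both sides map discrete timestamps -> motion-state vectors; only
# timestamps present in both sets are compared.

import math

def set_distance(target_set, candidate_set):
    """Average per-instant distance between two time-keyed state sets;
    smaller means a better match. Infinite if no common instants."""
    common = sorted(set(target_set) & set(candidate_set))
    if not common:
        return float("inf")
    total = 0.0
    for t in common:
        a, b = target_set[t], candidate_set[t]
        total += math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return total / len(common)

target = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
cand_a = {0: (0.1, 0.0), 1: (1.1, 0.1), 2: (2.0, 0.1)}
cand_b = {0: (9.0, 9.0), 1: (9.5, 9.0), 2: (10.0, 9.0)}
assert set_distance(target, cand_a) < set_distance(target, cand_b)
```

Using only common timestamps tolerates dropped samples on either side, which is why the claim keys both sets to the same discrete instants within the preset time period.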
4. The object localization method according to claim 1, characterized in that the target parameter information is: the obtained type information, status information, and visual feature information of the target object; and
the candidate object parameter information is: the type information, status information, and visual feature information of a candidate object obtained by performing computer vision processing on the imaging data.
5. The object localization method according to claim 1, characterized in that:
the target parameter information is the obtained wireless device identifier or identity identifier of the target object; and
the candidate object parameter information is the obtained wireless device identifier or identity identifier of the candidate object.
6. The object localization method according to claim 1, characterized by further comprising:
judging whether a calibration request uploaded by an image acquisition device has been received, and if so, generating calibration information, the calibration information including at least the number information and location information of the image acquisition device requiring calibration.
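The calibration step of claim 6 is a simple request-to-record lookup. A sketch under assumptions (the registry shape, field names, and `handle_calibration_request` are all hypothetical; the patent only requires that the output carry the device's number and location):

```python
# Illustrative sketch of claim 6's calibration handling: when a camera
# uploads a calibration request, emit calibration information carrying
# that camera's number and location.

def handle_calibration_request(request, camera_registry):
    """request: {'camera_id': ...}; camera_registry maps camera_id ->
    {'number': ..., 'location': ...}. Returns calibration info, or
    None when there is no request or the camera is unknown."""
    if request is None:
        return None
    cam = camera_registry.get(request["camera_id"])
    if cam is None:
        return None
    return {"number": cam["number"], "location": cam["location"]}

registry = {"cam7": {"number": "07", "location": (12.0, 4.5)}}
info = handle_calibration_request({"camera_id": "cam7"}, registry)
assert info == {"number": "07", "location": (12.0, 4.5)}
```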
7. The object localization method according to claim 1, characterized by further comprising:
predicting the moving route of the target object from the route information in the map information and the motion direction information in the motion parameter information, and predicting the target area corresponding to the next time instant from the moving route and the speed information in the motion parameter information.
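The next-instant prediction of claim 7 can be sketched as dead reckoning: advance the last known position along the heading by `speed * dt` and re-center the search area there. The formula, the area-growth term, and the name `predict_target_area` are assumptions for illustration; the patent itself only specifies that route, direction, and speed feed the prediction.

```python
# Illustrative dead-reckoning sketch of claim 7's next-area prediction.

import math

def predict_target_area(x, y, heading_deg, speed, dt, precision):
    """Returns (center_x, center_y, radius) of the predicted target
    area at the next instant. `precision` is the positioning-error
    radius carried in the motion parameter information; the radius is
    grown by half the travelled distance to cover motion uncertainty."""
    heading = math.radians(heading_deg)
    nx = x + speed * dt * math.cos(heading)
    ny = y + speed * dt * math.sin(heading)
    return nx, ny, precision + 0.5 * speed * dt

cx, cy, r = predict_target_area(0.0, 0.0, 0.0, 2.0, 1.0, 3.0)
assert abs(cx - 2.0) < 1e-9 and abs(cy) < 1e-9 and abs(r - 4.0) < 1e-9
```

A map-aware variant would snap the predicted point onto the route polyline from the map information rather than extrapolating in a straight line.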
8. The object localization method according to claim 1, characterized in that the motion parameter information comprises:
speed information, acceleration information, and direction information of the target object obtained through measurement by sensor elements carried by the target object.
9. The object localization method according to claim 1, characterized by further comprising:
obtaining the type information, status information, visual feature information, wireless device identifier, and identity identifier of the target object;
obtaining the type information, status information, and visual feature information of a candidate object, together with the wireless device identifier and identity identifier it reports;
after obtaining the wireless device identifier or identity identifier reported by the target object or the candidate object, confirming the type information, status information, and visual feature information of the matching target object or candidate object, and binding that type information, status information, and visual feature information to the wireless device identifier or identity identifier; and
storing the binding information.
10. The object localization method according to claim 9, characterized by further comprising:
obtaining the wireless device identifier or identity identifier of the target object, and obtaining, according to the stored binding information, the type information, status information, and visual feature information that match the wireless device identifier or identity identifier, and using the type information, status information, and visual feature information as the target parameter information, wherein the candidate object parameter information is the type information, status information, and visual feature information of the candidate object;
or
obtaining the wireless device identifier or identity identifier of a candidate object, and obtaining, according to the stored binding information, the type information, status information, and visual feature information that match the wireless device identifier or identity identifier, and using the type information, status information, and visual feature information as the candidate object parameter information, wherein the target parameter information is type information, status information, and visual feature information;
or
performing computer vision processing on the imaging data to obtain the type information, status information, and visual feature information of a candidate object, and obtaining, according to the stored binding information, the wireless device identifier or identity identifier that matches the type information, status information, and visual feature information, and using the wireless device identifier or identity identifier as the candidate object parameter information, wherein the target parameter information is a wireless device identifier or identity identifier.
11. An object localization system, characterized by comprising:
a data acquisition unit for obtaining target parameter information and motion parameter information of a target object, the motion parameter information being obtained by the target object through measurement with its own sensors and positioning with a positioning system;
a zone locating unit for determining, from the three-dimensional coordinates and a precision parameter in the motion parameter information, a target area in which the target object is located;
an imaging acquisition unit for determining the image acquisition devices required to cover the target area and obtaining imaging data from the image acquisition devices;
a visual processing unit for identifying candidate objects from the imaging data and obtaining characteristic information of the candidate objects;
a motion state computing unit for tracking the candidate objects and determining, from the tracking result, map information, the location information of the image acquisition devices, and the orientation information of the image acquisition devices, the motion state of each candidate object, the motion state of a candidate object referring to the three-dimensional coordinates, speed, acceleration, and direction of the candidate object;
a candidate parameter acquisition unit for obtaining candidate object parameter information of the same type as the target parameter information;
a matching unit for taking the candidate object corresponding to the candidate object parameter information with the highest degree of match to the target parameter information as the target object; and
a transmission unit for sending the motion state of the candidate object that matches the target object to the target object;
wherein the target parameter information is the motion parameter information of the target object;
the candidate object parameter information is the motion state of the candidate object; and
the matching unit comprises:
a model establishing unit for establishing, based on the motion state of each candidate object, a motion state model corresponding to the candidate objects, the motion state model storing a motion state template corresponding to each candidate object; and
a first sub-matching unit for matching the motion parameter information against the motion state templates of the candidate objects in the motion state model and taking the candidate object corresponding to the motion state template with the highest degree of match as the target object.
12. The object localization system according to claim 11, characterized by further comprising:
a target motion profile establishing unit for forming a target motion profile or motion data set based on the motion parameter information of the target object over a continuous preset time period; and
a candidate motion profile establishing unit for generating, over the preset time period, a candidate motion profile or motion data set of a candidate object based on the tracking result, the map information, the location information of the image acquisition devices, and the orientation of the image acquisition devices;
wherein the target parameter information is the target motion profile or motion data set, and the candidate object parameter information is the candidate motion profile or motion data set.
13. The object localization system according to claim 11, characterized by further comprising:
a target time-motion information set establishing unit for establishing, based on discrete motion parameter information of the target object obtained within a preset time period, a target time-motion parameter information set corresponding to the target object, the target time-motion parameter information set comprising the discrete time instants within the preset time period and the corresponding motion parameter information of the target object; and
a candidate time-motion state set establishing unit for establishing a discrete candidate time-motion state set based on the tracking result, the map information, the location information of the image acquisition devices, and the orientation information of the image acquisition devices, the candidate time-motion state set comprising the discrete time instants within the preset time period and the corresponding motion state information of the candidate object;
wherein the target parameter information is the target time-motion parameter information set, and the candidate object parameter information is the candidate time-motion state set.
14. The object localization system according to claim 11, characterized in that the visual processing unit is further used for obtaining, through computer vision processing of the imaging data, the type information, status information, and visual feature information of the candidate objects;
the target parameter information is: the type information, status information, and visual feature information uploaded by the target object; and
the candidate object parameter information is: the type information, status information, and visual feature information of the candidate object.
15. The object localization system according to claim 11, characterized in that the target parameter information is the wireless device identifier or identity identifier of the target object, and the candidate object parameter information is the wireless device identifier or identity identifier of the candidate object.
16. The object localization system according to claim 11, characterized by further comprising:
a failure locating unit for judging whether a calibration request uploaded by an image acquisition device has been received, and if so, generating calibration information, the calibration information including at least the number information and location information of the image acquisition device requiring calibration.
17. The object localization system according to claim 11, characterized by further comprising:
a target area predicting unit for predicting the moving route of the target object from the route information in the map information and the motion direction information in the motion parameter information, and predicting the target area corresponding to the next time instant from the moving route and the speed information in the motion parameter information.
18. The object localization system according to claim 11, characterized in that the motion parameter information comprises:
speed information, acceleration information, and direction information of the target object obtained through measurement by sensor elements carried by the target object.
19. The object localization system according to claim 11, characterized by further comprising:
a secure information storage unit for determining the type information, status information, and visual feature information of a candidate object, requesting the candidate object to report its wireless device identifier and identity identifier, and, after the wireless device identifier or identity identifier reported by the candidate object is obtained, binding the type information, status information, and visual feature information of the candidate object to the wireless device identifier or identity identifier and storing the binding information to a preset database.
20. The object localization system according to claim 19, characterized in that the data acquisition unit is further used for obtaining the wireless device identifier and identity identifier, type information, status information, and visual feature information of the target object, as well as the wireless device identifier and identity identifier of the candidate object;
the object localization system further comprises:
a target parameter information acquisition unit which, when the target parameter information is type information, status information, and visual feature information, obtains, according to the binding information in the preset database, the type information, status information, and visual feature information that match the wireless device identifier or identity identifier of the target object, and uses them as the target parameter information, and which, when the target parameter information is a wireless device identifier or an identity identifier, obtains, according to the binding information in the preset database, the wireless device identifier or identity identifier that matches the type information, status information, and visual feature information of the target object, and uses it as the target parameter information; and
a candidate object parameter information acquisition unit which, when the candidate object parameter information is type information, status information, and visual feature information, obtains, according to the binding information in the preset database, the type information, status information, and visual feature information that match the wireless device identifier or identity identifier of the candidate object, and uses them as the candidate object parameter information, and which, when the candidate object parameter information is a wireless device identifier or an identity identifier, obtains, according to the binding information in the preset database, the wireless device identifier or identity identifier that matches the type information, status information, and visual feature information of the candidate object, and uses it as the candidate object parameter information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610279062.XA CN105975967B (en) | 2016-04-29 | 2016-04-29 | A kind of object localization method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105975967A CN105975967A (en) | 2016-09-28 |
CN105975967B true CN105975967B (en) | 2019-04-23 |
Family
ID=56994205
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610279062.XA Expired - Fee Related CN105975967B (en) | 2016-04-29 | 2016-04-29 | A kind of object localization method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105975967B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3061392B1 (en) * | 2016-12-27 | 2019-08-30 | Somfy Sas | PRESENCE CONTROL METHOD AND MONITORING SYSTEM |
CN107063189A (en) * | 2017-01-19 | 2017-08-18 | 上海勤融信息科技有限公司 | The alignment system and method for view-based access control model |
CN107133583A (en) * | 2017-04-27 | 2017-09-05 | 深圳前海弘稼科技有限公司 | One kind plants plant information acquisition method and device |
CN107024208A (en) * | 2017-05-18 | 2017-08-08 | 上海逍森自动化科技有限公司 | A kind of localization method and its positioner |
CN107356229B (en) * | 2017-07-07 | 2021-01-05 | 中国电子科技集团公司电子科学研究院 | Indoor positioning method and device |
CN108364314B (en) * | 2018-01-12 | 2021-01-29 | 香港科技大学深圳研究院 | Positioning method, system and medium |
CN108344416B (en) * | 2018-02-01 | 2021-06-01 | 感知智能科技新加坡有限公司 | Positioning method for automatically matching target based on map information |
CN110231039A (en) * | 2019-06-27 | 2019-09-13 | 维沃移动通信有限公司 | A kind of location information modification method and terminal device |
CN112577475A (en) * | 2021-01-14 | 2021-03-30 | 天津希格玛微电子技术有限公司 | Video ranging method capable of effectively reducing power consumption |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101916437B (en) * | 2010-06-18 | 2014-03-26 | 中国科学院计算技术研究所 | Method and system for positioning target based on multi-visual information |
CN103139700B (en) * | 2011-11-28 | 2017-06-27 | 联想(北京)有限公司 | A kind of method and system of terminal positioning |
US8874135B2 (en) * | 2012-11-30 | 2014-10-28 | Cambridge Silicon Radio Limited | Indoor positioning using camera and optical signal |
CN103249142B (en) * | 2013-04-26 | 2016-08-24 | 东莞宇龙通信科技有限公司 | A kind of localization method, system and mobile terminal |
CN103841518B (en) * | 2014-03-03 | 2017-12-26 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN104936283B (en) * | 2014-03-21 | 2018-12-25 | 中国电信股份有限公司 | Indoor orientation method, server and system |
CN104378735B (en) * | 2014-11-13 | 2018-11-13 | 无锡儒安科技有限公司 | Indoor orientation method, client and server |
CN104700408B (en) * | 2015-03-11 | 2017-10-17 | 中国电子科技集团公司第二十八研究所 | A kind of indoor single goal localization method based on camera network |
- 2016-04-29: CN application CN201610279062.XA granted as CN105975967B (en); status: Expired - Fee Related (not active)
Also Published As
Publication number | Publication date |
---|---|
CN105975967A (en) | 2016-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105975967B (en) | A kind of object localization method and system | |
EP3213031B1 (en) | Simultaneous localization and mapping by using earth's magnetic fields | |
Li et al. | A reliable and accurate indoor localization method using phone inertial sensors | |
EP2885609B1 (en) | Crowd-sourcing indoor locations | |
CN107111641B (en) | Location estimation for updating a database of location data | |
US20130211718A1 (en) | Apparatus and method for providing indoor navigation service | |
CN108151747A (en) | A kind of indoor locating system and localization method merged using acoustical signal with inertial navigation | |
CN105987694B (en) | The method and apparatus for identifying the user of mobile device | |
WO2013155919A1 (en) | Positioning method and system | |
WO2016068742A1 (en) | Method and system for indoor positioning of a mobile terminal | |
CN106461786A (en) | Indoor global positioning system | |
CN109076191A (en) | Monitoring system and method for the monitoring based on video camera | |
Tung et al. | Use of phone sensors to enhance distracted pedestrians’ safety | |
WO2014113014A1 (en) | Method, apparatus and computer program product for orienting a smartphone display and estimating direction of travel of a pedestrian | |
JP5742794B2 (en) | Inertial navigation device and program | |
JP2018061114A (en) | Monitoring device and monitoring method | |
CN106600652A (en) | Panorama camera positioning method based on artificial neural network | |
KR20170032147A (en) | A terminal for measuring a position and method thereof | |
GB2586099A (en) | An apparatus and method for person detection, tracking and identification utilizing wireless signals and images | |
CN110741271B (en) | System and method for locating building doorways | |
KR100939731B1 (en) | Terminal and method for measuring position using location of position identifying tag | |
Lu et al. | Speedtalker: Automobile speed estimation via mobile phones | |
CN108981729A (en) | Vehicle positioning method and device | |
Lan et al. | An indoor locationtracking system for smart parking | |
Niu et al. | Research on indoor positioning on inertial navigation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 2019-04-23; Termination date: 2021-04-29