CN110443850A - Localization method and device for a target object, storage medium, and electronic device - Google Patents
Localization method and device for a target object, storage medium, and electronic device Download PDF Info
- Publication number
- CN110443850A CN110443850A CN201910718260.5A CN201910718260A CN110443850A CN 110443850 A CN110443850 A CN 110443850A CN 201910718260 A CN201910718260 A CN 201910718260A CN 110443850 A CN110443850 A CN 110443850A
- Authority
- CN
- China
- Prior art keywords
- information
- target object
- flat image
- feature object
- location information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Abstract
The present invention provides a localization method and device for a target object, a storage medium, and an electronic device. The method includes: photographing a target object to be located to obtain a first flat image containing the target object, where the target object is located in a specified region for which a three-dimensional virtual reality space has been constructed; obtaining, from the first flat image, first location information and first appearance information of the target object, second location information and second appearance information of N first feature objects in the first flat image, and the mutual positional relationship between the target object and the N first feature objects; determining, in the three-dimensional virtual reality space according to the acquired N first feature objects, N second feature objects corresponding to the N first feature objects and N pieces of three-dimensional model information of the N second feature objects; and determining third location information of the target object in the specified region according to the N pieces of three-dimensional model information, the first location information, the first appearance information, the second location information, the second appearance information, and the mutual positional relationship.
Description
Technical field
The present invention relates to the field of communications, and in particular to a localization method and device for a target object, a storage medium, and an electronic device.
Background technique
With the development of wireless location technology, the wireless location technologies widely used in electric power systems (such as power plants, substations, and transmission and distribution networks) and in petrochemical, rail transit, and similar settings mainly include the Global Positioning System (GPS), the BeiDou Navigation Satellite System (BDS), Wireless Fidelity (WiFi), Bluetooth, infrared, and Ultra-Wideband (UWB).
However, all of these wireless location technologies have certain problems in combined indoor and outdoor applications, complex environments, and engineering implementation. For example, GPS/BeiDou positioning is difficult indoors, WiFi/Bluetooth positioning accuracy is insufficient, and UWB deployment is difficult to engineer and relatively costly.
For problems in the related art such as the difficulty and high cost of wireless location, no effective technical solution has yet been proposed.
Summary of the invention
Embodiments of the present invention provide a localization method and device for a target object, a storage medium, and an electronic device, so as at least to solve problems in the related art such as the difficulty and high cost of wireless location.
According to one embodiment of the present invention, a localization method for a target object is provided, comprising: photographing a target object to be located to obtain a first flat image containing the target object, wherein the target object is located in a specified region for which a three-dimensional virtual reality space has been constructed;
obtaining, from the first flat image, first location information and first appearance information of the target object, second location information and second appearance information of N first feature objects in the first flat image, and the mutual positional relationship between the target object and the N first feature objects, wherein N is an integer greater than 1;
determining, in the three-dimensional virtual reality space according to the acquired N first feature objects, N second feature objects corresponding to the N first feature objects and N pieces of three-dimensional model information of the N second feature objects;
determining third location information of the target object in the specified region according to the N pieces of three-dimensional model information, the first location information, the first appearance information, the second location information, the second appearance information, and the mutual positional relationship.
Optionally, determining, in the three-dimensional virtual reality space according to the acquired N first feature objects, the N second feature objects corresponding to the N first feature objects and the N pieces of three-dimensional model information of the N second feature objects comprises:
matching the N first feature objects one by one against K second feature objects in the three-dimensional virtual reality space, and determining the N second feature objects corresponding to the N first feature objects and the N pieces of three-dimensional model information of the N second feature objects; wherein the K second feature objects are feature objects labeled in advance in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
Optionally, determining the third location information of the target object in the specified region according to the N pieces of three-dimensional model information, the first location information, the first appearance information, the second location information, the second appearance information, and the mutual positional relationship comprises:
determining, according to the N pieces of three-dimensional model information, the second location information, the second appearance information, and the mutual positional relationship, the cone angle (field of view) and the camera aspect ratio of the image acquisition device used to shoot the first flat image;
determining the third location information of the target object in the specified region according to the cone angle, the camera aspect ratio, the first location information, and the first appearance information.
Optionally, determining the third location information of the target object in the specified region according to the N pieces of three-dimensional model information, the first location information, the first appearance information, the second location information, the second appearance information, and the mutual positional relationship comprises: obtaining a second flat image in the three-dimensional virtual reality space through a perspective projection transformation, wherein the first shooting angle of view of the first flat image is identical to the second shooting angle of view of the second flat image, so that the location information and appearance information of the feature objects in the first flat image and the second flat image are consistent; and determining the third location information of the target object in the specified region according to the first location information, the first appearance information, the second shooting angle of view of the second flat image, the mutual positional relationship, and the N pieces of three-dimensional model information.
According to another embodiment of the present invention, a localization device for a target object is further provided, comprising: a first acquisition module, configured to photograph a target object to be located and obtain a first flat image containing the target object, wherein the target object is located in a specified region for which a three-dimensional virtual reality space has been constructed; a second acquisition module, configured to obtain, from the first flat image, first location information and first appearance information of the target object, second location information and second appearance information of N first feature objects in the first flat image, and the mutual positional relationship between the target object and the N first feature objects, wherein N is an integer greater than 1; a first determination module, configured to determine, in the three-dimensional virtual reality space according to the acquired N first feature objects, N second feature objects corresponding to the N first feature objects and N pieces of three-dimensional model information of the N second feature objects; and a second determination module, configured to determine third location information of the target object in the specified region according to the N pieces of three-dimensional model information, the first location information, the first appearance information, the second location information, the second appearance information, and the mutual positional relationship.
Optionally, the first determination module is specifically configured to match the N first feature objects one by one against K second feature objects in the three-dimensional virtual reality space, and to determine the N second feature objects corresponding to the N first feature objects and the N pieces of three-dimensional model information of the N second feature objects; wherein the K second feature objects are feature objects labeled in advance in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
Optionally, the second determination module is further configured to determine, according to the N pieces of three-dimensional model information, the second location information, the second appearance information, and the mutual positional relationship, the cone angle and camera aspect ratio of the image acquisition device used to shoot the first flat image, and to determine the third location information of the target object in the specified region according to the cone angle, the camera aspect ratio, the first location information, and the first appearance information.
Optionally, the second determination module is further configured to obtain a second flat image in the three-dimensional virtual reality space through a perspective projection transformation, wherein the first shooting angle of view of the first flat image is identical to the second shooting angle of view of the second flat image, so that the location information and appearance information of the feature objects in the first flat image and the second flat image are consistent, and to determine the third location information of the target object in the specified region according to the first location information, the first appearance information, the second shooting angle of view of the second flat image, the mutual positional relationship, and the N pieces of three-dimensional model information.
According to another embodiment of the present invention, a storage medium is further provided, in which a computer program is stored, wherein the computer program is configured to execute, when run, the localization method for a target object described in any of the above items.
According to another embodiment of the present invention, an electronic device is further provided, comprising a memory and a processor, wherein a computer program is stored in the memory and the processor is configured to run the computer program to execute the localization method for a target object described in any of the above items.
Through the present invention, in a scene for which a three-dimensional virtual reality space has been constructed, a first flat image containing the target object in the specified region can be obtained by taking a photograph; the first location information and first appearance information of the target object in the first flat image are then determined, together with the second location information and second appearance information of N first feature objects in the first flat image and the mutual positional relationship between the target object and the N first feature objects; the N first feature objects are further mapped into the three-dimensional virtual reality space to determine N second feature objects corresponding to the N first feature objects and to obtain N pieces of three-dimensional model information of the N second feature objects; finally, the third location information of the target object in the specified region can be determined from the N pieces of three-dimensional model information, the first location information and first appearance information, the second location information and second appearance information, and the mutual positional relationship. This technical solution addresses problems in the related art, such as the difficulty and high cost of wireless location, for which no effective solution had been proposed: by combining a flat image with a three-dimensional virtual reality space, the position of the target object in the specified region can be determined, solving the difficulty of wireless location.
Detailed description of the invention
The drawings described herein are provided for a further understanding of the present invention and constitute a part of this application; the illustrative embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flowchart of an optional localization method for a target object according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of feature objects R1 and R2 in the two-dimensional image according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of feature objects R1 and R2 in the two-dimensional image and feature objects R'1 and R'2 in the three-dimensional image according to an embodiment of the present invention;
Fig. 4 is a diagram of the three-dimensional imaging perspective and projection principle according to an embodiment of the present invention;
Fig. 5 shows the view frustum information in the three-dimensional space according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the physical location of the target object in the three-dimensional virtual reality space according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of an actually photographed picture of reference objects and the target according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the reference object models in the three-dimensional space according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of the triangle formed by the reference object models and the fixed plane according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of the viewing angle matching the orientation of the reference objects in the photograph according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of casting a ray at the position corresponding to the photographed object from the viewing angle fitted in the three-dimensional virtual scene according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of determining the final position using other reference plane information in the three-dimensional virtual scene according to an embodiment of the present invention;
Fig. 13 is a schematic diagram of identifying the key feature points in the three-dimensional scene according to an embodiment of the present invention;
Fig. 14 is a schematic diagram of the two-dimensional image information according to an embodiment of the present invention;
Fig. 15 is a schematic diagram of the mapping relationship between feature points in the two-dimensional image and the three-dimensional virtual reality space according to an embodiment of the present invention;
Fig. 16 is a structural block diagram of the localization device for a target object according to an embodiment of the present invention.
Specific embodiment
Hereinafter, the present invention will be described in detail with reference to the accompanying drawings and in combination with the embodiments. It should be noted that, provided they do not conflict, the embodiments of this application and the features in the embodiments can be combined with each other.
It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of this specification are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence.
It should be noted that the wireless location technologies currently in wide use in electric power systems (such as power plants, substations, and transmission and distribution networks) and in petrochemical, rail transit, and similar settings mainly include GPS, BeiDou, WiFi, Bluetooth, infrared, UWB, and the like, but all of these technologies have certain problems in combined indoor and outdoor applications, complex environments, and engineering implementation, for example:
GPS/BeiDou is suitable for open outdoor areas, but indoor positioning is difficult;
WiFi/Bluetooth depends on radio signal attenuation models, and its positioning accuracy is insufficient;
UWB positioning is relatively accurate, but it places high demands on base station density and deployment; on-site implementation is difficult and the one-time investment is relatively costly.
In addition, all of the above wireless location technologies require the positioning target to carry an additional positioning auxiliary device, which raises convenience issues such as power supply and wearing comfort.
In view of the above technical problems, the following embodiments of the present invention provide a localization method for a target object to solve problems such as the difficulty and high cost of wireless location.
An embodiment of the present invention provides a localization method for a target object. Fig. 1 is a flowchart of an optional localization method for a target object according to an embodiment of the present invention. As shown in Fig. 1, the method comprises:
Step S102: photographing a target object to be located to obtain a first flat image containing the target object, wherein the target object is located in a specified region for which a three-dimensional virtual reality space has been constructed;
Step S104: obtaining, from the first flat image, first location information and first appearance information of the target object, second location information and second appearance information of N first feature objects in the first flat image, and the mutual positional relationship between the target object and the N first feature objects, wherein N is an integer greater than 1;
Step S106: determining, in the three-dimensional virtual reality space according to the acquired N first feature objects, N second feature objects corresponding to the N first feature objects and N pieces of three-dimensional model information of the N second feature objects;
Step S108: determining third location information of the target object in the specified region according to the N pieces of three-dimensional model information, the first location information, the first appearance information, the second location information, the second appearance information, and the mutual positional relationship.
In the embodiment of the present invention, high-accuracy three-dimensional modeling of the real environment of the region to be positioned can first be carried out to form a three-dimensional virtual reality space, and the feature objects in the three-dimensional virtual reality space (for example, landmark buildings and roads) are labeled.
When a target object enters the environment for which the three-dimensional virtual reality space has been constructed (i.e., the above specified region), the positioning target (such as a person, vehicle, piece of equipment, or article, corresponding to the target object to be located in the above embodiment) can be identified based on digital image recognition technology, and a first flat image can be obtained that contains the positioning target and some or all of the feature objects in the positioning target's environment (corresponding to the above N first feature objects).
Based on digital image recognition and feature extraction technology, the feature objects in the first flat image can be analyzed and extracted; the mutual positional relationships, within the first flat image, of the positioning target and all of the extracted feature objects can be confirmed; and the first location information and first appearance information of the target object and the second location information and second appearance information of the N first feature objects in the first flat image can be confirmed.
The recognized target object and the N first feature objects are then brought into the three-dimensional virtual reality space, and the mapping relationship of the target object and the N first feature objects in the three-dimensional virtual reality space can be determined through the above mutual positional relationship, first location information, first appearance information, second location information, and second appearance information.
Finally, the spatial position coordinates and three-dimensional model information of the target object in the three-dimensional virtual reality space can be determined from the target object's mapping relationship in that space, and the physical location of the target object in the real environment can then be determined from the first location information, the first appearance information, the second location information, the second appearance information, the mutual positional relationship, and the N pieces of three-dimensional model information, thereby achieving the purpose of locating the target object.
Through the present invention, in a scene for which a three-dimensional virtual reality space has been constructed, a first flat image containing the target object in the specified region can be obtained by taking a photograph; the first location information and first appearance information of the target object in the first flat image are then determined, together with the second location information and second appearance information of N first feature objects in the first flat image and the mutual positional relationship between the target object and the N first feature objects; the N first feature objects are further mapped into the three-dimensional virtual reality space to determine N second feature objects corresponding to the N first feature objects and to obtain N pieces of three-dimensional model information of the N second feature objects; finally, the third location information of the target object in the specified region can be determined from the N pieces of three-dimensional model information, the first location information and first appearance information, the second location information and second appearance information, and the mutual positional relationship. This solves problems in the related art, such as the difficulty and high cost of wireless location, for which no effective solution had been proposed: by combining a flat image with a three-dimensional virtual reality space, the position of the target object in the specified region can be determined, solving the difficulty of wireless location.
In the embodiments of the present invention, determining, in the three-dimensional virtual reality space according to the acquired N first feature objects, the N second feature objects corresponding to the N first feature objects and the N pieces of three-dimensional model information of the N second feature objects can be achieved through the following technical solution: matching the N first feature objects one by one against K second feature objects in the three-dimensional virtual reality space, and determining the N second feature objects corresponding to the N first feature objects and the N pieces of three-dimensional model information of the N second feature objects; wherein the K second feature objects are feature objects labeled in advance in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
For the N first feature objects in the first flat image, the N second feature objects matching them can be obtained by matching the N first feature objects one by one against the K feature objects labeled in advance in the three-dimensional virtual reality space.
The three-dimensional scene (i.e., the above three-dimensional virtual reality space) containing the N second feature objects is then checked against the mutual positional relationship, the first location information, and the second location information in the two-dimensional scene (i.e., the above first flat image), and the three-dimensional scene is transformed to ensure that the mutual positional relationships among the relevant feature points of the N second feature objects in the three-dimensional scene do not conflict with the mutual positional relationships of the feature points of the target object and the N first feature objects in the two-dimensional scene. The mutual distance relations of the feature points in the two-dimensional scene are then used to further verify that the sight distances of the corresponding feature points in the three-dimensional space match the mutual distances of the feature points in the two-dimensional image, thereby establishing the mapping relationship between the feature points of the two-dimensional image and the feature points of the three-dimensional scene (i.e., the above three-dimensional virtual reality space). The mutual distance relations of the feature points in the two-dimensional scene can be obtained from the first location information, the first appearance information, the second location information, and the second appearance information.
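The one-by-one matching step described above can be sketched as follows. This is a deliberately minimal illustration, not the patent's implementation: the label-string "descriptors" and all names here are hypothetical stand-ins for whatever image features a real system would compare.

```python
# Match the N first feature objects found in the photo against the
# K feature objects labeled in advance in the 3D virtual reality space
# (K >= N, so each photo feature should find a counterpart).
# Descriptors are plain label strings -- illustrative only.

def match_features(first_objects, marked_objects):
    """Return {photo_feature_id: model_feature_id} for every photo
    feature whose label matches a pre-labeled 3D feature."""
    index = {m["label"]: m["id"] for m in marked_objects}
    mapping = {}
    for f in first_objects:
        if f["label"] in index:
            mapping[f["id"]] = index[f["label"]]
    return mapping

photo_features = [{"id": "R1", "label": "door"}, {"id": "R2", "label": "pillar"}]
model_features = [{"id": "R'1", "label": "door"},
                  {"id": "R'2", "label": "pillar"},
                  {"id": "R'3", "label": "window"}]

print(match_features(photo_features, model_features))
# {'R1': "R'1", 'R2': "R'2"}
```

A real implementation would follow this with the consistency check described above: verifying that the mutual positional relationships of the matched 3D features do not conflict with those measured in the photo.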
The above step S108 can be implemented in many ways. In an alternative embodiment, it can be implemented through the following technical solution: determining, according to the N pieces of three-dimensional model information, the second location information, the second appearance information, and the mutual positional relationship, the cone angle and camera aspect ratio of the image acquisition device used to shoot the first flat image; and determining the third location information of the target object in the specified region according to the cone angle, the camera aspect ratio, the first location information, and the first appearance information.
In the embodiment of the present invention, a kind of method of determining the third place information is additionally provided, as described below:
Step 1: As shown in Fig. 2, through intelligent image analysis and feature extraction technology, the environmental feature objects R1 and R2 (corresponding to the N first feature objects in the above first flat image) and the target feature object A are obtained from the two-dimensional image (i.e., the above first flat image). Only the two feature objects R1 and R2 are illustrated in Fig. 2; it should be understood that the embodiment of the present invention places no limitation on the particular number N of first feature objects.
Step 2: As shown in Fig. 3, by searching the three-dimensional virtual reality space in which feature objects have been labeled in advance, the three-dimensional scene containing the feature objects matching the two-dimensional image (i.e., the above first flat image) is found, and the feature objects are associated in one-to-one correspondence: R1 corresponds to R'1 and R2 corresponds to R'2.
Step 3: From the spatial data of the three-dimensional virtual reality space, the three-dimensional model information of feature objects R'1 and R'2 is obtained, and the physical locations of, and the distance relation between, R'1 and R'2 can be calculated from their spatial information.
Step 4: By carrying out appearance measurement analysis on the target object (i.e., object A) and the feature objects (i.e., R1 and R2) identified in the two-dimensional image, the appearance information of A, R1, and R2 and the mutual distances between A, R1, and R2 can be derived.
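One ingredient of this kind of measurement analysis can be illustrated with the classic pinhole relation: when a feature's real-world size is known (here, from the 3D model) and its apparent size in the image is measured, the camera-to-feature distance follows. The numbers and function below are illustrative assumptions, not values from the patent.

```python
import math

def distance_from_apparent_size(real_height, pixel_height, image_height, fov_deg):
    """Estimate camera-to-object distance from apparent size via the
    pinhole relation: focal_px = image_height / (2*tan(FOV/2)), then
    distance = real_height * focal_px / pixel_height."""
    focal_px = image_height / (2 * math.tan(math.radians(fov_deg) / 2))
    return real_height * focal_px / pixel_height

# A 2 m tall reference object spanning 200 px in a 1000 px tall image,
# taken with a 53.13-degree vertical FOV (so focal_px is about 1000 px):
d = distance_from_apparent_size(2.0, 200, 1000, 53.13)
print(round(d, 2))
# 10.0
```

In the patent's pipeline several such cues (sizes, mutual distances, positions) are combined rather than used in isolation.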
Step 5: As shown in Fig. 4, from the information obtained in steps 1-4, combined with the perspective and projection principles of three-dimensional imaging, the corresponding projection matrix can be obtained.
Specifically, let the camera near plane be "Near", the far plane "Far", the cone angle (field of view) "FOV", and the aspect ratio of the current camera "Aspect"; the projection matrix of the corresponding perspective projection is then as follows:
Step 6: Combining the above formula, taking the two-dimensional image as the far plane and the plane in which feature objects R'1 and R'2 lie as the near plane, the view frustum information in the three-dimensional space can be deduced, as shown in Fig. 5.
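Given the parameters of step 5, the view frustum's geometry can be computed directly. A minimal sketch in camera space (camera at the origin looking down -z; conventions and names are assumptions, not the patent's):

```python
import math

def frustum_corners(fov_deg, aspect, near, far):
    """Corner points of the view frustum's near and far rectangles in
    camera space. fov_deg is the vertical cone angle; aspect = width/height."""
    corners = {}
    for name, d in (("near", near), ("far", far)):
        h = d * math.tan(math.radians(fov_deg) / 2)  # half-height at depth d
        w = h * aspect                                # half-width at depth d
        corners[name] = [(sx * w, sy * h, -d)
                         for sx in (-1, 1) for sy in (-1, 1)]
    return corners

c = frustum_corners(60.0, 16 / 9, 1.0, 100.0)
print(c["near"])
```

The eight corner points bound the region of space visible in the photograph, which is what step 7 intersects with the target's shape information.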
Step 7: As shown in Fig. 6, by combining the exact shape information of target object A with the established three-dimensional perspective and projection frustum data, the physical location of target object A in the three-dimensional virtual reality space can be derived in reverse.
It should be noted that if the exact shape information of target object A is unknowable, a specific plane (such as the ground plane) can be used for auxiliary positioning, but this approach can only locate positions on that specific plane.
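The auxiliary positioning on a known plane mentioned in this note amounts to a ray-plane intersection: cast the viewing ray through the target's pixel and intersect it with the ground. A sketch under the assumption that the camera position and ray direction are already known from the earlier steps (all names are illustrative):

```python
def locate_on_plane(cam_pos, ray_dir, plane_y=0.0):
    """Intersect the viewing ray through the target's pixel with a known
    horizontal plane y = plane_y (e.g. the ground), yielding the target's
    3D position when its exact shape is unknown."""
    px, py, pz = cam_pos
    dx, dy, dz = ray_dir
    if abs(dy) < 1e-12:
        raise ValueError("ray is parallel to the plane")
    t = (plane_y - py) / dy
    if t < 0:
        raise ValueError("plane lies behind the camera")
    return (px + t * dx, plane_y, pz + t * dz)

# Camera 3 m above the ground, ray pointing forward and downward:
print(locate_on_plane((0.0, 3.0, 0.0), (0.0, -1.0, -2.0)))
# (0.0, 0.0, -6.0)
```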
Step 8: From the location information in the three-dimensional virtual reality space, the positioning data of the target object in the actual environment (i.e., the above third location information) is derived.
In an embodiment of the present invention, determining the third position information of the target object in the specified region according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation comprises: in the three-dimensional virtual reality space, obtaining a second flat image through perspective projection transformation, wherein the first shooting angle of the first flat image is identical to the second shooting angle of the second flat image, so that the position information and appearance information of the feature objects in the first flat image and the second flat image are consistent; and determining the third position information of the target object in the specified region according to the first position information, the first appearance information, the second shooting angle of the second flat image, the mutual position relation and the N pieces of three-dimensional model information. Optionally, this technical solution can also be understood as follows: in the case where the N pieces of three-dimensional model information include the third position information and third appearance information of the N second feature objects, making the position information and appearance information of the feature objects in the first flat image and the second flat image consistent can be understood as making the second position information of the N first feature objects consistent with the third position information of the N second feature objects, and making the second appearance information of the N first feature objects consistent with the third appearance information of the N second feature objects.
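The perspective projection transformation above renders the virtual scene from the same viewpoint as the photograph. A minimal sketch of projecting one camera-space point to pixel coordinates under such a transformation (Python; the conventions — camera at the origin looking down -z, vertical FOV, top-left pixel origin — are assumptions, since the patent does not fix them):

```python
import math

def project_to_image(point, fov_deg, aspect, width, height):
    """Project a camera-space point to pixel coordinates via a standard
    perspective projection (a sketch, not the patent's exact transform)."""
    x, y, z = point
    f = 1.0 / math.tan(math.radians(fov_deg) / 2)
    ndc_x = (f / aspect) * x / -z   # perspective divide by depth
    ndc_y = f * y / -z
    # map normalized device coordinates [-1, 1] to pixels
    return ((ndc_x + 1) / 2 * width, (1 - ndc_y) / 2 * height)

px = project_to_image((1.0, 1.0, -5.0), fov_deg=90.0, aspect=1.0,
                      width=1000, height=1000)
```

With this convention, a feature object's projected pixel position in the second flat image can be compared against its measured position in the first flat image to verify that the two shooting angles match.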
In the embodiment of the present invention, another method for determining the third position information is also provided, as described below:
Step 1: as shown in Fig. 7, the three squares on the left represent reference objects, and the sphere in the middle represents the object (i.e., the above-mentioned target object); it should be noted that, in the following steps, each picture is to be compared with the small window in the lower-left corner.
Step 2: as shown in Fig. 8, determine the reference-object models in three-dimensional space, where the coordinates of the reference-object models are known.
Step 3: as shown in Fig. 9, fix the plane of the triangle formed by the reference-object models; specifically, in order to resolve the forward and reverse directions, more than three reference objects can be introduced, i.e., the calculation is performed with more than one triangle and the intersection is taken, the derivation principle being the same for each triangle.
Step 4: as shown in Fig. 10, in the three-dimensional virtual scene (i.e., the above-mentioned three-dimensional virtual reality space), use the perspective projection transformation of the reference-object-model triangle to fit a viewing angle identical to the orientation of the reference objects in the photo.
Step 5: as shown in Fig. 11, at the viewing angle fitted in the three-dimensional virtual scene, cast a ray through the position corresponding to the object in the photo; that is, every point on the ray may be the deduced position of the object in the actual scene. In other words, the target point of the target object in three-dimensional space must lie on this ray, where the white dot in Fig. 11 is in fact the ray perpendicular to the viewing angle.
Step 6: as shown in Fig. 12, determine the final position using other reference-plane information in the three-dimensional virtual scene; for example, if the target point of the identified object is the sole of a foot, the intersection of the ground and the ray can be used to calculate the final three-dimensional space coordinates of the object (i.e., the above-mentioned third position information).
Step 7: the positioning calculation succeeds.
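Step 6's intersection of the viewing ray with the ground plane is a standard ray-plane intersection; it can be sketched as follows (Python; the camera position, ray direction and ground plane below are hypothetical values, not from the patent):

```python
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect the ray origin + t*direction (t >= 0) with a plane.
    Returns the 3D intersection point, or None if the ray is parallel
    to the plane or points away from it."""
    dot = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(dot) < 1e-12:
        return None  # ray is parallel to the plane
    diff = [p - o for p, o in zip(plane_point, origin)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / dot
    if t < 0:
        return None  # intersection lies behind the ray origin
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera ray toward the target, intersected with the ground plane (z = 0)
# to recover the target's final three-dimensional coordinate:
hit = ray_plane_intersection(
    origin=(0.0, 0.0, 10.0),       # hypothetical camera position
    direction=(0.3, 0.4, -1.0),    # hypothetical ray through the target pixel
    plane_point=(0.0, 0.0, 0.0),   # a point on the ground plane
    plane_normal=(0.0, 0.0, 1.0),  # ground-plane normal
)
```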
The above technical solution is illustrated below in conjunction with a preferred embodiment, which is not intended to limit the technical solution of the embodiments of the present invention.
Step 1: perform three-dimensional scene modeling of the region to be positioned, where the three-dimensional scene corresponds to the above-mentioned three-dimensional virtual reality space.
Step 2: as shown in Fig. 13, annotate key feature points in the three-dimensional scene model (such as landmark buildings, roads, etc.); since the feature points are also full-scale models, their position information in three-dimensional space and their three-dimensional model information (length, width and height) are all known. The key feature points correspond to the above-mentioned N second feature objects.
Step 3: as shown in Fig. 14, acquire flat image information, then identify the target in the image (the person or object to be positioned, etc., such as the circular object in Fig. 14) through intelligent image-analysis technology. Then analyze and extract all the key feature points in its surrounding environment (i.e., the above-mentioned N first feature objects), and calculate the position information and appearance information of the target and the feature points in the plane, together with their mutual position and distance relations. The two-dimensional image corresponds to the above-mentioned first flat image.
Step 4: as shown in Fig. 15, using the target object extracted from the two-dimensional image, the key feature points in the surrounding environment, and their mutual position and distance relations, perform the corresponding matching search, mutual-position verification and visual-distance analysis in the three-dimensional virtual scene (i.e., in the three-dimensional virtual reality space), and establish the mapping relation between the feature points in the two-dimensional image and those in the three-dimensional scene.
Step 5: then, according to the spatial information in the three-dimensional scene and the position information and appearance information of the target and the feature points in the plane, calculate the actual coordinate information of the target object, thereby achieving the positioning purpose.
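The matching search of step 4 is not specified in detail by the patent; one simple possibility is a greedy nearest-descriptor search from extracted image features to the pre-labeled model features. A sketch under that assumption (the feature names and descriptors below are made up for illustration):

```python
def match_features(image_feats, model_feats):
    """Greedily match extracted image features to pre-labeled model
    features by squared appearance-descriptor distance. A simplified
    sketch; the patent's mutual-position verification and visual-distance
    analysis are not reproduced here."""
    matches, used = {}, set()
    for name, desc in image_feats.items():
        best, best_d = None, float("inf")
        for mname, mdesc in model_feats.items():
            if mname in used:
                continue
            d = sum((a - b) ** 2 for a, b in zip(desc, mdesc))
            if d < best_d:
                best, best_d = mname, d
        if best is not None:
            matches[name] = best
            used.add(best)
    return matches

m = match_features(
    {"f1": (1.0, 0.0), "f2": (0.0, 1.0)},                        # image features
    {"tower": (0.9, 0.1), "gate": (0.1, 0.95), "shop": (5.0, 5.0)},  # model features
)
```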
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention.
A positioning device for a target object is also provided in this embodiment; the device is used to implement the above embodiments and preferred implementations, and descriptions already given are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Figure 16 is a structural block diagram of the positioning device for a target object according to an embodiment of the present invention; as shown in Fig. 16, the device includes:
a first acquisition module 160, configured to photograph a target object to be positioned to obtain a first flat image containing the target object, wherein the target object is located in a specified region for which a three-dimensional virtual reality space has been constructed;
a second acquisition module 162, configured to obtain, from the first flat image, the first position information and first appearance information of the target object, the second position information and second appearance information of N first feature objects in the first flat image, and the mutual position relation between the target object and the N first feature objects, wherein N is an integer greater than 1;
a first determining module 164, configured to determine, in the three-dimensional virtual reality space according to the acquired N first feature objects, N second feature objects corresponding to the N first feature objects and N pieces of three-dimensional model information of the N second feature objects;
a second determining module 166, configured to determine the third position information of the target object in the specified region according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation.
Through the present invention, in a scene for which a three-dimensional virtual reality space has been constructed, a first flat image containing the target object in the specified region can be obtained by taking a photograph; the first position information and first appearance information of the target object in the first flat image are then determined, together with the second position information and second appearance information of N first feature objects in the first flat image and the mutual position relation between the target object and the N first feature objects; further, the N first feature objects are mapped into the three-dimensional virtual reality space to determine the N second feature objects corresponding to the N first feature objects, and the N pieces of three-dimensional model information of the N second feature objects are obtained; finally, the third position information of the target object in the specified region can be determined from the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation. The above technical solution solves the problems of the related art, such as the difficulty and high cost of wireless positioning, for which no effective technical solution had previously been proposed: by combining a flat image with a three-dimensional virtual reality space, the position of the target object in the specified region can be determined, overcoming the positioning difficulty of wireless positioning technology.
In an embodiment of the present invention, the first determining module 164 is further configured to match the N first feature objects one by one with K second feature objects in the three-dimensional virtual reality space, and to determine the N second feature objects corresponding to the N first feature objects and the N pieces of three-dimensional model information of the N second feature objects; the K second feature objects are feature objects marked in advance in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
In an embodiment of the present invention, the second determining module 166 is further configured to determine, according to the N pieces of three-dimensional model information, the second position information, the second appearance information and the mutual position relation, the viewing-cone angle and the camera aspect ratio of the image acquisition device used to shoot the first flat image; and to determine the third position information of the target object in the specified region according to the viewing-cone angle, the camera aspect ratio, the first position information and the first appearance information.
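The patent does not spell out how the viewing-cone angle is recovered from the feature objects' known model information. One pinhole-camera possibility — an assumption for illustration — uses a single feature object of known real height at a known distance and its measured height in the image:

```python
import math

def estimate_fov(real_height, distance, pixel_height, image_height):
    """Estimate the camera's vertical viewing-cone angle (FOV, in degrees)
    from one feature object of known real-world height at a known distance
    and its measured pixel height. A pinhole-model sketch, assuming the
    feature is roughly centered and upright in the frame."""
    # the frame spans 2*distance*tan(FOV/2) world units at that distance,
    # and the feature's pixel fraction equals its fraction of that span
    tan_half = real_height * image_height / (2.0 * distance * pixel_height)
    return math.degrees(2.0 * math.atan(tan_half))

# a hypothetical 2 m feature, 2 m away, filling the full 1080 px frame height
fov = estimate_fov(real_height=2.0, distance=2.0,
                   pixel_height=1080, image_height=1080)
```

With two or more feature objects, the same relation can be inverted jointly for the viewing-cone angle and the aspect ratio.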
In an embodiment of the present invention, the second determining module 166 is further configured to obtain, in the three-dimensional virtual reality space, a second flat image through perspective projection transformation, wherein the first shooting angle of the first flat image is identical to the second shooting angle of the second flat image, so that the position information and appearance information of the feature objects in the first flat image and the second flat image are consistent; and to determine the third position information of the target object in the specified region according to the first position information, the first appearance information, the second shooting angle of the second flat image, the mutual position relation and the N pieces of three-dimensional model information.
An embodiment of the present invention also provides a storage medium that includes a stored program, wherein the program, when run, executes any of the methods described above.
Optionally, in this embodiment, the above storage medium may be configured to store program code for executing the following steps:
S1: photograph a target object to be positioned to obtain a first flat image containing the target object, wherein the target object is located in a specified region for which a three-dimensional virtual reality space has been constructed;
S2: obtain, from the first flat image, the first position information and first appearance information of the target object, the second position information and second appearance information of N first feature objects in the first flat image, and the mutual position relation between the target object and the N first feature objects, wherein N is an integer greater than 1;
S3: determine, in the three-dimensional virtual reality space according to the acquired N first feature objects, N second feature objects corresponding to the N first feature objects and N pieces of three-dimensional model information of the N second feature objects;
S4: determine the third position information of the target object in the specified region according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation.
Optionally, in this embodiment, the above storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium capable of storing program code.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, which are not repeated here.
An embodiment of the present invention also provides an electronic device comprising a memory and a processor; a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps of any of the above method embodiments.
Optionally, the above electronic device may further include a transmission device and an input/output device, both of which are connected to the above processor.
Optionally, in this embodiment, the above processor may be configured to execute the following steps through the computer program:
S1: photograph a target object to be positioned to obtain a first flat image containing the target object, wherein the target object is located in a specified region for which a three-dimensional virtual reality space has been constructed;
S2: obtain, from the first flat image, the first position information and first appearance information of the target object, the second position information and second appearance information of N first feature objects in the first flat image, and the mutual position relation between the target object and the N first feature objects, wherein N is an integer greater than 1;
S3: determine, in the three-dimensional virtual reality space according to the acquired N first feature objects, N second feature objects corresponding to the N first feature objects and N pieces of three-dimensional model information of the N second feature objects;
S4: determine the third position information of the target object in the specified region according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, which are not repeated here.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that given here, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The foregoing is only the preferred embodiment of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the principles of the present invention shall be included in the protection scope of the present invention.
Claims (10)
1. A positioning method for a target object, comprising:
photographing a target object to be positioned to obtain a first flat image containing the target object, wherein the target object is located in a specified region for which a three-dimensional virtual reality space has been constructed;
obtaining, from the first flat image, first position information and first appearance information of the target object, second position information and second appearance information of N first feature objects in the first flat image, and a mutual position relation between the target object and the N first feature objects, wherein N is an integer greater than 1;
determining, in the three-dimensional virtual reality space according to the acquired N first feature objects, N second feature objects corresponding to the N first feature objects and N pieces of three-dimensional model information of the N second feature objects;
determining third position information of the target object in the specified region according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation.
2. The method according to claim 1, wherein determining, in the three-dimensional virtual reality space according to the acquired N first feature objects, the N second feature objects corresponding to the N first feature objects and the N pieces of three-dimensional model information of the N second feature objects comprises:
matching the N first feature objects one by one with K second feature objects in the three-dimensional virtual reality space, and determining the N second feature objects corresponding to the N first feature objects and the N pieces of three-dimensional model information of the N second feature objects; wherein the K second feature objects are feature objects marked in advance in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
3. The method according to claim 1, wherein determining the third position information of the target object in the specified region according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation comprises:
determining, according to the N pieces of three-dimensional model information, the second position information, the second appearance information and the mutual position relation, a viewing-cone angle and a camera aspect ratio of an image acquisition device used to shoot the first flat image;
determining the third position information of the target object in the specified region according to the viewing-cone angle, the camera aspect ratio, the first position information and the first appearance information.
4. The method according to claim 1, wherein determining the third position information of the target object in the specified region according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation comprises:
obtaining, in the three-dimensional virtual reality space, a second flat image through perspective projection transformation, wherein a first shooting angle of the first flat image is identical to a second shooting angle of the second flat image, so that position information and appearance information of feature objects in the first flat image and the second flat image are consistent;
determining the third position information of the target object in the specified region according to the first position information, the first appearance information, the second shooting angle of the second flat image, the mutual position relation and the N pieces of three-dimensional model information.
5. A positioning device for a target object, comprising:
a first acquisition module, configured to photograph a target object to be positioned to obtain a first flat image containing the target object, wherein the target object is located in a specified region for which a three-dimensional virtual reality space has been constructed;
a second acquisition module, configured to obtain, from the first flat image, first position information and first appearance information of the target object, second position information and second appearance information of N first feature objects in the first flat image, and a mutual position relation between the target object and the N first feature objects, wherein N is an integer greater than 1;
a first determining module, configured to determine, in the three-dimensional virtual reality space according to the acquired N first feature objects, N second feature objects corresponding to the N first feature objects and N pieces of three-dimensional model information of the N second feature objects;
a second determining module, configured to determine third position information of the target object in the specified region according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation.
6. The device according to claim 5, wherein the first determining module is configured to match the N first feature objects one by one with K second feature objects in the three-dimensional virtual reality space, and to determine the N second feature objects corresponding to the N first feature objects and the N pieces of three-dimensional model information of the N second feature objects; wherein the K second feature objects are feature objects marked in advance in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
7. The device according to claim 5, wherein the second determining module is further configured to determine, according to the N pieces of three-dimensional model information, the second position information, the second appearance information and the mutual position relation, a viewing-cone angle and a camera aspect ratio of an image acquisition device used to shoot the first flat image; and to determine the third position information of the target object in the specified region according to the viewing-cone angle, the camera aspect ratio, the first position information and the first appearance information.
8. The device according to claim 5, wherein the second determining module is further configured to obtain, in the three-dimensional virtual reality space, a second flat image through perspective projection transformation, wherein a first shooting angle of the first flat image is identical to a second shooting angle of the second flat image, so that position information and appearance information of feature objects in the first flat image and the second flat image are consistent; and to determine the third position information of the target object in the specified region according to the first position information, the first appearance information, the second shooting angle of the second flat image, the mutual position relation and the N pieces of three-dimensional model information.
9. A storage medium, wherein a computer program is stored in the storage medium, and the computer program is configured, when run, to execute the method according to any one of claims 1 to 4.
10. An electronic device comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910718260.5A CN110443850B (en) | 2019-08-05 | 2019-08-05 | Target object positioning method and device, storage medium and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110443850A true CN110443850A (en) | 2019-11-12 |
CN110443850B CN110443850B (en) | 2022-03-22 |
Family
ID=68433246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910718260.5A Active CN110443850B (en) | 2019-08-05 | 2019-08-05 | Target object positioning method and device, storage medium and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110443850B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111178127A (en) * | 2019-11-20 | 2020-05-19 | 青岛小鸟看看科技有限公司 | Method, apparatus, device and storage medium for displaying image of target object |
CN111475026A (en) * | 2020-04-10 | 2020-07-31 | 李斌 | Space positioning method based on mobile terminal application augmented virtual reality technology |
CN112001947A (en) * | 2020-07-30 | 2020-11-27 | 海尔优家智能科技(北京)有限公司 | Shooting position determining method and device, storage medium and electronic device |
CN112948814A (en) * | 2021-03-19 | 2021-06-11 | 合肥京东方光电科技有限公司 | Account password management method and device and storage medium |
WO2022056924A1 (en) * | 2020-09-21 | 2022-03-24 | 西门子(中国)有限公司 | Target positioning method and device, and computer-readable medium |
WO2023231425A1 (en) * | 2022-05-31 | 2023-12-07 | 中兴通讯股份有限公司 | Positioning method, electronic device, storage medium and program product |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831401A (en) * | 2012-08-03 | 2012-12-19 | 樊晓东 | Method and system for tracking, three-dimensionally superposing and interacting target object without special mark |
CN106780735A (en) * | 2016-12-29 | 2017-05-31 | 深圳先进技术研究院 | A kind of semantic map constructing method, device and a kind of robot |
CN107845060A (en) * | 2017-10-31 | 2018-03-27 | 广东中星电子有限公司 | Geographical position and corresponding image position coordinates conversion method and system |
US20180192035A1 (en) * | 2017-01-04 | 2018-07-05 | Qualcomm Incorporated | Systems and methods for object location |
CN108415639A (en) * | 2018-02-09 | 2018-08-17 | 腾讯科技(深圳)有限公司 | Visual angle regulating method, device, electronic device and computer readable storage medium |
WO2019014585A2 (en) * | 2017-07-14 | 2019-01-17 | Materialise Nv | System and method of radiograph correction and visualization |
CN106162149B (en) * | 2016-09-29 | 2019-06-11 | 宇龙计算机通信科技(深圳)有限公司 | A kind of method and mobile terminal shooting 3D photo |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831401A (en) * | 2012-08-03 | 2012-12-19 | 樊晓东 | Method and system for tracking, three-dimensionally superposing and interacting target object without special mark |
CN106162149B (en) * | 2016-09-29 | 2019-06-11 | 宇龙计算机通信科技(深圳)有限公司 | A kind of method and mobile terminal shooting 3D photo |
CN106780735A (en) * | 2016-12-29 | 2017-05-31 | 深圳先进技术研究院 | A kind of semantic map constructing method, device and a kind of robot |
US20180192035A1 (en) * | 2017-01-04 | 2018-07-05 | Qualcomm Incorporated | Systems and methods for object location |
WO2019014585A2 (en) * | 2017-07-14 | 2019-01-17 | Materialise Nv | System and method of radiograph correction and visualization |
CN107845060A (en) * | 2017-10-31 | 2018-03-27 | 广东中星电子有限公司 | Geographical position and corresponding image position coordinates conversion method and system |
CN108415639A (en) * | 2018-02-09 | 2018-08-17 | 腾讯科技(深圳)有限公司 | Visual angle regulating method, device, electronic device and computer readable storage medium |
Non-Patent Citations (2)
Title |
---|
WEI SONG 等: "A 3D localisation method in indoor environments for virtual reality applications", 《HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES》 * |
武高雨: "家用机器人室内定位技术研究", 《中国优秀硕士学位论文全文数据库 (信息科技辑)》 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111178127A (en) * | 2019-11-20 | 2020-05-19 | 青岛小鸟看看科技有限公司 | Method, apparatus, device and storage medium for displaying image of target object |
CN111178127B (en) * | 2019-11-20 | 2024-02-20 | 青岛小鸟看看科技有限公司 | Method, device, equipment and storage medium for displaying image of target object |
CN111475026A (en) * | 2020-04-10 | 2020-07-31 | 李斌 | Space positioning method based on mobile terminal application augmented virtual reality technology |
CN111475026B (en) * | 2020-04-10 | 2023-08-22 | 李斌 | Spatial positioning method based on mobile terminal application augmented virtual reality technology |
CN112001947A (en) * | 2020-07-30 | 2020-11-27 | 海尔优家智能科技(北京)有限公司 | Shooting position determining method and device, storage medium and electronic device |
WO2022056924A1 (en) * | 2020-09-21 | 2022-03-24 | 西门子(中国)有限公司 | Target positioning method and device, and computer-readable medium |
CN112948814A (en) * | 2021-03-19 | 2021-06-11 | 合肥京东方光电科技有限公司 | Account password management method and device and storage medium |
WO2023231425A1 (en) * | 2022-05-31 | 2023-12-07 | 中兴通讯股份有限公司 | Positioning method, electronic device, storage medium and program product |
Also Published As
Publication number | Publication date |
---|---|
CN110443850B (en) | 2022-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110443850A (en) | Target object positioning method and apparatus, storage medium, and electronic device | |
CN105143907B (en) | Positioning system and method | |
EP2579128B1 (en) | Portable device, virtual reality system and method | |
CN103442436B (en) | Indoor positioning terminal, network, system and method | |
Zlatanova | Augmented reality technology | |
CN107197200A (en) | Method and device for displaying surveillance video | |
Verma et al. | Indoor navigation using augmented reality | |
US20140016857A1 (en) | Point cloud construction with unposed camera | |
CN105046752A (en) | Method for representing virtual information in a view of a real environment | |
CN103852066B (en) | Device positioning method, control method, electronic device and control system | |
CN108227929A (en) | Augmented reality setting-out system and implementation method based on BIM technology | |
US20200265644A1 (en) | Method and system for generating merged reality images | |
CN111028358A (en) | Augmented reality display method and device for indoor environment and terminal equipment | |
CN107066747A (en) | Vision measurement network organization planning method | |
CN108537214A (en) | Automatic construction method of indoor semantic map | |
KR102102803B1 (en) | Real-Time Indoor Positioning Method Based on Static Marker Grid and System Therefor | |
Wither et al. | Using aerial photographs for improved mobile AR annotation | |
CN112422653A (en) | Scene information pushing method, system, storage medium and equipment based on location service | |
CN108804675A (en) | UAV mobile spatial information management system and method based on multi-source spatial data | |
CN102946476B (en) | Rapid positioning method and rapid positioning device | |
CN109712249A (en) | Geographic element augmented reality method and device | |
CN104539926B (en) | Distance determination method and apparatus | |
CN110969704B (en) | Marker generation and tracking method and device based on AR guidance | |
CN108289327A (en) | Image-based positioning method and system | |
KR102031760B1 (en) | Bumper Car Service Method Using Real-Time Indoor Positioning System and System Therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||