CN110443850B - Target object positioning method and device, storage medium and electronic device - Google Patents

Target object positioning method and device, storage medium and electronic device

Info

Publication number
CN110443850B
CN110443850B (application CN201910718260.5A)
Authority
CN
China
Prior art keywords
information
features
target object
position information
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910718260.5A
Other languages
Chinese (zh)
Other versions
CN110443850A (en)
Inventor
肖少剑
吴睿
朱红岷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Unitech Power Technology Co Ltd
Original Assignee
Zhuhai Unitech Power Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Unitech Power Technology Co Ltd filed Critical Zhuhai Unitech Power Technology Co Ltd
Priority to CN201910718260.5A priority Critical patent/CN110443850B/en
Publication of CN110443850A publication Critical patent/CN110443850A/en
Application granted granted Critical
Publication of CN110443850B publication Critical patent/CN110443850B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a target object positioning method and apparatus, a storage medium, and an electronic apparatus. The method includes the following steps: photographing a target object to be positioned to obtain a first plane image containing the target object, wherein the target object is located in a designated area of a constructed three-dimensional virtual reality space; acquiring first position information and first appearance information of the target object from the first plane image, second position information and second appearance information of N first features in the first plane image, and the mutual positional relationship between the target object and the N first features; determining, according to the acquired N first features, N second features corresponding to the N first features and N pieces of three-dimensional model information of the N second features in the three-dimensional virtual reality space; and determining third position information of the target object in the designated area according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information, and the mutual positional relationship.

Description

Target object positioning method and device, storage medium and electronic device
Technical Field
The present invention relates to the field of communications, and in particular, to a method and an apparatus for locating a target object, a storage medium, and an electronic apparatus.
Background
With the development of wireless positioning technology, the positioning technologies widely applied in power systems (such as power plants, substations, power transmission and distribution networks, petrochemical plants, rail transit, and the like) mainly include the Global Positioning System (GPS), the BeiDou Navigation Satellite System (BDS), Wireless Fidelity (WiFi), Bluetooth, infrared, and Ultra-Wideband (UWB).
However, these wireless positioning technologies all have certain problems in combined indoor and outdoor use, complex environments, engineering deployment, and the like. For example, GPS/BeiDou is difficult to use indoors, WiFi/Bluetooth positioning accuracy is insufficient, and UWB is difficult to deploy in engineering practice and costly.
No effective technical solution has yet been proposed for the problems of difficult positioning, high cost, and the like of wireless positioning technologies in the related art.
Disclosure of Invention
Embodiments of the present invention provide a target object positioning method and apparatus, a storage medium, and an electronic apparatus, so as to at least solve the problems of difficult positioning, high cost, and the like of wireless positioning technologies in the related art.
According to an embodiment of the present invention, there is provided a target object positioning method, including: photographing a target object to be positioned to obtain a first plane image containing the target object, wherein the target object is located in a designated area of a constructed three-dimensional virtual reality space;
acquiring first position information and first appearance information of the target object from the first plane image, second position information and second appearance information of N first features in the first plane image, and mutual position relations between the target object and the N first features, wherein N is an integer greater than 1;
determining N second features corresponding to the N first features and N three-dimensional model information of the N second features in the three-dimensional virtual reality space according to the obtained N first features;
and determining third position information of the target object in the specified area according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation.
Optionally, the determining, according to the acquired N first features, N second features corresponding to the N first features and N three-dimensional model information of the N second features in the three-dimensional virtual reality space includes:
matching the N first features with K second features in the three-dimensional virtual reality space one by one, and determining N second features corresponding to the N first features and N three-dimensional model information of the N second features; the K second features are pre-marked features in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
Optionally, determining third position information of the target object in the designated area according to the N three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information, and the mutual position relationship includes:
determining the view cone angle and the camera aspect ratio of an image acquisition device for shooting the first plane image according to the N pieces of three-dimensional model information, the second position information, the second appearance information and the mutual position relationship;
and determining third position information of the target object in the designated area according to the viewing cone angle, the camera aspect ratio, the first position information and the first appearance information.
Optionally, determining third position information of the target object in the designated area according to the N three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information, and the mutual position relationship includes: acquiring a second plane image in the three-dimensional virtual reality space through perspective projection transformation, wherein a first shooting view angle of the first plane image is the same as a second shooting view angle of the second plane image, so that the position information and the shape information of the features in the first plane image and the second plane image are consistent; and determining third position information of the target object in the specified area according to the first position information, the first appearance information, the second shooting view angle of the second plane image, the mutual position relation, and the N pieces of three-dimensional model information.
According to another embodiment of the present invention, there is also provided a target object locating apparatus, including: a first acquisition module, configured to photograph a target object to be positioned to obtain a first plane image containing the target object, wherein the target object is located in a specified area of a constructed three-dimensional virtual reality space; a second acquisition module, configured to acquire first position information and first shape information of the target object from the first planar image, second position information and second shape information of N first features in the first planar image, and a mutual position relationship between the target object and the N first features, where N is an integer greater than 1; a first determining module, configured to determine, according to the obtained N first features, N second features corresponding to the N first features and N three-dimensional model information of the N second features in the three-dimensional virtual reality space; and a second determining module, configured to determine third location information of the target object in the designated area according to the N pieces of three-dimensional model information, the first location information, the first shape information, the second location information, the second shape information, and the mutual location relationship.
Optionally, the first determining module is specifically configured to perform one-to-one matching on the N first features and K second features in the three-dimensional virtual reality space, and determine N second features corresponding to the N first features and N three-dimensional model information of the N second features; the K second features are pre-marked features in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
Optionally, the second determining module is further configured to determine, according to the N pieces of three-dimensional model information, the second position information, the second shape information, and the mutual position relationship, a view cone angle and a camera aspect ratio of an image acquisition device used for capturing the first planar image; and determining third position information of the target object in the designated area according to the viewing cone angle, the camera aspect ratio, the first position information and the first appearance information.
Optionally, the second determining module is further configured to acquire a second planar image through perspective projection transformation in the three-dimensional virtual reality space, wherein a first shooting view angle of the first planar image is the same as a second shooting view angle of the second planar image, so that the position information and shape information of the features in the first planar image and the second planar image are consistent; and to determine third position information of the target object in the specified area according to the first position information, the first appearance information, the second shooting view angle of the second planar image, the mutual position relation, and the N pieces of three-dimensional model information.
According to another embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is configured to execute the method for locating a target object according to any one of the above when the computer program runs.
According to another embodiment of the present invention, there is also provided an electronic apparatus including a memory and a processor, wherein the memory stores therein a computer program, and the processor is configured to execute the computer program to perform the target object positioning method according to any one of the above.
According to the present invention, in a constructed three-dimensional virtual reality space, a first plane image containing a target object in a designated area can be acquired by photographing. First position information and first appearance information of the target object in the first plane image are then determined, together with second position information and second appearance information of N first features in the first plane image and the mutual positional relationship between the target object and the N first features. The N first features are further mapped into the three-dimensional virtual reality space to determine the N corresponding second features and obtain N pieces of three-dimensional model information of those second features. Finally, third position information of the target object in the designated area can be determined from the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information, and the mutual positional relationship. This technical solution addresses the problems of difficult positioning and high cost of wireless positioning technologies in the related art, for which no effective solution had previously been provided: the position of the target object in the designated area is determined by combining a plane image with a three-dimensional virtual reality space, overcoming the positioning difficulties of wireless positioning technology.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an alternative method for locating a target object according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of features R1 and R2 in a two-dimensional image according to an embodiment of the invention;
FIG. 3 is a schematic diagram of features R1 and R2 in a two-dimensional image and features R'1 and R'2 in a three-dimensional image according to an embodiment of the invention;
FIG. 4 is a perspective projection schematic diagram of three-dimensional imaging according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of view frustum information in three-dimensional space according to an embodiment of the invention;
FIG. 6 is a schematic diagram of an actual position of a target object in a three-dimensional virtual reality space according to an embodiment of the present invention;
FIG. 7 is a schematic illustration of a photograph of reference objects and a target object taken in practice, in accordance with an embodiment of the invention;
FIG. 8 is a schematic diagram of a reference object model in three-dimensional space according to an embodiment of the invention;
FIG. 9 is a schematic diagram of a triangle formed by a reference object model and a fixed plane according to an embodiment of the invention;
FIG. 10 is a schematic view from the same perspective as the orientation of a photographic reference, in accordance with an embodiment of the present invention;
FIG. 11 is a schematic diagram illustrating a ray taken at a position corresponding to a photographic target in a perspective fitted to a three-dimensional virtual scene, according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of determining a final position using other reference plane information in a three-dimensional virtual scene according to an embodiment of the present invention;
FIG. 13 is a schematic diagram illustrating identification of key feature points in a three-dimensional scene according to an embodiment of the invention;
FIG. 14 is a schematic diagram of two-dimensional planar image information according to an embodiment of the invention;
fig. 15 is a schematic diagram of a mapping relationship between a two-dimensional plane image and feature points in a three-dimensional virtual reality space according to an embodiment of the present invention;
fig. 16 is a block diagram of a target object positioning apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
It should be noted that the wireless positioning technologies currently in wide use in power systems (such as power plants, substations, power transmission and distribution networks, petrochemical plants, rail transit, and the like) mainly include GPS, BeiDou, WiFi, Bluetooth, infrared, UWB, and the like, but all of these technologies have certain problems in combined indoor and outdoor use, complex environments, engineering deployment, and the like, such as:
the GPS/Beidou is suitable for outdoor open zones, but is difficult to position indoors;
WiFi/Bluetooth relies on radio signal attenuation models, and its positioning accuracy is insufficient;
UWB positioning is accurate, but it places high demands on base station density and deployment construction, is difficult to implement in the field, and carries a high one-time investment cost.
Moreover, all of the above wireless positioning technologies require the positioning target to carry an additional positioning aid, which raises practical difficulties such as power supply and wearing comfort.
In view of the above technical problems, the following embodiments of the present invention provide a method for positioning a target object to solve the problems of difficult positioning, high cost, and the like of the wireless positioning technology.
An embodiment of the present invention provides a target object positioning method, and fig. 1 is a flowchart of an optional target object positioning method according to an embodiment of the present invention, as shown in fig. 1, including:
step S102, a target object to be positioned is photographed to obtain a first plane image containing the target object, wherein the target object is located in a designated area of a constructed three-dimensional virtual reality space;
step S104, acquiring first position information and first appearance information of the target object from the first plane image, second position information and second appearance information of N first features in the first plane image, and mutual position relations between the target object and the N first features, wherein N is an integer greater than 1;
step S106, determining N second characteristics corresponding to the N first characteristics and N three-dimensional model information of the N second characteristics in the three-dimensional virtual reality space according to the obtained N first characteristics;
step S108, determining third position information of the target object in the designated area according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation.
In embodiments of the present invention, high-precision three-dimensional live-action modeling may be performed on the area in which positioning is required, forming a three-dimensional virtual reality space, and the features in that space (such as landmark buildings, roads, and the like) are marked.
When a target object enters the environment for which the three-dimensional virtual reality space has been constructed (i.e., the designated area), the positioning target (e.g., a person, vehicle, device, or article, corresponding to the target object to be positioned in the above embodiment) can be recognized by image-based intelligent recognition technology, and a first plane image containing the positioning target and some or all of the features of its surroundings (corresponding to the N first features) is obtained.
Based on intelligent image recognition and feature extraction techniques, the features in the first plane image are analyzed and extracted, the mutual positional relationship in the first plane image between the positioning target and all extracted features is confirmed, the first position information and first appearance information of the target object are confirmed, and the second position information and second appearance information of the N first features in the first plane image are confirmed.
Then, the recognized target object and the N first features are mapped into the three-dimensional virtual reality space, and the mapping relationship of the target object and the N first features in the three-dimensional virtual reality space can be determined from the above mutual positional relationship, the first position information, the first appearance information, the second position information, and the second appearance information.
Finally, from the mapping relationship of the target object in the three-dimensional virtual reality space, the spatial position coordinates and three-dimensional model information of the target object in that space can be determined. The actual position of the target object in the real environment can then be determined from the first position information, the first appearance information, the second position information, the second appearance information, the mutual positional relationship, and the N pieces of three-dimensional model information, thereby achieving the purpose of positioning the target object.
According to the present invention, in a constructed three-dimensional virtual reality space, a first plane image containing a target object in a designated area can be acquired by photographing. First position information and first appearance information of the target object in the first plane image are then determined, together with second position information and second appearance information of N first features in the first plane image and the mutual positional relationship between the target object and the N first features. The N first features are further mapped into the three-dimensional virtual reality space to determine the N corresponding second features and obtain N pieces of three-dimensional model information of those second features. Finally, third position information of the target object in the designated area can be determined from the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information, and the mutual positional relationship. This technical solution addresses the problems of difficult positioning and high cost of wireless positioning technologies in the related art, for which no effective solution had previously been provided: the position of the target object in the designated area is determined by combining a plane image with a three-dimensional virtual reality space, overcoming the positioning difficulties of wireless positioning technology.
In the embodiment of the present invention, the determining, according to the acquired N first features, N second features corresponding to the N first features and N three-dimensional model information of the N second features in the three-dimensional virtual reality space may be implemented by the following technical solutions: matching the N first features with K second features in the three-dimensional virtual reality space one by one, and determining N second features corresponding to the N first features and N three-dimensional model information of the N second features; the K second features are pre-marked features in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
For the N first features in the first plane image, the N first features may be matched with K features in the pre-marked three-dimensional virtual reality space one by one, so that N second features matched with the N first features may be obtained.
Then, the mutual positional relationship, the first position information, and the second position information in the two-dimensional scene (i.e., the first plane image) are checked against the three-dimensional scene (i.e., the three-dimensional virtual reality space) in which the N second features are located. By transforming the three-dimensional scene, it is ensured that the mutual positional relationships among the feature points of the N second features in the three-dimensional scene do not contradict the positional relationships between the target object and the N first features in the two-dimensional scene. It is further verified, from the distance relationships among the feature points in the two-dimensional scene, that the sight distances of the corresponding feature points in three-dimensional space are consistent with the mutual distances of the feature points in the two-dimensional image. A mapping relationship between the feature points of the two-dimensional image and those of the three-dimensional scene is thereby established. The distance relationships among the feature points in the two-dimensional scene can be obtained from the first position information, the first appearance information, the second position information, and the second appearance information.
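As an illustration of this matching-and-checking step, the sketch below pairs each image feature with its nearest pre-marked feature and then rejects pairs whose mutual distances contradict the model. The field names ('descriptor', 'xy', 'xyz'), the thresholds, and the proportionality check are assumptions made for this sketch, not details specified by the patent.

```python
import numpy as np

def match_and_check(first_features, second_features,
                    max_desc_dist=0.3, rel_tol=0.2):
    """Match N image features one by one against K pre-marked 3D features,
    then verify that pairwise distances in the image are roughly
    proportional to the corresponding distances in the 3D model.
    Each image feature carries a 'descriptor' vector (np.ndarray) and a
    2D position 'xy'; each pre-marked feature carries a 'descriptor' and
    a 3D position 'xyz'."""
    matches = []
    for f in first_features:
        # nearest pre-marked feature by descriptor distance
        dists = [np.linalg.norm(f["descriptor"] - s["descriptor"])
                 for s in second_features]
        best = int(np.argmin(dists))
        if dists[best] <= max_desc_dist:
            matches.append((f, second_features[best]))
    if len(matches) < 2:
        return matches          # too few pairs to cross-check distances

    # Estimate a common image-to-model scale from the first matched pair,
    # then keep only matches whose pairwise distances agree with it.
    img = np.array([m[0]["xy"] for m in matches], dtype=float)
    mdl = np.array([m[1]["xyz"] for m in matches], dtype=float)
    scale = np.linalg.norm(img[1] - img[0]) / max(
        np.linalg.norm(mdl[1] - mdl[0]), 1e-9)
    keep = set()
    for i in range(len(matches)):
        for j in range(i + 1, len(matches)):
            d_img = np.linalg.norm(img[j] - img[i])
            d_mdl = scale * np.linalg.norm(mdl[j] - mdl[i])
            if abs(d_img - d_mdl) <= rel_tol * max(d_img, d_mdl, 1e-9):
                keep.update((i, j))
    return [matches[k] for k in sorted(keep)]
```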
Step S108 may be implemented in various manners. In an optional embodiment, the following technical solution may be used: determining the view cone angle and camera aspect ratio of the image acquisition device that captured the first plane image, according to the N pieces of three-dimensional model information, the second position information, the second appearance information, and the mutual positional relationship; and determining third position information of the target object in the designated area according to the view cone angle, the camera aspect ratio, the first position information, and the first appearance information.
In the embodiment of the present invention, a method for determining third location information is further provided, which is as follows:
step 1, as shown in fig. 2, the features R1 and R2 (corresponding to N first features in the first planar image) and the target feature a of the environment are obtained from the two-dimensional image (i.e. the first planar image) by the image intelligent analysis and feature extraction technology. Wherein, only two features R1, R2 are shown in fig. 2, it can be understood that the specific number N of the N first features is not limited in the embodiment of the present invention.
Step 2: as shown in fig. 3, by searching the three-dimensional virtual reality space, in which features have been marked in advance, for the three-dimensional scene that matches the features of the two-dimensional image (i.e., the first plane image), and associating the features one to one, it can be determined that R1 corresponds to R'1 and R2 corresponds to R'2.
Step 3: the three-dimensional model information of the features R'1 and R'2 is acquired from the spatial data of the three-dimensional virtual reality space, and the actual positions of, and distance between, R'1 and R'2 are calculated from their spatial information.
Step 4: by performing feature measurement analysis on the target object A and the features R1 and R2 identified in the two-dimensional image, the shape information of A, R1, and R2 and the distances between them can be obtained.
Step 5: as shown in fig. 4, the corresponding projection matrix is obtained from the information acquired in steps 1-4, combined with the perspective and projection principles of three-dimensional imaging.
Specifically, let the near plane be Near, the far plane be Far, and the view cone angle be FOV, and assume the aspect ratio of the current camera is Aspect. The projection matrix of the corresponding perspective projection is then:
$$
P = \begin{bmatrix}
\frac{1}{\mathrm{Aspect}\cdot\tan(\mathrm{FOV}/2)} & 0 & 0 & 0 \\
0 & \frac{1}{\tan(\mathrm{FOV}/2)} & 0 & 0 \\
0 & 0 & -\frac{\mathrm{Far}+\mathrm{Near}}{\mathrm{Far}-\mathrm{Near}} & -\frac{2\,\mathrm{Far}\cdot\mathrm{Near}}{\mathrm{Far}-\mathrm{Near}} \\
0 & 0 & -1 & 0
\end{bmatrix}
$$
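For reference, the same matrix can be assembled numerically. A minimal sketch in Python/NumPy follows; the function name and the column-vector convention are choices made here for illustration, not prescribed by the patent.

```python
import numpy as np

def perspective_projection(fov_rad, aspect, near, far):
    """Perspective projection matrix for a view frustum with vertical
    field-of-view angle FOV (radians), camera aspect ratio Aspect, and
    near/far clipping planes, in the column-vector convention."""
    f = 1.0 / np.tan(fov_rad / 2.0)
    return np.array([
        [f / aspect, 0.0,  0.0,                          0.0],
        [0.0,        f,    0.0,                          0.0],
        [0.0,        0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0,        0.0, -1.0,                          0.0],
    ])
```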
Step 6: combining the above formula, and taking the two-dimensional image as the far plane and the plane in which the features R'1 and R'2 lie as the near plane, the view frustum information in three-dimensional space is calculated as shown in fig. 5.
Step 7: as shown in fig. 6, the actual position of the target object A in the three-dimensional virtual reality space can be back-calculated by combining the actual shape information of the target object A with the resulting three-dimensional perspective and projection view cone data.
It should be noted that if the actual shape information of the target object A is unknown, positioning may be assisted by a specific plane (e.g., the ground plane), but this approach can only locate positions on that specific plane.
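Step 7 rests on the pinhole relation between the target's real size, its size in the image, and its distance from the camera. A minimal sketch of that relation follows; the function and parameter names are illustrative, not taken from the patent.

```python
import math

def distance_from_known_size(real_height_m, pixel_height,
                             image_height_px, fov_rad):
    """Estimate the camera-to-object distance from the target's known
    real height and its height in pixels, using the pinhole model:
    f_px = (image_height_px / 2) / tan(FOV / 2), and
    distance = f_px * real_height / pixel_height."""
    f_px = (image_height_px / 2.0) / math.tan(fov_rad / 2.0)
    return f_px * real_height_m / pixel_height
```

For example, with a 60-degree vertical field of view and a 1080-pixel-tall image, f_px is about 935 pixels, so a 1.7 m tall person spanning 170 pixels would be roughly 9.4 m from the camera.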
Step 8: the positioning data of the target object in the actual environment (i.e., the third position information) is derived from its position information in the three-dimensional virtual reality space.
In an embodiment of the present invention, determining third position information of the target object in the designated area according to the N pieces of three-dimensional model information, the first position information, the first shape information, the second position information, the second shape information, and the mutual positional relationship includes: acquiring a second plane image in the three-dimensional virtual reality space through perspective projection transformation, wherein the first shooting view angle of the first plane image is the same as the second shooting view angle of the second plane image, so that the position information and shape information of the features in the first plane image and the second plane image are consistent; and determining third position information of the target object in the designated area according to the first position information, the first shape information, the second shooting view angle of the second plane image, the mutual positional relationship, and the N pieces of three-dimensional model information. Optionally, where the N pieces of three-dimensional model information include third position information and third shape information of the N second features, making the position information and shape information of the features in the two images consistent can be understood as making the second position information of the N first features consistent with the third position information of the N second features, and the second shape information of the N first features consistent with the third shape information of the N second features.
In the embodiment of the present invention, another method for determining third location information is further provided, which is as follows:
step 1, as shown in fig. 7, three squares on the left side represent a reference object, and a middle sphere represents a target object (i.e. the target object); it should be noted that in the following steps, each picture is compared by using a small window at the lower left corner;
step 2, as shown in fig. 8, determining a reference object model in a three-dimensional space, wherein coordinates of the reference object model are known;
Step 3: as shown in fig. 9, the triangles and fixed planes formed by the reference object models are used. Specifically, more than three reference objects can be introduced to judge the front-back direction; that is, more than one triangle is used to calculate the intersection, and the calculation principle is the same for every triangle.
Step 4: as shown in fig. 10, perspective projection transformation of the reference object model triangle in the three-dimensional virtual scene (i.e., the three-dimensional virtual reality space) is used to fit the same view angle as the orientation of the reference objects in the photograph.
Step 5: as shown in fig. 11, a ray is cast at the position corresponding to the target object in the photograph, within the view angle fitted in the three-dimensional virtual scene; every point on the ray is a candidate calculated position of the target object in the actual scene. That is, the coordinates of the target point of the target object in three-dimensional space must lie on this ray (the white point in fig. 11 is in fact a ray perpendicular to the viewing angle).
Step 6: as shown in fig. 12, the final position is determined using information from other reference surfaces in the three-dimensional virtual scene. For example, if the target point of the target object is identified as the sole of a foot, the three-dimensional space coordinates of the target object (i.e., the third position information) are finally calculated from the intersection of the ray with the ground.
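Steps 5 and 6 amount to casting a viewing ray through the target's pixel position in the fitted view and intersecting it with the ground plane. The sketch below assumes a pinhole camera looking down its local -z axis and a horizontal ground plane; all names are illustrative rather than taken from the patent.

```python
import numpy as np

def pixel_ray(cam_pos, cam_rot, px, py, width, height, fov_rad, aspect):
    """Viewing ray through pixel (px, py) for a camera with fitted pose
    (position cam_pos, 3x3 rotation cam_rot), vertical field of view and
    aspect ratio. Pixel (0, 0) is the top-left image corner."""
    tan_half = np.tan(fov_rad / 2.0)
    x = (2.0 * (px + 0.5) / width - 1.0) * tan_half * aspect
    y = (1.0 - 2.0 * (py + 0.5) / height) * tan_half
    d = cam_rot @ np.array([x, y, -1.0])   # camera looks down -z
    return np.asarray(cam_pos, dtype=float), d / np.linalg.norm(d)

def intersect_ray_ground(origin, direction, ground_z=0.0):
    """Intersection of the viewing ray with the ground plane z = ground_z;
    the identified target point (e.g. the sole) lies at this intersection."""
    if abs(direction[2]) < 1e-9:
        return None                        # ray parallel to the ground
    t = (ground_z - origin[2]) / direction[2]
    return origin + t * direction if t >= 0 else None
```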
and 7, successfully performing positioning calculation.
The technical solutions described above are described below with reference to preferred embodiments, but are not intended to limit the technical solutions of the embodiments of the present invention.
Step 1, carrying out three-dimensional scene modeling on an area needing to be positioned, wherein the three-dimensional scene corresponds to the three-dimensional virtual reality space;
step 2, as shown in fig. 13, in the three-dimensional space model, labeling key feature points (such as landmark buildings, roads, etc.), and since the feature points are all physical models, the position information and the three-dimensional model information (such as length, width, height, etc.) of the three-dimensional space model are known; wherein the key feature points correspond to the N second features;
and 3, as shown in fig. 14, acquiring plane image information, and then identifying a target (a person, an object and the like to be positioned) in the image (such as a round object in fig. 14) by using an image intelligent analysis technology. Analyzing and extracting all key feature points (namely the N first feature objects) in the surrounding environment, and calculating position information, shape information, mutual positions and distance relations of the target and the feature points in the plane; wherein the two-dimensional plane image corresponds to the first plane image;
and 4, as shown in fig. 15, extracting the key feature points in the target object and the surrounding environment in the two-dimensional plane image, and the mutual position and distance relationship. Establishing a mapping relation between a two-dimensional plane image and feature points in a three-dimensional scene (namely in a three-dimensional virtual reality space) by carrying out corresponding matching search, mutual position checking and visual distance application analysis in the three-dimensional virtual scene;
and 5, calculating the actual coordinate information of the target object according to the spatial information, the position information and the shape information of the target and the characteristic points in the plane in the three-dimensional scene, thereby achieving the purpose of positioning.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
This embodiment further provides an apparatus for locating a target object. The apparatus is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 16 is a block diagram of a target object positioning apparatus according to an embodiment of the present invention. As shown in fig. 16, the apparatus includes:
the first obtaining module 160 is configured to take a picture of a target object to be positioned, so as to obtain a first planar image including the target object, where the target object is located in a specified area of a constructed three-dimensional virtual reality space;
a second obtaining module 162, configured to obtain first position information and first shape information of the target object from the first planar image, second position information and second shape information of N first features in the first planar image, and a mutual position relationship between the target object and the N first features, where N is an integer greater than 1;
a first determining module 164, configured to determine, according to the acquired N first features, N second features corresponding to the N first features and N pieces of three-dimensional model information of the N second features in the three-dimensional virtual reality space;
a second determining module 166, configured to determine third position information of the target object in the designated area according to the N three-dimensional model information, the first position information, the first shape information, the second position information, the second shape information, and the mutual position relationship.
According to the present invention, in a constructed three-dimensional virtual reality space, a first plane image containing a target object in a designated area can be acquired by photographing. First position information and first appearance information of the target object in the first plane image are then determined, together with second position information and second appearance information of N first features in the first plane image and the mutual positional relationship between the target object and the N first features. The N first features are further mapped into the three-dimensional virtual reality space to determine the N corresponding second features and obtain N pieces of three-dimensional model information of those second features. Finally, third position information of the target object in the designated area can be determined from the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information, and the mutual positional relationship. This technical solution addresses the problems of difficult positioning and high cost of wireless positioning technologies in the related art, for which no effective solution had previously been provided: the position of the target object in the designated area is determined by combining a plane image with a three-dimensional virtual reality space, overcoming the positioning difficulties of wireless positioning technology.
In this embodiment of the present invention, the first determining module 164 is further configured to match the N first features with K second features in the three-dimensional virtual reality space one by one, and determine N second features corresponding to the N first features and N three-dimensional model information of the N second features; the K second features are pre-marked features in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
In this embodiment of the present invention, the second determining module 166 is further configured to determine, according to the N pieces of three-dimensional model information, the second position information, the second shape information, and the mutual position relationship, a view cone angle and a camera aspect ratio of the image acquisition device used to capture the first planar image; and to determine third position information of the target object in the designated area according to the view cone angle, the camera aspect ratio, the first position information, and the first appearance information.
In this embodiment of the present invention, the second determining module 166 is further configured to acquire a second planar image through perspective projection transformation in the three-dimensional virtual reality space, wherein a first shooting view angle of the first planar image is the same as a second shooting view angle of the second planar image, so that the position information and the shape information of the features in the first planar image and the second planar image are consistent; and to determine third position information of the target object in the specified area according to the first position information, the first appearance information, the second shooting view angle of the second planar image, the mutual position relation, and the N pieces of three-dimensional model information.
An embodiment of the present invention further provides a storage medium including a stored program, wherein the program executes any one of the methods described above.
Alternatively, in the present embodiment, the storage medium may be configured to store program codes for performing the following steps:
S1: photographing a target object to be positioned to obtain a first plane image containing the target object, wherein the target object is located in a specified area of a constructed three-dimensional virtual reality space;
S2: obtaining first position information and first shape information of the target object from the first planar image, second position information and second shape information of N first features in the first planar image, and a mutual position relationship between the target object and the N first features, where N is an integer greater than 1;
S3: determining N second features corresponding to the N first features and N three-dimensional model information of the N second features in the three-dimensional virtual reality space according to the obtained N first features;
S4: determining third position information of the target object in the designated area according to the N three-dimensional model information, the first position information, the first shape information, the second position information, the second shape information, and the mutual position relationship.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1: photographing a target object to be positioned to obtain a first plane image containing the target object, wherein the target object is located in a specified area of a constructed three-dimensional virtual reality space;
S2: obtaining first position information and first shape information of the target object from the first planar image, second position information and second shape information of N first features in the first planar image, and a mutual position relationship between the target object and the N first features, where N is an integer greater than 1;
S3: determining N second features corresponding to the N first features and N three-dimensional model information of the N second features in the three-dimensional virtual reality space according to the obtained N first features;
S4: determining third position information of the target object in the designated area according to the N three-dimensional model information, the first position information, the first shape information, the second position information, the second shape information, and the mutual position relationship.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device, and they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for locating a target object, comprising:
photographing a target object to be positioned to obtain a first plane image containing the target object, wherein the target object is located in a designated area of a constructed three-dimensional virtual reality space;
acquiring first position information and first appearance information of the target object from the first plane image, second position information and second appearance information of N first features in the first plane image, and mutual position relations between the target object and the N first features, wherein N is an integer greater than 1;
determining N second features corresponding to the N first features and N three-dimensional model information of the N second features in the three-dimensional virtual reality space according to the obtained N first features;
and determining third position information of the target object in the specified area according to the N pieces of three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information and the mutual position relation.
2. The method according to claim 1, wherein the determining, according to the acquired N first features, N second features corresponding to the N first features and N three-dimensional model information of the N second features in the three-dimensional virtual reality space includes:
matching the N first features with K second features in the three-dimensional virtual reality space one by one, and determining N second features corresponding to the N first features and N three-dimensional model information of the N second features; the K second features are pre-marked features in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
3. The method according to claim 1, wherein determining third position information of the target object in the designated area according to the N three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information, and the mutual position relationship comprises:
determining a view cone angle and a camera aspect ratio of an image acquisition device for shooting the first planar image according to the N pieces of three-dimensional model information, the second position information, the second appearance information and the mutual position relationship;
and determining third position information of the target object in the designated area according to the viewing cone angle, the camera aspect ratio, the first position information and the first appearance information.
4. The method according to claim 1, wherein determining third position information of the target object in the designated area according to the N three-dimensional model information, the first position information, the first appearance information, the second position information, the second appearance information, and the mutual position relationship comprises:
acquiring a second plane image in the three-dimensional virtual reality space through perspective projection transformation, wherein a first shooting view angle of the first plane image is the same as a second shooting view angle of the second plane image, so that the position information and the shape information of the features in the first plane image and the second plane image are consistent;
and determining third position information of the target object in the specified area according to the first position information, the first appearance information, the second shooting view angle of the second plane image, the mutual position relation, and the N pieces of three-dimensional model information.
5. An apparatus for locating a target object, comprising:
a first acquisition module, configured to photograph a target object to be positioned to obtain a first plane image containing the target object, wherein the target object is located in a specified area of a constructed three-dimensional virtual reality space;
a second obtaining module, configured to obtain first position information and first shape information of the target object from the first planar image, second position information and second shape information of N first features in the first planar image, and a mutual position relationship between the target object and the N first features, where N is an integer greater than 1;
the first determining module is used for determining N second features corresponding to the N first features and N three-dimensional model information of the N second features in the three-dimensional virtual reality space according to the obtained N first features;
a second determining module, configured to determine third location information of the target object in the designated area according to the N pieces of three-dimensional model information, the first location information, the first shape information, the second location information, the second shape information, and the mutual location relationship.
6. The apparatus according to claim 5, wherein the first determining module is configured to match the N first features with K second features in the three-dimensional virtual reality space one by one, and determine N second features corresponding to the N first features and N three-dimensional model information of the N second features; the K second features are pre-marked features in the three-dimensional virtual reality space, and K is an integer greater than or equal to N.
7. The apparatus according to claim 5, wherein the second determining module is further configured to determine a view cone angle and a camera aspect ratio of an image acquisition apparatus for capturing the first planar image according to the N three-dimensional model information, the second position information, the second shape information, and the mutual position relationship; and to determine third position information of the target object in the designated area according to the view cone angle, the camera aspect ratio, the first position information, and the first shape information.
8. The apparatus according to claim 5, wherein the second determining module is further configured to acquire a second planar image through perspective projection transformation in the three-dimensional virtual reality space, wherein a first shooting view angle of the first planar image is the same as a second shooting view angle of the second planar image, so that the position information and the shape information of the features in the first planar image and the second planar image are consistent; and to determine third position information of the target object in the specified area according to the first position information, the first shape information, the second shooting view angle of the second planar image, the mutual position relation, and the N pieces of three-dimensional model information.
9. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 4 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 4.
CN201910718260.5A 2019-08-05 2019-08-05 Target object positioning method and device, storage medium and electronic device Active CN110443850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910718260.5A CN110443850B (en) 2019-08-05 2019-08-05 Target object positioning method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN110443850A CN110443850A (en) 2019-11-12
CN110443850B (en) 2022-03-22

Family

ID=68433246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910718260.5A Active CN110443850B (en) 2019-08-05 2019-08-05 Target object positioning method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN110443850B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178127B (en) * 2019-11-20 2024-02-20 青岛小鸟看看科技有限公司 Method, device, equipment and storage medium for displaying image of target object
CN111475026B (en) * 2020-04-10 2023-08-22 李斌 Spatial positioning method based on mobile terminal application augmented virtual reality technology
CN112001947A (en) * 2020-07-30 2020-11-27 海尔优家智能科技(北京)有限公司 Shooting position determining method and device, storage medium and electronic device
CN115867937A (en) * 2020-09-21 2023-03-28 西门子(中国)有限公司 Target positioning method, device and computer readable medium
CN112581530A (en) * 2020-12-01 2021-03-30 北京时代凌宇信息技术有限公司 Indoor positioning method, storage medium, equipment and system
CN112948814A (en) * 2021-03-19 2021-06-11 合肥京东方光电科技有限公司 Account password management method and device and storage medium
CN115984366A (en) * 2022-05-31 2023-04-18 中兴通讯股份有限公司 Positioning method, electronic device, storage medium, and program product

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10630962B2 (en) * 2017-01-04 2020-04-21 Qualcomm Incorporated Systems and methods for object location

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831401A (en) * 2012-08-03 2012-12-19 樊晓东 Method and system for tracking, three-dimensional overlay, and interaction with a target object without special markers
CN106162149B (en) * 2016-09-29 2019-06-11 宇龙计算机通信科技(深圳)有限公司 Method and mobile terminal for shooting 3D photos
CN106780735A (en) * 2016-12-29 2017-05-31 深圳先进技术研究院 Semantic map construction method and device, and robot
WO2019014585A3 (en) * 2017-07-14 2019-02-21 Materialise Nv System and method of radiograph correction and visualization
CN107845060A (en) * 2017-10-31 2018-03-27 广东中星电子有限公司 Method and system for converting between geographical positions and corresponding image position coordinates
CN108415639A (en) * 2018-02-09 2018-08-17 腾讯科技(深圳)有限公司 Viewing angle adjustment method and device, electronic device, and computer-readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A 3D localisation method in indoor environments for virtual reality applications; Wei Song et al.; Human-centric Computing and Information Sciences; 2017-12-31; full text *
Research on Indoor Positioning Technology for Home Robots; Wu Gaoyu; China Master's Theses Full-text Database (Information Science and Technology); 2018-04-15 (No. 4); full text *

Also Published As

Publication number Publication date
CN110443850A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN110443850B (en) Target object positioning method and device, storage medium and electronic device
CN110296691B (en) IMU calibration-fused binocular stereo vision measurement method and system
CN110645986B (en) Positioning method and device, terminal and storage medium
JP6255085B2 (en) Locating system and locating method
CN113592989B (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
CN111060948B (en) Positioning method, positioning device, helmet and computer readable storage medium
CN104936283A (en) Indoor positioning method, server and system
CN113048980B (en) Pose optimization method and device, electronic equipment and storage medium
CN107103056B (en) Local identification-based binocular vision indoor positioning database establishing method and positioning method
CN103874193A (en) Method and system for positioning mobile terminal
CN111083633B (en) Mobile terminal positioning system, establishment method thereof and positioning method of mobile terminal
US10107629B2 (en) Information processing system, information processing method, and non-transitory computer readable storage medium
CN110243339A Monocular camera localization method and device, readable storage medium, and electronic terminal
CN114092646A (en) Model generation method and device, computer equipment and storage medium
CN112422653A (en) Scene information pushing method, system, storage medium and equipment based on location service
CN112991440A (en) Vehicle positioning method and device, storage medium and electronic device
CN110969704B (en) Mark generation tracking method and device based on AR guide
CN111652338B (en) Method and device for identifying and positioning based on two-dimensional code
CN109712249A (en) Geographic element augmented reality method and device
Sokolov et al. Development of software and hardware of entry-level vision systems for navigation tasks and measuring
KR101601726B1 (en) Method and system for determining position and attitude of mobile terminal including multiple image acquisition devices
Zhang et al. Camera Calibration for Long‐Distance Photogrammetry Using Unmanned Aerial Vehicles
Kuo et al. Calibrating a wide-area camera network with non-overlapping views using mobile devices
KR20150096127A (en) Method and apparatus for calculating location of points captured in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant