CN116758150A - Position information determining method and device


Info

Publication number
CN116758150A
Authority
CN
China
Prior art keywords
image
target
determining
coordinates
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310559838.3A
Other languages
Chinese (zh)
Other versions
CN116758150B (en)
Inventor
顾佳琦
樊鲁斌
赖百胜
吴岳
周昌
叶杰平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Cloud Computing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Cloud Computing Ltd filed Critical Alibaba Cloud Computing Ltd
Priority to CN202310559838.3A priority Critical patent/CN116758150B/en
Publication of CN116758150A publication Critical patent/CN116758150A/en
Application granted granted Critical
Publication of CN116758150B publication Critical patent/CN116758150B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of this specification provide a position information determining method and apparatus. The position information determining method includes: determining a target image of a reference object acquired by a target object at its current position; determining initial coordinates and an associated area of the target object in a world coordinate system according to the installation position of the target object and a preset coordinate algorithm; determining, according to the initial coordinates and the associated area, an associated image associated with the reference object in the target image and the acquisition coordinates of the associated image; and determining the position information of the current position of the target object according to the initial coordinates of the target object and the acquisition coordinates of the associated image. The target object is thus positioned from the target image containing the reference object, acquired at the current position, and from the associated image of the same reference object, so no manual participation is needed, and both the accuracy and the efficiency of determining the position information of the current position of the target object are improved.

Description

Position information determining method and device
Technical Field
The embodiments of this specification relate to the field of computer technology, and in particular to a position information determining method.
Background
With the development of Internet technology, photographing devices have become one of the most widely deployed sensors for perceiving and understanding large-scale urban scenes, and are widely used in urban scenarios such as traffic and navigation to detect traffic conditions, events, and accidents. However, the photographing devices installed along urban streets are large in number and scale, and their installation, operation, and maintenance are neither standardized nor uniform, so a great deal of manpower is consumed in managing and maintaining them. During installation and operation of a photographing device, its position information can easily change, and the actual position information of the device then needs to be recorded.
In the prior art, position information is generally determined and recorded manually, which requires a worker to collect the position information at the installation position of each photographing device. This consumes a large amount of human resources, the collected position information has low accuracy, and the collection is inefficient. A more efficient position information determining method is therefore needed to solve the above problems.
Disclosure of Invention
In view of this, the embodiments of this specification provide a position information determining method. One or more embodiments of this specification also relate to a position information determining apparatus, a computing device, a computer-readable storage medium, and a computer program, which overcome the technical shortcomings of the prior art.
According to a first aspect of embodiments of the present specification, there is provided a first location information determining method, including:
determining a target image of a reference object acquired by a target object at a current position;
according to the installation position of the target object and a preset coordinate algorithm, determining initial coordinates and an associated area of the target object in a world coordinate system;
determining an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
and determining the position information of the current position of the target object according to the initial coordinates of the target object and the acquisition coordinates of the associated image.
According to a second aspect of embodiments of the present specification, there is provided a first position information determining apparatus comprising:
the target image acquisition module is configured to determine a target image of a reference object acquired by the target object at a current position;
the information determining module is configured to determine initial coordinates and associated areas of the target object in a world coordinate system according to the installation position of the target object and a preset coordinate algorithm;
the associated image determining module is configured to determine an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
And the position information determining module is configured to determine the position information of the current position of the target object according to the initial coordinates of the target object and the acquisition coordinates of the associated image.
According to a third aspect of embodiments of the present specification, there is provided a second position information determining method, including:
determining a target image of a reference object acquired by a shooting device at a current position;
according to the installation position of the shooting device and a preset coordinate algorithm, determining initial coordinates and an associated area of the shooting device in a world coordinate system;
determining an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
and determining the position information of the current position of the shooting device according to the initial coordinates of the shooting device and the acquisition coordinates of the associated image.
According to a fourth aspect of the embodiments of the present specification, there is provided a second position information determining apparatus comprising:
the target image acquisition module is configured to determine a target image of the reference object acquired by the shooting device at the current position;
the information determining module is configured to determine initial coordinates and an associated area of the shooting device in a world coordinate system according to the installation position of the shooting device and a preset coordinate algorithm;
The associated image determining module is configured to determine an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
and the position information determining module is configured to determine the position information of the current position of the shooting device according to the initial coordinates of the shooting device and the acquisition coordinates of the associated image.
According to a fifth aspect of embodiments of the present disclosure, there is provided a third location information determining method applied to a city management platform, including:
determining a target image of a reference object acquired by an image acquisition device at a current position;
determining initial coordinates and associated areas of the image acquisition equipment in a world coordinate system according to the installation position of the image acquisition equipment and a preset coordinate algorithm;
determining an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
and determining the position information of the current position of the image acquisition equipment according to the initial coordinates of the image acquisition equipment and the acquisition coordinates of the associated image.
According to a sixth aspect of embodiments of the present specification, there is provided a third location information determining apparatus applied to a city management platform, comprising:
the target image acquisition module is configured to determine the target image of the reference object acquired by the image acquisition equipment at the current position;
the information determining module is configured to determine initial coordinates and an associated area of the image acquisition equipment in a world coordinate system according to the installation position of the image acquisition equipment and a preset coordinate algorithm;
the associated image determining module is configured to determine an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
and the position information determining module is configured to determine the position information of the current position of the image acquisition device according to the initial coordinates of the image acquisition device and the acquisition coordinates of the associated image.
According to a seventh aspect of embodiments of the present specification, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions that, when executed by the processor, perform the steps of the location information determination method described above.
According to an eighth aspect of embodiments of the present specification, there is provided a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of the above-described location information determining method.
According to a ninth aspect of the embodiments of the present specification, there is provided a computer program, wherein the computer program, when executed in a computer, causes the computer to perform the steps of the above-described position information determination method.
In one embodiment of this specification, a target image of a reference object acquired by a target object at a current position is determined; initial coordinates and an associated area of the target object in a world coordinate system are determined according to the installation position of the target object and a preset coordinate algorithm; according to the initial coordinates and the associated area, an associated image associated with the reference object in the target image and the acquisition coordinates of the associated image are determined; and the position information of the current position of the target object is determined according to the initial coordinates of the target object and the acquisition coordinates of the associated image. The target object is thus positioned from the target image containing the reference object, acquired at the current position, and from the associated image of the same reference object, so no manual participation is needed, and both the accuracy and the efficiency of determining the position information of the current position of the target object are improved; moreover, the initial coordinates of the target object can be corrected according to the position information, which improves the correction efficiency.
Drawings
Fig. 1 is a schematic diagram of a position information determining method according to an embodiment of the present specification;
Fig. 2 is a flowchart of a first position information determining method according to an embodiment of the present specification;
Fig. 3 is a schematic diagram of image coordinate system construction in a position information determining method according to an embodiment of the present specification;
Fig. 4 is a process flowchart of a position information determining method according to an embodiment of the present specification;
Fig. 5 is a schematic structural diagram of a first position information determining apparatus according to an embodiment of the present specification;
Fig. 6 is a flowchart of a second position information determining method according to an embodiment of the present specification;
Fig. 7 is a schematic structural diagram of a second position information determining apparatus according to an embodiment of the present specification;
Fig. 8 is a flowchart of a third position information determining method according to an embodiment of the present specification;
Fig. 9 is a schematic structural diagram of a third position information determining apparatus according to an embodiment of the present specification;
Fig. 10 is a block diagram of a computing device according to an embodiment of the present specification.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of this specification. However, this specification can be implemented in many other ways than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the specification is therefore not limited to the specific implementations disclosed below.
The terminology used in one or more embodiments of this specification is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in this specification, in one or more embodiments, and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of this specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, this information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of one or more embodiments of this specification, "first" may also be referred to as "second," and similarly, "second" may also be referred to as "first." Depending on the context, the word "if" as used herein may be interpreted as "when," "upon," or "in response to determining."
Furthermore, it should be noted that the user information (including, but not limited to, user device information, user personal information, etc.) and data (including, but not limited to, data used for analysis, stored data, displayed data, etc.) involved in one or more embodiments of this specification are information and data authorized by the user or fully authorized by all parties; the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose whether to authorize or refuse.
First, terms related to one or more embodiments of the present specification will be explained.
Longitude and latitude coordinates: longitude and latitude together form a coordinate system called a geographic coordinate system. The invention uses the WGS84 (World Geodetic System 1984) coordinate system, consistent with the coordinate system of the street view pictures.
Street view picture: an urban street image that is either self-collected or published on the Internet, whose image information includes the longitude and latitude coordinates at which it was acquired; sources include pictures captured by mobile phones, dashboard cameras, map collection vehicles, and the like.
Global descriptor: a global descriptor describes the global feature information of one frame of picture data with a one-dimensional feature vector of size 1×M, where M is the length of the feature vector. Typical examples include VLAD features, BoW features, and features from deep learning algorithms.
Image matching: image matching describes the distance relationship between two pictures using the similarity of global descriptors or local features. Nearest-neighbor feature vector matching or a deep-learning-based matching algorithm is typically employed. Matching through global descriptors can also be called image retrieval.
Sparse reconstruction: local sparse reconstruction performed with the image from a shooting point and street view images of the same scene, to obtain the longitude and latitude of the shooting point. Sparse reconstruction is mainly performed with structure-from-motion algorithms.
Camera calibration: in order to determine the transformation between the three-dimensional geometric position of a point on the surface of an object and its corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera intrinsic and extrinsic parameters. The process of solving for these parameters is called camera calibration.
AOI: an algorithm to acquire a region Of Interest (Area Of Interest).
POI: an algorithm to acquire a region of interest (Point Of Interest).
VLAD: (Vector of locally aggregated descriptors) is a feature representation method of an image, and is widely used in the fields of image retrieval and image classification.
CNN: (Convolutional Neural Network) convolutional neural network, which is a neural network dedicated to processing data having a grid-like structure. Convolutional networks refer to those neural networks that use convolutional operations in at least one layer of the network to replace the general matrix multiplication operations.
VGG: (Visual Geometry Group) a deep convolutional neural network; here VGG refers to VGG-16 (13 convolutional layers + 3 fully connected layers).
FCN: (Fully Convolutional Networks) a fully convolutional network. Its structure is mainly divided into two parts: a fully convolutional part and a deconvolution part. The fully convolutional part is a classical CNN (such as VGG, ResNet, etc.) used to extract features; the deconvolution part upsamples the features to obtain a semantic segmentation image at the original size.
SuperPoint algorithm: an algorithm for feature detection and matching. It is able to quickly and accurately detect key points in images and use them to match different images. Such algorithms find widespread use in many computer vision tasks, including image stitching, camera positioning, and three-dimensional modeling.
SIFT features: (Scale-Invariant Feature Transform) a computer-vision feature extraction algorithm used to detect and describe local features in images.
RANSAC algorithm: (Random Sample Consensus) an algorithm that estimates the parameters of a mathematical model from a set of sample data containing outliers, in order to obtain valid sample data.
ORB features: (Oriented FAST and Rotated BRIEF) a feature description method with scale and rotation invariance that is also robust to noise and perspective transformation; thanks to this good performance, ORB is used in a very wide range of application scenarios. ORB feature detection mainly comprises two steps: (1) oriented FAST keypoint detection and (2) BRIEF feature description.
K-means algorithm: the K-means clustering algorithm is an iterative clustering analysis algorithm. The data are to be divided into K groups in advance; K objects are randomly selected as initial cluster centers, the distance between each object and each cluster center is calculated, and each object is assigned to the cluster center closest to it. A cluster center together with the objects assigned to it represents a cluster.
With the development of Internet technology, photographing devices have become one of the most widely deployed sensors for perceiving and understanding large-scale urban scenes, and are widely used in urban scenarios such as traffic and navigation to detect traffic conditions, events, and accidents. However, the photographing devices installed along urban streets are large in number and scale, and their installation, operation, and maintenance are neither standardized nor uniform, so a great deal of manpower is consumed in managing and maintaining them. During installation and operation of a photographing device, its position information can easily change, and the actual position information of the device then needs to be recorded.
In the prior art, position information is generally determined and recorded manually, which requires a worker to collect the position information at the installation position of each photographing device. This consumes a large amount of human resources, the collected position information has low accuracy, and the collection is inefficient. A more efficient position information determining method is therefore needed to solve the above problems.
Fig. 1 is a schematic diagram of a position information determining method according to an embodiment of this specification. As shown in Fig. 1, a target object is determined, and a target image of a reference object acquired by the target object at its current position is obtained. Initial coordinates and an associated area of the target object in a world coordinate system are determined according to the installation position of the target object and a preset coordinate algorithm. An associated image associated with the reference object in the target image, and the acquisition coordinates of the associated image, are determined according to the initial coordinates and the associated area. Finally, the position information of the current position of the target object is determined according to the initial coordinates of the target object and the acquisition coordinates of the associated image.
In this method, a target image of a reference object acquired by the target object at its current position is determined; initial coordinates and an associated area of the target object in a world coordinate system are determined according to the installation position of the target object and a preset coordinate algorithm; an associated image associated with the reference object in the target image, and the acquisition coordinates of the associated image, are determined according to the initial coordinates and the associated area; and the position information of the current position of the target object is determined according to the initial coordinates of the target object and the acquisition coordinates of the associated image. The target object is thus positioned from the target image containing the reference object, acquired at the current position, and from the associated image of the same reference object, so no manual participation is needed and both the accuracy and the efficiency of determining the position information of the current position of the target object are improved; moreover, the initial coordinates of the target object can be corrected according to the position information, which improves the correction efficiency.
In a city management scenario, the target object may be an image acquisition device installed at an arbitrary location. When the image acquisition device detects that an abnormal event has occurred in the urban street scene, inaccurate recorded point coordinates for the device make it impossible to reach the place where the event occurred quickly. By correcting the position information of the image acquisition device, accurate point coordinates of the device can be obtained, and when an abnormal event occurs, its location can be determined quickly and accurately based on these point coordinates, so that event handlers can arrive at the scene promptly. The target image may be any image containing a reference object, and the reference object may be an element such as a building or a road layout structure. When the target image is any image containing the reference object, the position at which the target image was acquired can be determined from the target image.
In the present specification, a position information determining method is provided, and the present specification relates to a position information determining apparatus, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.
Referring to fig. 2, fig. 2 shows a flowchart of a first location information determining method according to an embodiment of the present specification, which specifically includes the following steps.
Step S202: and determining the current position of the target object and obtaining a target image of the reference object.
Specifically, the target object may be a photographing device, and the target object may be a photographing device installed in a street, for photographing an object included in a building, a scenic spot, a road, or the like in a street view; the current position is the installation position of the target object; the target image is an image captured by a target object, and the reference object is an object such as a building included in the target image.
Based on the method, a target object is determined in the street view, and a target image which is shot and contains a reference object is acquired, wherein the target object is at the current position. The target image is used as a reference image for determining the position information of the target object subsequently.
In practical application, the target object may be a shooting device capable of shooting video or image, such as a camera installed in a street view; the target image may be an image captured by the target object, or may be an image frame specified in a video captured by the target object, and the image frame including the reference object is set as the target image.
Further, when the target object is a photographing device, images or video of its shooting range can be recorded in real time, and the reference object contained in the target image needs to be clearly visible; an image frame containing the reference object can therefore be selected from the target video shot by the target object and used as the target image. This is implemented as follows:
determining a target video acquired by the target object at the current position, and taking an image frame containing the reference object in the target video as the target image.
Here, the target video is a video shot by the target object, and an image frame is a video frame contained in the target video. The target object and a target video acquired by it at the current position over a period of time are determined. An image frame containing the reference object is then selected from the frames of the target video as the target image; when selecting the frame, the sharpness of the reference object contained in the frame must be greater than a preset sharpness threshold.
For example, when the target object is a photographing device installed at the north gate of school A, an image containing the teaching building of school A is shot by the device, or a target video containing the teaching building of school A is shot and an image frame containing the teaching building is selected from the target video as the target image.
In summary, the target video shot by the target object at the current position is determined, and an image frame containing the reference object is then selected from the target video as the target image, which facilitates the subsequent determination of the position information of the target object based on the target image.
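The preset sharpness threshold is mentioned above but not specified. The Python sketch below is one plausible way to pick a sufficiently sharp frame from a target video, using the variance of the Laplacian as a sharpness score; the threshold value, the file name, and the idea of scoring whole frames rather than only the reference object region are assumptions for illustration.

import cv2

def select_target_frame(video_path: str, sharpness_threshold: float = 100.0):
    """Return the first video frame whose Laplacian variance exceeds the threshold."""
    cap = cv2.VideoCapture(video_path)
    selected = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        if sharpness > sharpness_threshold:
            selected = frame
            break
    cap.release()
    return selected

# Hypothetical usage: target_image = select_target_frame("camera_north_gate.mp4")
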
Step S204: and determining initial coordinates and associated areas of the target object in a world coordinate system according to the installation position of the target object and a preset coordinate algorithm.
Specifically, after the target object is determined to be at the current position and the acquired target image of the reference object is obtained, the initial coordinates and the associated area of the target object in the world coordinate system can be determined according to the installation position of the target object and a preset coordinate algorithm, wherein the installation position refers to the target object position recorded by an installer of the target object when the target object is installed at the current position; the preset coordinate algorithm can be an interest point association algorithm, and comprises an interest region algorithm and an interest point algorithm; the world coordinate system may be the WGS84 (World Geodetic System 1984) coordinate system; the initial coordinates are determined based on the interest point algorithm and the installation position of the target object, and the longitude and latitude coordinates of the target object in the world coordinate system; correspondingly, the associated area is determined based on an interest area algorithm and the installation position of the target object, the target object is in the region governed by the world coordinate system, and the associated area can be an irregular graph or a polygon.
Based on the method, after the target object is determined to be at the current position and the acquired target image of the reference object, the installation position of the target object is determined, the initial coordinates of the target object in the world coordinate system are determined according to the preset coordinate algorithm and the installation position, the association area of the target object in the world coordinate system is determined according to the preset coordinate algorithm and the installation position, and the association area contains coordinate points corresponding to the initial coordinates.
In practical application, in the case where the target object is a photographing apparatus, the installation position refers to installation address information recorded when an installer of the photographing apparatus installs the target object in a street, for example: north gate of B primary school in street a. When the initial coordinates and the association areas of the target object in the world coordinate system are determined, the installation address information of the target object can be processed through a preset coordinate algorithm, and the initial coordinates with high association degree with the installation address information are determined.
Further, after the installation position of the target object is determined, the coordinates of the target object and its associated area can be determined in the world coordinate system. Since the target object is a photographing device installed at a fixed position, the initial coordinates and the associated area of the target object in the world coordinate system can be determined based on a preset coordinate point algorithm and a preset area algorithm respectively, implemented as follows:
the preset coordinate algorithm comprises a preset coordinate point algorithm and a preset area algorithm; correspondingly, determining the initial coordinates and the associated area of the target object in the world coordinate system according to the installation position of the target object and the preset coordinate algorithm comprises: determining the initial coordinates of the target object in the world coordinate system according to the installation position of the target object and the preset coordinate point algorithm; and determining the associated area of the target object in the world coordinate system according to the initial coordinates and the preset area algorithm.
Specifically, the preset coordinate point algorithm may be a point-of-interest algorithm, and the preset area algorithm may be an area-of-interest algorithm.
On this basis, when the preset coordinate algorithm includes a preset coordinate point algorithm and a preset area algorithm, reference coordinates associated with the target object are determined according to the installation position of the target object and the preset coordinate point algorithm, and among these reference coordinates, the coordinates with the highest degree of association with the installation position of the target object are taken as the initial coordinates of the target object in the world coordinate system. The associated area of the target object in the world coordinate system is then determined according to the initial coordinates and the preset area algorithm; the associated area can represent the shooting area of the target object.
For example, when the installation position of the photographing device installed at the north gate of school A is recorded as "north gate of school A", position entries associated with "north gate of school A", such as "parking lot of school A", "north gate 1 of school A", and "south gate of school A", are determined by the point-of-interest algorithm; "north gate 1 of school A", which has the highest degree of association with the installation position "north gate of school A", is selected from these entries, and its coordinates are taken as the initial coordinates of the target object. A polygonal or irregularly shaped shooting area of the target object is then determined based on the area-of-interest algorithm and the initial coordinates.
In summary, determining the initial coordinates and the associated area of the target object in the world coordinate system based on the preset coordinate point algorithm and the preset area algorithm improves the accuracy of the initial coordinates and the associated area and gives them a higher reference value.
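Neither the point-of-interest lookup nor the area-of-interest computation is specified in detail here, so the Python sketch below only illustrates the flow with stand-in logic: candidate POI entries (assumed to come from some map service) are ranked against the recorded installation address by simple string similarity, and the associated area is approximated by a small bounding box around the selected initial coordinates. All names, coordinates, and the box size are hypothetical.

from difflib import SequenceMatcher

# Hypothetical POI candidates: (name, longitude, latitude).
candidates = [
    ("parking lot of school A", 120.0212, 30.2801),
    ("north gate 1 of school A", 120.0205, 30.2812),
    ("south gate of school A", 120.0207, 30.2779),
]

def initial_coordinates(installation_position: str):
    """Pick the candidate whose name best matches the recorded installation position."""
    name, lon, lat = max(
        candidates,
        key=lambda c: SequenceMatcher(None, installation_position, c[0]).ratio(),
    )
    return name, (lon, lat)

def associated_area(lon: float, lat: float, half_side_deg: float = 0.001):
    """Crude stand-in for an AOI algorithm: an axis-aligned box around the point."""
    return [
        (lon - half_side_deg, lat - half_side_deg),
        (lon + half_side_deg, lat - half_side_deg),
        (lon + half_side_deg, lat + half_side_deg),
        (lon - half_side_deg, lat + half_side_deg),
    ]

name, (lon, lat) = initial_coordinates("north gate of school A")
print(name, associated_area(lon, lat))
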
Step S206: and determining an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region.
Specifically, after the initial coordinates and the associated areas of the target object in the world coordinate system are determined according to the installation position of the target object and the preset coordinate algorithm, an associated image associated with the reference object in the target image and an acquisition coordinate of the associated image can be determined according to the initial coordinates and the associated areas, wherein the associated image refers to an image containing the reference object, and the associated image can be an image acquired by utilizing a mobile phone camera, a vehicle recorder, a panoramic camera and the like; acquisition coordinates refer to coordinates of an image acquisition device that acquires the associated image.
Based on the above, after the initial coordinates and the associated areas of the target object in the world coordinate system are determined according to the installation position of the target object and a preset coordinate algorithm, the associated images associated with the reference object in the target image are selected in the initial image set corresponding to the initial coordinates and the associated areas according to the initial coordinates and the associated areas, and the acquisition coordinates of the associated images are determined.
In practical applications, the associated image includes a reference object. After the initial coordinates and the associated area of the target object in the world coordinate system are determined, image acquisition can be performed in the associated area by using a mobile phone camera, a vehicle event data recorder, a panoramic camera and the like to obtain an initial image set, and then an image containing the reference object is selected as an associated image in the initial image set.
Further, since the target object corresponds to the initial coordinates and the associated area, and the target image contains the reference object, the image set related to the associated area can be determined more comprehensively by taking the initial coordinates as a starting point and collecting images within the associated area to form an initial image set, from which the associated images are then selected. This is implemented as follows:
acquiring an initial image set determined within the associated area with the initial coordinates as a starting point; selecting, from the initial image set, an associated image associated with the reference object in the target image, and determining the acquisition coordinates of the associated image.
Specifically, the initial image set may be obtained by collecting images within the associated area with a mobile phone camera, a dashboard camera, a panoramic camera, or the like, and combining the collected images into the initial image set; alternatively, video may be collected within the associated area, its image frames extracted, each frame taken as an initial image, and the initial images combined into the initial image set.
On this basis, at least two initial images determined within the associated area with the initial coordinates as a starting point are acquired, and the initial image set is formed from these initial images; alternatively, an initial video is determined within the associated area with the initial coordinates as a starting point, and the image frames contained in the initial video are used as initial images to form the initial image set. An associated image associated with the reference object in the target image is selected from the initial image set, and the coordinates of the acquisition device that captured the associated image are taken as the acquisition coordinates of the associated image.
For example, after the coordinates of "north gate 1 of school A" are taken as the initial coordinates of the target object and the polygonal or irregularly shaped shooting area of the target object is determined based on the area-of-interest algorithm and the initial coordinates, images related to the shooting area are collected by an image acquisition device starting from north gate 1 of school A, yielding images of the shooting area from various shooting angles and orientations, which form the initial image set. An initial image containing the reference object, the teaching building of school A, is selected from the initial image set as an associated image, and the coordinates of the image acquisition point at which it was captured are taken as its acquisition coordinates.
In summary, selecting the associated images associated with the reference object in the target image from the initial image set ensures that every associated image contains the reference object, which improves the accuracy of determining the position information of the target object.
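As one hedged illustration of how an initial image set with acquisition coordinates might be assembled from, say, a dashboard-camera video plus a separately recorded GPS track, the Python sketch below samples every N-th frame and pairs it with a nearby GPS fix by timestamp. The file name, sampling rate, and GPS log format are assumptions, not taken from the patent.

import cv2
import bisect

def sample_initial_images(video_path: str, gps_track, every_n: int = 30):
    """gps_track: non-empty list of (timestamp_seconds, lon, lat), sorted by timestamp.
    Returns a list of (frame, (lon, lat)) pairs forming the initial image set."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    times = [t for t, _, _ in gps_track]
    images = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            t = idx / fps
            j = min(bisect.bisect_left(times, t), len(times) - 1)  # first GPS fix at or after t, clamped
            images.append((frame, (gps_track[j][1], gps_track[j][2])))
        idx += 1
    cap.release()
    return images

# Hypothetical usage:
# initial_set = sample_initial_images("dashcam.mp4", [(0.0, 120.020, 30.280), (1.0, 120.021, 30.281)])
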
Further, since the initial images in the collected initial image set may not all contain the reference object, the initial images in the initial image set need to be preliminarily screened, and the initial images associated with the reference object are selected as associated images. This is implemented as follows:
determining an initial feature vector for each initial image in the initial image set, and determining a target feature vector for the target image; comparing the initial feature vector of each initial image with the target feature vector of the target image, and selecting an associated image associated with the reference object in the target image according to the comparison result.
Specifically, the initial feature vector is the vector representation of an initial image, and may be a global descriptor feature of the initial image or may be determined by a feature extraction model; correspondingly, the target feature vector is the vector representation of the target image, and may be a global descriptor feature of the target image or may likewise be determined by a feature extraction model. The comparison result is the degree of feature correlation between the target image and an initial image, obtained by comparing the initial feature vector of the initial image with the target feature vector of the target image.
On this basis, the initial feature vector of each initial image in the initial image set is determined based on a global descriptor algorithm or a feature extraction model, and the target feature vector of the target image is determined based on the global descriptor algorithm or the feature extraction model. The initial feature vectors of the initial images are compared with the target feature vector of the target image, and associated images associated with the reference object in the target image are selected from the initial image set according to the comparison results.
For example, a global descriptor feature is determined for the target image containing the teaching building of school A, and a global descriptor feature vector is determined for each initial image in the initial image set. The global descriptor feature of the target image is compared with the global descriptor feature vector of each initial image, and initial images with high feature similarity are selected from the initial image set as associated images. Associated images with higher similarity to the target image can be found using the straight-line distance between feature vectors. A global descriptor algorithm such as VLAD (Vector of Locally Aggregated Descriptors) can be used to determine the global descriptor feature of the target image and the global descriptor feature vector of each initial image.
In summary, by preliminarily screening the initial images in the initial image set and selecting the initial images associated with the reference object as associated images, the subsequent determination of the position information of the target object can refer to the reference object in both the target image and the associated images, which improves the accuracy of the position information.
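The preliminary screening above compares global descriptors of the target image and the initial images. Assuming each image has already been reduced to a fixed-length descriptor vector (for example a VLAD or CNN embedding), a minimal version of the comparison is a nearest-neighbour search under Euclidean (straight-line) distance, as in the Python sketch below; the distance threshold is a hypothetical parameter.

import numpy as np

def screen_candidates(target_desc: np.ndarray, initial_descs: np.ndarray, max_dist: float = 0.8):
    """Return indices of initial images whose global descriptor is close to the target's,
    ordered from most to least similar."""
    dists = np.linalg.norm(initial_descs - target_desc, axis=1)  # straight-line distances
    order = np.argsort(dists)
    return [int(i) for i in order if dists[i] <= max_dist]

# Hypothetical usage with 2048-dimensional descriptors:
# candidates = screen_candidates(target_vlad, np.stack(initial_vlads))
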
Further, when selecting associated images from the initial image set, the shooting angles of the reference object differ from one initial image to another, so the portions of the reference object contained in the initial images also differ. In order to select initial images that contain the reference object as completely as possible, the associated images can be selected from the initial image set by judging whether a candidate image contains candidate feature points that coincide with the reference feature points of the reference object in the target image. This is implemented as follows:
determining candidate images among the initial images according to the comparison result; determining candidate feature points in the candidate images and reference feature points of the reference object in the target image; and taking a candidate image whose feature points have a coincidence relationship with the reference feature points as an associated image associated with the reference object in the target image.
Specifically, a candidate image is an image associated with the reference object in the target image; candidate feature points are feature points contained in a candidate image, for example the vertices of objects such as buildings and roads in the candidate image; reference feature points are the feature points corresponding to the reference object in the target image, so when the reference object is a building, the reference feature points are the feature points corresponding to that building. Image feature points include corner points, SIFT feature points, ORB feature points, feature points from deep learning algorithms, and the like.
On this basis, candidate images associated with the reference object in the target image are determined among the initial images according to the comparison result. The candidate feature points contained in each candidate image and the reference feature points of the reference object in the target image are determined. Among the candidate images, those whose feature points have a coincidence relationship with the reference feature points in the target image are determined to be associated images associated with the reference object in the target image.
For example, when an image acquisition device collects initial images related to the shooting area of the photographing device installed at the north gate of school A, an initial image may or may not contain the reference building, the teaching building of school A. After candidate images containing the teaching building of school A are selected from the initial image set, all feature points in each candidate image are determined, and all feature points of the teaching building of school A in the target image are determined. The target image is compared with each candidate image, and when feature points of the teaching building in the target image coincide with feature points in a candidate image, that candidate image is taken as an associated image.
In summary, selecting associated images from the initial image set by judging whether the candidate images contain candidate feature points that coincide with the reference feature points of the reference object in the target image ensures that all associated images are related to the reference object in the target image.
Further, when selecting associated images from the initial image set, the shooting angles of the reference object differ among the initial images, so the portions of the reference object they contain also differ. In order to select initial images that contain the reference object as completely as possible, the associated images can be selected from the initial image set by counting the number of coincident feature points between the target image and each initial image. This is implemented as follows:
determining, among the candidate images, target candidate images whose feature points have a coincidence relationship with the reference feature points, and determining the number of coincident feature points in each target candidate image; and taking a target candidate image as an associated image associated with the reference object in the target image when its number of coincident feature points is greater than a preset number threshold.
On this basis, among the candidate images, a candidate image whose feature points have a coincidence relationship with the reference feature points is determined as a target candidate image. The number of feature points in the target candidate image that coincide with the reference feature points of the reference object in the target image is determined. Whether this number is greater than a preset number threshold is then judged; when it is, the reference object contained in the candidate image is highly similar to the reference object contained in the target image, and the target candidate image is taken as an associated image associated with the reference object in the target image.
For example, when an image acquisition device collects initial images related to the shooting area of the photographing device installed at the north gate of school A, an initial image may or may not contain the teaching building of school A. After candidate images containing the teaching building are selected from the initial image set, all feature points in each candidate image and all feature points of the teaching building in the target image are determined. The target image is compared with each candidate image; when feature points of the teaching building in the target image coincide with feature points in a candidate image, the number of coincident feature points is determined, and when this number is greater than the preset number threshold, the candidate image is taken as an associated image.
In summary, selecting associated images from the initial image set by counting the coincident feature points between the target image and the initial images means that associated images with high similarity to the target image are selected from the candidate images, which improves the accuracy of determining the position information of the target object.
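A hedged Python sketch of the coincidence check described above, using OpenCV SIFT features, a ratio test, and a RANSAC geometric check to count matched (coincident) feature points between the target image and a candidate image; the ratio value, the RANSAC settings, and the match-count threshold are illustrative assumptions rather than values given in the patent.

import cv2
import numpy as np

def coincident_point_count(target_img, candidate_img, ratio: float = 0.75) -> int:
    """Count feature points of the candidate image that coincide with reference
    feature points in the target image (RANSAC-verified SIFT matches).
    Images may be grayscale or BGR 8-bit arrays."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(target_img, None)
    kp2, des2 = sift.detectAndCompute(candidate_img, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        return 0
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # reject mismatched pairs
    return 0 if mask is None else int(mask.sum())

# Hypothetical threshold: treat the candidate as an associated image if enough points coincide.
# is_associated = coincident_point_count(target, candidate) > 30
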
Step S208: and determining the position information of the current position of the target object according to the initial coordinates of the target object and the acquisition coordinates of the associated image.
Specifically, after determining the associated image associated with the reference object in the target image and the acquisition coordinates of the associated image according to the initial coordinates and the associated region, the position information of the current position of the target object may be determined according to the initial coordinates of the target object and the acquisition coordinates of the associated image, where the position information includes the longitude and latitude coordinates of the target object and the camera parameters of the target object, and the camera parameters may be the full-scale camera parameters of the target object, including but not limited to parameters such as focal length, yaw angle (Yaw), roll angle (Roll), pitch angle (Pitch), and the like.
Based on the above, after determining the associated image associated with the reference object in the target image and the acquisition coordinates of the associated image according to the initial coordinates and the associated region, determining the longitude and latitude coordinates and the full camera parameters of the current position of the target object according to the initial coordinates of the target object and the acquisition coordinates of the associated image, and taking the longitude and latitude coordinates and the full camera parameters of the current position as the position information of the current position of the target object.
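The full camera parameters mentioned above include yaw, pitch, and roll. Assuming the external calibration yields a 3x3 rotation matrix, one common way to read these angles off the matrix is sketched below in Python; the Z-Y-X Euler convention used here is an assumption, since the patent does not fix one.

import numpy as np

def yaw_pitch_roll(R: np.ndarray):
    """Decompose a rotation matrix into yaw (Z), pitch (Y), roll (X) angles in degrees,
    assuming R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([yaw, pitch, roll])

print(yaw_pitch_roll(np.eye(3)))  # [0. 0. 0.]
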
Further, considering that the target image and each associated image are associated with the reference object, and the target image corresponds to the initial coordinate, and each associated image corresponds to the acquisition coordinate, the relative position information between any two images can be determined, and then the position information of the current position of the target object is determined according to the relative position information, which is specifically implemented as follows:
The number of the associated images is at least two; correspondingly, the determining the position information of the current position of the target object according to the initial coordinates of the target object and the acquisition coordinates of the associated image includes: determining target position information of the target object and each associated image and associated position information between any two associated images according to the initial coordinates of the target object and the acquisition coordinates of the associated images; and determining the position information of the current position of the target object according to the target position information of the target object and each associated image and the associated position information between any two associated images.
Specifically, the target position information refers to position relation information between the target image and the associated image determined according to the initial coordinates of the target object and the acquisition coordinates of the associated image; correspondingly, the associated position information refers to position relation information among the associated images determined according to the acquisition coordinates of the associated images.
Based on the above, in the case that there are at least two associated images, the target position information between the target object and each associated image is determined according to the initial coordinates of the target object and the acquisition coordinates of the associated images, and the associated position information between any two associated images is determined according to the acquisition coordinates of those associated images. The position information of the current position of the target object is then determined according to the target position information between the target object and each associated image and the associated position information between any two associated images.
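For illustration, the sketch below (which assumes planar local coordinates, an assumption not stated in the original text) derives the two kinds of relative position information from the coordinates: offsets between the target object and each associated image, and offsets between every pair of associated images.

```python
# A rough sketch: relative offsets from coordinates (planar local frame assumed).
from itertools import combinations

def relative_offsets(target_xy, associated_xy: dict):
    # target position information: offset from the target object to each associated image
    target_to_assoc = {k: (x - target_xy[0], y - target_xy[1])
                       for k, (x, y) in associated_xy.items()}
    # associated position information: offset between any two associated images
    assoc_pairs = {(a, b): (associated_xy[b][0] - associated_xy[a][0],
                            associated_xy[b][1] - associated_xy[a][1])
                   for a, b in combinations(associated_xy, 2)}
    return target_to_assoc, assoc_pairs

target_pos, pair_pos = relative_offsets((0.0, 0.0), {"img1": (12.0, 5.0), "img2": (-3.0, 9.0)})
print(target_pos, pair_pos)
```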
For example, the spatial position relationship between the target object and each associated image, and between any two associated images, is determined according to the initial coordinates of the target object and the acquisition coordinates of the associated images, and the position information of the target object is then calculated from these spatial relationships in the world coordinate system. Camera calibration is also performed on the target object: a geometric model of camera imaging is established to describe the transformation between the three-dimensional position of a point on an object surface and its corresponding point in the image, and the parameters of this model comprise the camera intrinsic parameters and the camera extrinsic parameters. Solving for these parameters is what is meant by camera calibration.
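A minimal pinhole-camera sketch of that geometric model is given below: a 3D world point is mapped to pixel coordinates through the extrinsic parameters (rotation R, translation t) and the intrinsic matrix K, and calibration is the task of recovering these parameters. The numeric values are illustrative only.

```python
# Pinhole projection: world point -> pixel coordinates via extrinsics and intrinsics.
import numpy as np

def project(point_w, K, R, t):
    p_cam = R @ point_w + t      # world -> camera coordinates (extrinsic parameters)
    p_img = K @ p_cam            # camera -> homogeneous pixel coordinates (intrinsic parameters)
    return p_img[:2] / p_img[2]  # perspective division

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])   # illustrative intrinsic matrix
R = np.eye(3)                     # illustrative extrinsic rotation
t = np.array([0.0, 0.0, 5.0])     # illustrative extrinsic translation
print(project(np.array([1.0, 0.5, 10.0]), K, R, t))
```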
In summary, the position information of the current position of the target object is determined based on the target position information between the target image and each associated image and the associated position information between any two associated images, so that the position information of the current position of the target object is determined according to the relative position information of the target image and the associated images, and the determination efficiency of the position information is improved.
Further, considering that the target image and the associated image are both associated with the reference object, and the reference feature point in the target image coincides with the feature point in the associated image, the spatial position relationship between the target image and the associated image and between any two associated images can be determined, and then an image coordinate system is constructed, and then the image coordinate system is mapped to the world coordinate system, so that the position information of the current position of the target object can be determined, and the specific implementation is as follows:
Constructing an image coordinate system according to the current position, the acquisition coordinates, the target position information between the target object and each associated image and the associated position information between any two associated images; mapping the image coordinate system to the world coordinate system, and determining position information of the current position of the target object in the world coordinate system.
Specifically, the image coordinate system refers to a coordinate system constructed based on the positional relationship between the target object and the associated images, and the positional relationship between any two associated images.
Based on the above, an image coordinate system is constructed according to the current position, the acquisition coordinates, the target position information between the target object and each associated image, and the associated position information between any two associated images, and the target object and the associated images are represented in the form of coordinate points. The image coordinate system is mapped to the world coordinate system, and position information of the current position of the target object is determined in the world coordinate system.
For example, fig. 3 is a schematic diagram of the coordinate systems used in a location information determining method according to an embodiment of the present disclosure, where (a) in fig. 3 represents the world coordinate system and (b) in fig. 3 represents the image coordinate system. In the image coordinate system, one marker represents each associated image and ■ represents the target object; the figure further depicts the reference object and the current position of the target object. Based on the acquisition coordinates of the associated images, the relative positional relationship between the target object and each associated image and the relative positional relationship between any two associated images are used to construct the image coordinate system shown in fig. 3 (b). The image coordinate system is then mapped to the world coordinate system, and the longitude and latitude coordinates of the target object together with its camera intrinsic and extrinsic parameters are calculated from the relative positional relationship between the associated images, the relative positional relationship between the associated images and the target object, and the actual longitude and latitude coordinates of each associated image; these are used as the position information of the current position of the target object.
In summary, in one embodiment of the present disclosure, the current position of the target object is determined and a target image of a reference object is obtained; the initial coordinates and the associated region of the target object in the world coordinate system are determined according to the installation position of the target object and a preset coordinate algorithm; the associated image associated with the reference object in the target image and the acquisition coordinates of the associated image are determined according to the initial coordinates and the associated region; and the position information of the current position of the target object is determined according to the initial coordinates of the target object and the acquisition coordinates of the associated image. In this way, the target object is positioned and the position information of its current position is determined on the basis of the target image containing the reference object acquired at the current position and the associated images related to that reference object, so that no manual participation is needed, and the accuracy and efficiency of determining the position information of the current position of the target object are improved; in addition, the initial coordinates of the target object can be corrected according to the position information, which improves correction efficiency.
The position information determining method will be further described below with reference to fig. 4, taking an application of the method provided in the present specification as an example. Fig. 4 is a flowchart of the processing procedure of a location information determining method according to an embodiment of the present disclosure.
With the rapid innovation and development of information technology, large-scale urban digital twin systems digitize all elements of the physical world, such as people and things, to build a digital city, making the overall state of the city visible in real time, promoting coordinated and intelligent city planning, management and operation, and improving the efficiency and accuracy of urban decision-making. In these systems, shooting cameras are widely used in the traffic field as general and efficient video image capturing devices. In order to accurately project the two-dimensional semantic information in a picture captured by a shooting camera into real three-dimensional space, an association between image space and geographic space must be established, for which the longitude and latitude coordinates of the camera point location are essential. Shooting cameras deployed in a city today span a long time period, are widely distributed and come from many hardware manufacturers, and updating the longitude and latitude coordinates of their point locations still relies heavily on manual work, which consumes substantial human resources and suffers from inaccurate and inefficient updates. The position information determining method provided by one embodiment of the present specification specifically includes the following steps:
Step S402: determining the current position of the target object, acquiring the target video, and taking an image frame of the target video that contains the reference object as the target image.
The target object may be a photographing apparatus with an image capturing function, such as a camera or a video camera. Point position calibration of the shooting device can be performed in an automatic calibration mode, that is, the longitude and latitude coordinates of the installation position of the shooting device and the intrinsic and extrinsic parameters of its camera are determined automatically. This reduces the labor cost of manual calibration, realizes automatic, efficient and lightweight point position calibration, and allows the point position information to be corrected or computed when the installation record of the shooting device stores inaccurate position information or no position information at all.
In a specific implementation, an image including the reference object captured by the shooting device is obtained as the target image; the target image may be an image shot directly by the device, or an image frame extracted from a video recorded by the device.
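As a minimal sketch (with an assumed file name, using OpenCV), an image frame can be taken from the captured video and stored as the target image; whether the frame actually contains the reference object would be checked by a separate detection step not shown here.

```python
# Grab one frame from the captured video and use it as the target image.
import cv2

cap = cv2.VideoCapture("camera_point_clip.mp4")  # hypothetical video from the shooting device
ok, frame = cap.read()
cap.release()
if ok:
    cv2.imwrite("target_image.jpg", frame)       # frame to be used as the target image
```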
Step S404: the preset coordinate algorithm comprises a preset coordinate point algorithm and a preset area algorithm, and initial coordinates of the target object in the world coordinate system are determined according to the installation position of the target object and the preset coordinate point algorithm.
Step S406: and determining the association region of the target object in the world coordinate system according to the initial coordinates and a preset region algorithm.
An interest-point association algorithm is used to search for the approximate longitude and latitude coordinates of the shooting device point location, i.e. the initial coordinates, and for the range covered by the shooting device, i.e. the associated region. Interest-point association algorithms here refer to the AoI (area of interest) algorithm and the PoI (point of interest) algorithm. The AoI algorithm provides the longitude and latitude coordinates of K1 landmarks most strongly associated with the name of the shooting device point location, where K1 is generally 3 to 5, from which the area administered by the shooting device is determined in the world coordinate system. The PoI algorithm provides the approximate longitude and latitude coordinates of the shooting device point location.
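A rough sketch of this step is given below. The lookup functions are placeholders standing in for the PoI and AoI services (they are not APIs defined by the patent or any particular library), and the coordinates they return are made up for illustration.

```python
# Estimate the initial coordinates and a crude associated region from PoI/AoI lookups.
def poi_lookup(point_name: str):
    # Placeholder: a real PoI service would return the approximate lat/lon for the name.
    return 30.2741, 120.1551

def aoi_lookup(point_name: str, top_k: int = 3):
    # Placeholder: a real AoI service would return the top_k associated landmarks.
    return [(30.2750, 120.1540), (30.2735, 120.1560), (30.2744, 120.1548)][:top_k]

def estimate_initial_location(point_name: str, k1: int = 3):
    initial = poi_lookup(point_name)                       # approximate point coordinates
    landmarks = aoi_lookup(point_name, top_k=k1)           # K1 strongly associated landmarks
    lats = [lat for lat, _ in landmarks] + [initial[0]]
    lons = [lon for _, lon in landmarks] + [initial[1]]
    region = (min(lats), min(lons), max(lats), max(lons))  # bounding box as the associated region
    return initial, region

print(estimate_initial_location("School A North Gate camera"))
```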
Step S408: an initial image set determined within the associated region starting from the initial coordinates is acquired.
All street view images related to the associated region are determined through image acquisition to form the initial image set, or publicly available street view images from the Internet are collected to form it. The initial image set is then used as the associated street view image set for the shooting device point location.
Step S410: an initial feature vector of an initial image in the initial image set is determined, and a target feature vector of a target image is determined.
Step S412: and comparing the initial feature vector of the initial image with the target feature vector of the target image, and determining a candidate image in the initial image according to the comparison result.
A global descriptor feature is constructed for each associated street view image in the associated street view image set, and a global descriptor feature is constructed for the target image. The global descriptor vectors of the associated street view images and the target image are then compared, and at least two images closest to the target image are selected from the associated street view image set as candidate images according to the similarity between the descriptor vectors. In practical applications, 500 to 1500 candidate images may be selected. In a specific implementation, a global feature vector of size 1×4096 is generated for the target image and each associated street view image, and the K2 street view pictures (K2 may be 500 to 1500) closest to the target image under the L2 (straight-line) distance between feature vectors are selected from the associated street view image set as candidate images of the target image, forming the candidate image set. A VLAD (Vector of Locally Aggregated Descriptors) style global descriptor algorithm is generally adopted to compute the global descriptor features of an image.
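The coarse screening step can be sketched as a nearest-neighbour search over global descriptors. The descriptors below are random placeholders of the 1×4096 size mentioned above; a real system would use VLAD-style descriptors computed from the images.

```python
# Select the K2 street-view images closest to the target image by L2 distance.
import numpy as np

rng = np.random.default_rng(0)
target_desc = rng.standard_normal(4096)                  # placeholder global descriptor
streetview_descs = rng.standard_normal((2000, 4096))     # one row per associated street-view image

k2 = 1000
dists = np.linalg.norm(streetview_descs - target_desc, axis=1)  # L2 distance to the target
candidate_idx = np.argsort(dists)[:k2]                          # indices of the K2 nearest images
print(candidate_idx[:5], dists[candidate_idx[:5]])
```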
Step S414: among the candidate images, a target candidate image having a feature point coincidence relation with the reference feature point is determined, and the number of coincident feature points is determined in the target candidate image.
Step S416: and under the condition that the number of the coincident characteristic points is larger than a preset number threshold, taking the target candidate image as an associated image associated with the reference object in the target image.
For the candidate image set, fine image matching is used to further confirm whether each candidate image shares an overlapping area with the target image. Feature extraction and matching can be achieved with an image feature extraction and matching network built from CNNs and Transformers. The backbone of the feature extraction network is typically a fully convolutional network such as VGG16 or an FCN-like architecture, with two branches generating feature points and feature vectors respectively. The backbone of the feature matching network is generally a Transformer, which represents the geometric matching relationship of feature points across images by partitioning the images and constructing a feature score matrix. The feature extraction network can be modeled on the SuperPoint algorithm, and traditional features such as SIFT or ORB, or other deep-learning image feature extraction algorithms, can also be used.
In addition, to improve matching results in low-texture regions, an integrated detection-and-matching branch and multi-scale branches are added. The integrated detection-and-matching branch constructs feature matching results while increasing the number of feature points in the image; the multi-scale branches add multi-scale feature analysis and combine matching results at different scales for comprehensive analysis, which improves matching capability and interference resistance in weak-texture regions.
In a specific implementation, local feature point vectors of size 2048×256 are generated for the target image and each of the K2 candidate images by the feature extraction algorithm, and the feature vectors are matched by the feature matching algorithm. Mismatches are removed by combining a random sample consensus algorithm (RANSAC) with a homography transformation model, yielding an M×2 local feature point matching result. Candidate images whose number of matched feature points is greater than a preset number threshold are considered successfully matched and are taken as associated images of the target image.
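The fine matching stage can be sketched with classical features. The example below substitutes ORB (which the text names as an admissible alternative to the learned extractor) and uses OpenCV's RANSAC homography estimation to remove mismatches; it is a sketch of the same idea, not the patent's exact network-based pipeline.

```python
# Match local feature points between two images and count RANSAC inliers.
import cv2
import numpy as np

def count_inlier_matches(img_target, img_candidate, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=2048)
    kp1, des1 = orb.detectAndCompute(img_target, None)
    kp2, des2 = orb.detectAndCompute(img_candidate, None)
    if des1 is None or des2 is None:
        return 0
    knn = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        return 0
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC removes mismatches
    return 0 if mask is None else int(mask.sum())

# A candidate counts as an associated image when count_inlier_matches(...) > preset threshold.
```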
Step S418: and determining target position information of the target object and each associated image and associated position information between any two associated images according to the initial coordinates of the target object and the acquisition coordinates of the associated images.
Step S420: and constructing an image coordinate system according to the current position, the acquisition coordinates, the target position information between the target object and each associated image and the associated position information between any two associated images.
Step S422: the image coordinate system is mapped to the world coordinate system, and position information of the current position of the target object is determined in the world coordinate system.
Joint optimization of positions and estimation of the full set of parameters are then carried out on the target image and the associated images, using the image feature point matching relationships and the longitude and latitude coordinates. Local features and the geometric matching relationship between any two images can be computed by the feature extraction and matching algorithms, and 2D-3D matching constraints are constructed to optimize the overall geometric structure, yielding the position and pose of the shooting device in a relative coordinate system. To obtain fine positioning coordinates in the world coordinate system, the relative coordinate system is aligned to the world coordinate system using the longitude and latitude coordinates of the associated images, which gives the accurate longitude and latitude coordinates of the shooting device point location and, at the same time, the full set of camera parameters of the shooting device. A simpler point location estimate can also be obtained from the longitude and latitude coordinates of the street view images using a nearest neighbor algorithm or a K-means algorithm.
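The alignment of the relative coordinate system to the world coordinate system can be sketched as a least-squares similarity transform estimated from the associated images' known positions; this is an assumption about one reasonable way to perform the alignment, not the patent's prescribed solver, and the coordinates used are illustrative.

```python
# Align reconstructed (relative) positions to world positions with a 2D similarity transform.
import numpy as np

def umeyama_2d(src, dst):
    """Least-squares scale, rotation and translation mapping src (Nx2) onto dst (Nx2)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])    # guard against reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

recon = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])            # associated images, relative frame
world = np.array([[100.0, 200.0], [100.0, 210.0], [90.0, 200.0]]) # their known world positions
s, R, t = umeyama_2d(recon, world)
camera_recon = np.array([0.5, 0.5])                               # camera in the relative frame
print(s * R @ camera_recon + t)                                   # estimated camera world position
```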
In summary, the target image shot by the shooting device is fully utilized: a spatial area-of-interest algorithm and large-scale street view data are introduced as support, heterogeneous feature matching between the target image and the street view images is performed, comprising coarse screening by global descriptor matching and fine screening by local feature point matching, candidate images are screened out, and sparse reconstruction is carried out on the target image and the candidate images, yielding the longitude and latitude coordinates of the point location and the full set of camera parameters and completing point location calibration and deviation rectification. The method is automatic, efficient, lightweight and interference-resistant, and is easily combined with various camera calibration algorithms and perception application algorithms to build an end-to-end camera three-dimensional analysis pipeline. Since the camera pose is obtained through sparse reconstruction, automatic calibration of the shooting device is realized and the manual calibration time is greatly shortened.
One embodiment of the present specification determines the current position of the target object and obtains a target image of a reference object; determines the initial coordinates and the associated region of the target object in the world coordinate system according to the installation position of the target object and a preset coordinate algorithm; determines the associated image associated with the reference object in the target image and the acquisition coordinates of the associated image according to the initial coordinates and the associated region; and determines the position information of the current position of the target object according to the initial coordinates of the target object and the acquisition coordinates of the associated image. In this way, the target object is positioned and the position information of its current position is determined on the basis of the target image containing the reference object acquired at the current position and the associated images related to that reference object, so that no manual participation is needed, and the accuracy and efficiency of determining the position information of the current position of the target object are improved; in addition, the initial coordinates of the target object can be corrected according to the position information, which improves correction efficiency.
Corresponding to the above method embodiments, the present disclosure further provides an embodiment of a location information determining device, and fig. 5 shows a schematic structural diagram of a first location information determining device provided in one embodiment of the present disclosure. As shown in fig. 5, the apparatus includes:
a target image acquisition module 502 configured to determine the current position of the target object and obtain a target image of a reference object;
an information determining module 504 configured to determine an initial coordinate and an associated region of the target object in a world coordinate system according to an installation position of the target object and a preset coordinate algorithm;
an associated image determining module 506 configured to determine an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
the location information determining module 508 is configured to determine location information of the current location of the target object according to the initial coordinates of the target object and the acquired coordinates of the associated image.
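For illustration only, the four modules listed above might be composed as follows; the class and method names are assumptions, not identifiers taken from the patent.

```python
# A minimal sketch of composing the four modules of the apparatus.
class PositionInformationDeterminer:
    def __init__(self, target_image_module, info_module, associated_image_module, position_module):
        self.target_image_module = target_image_module           # cf. module 502
        self.info_module = info_module                           # cf. module 504
        self.associated_image_module = associated_image_module   # cf. module 506
        self.position_module = position_module                   # cf. module 508

    def run(self, target_object):
        target_image = self.target_image_module.acquire(target_object)
        initial_coords, region = self.info_module.locate(target_object.installation_position)
        associated = self.associated_image_module.find(target_image, initial_coords, region)
        return self.position_module.determine(initial_coords, associated)
```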
In an alternative embodiment, the information determination module 504 is further configured to:
the preset coordinate algorithm comprises a preset coordinate point algorithm and a preset area algorithm; determining initial coordinates of the target object in the world coordinate system according to the installation position of the target object and the preset coordinate point algorithm; and determining the association area of the target object in the world coordinate system according to the initial coordinates and the preset area algorithm.
In an alternative embodiment, the associated image determining module 506 is further configured to:
acquiring an initial image set which takes the initial coordinates as a starting point and is determined in the association area; and selecting an associated image associated with the reference object in the target image in the initial image set, and determining acquisition coordinates of the associated image.
In an alternative embodiment, the associated image determining module 506 is further configured to:
determining an initial feature vector of an initial image in the initial image set, and determining a target feature vector of the target image; and comparing the initial feature vector of the initial image with the target feature vector of the target image, and selecting an associated image associated with a reference object in the target image according to a comparison result.
In an alternative embodiment, the associated image determining module 506 is further configured to:
determining candidate images in the initial image according to the comparison result; determining candidate feature points in the candidate image and reference feature points of the reference object in the target image; and using the candidate image with the characteristic point superposition relation with the reference characteristic point as a related image related to the reference object in the target image.
In an alternative embodiment, the associated image determining module 506 is further configured to:
determining target candidate images with characteristic point coincidence relation with the reference characteristic points in the candidate images, and determining the quantity of coincident characteristic points in the target candidate images; and under the condition that the number of the coincident characteristic points is larger than a preset number threshold, the target candidate image is used as an associated image associated with a reference object in the target image.
In an alternative embodiment, the location information determining module 508 is further configured to:
the number of the associated images is at least two; determining target position information of the target object and each associated image and associated position information between any two associated images according to the initial coordinates of the target object and the acquisition coordinates of the associated images; and determining the position information of the current position of the target object according to the target position information of the target object and each associated image and the associated position information between any two associated images.
In an alternative embodiment, the location information determining module 508 is further configured to:
Constructing an image coordinate system according to the current position, the acquisition coordinates, the target position information between the target object and each associated image and the associated position information between any two associated images; mapping the image coordinate system to the world coordinate system, and determining position information of the current position of the target object in the world coordinate system.
In an alternative embodiment, the target image acquisition module 502 is further configured to:
determining the current position of a target object and obtaining a target video; and taking the image frame containing the reference object in the target video as a target image.
In summary, in one embodiment of the present disclosure, the current position of the target object is determined and a target image of a reference object is obtained; the initial coordinates and the associated region of the target object in the world coordinate system are determined according to the installation position of the target object and a preset coordinate algorithm; the associated image associated with the reference object in the target image and the acquisition coordinates of the associated image are determined according to the initial coordinates and the associated region; and the position information of the current position of the target object is determined according to the initial coordinates of the target object and the acquisition coordinates of the associated image. In this way, the target object is positioned and the position information of its current position is determined on the basis of the target image containing the reference object acquired at the current position and the associated images related to that reference object, so that no manual participation is needed, and the accuracy and efficiency of determining the position information of the current position of the target object are improved; in addition, the initial coordinates of the target object can be corrected according to the position information, which improves correction efficiency.
The above is an exemplary scheme of a position information determination apparatus of the present embodiment. It should be noted that, the technical solution of the location information determining apparatus and the technical solution of the location information determining method belong to the same concept, and details of the technical solution of the location information determining apparatus, which are not described in detail, can be referred to the description of the technical solution of the location information determining method.
Referring to fig. 6, fig. 6 shows a flowchart of a second location information determining method according to an embodiment of the present specification, which specifically includes the following steps.
Step S602: determining the current position of the shooting device and obtaining a target image of a reference object;
step S604: according to the installation position of the shooting device and a preset coordinate algorithm, determining initial coordinates and an associated area of the shooting device in a world coordinate system;
step S606: determining an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
step S608: and determining the position information of the current position of the shooting device according to the initial coordinates of the shooting device and the acquisition coordinates of the associated image.
In practical application, shooting equipment installed in an actual street is determined, and a target image containing a reference object is acquired at the current position through the shooting equipment. And determining the installation address of the shooting device according to the installation position of the shooting device, and determining the initial coordinates of the shooting device in a world coordinate system and the shooting area of the shooting device based on the installation address and a preset coordinate algorithm. And determining initial images associated with the initial coordinates and the shooting areas of the shooting equipment according to the initial coordinates and the shooting areas of the shooting equipment, and forming an initial image set. And determining an associated image associated with the reference object in the target image in the initial image set, and taking the coordinates of an image acquisition device for acquiring the associated image as the acquisition coordinates of the associated image. And determining the position information of the current position of the shooting equipment according to the initial coordinates of the shooting equipment and the acquisition coordinates of the associated image, wherein the position information comprises longitude and latitude coordinates of the shooting equipment, and camera internal parameters and camera external parameters of the shooting equipment.
In summary, in one embodiment of the present disclosure, the target image of the reference object is obtained by determining that the photographing device is at the current position; according to the installation position of the shooting device and a preset coordinate algorithm, determining initial coordinates and an associated area of the shooting device in a world coordinate system; according to the initial coordinates and the association area, determining an association image associated with a reference object in the target image and acquisition coordinates of the association image; and determining the position information of the current position of the shooting device according to the initial coordinates of the shooting device and the acquisition coordinates of the associated image. The method and the device have the advantages that the position information of the current position of the shooting device is determined based on the target image containing the reference object and the associated image associated with the reference object, which are acquired by the shooting device at the current position, and the shooting device is positioned, so that manual participation is not needed, and the accuracy and the efficiency of the position information determination of the current position of the shooting device are improved; the initial coordinates of the shooting device can be rectified according to the position information, and therefore rectification efficiency is improved.
Corresponding to the above method embodiments, the present disclosure further provides an embodiment of a location information determining device, and fig. 7 shows a schematic structural diagram of a second location information determining device provided in one embodiment of the present disclosure. As shown in fig. 7, the apparatus includes:
a target image acquisition module 702 configured to determine a target image of the reference object acquired by the photographing device at the current position;
an information determining module 704 configured to determine an initial coordinate and an associated area of the photographing device in a world coordinate system according to an installation position of the photographing device and a preset coordinate algorithm;
an associated image determining module 706 configured to determine an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
a location information determining module 708 is configured to determine location information of a current location of the photographing device according to the initial coordinates of the photographing device and the acquisition coordinates of the associated image.
In summary, in one embodiment of the present disclosure, the target image of the reference object is obtained by determining that the photographing device is at the current position; according to the installation position of the shooting device and a preset coordinate algorithm, determining initial coordinates and an associated area of the shooting device in a world coordinate system; according to the initial coordinates and the association area, determining an association image associated with a reference object in the target image and acquisition coordinates of the association image; and determining the position information of the current position of the shooting device according to the initial coordinates of the shooting device and the acquisition coordinates of the associated image. The method and the device have the advantages that the position information of the current position of the shooting device is determined based on the target image containing the reference object and the associated image associated with the reference object, which are acquired by the shooting device at the current position, and the shooting device is positioned, so that manual participation is not needed, and the accuracy and the efficiency of the position information determination of the current position of the shooting device are improved; the initial coordinates of the shooting device can be rectified according to the position information, and therefore rectification efficiency is improved.
The above is a schematic version of the second position information determining apparatus of the present embodiment. It should be noted that, the technical solution of the location information determining apparatus and the technical solution of the location information determining method belong to the same concept, and details of the technical solution of the location information determining apparatus, which are not described in detail, can be referred to the description of the technical solution of the location information determining method.
Referring to fig. 8, fig. 8 is a flowchart illustrating a third location information determining method according to an embodiment of the present disclosure, which is applied to a city management platform and specifically includes the following steps.
Step S802: determining the current position of the image acquisition equipment and obtaining a target image of a reference object;
step S804: determining initial coordinates and associated areas of the image acquisition equipment in a world coordinate system according to the installation position of the image acquisition equipment and a preset coordinate algorithm;
step S806: determining an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
step S808: and determining the position information of the current position of the image acquisition equipment according to the initial coordinates of the image acquisition equipment and the acquisition coordinates of the associated image.
In practical application, shooting equipment installed in an actual street is determined, and a target image containing a reference object is acquired at the current position through the shooting equipment. And determining the installation address of the shooting device according to the installation position of the shooting device, and determining the initial coordinates of the shooting device in a world coordinate system and the shooting area of the shooting device based on the installation address and a preset coordinate algorithm. And determining initial images associated with the initial coordinates and the shooting areas of the shooting equipment according to the initial coordinates and the shooting areas of the shooting equipment, and forming an initial image set. And determining an associated image associated with the reference object in the target image in the initial image set, and taking the coordinates of an image acquisition device for acquiring the associated image as the acquisition coordinates of the associated image. And determining the position information of the current position of the shooting equipment according to the initial coordinates of the shooting equipment and the acquisition coordinates of the associated image, wherein the position information comprises longitude and latitude coordinates of the shooting equipment, and camera internal parameters and camera external parameters of the shooting equipment.
In the urban management scenario, the urban management platform manages image acquisition devices installed in the urban environment, and the target object may be an image acquisition device mounted at any location. When an image acquisition device determines that an abnormal event has occurred in the urban streetscape, inaccurate recorded point location coordinates for that device make it impossible to reach the place of the event quickly. By correcting the position information of the image acquisition device, accurate point location coordinates are obtained, so that when an abnormal event occurs, its location can be determined quickly and accurately from those coordinates, allowing event-handling personnel to reach the scene promptly. The target image may be any image that includes a reference object, and the reference object may be an element such as a building or a road layout structure. When the target object is any image containing the reference object, the image acquisition position at which the target object was acquired can be determined from the target object.
After the position information of the target object is determined, the urban management platform updates it, so that the position information of the target object is updated and maintained in real time. When an accident is detected through the target object, the accident location can then be accurately determined from the position information of the target object recorded in the urban management platform, making it convenient for accident-handling personnel to reach the scene quickly.
In summary, in one embodiment of the present disclosure, the target image of the reference object is obtained by determining that the photographing device is at the current position; according to the installation position of the shooting device and a preset coordinate algorithm, determining initial coordinates and an associated area of the shooting device in a world coordinate system; according to the initial coordinates and the association area, determining an association image associated with a reference object in the target image and acquisition coordinates of the association image; and determining the position information of the current position of the shooting device according to the initial coordinates of the shooting device and the acquisition coordinates of the associated image. The method and the device have the advantages that the position information of the current position of the shooting device is determined based on the target image containing the reference object and the associated image associated with the reference object, which are acquired by the shooting device at the current position, and the shooting device is positioned, so that manual participation is not needed, and the accuracy and the efficiency of the position information determination of the current position of the shooting device are improved; the initial coordinates of the shooting device can be rectified according to the position information, and therefore rectification efficiency is improved.
Corresponding to the above method embodiments, the present disclosure further provides an embodiment of a location information determining device, and fig. 9 shows a schematic structural diagram of a third location information determining device provided in one embodiment of the present disclosure. A third location information determining apparatus, applied to a city management platform, as shown in fig. 9, includes:
a target image acquisition module 902 configured to determine a target image of the reference object acquired by the image acquisition device at the current position;
an information determining module 904 configured to determine an initial coordinate and an associated area of the image capturing device in a world coordinate system according to an installation position of the image capturing device and a preset coordinate algorithm;
an associated image determining module 906 configured to determine an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
a location information determining module 908 is configured to determine location information of a current location of the image capturing device according to the initial coordinates of the image capturing device, the capturing coordinates of the associated image.
In summary, in one embodiment of the present disclosure, the target image of the reference object is obtained by determining that the photographing device is at the current position; according to the installation position of the shooting device and a preset coordinate algorithm, determining initial coordinates and an associated area of the shooting device in a world coordinate system; according to the initial coordinates and the association area, determining an association image associated with a reference object in the target image and acquisition coordinates of the association image; and determining the position information of the current position of the shooting device according to the initial coordinates of the shooting device and the acquisition coordinates of the associated image. The method and the device have the advantages that the position information of the current position of the shooting device is determined based on the target image containing the reference object and the associated image associated with the reference object, which are acquired by the shooting device at the current position, and the shooting device is positioned, so that manual participation is not needed, and the accuracy and the efficiency of the position information determination of the current position of the shooting device are improved; the initial coordinates of the shooting device can be rectified according to the position information, and therefore rectification efficiency is improved.
The above is a schematic version of the third position information determining apparatus of the present embodiment. It should be noted that, the technical solution of the location information determining apparatus and the technical solution of the location information determining method belong to the same concept, and details of the technical solution of the location information determining apparatus, which are not described in detail, can be referred to the description of the technical solution of the location information determining method.
Fig. 10 illustrates a block diagram of a computing device 1000 provided in accordance with one embodiment of the present description. The components of the computing device 1000 include, but are not limited to, a memory 1010 and a processor 1020. Processor 1020 is coupled to memory 1010 via bus 1030 and database 1050 is used to store data.
Computing device 1000 also includes an access device 1040 that enables computing device 1000 to communicate via one or more networks 1060. Examples of such networks include public switched telephone networks (PSTN, Public Switched Telephone Network), local area networks (LAN, Local Area Network), wide area networks (WAN, Wide Area Network), personal area networks (PAN, Personal Area Network), or combinations of communication networks such as the Internet. The access device 1040 may include one or more of any type of network interface, wired or wireless, such as a network interface card (NIC, Network Interface Controller), an IEEE 802.11 wireless local area network (WLAN, Wireless Local Area Network) interface, a worldwide interoperability for microwave access (Wi-MAX, Worldwide Interoperability for Microwave Access) interface, an Ethernet interface, a universal serial bus (USB, Universal Serial Bus) interface, a cellular network interface, a Bluetooth interface, or a near field communication (NFC, Near Field Communication) interface.
In one embodiment of the present description, the above-described components of computing device 1000, as well as other components not shown in FIG. 10, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 10 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 1000 may be any type of stationary or mobile computing device including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or personal computer (PC, personal Computer). Computing device 1000 may also be a mobile or stationary server.
Wherein the processor 1020 is configured to execute computer-executable instructions that, when executed by the processor, perform the steps of the location information determination method described above.
The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that, the technical solution of the computing device and the technical solution of the above-mentioned location information determining method belong to the same concept, and details of the technical solution of the computing device, which are not described in detail, can be referred to the description of the technical solution of the above-mentioned location information determining method.
An embodiment of the present disclosure also provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the above-described location information determining method.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the above-mentioned location information determining method belong to the same concept, and details of the technical solution of the storage medium, which are not described in detail, can be referred to the description of the technical solution of the above-mentioned location information determining method.
An embodiment of the present specification also provides a computer program, wherein the computer program, when executed in a computer, causes the computer to perform the steps of the above-described position information determination method.
The above is an exemplary version of a computer program of the present embodiment. It should be noted that, the technical solution of the computer program and the technical solution of the above-mentioned position information determining method belong to the same conception, and details of the technical solution of the computer program, which are not described in detail, can be referred to the description of the technical solution of the above-mentioned position information determining method.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code that may be in source code form, object code form, executable file or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium can be increased or decreased appropriately according to the requirements of the patent practice, for example, in some areas, according to the patent practice, the computer readable medium does not include an electric carrier signal and a telecommunication signal.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the embodiments are not limited by the order of actions described, as some steps may be performed in other order or simultaneously according to the embodiments of the present disclosure. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all required for the embodiments described in the specification.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are merely used to help clarify the present specification. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the teaching of the embodiments. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. This specification is to be limited only by the claims and the full scope and equivalents thereof.

Claims (14)

1. A location information determining method, comprising:
determining the current position of a target object and obtaining a target image of a reference object;
according to the installation position of the target object and a preset coordinate algorithm, determining initial coordinates and an associated area of the target object in a world coordinate system;
determining an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
and determining the position information of the current position of the target object according to the initial coordinates of the target object and the acquisition coordinates of the associated image.
2. The method of claim 1, the preset coordinate algorithm comprising a preset coordinate point algorithm and a preset area algorithm;
correspondingly, the determining the initial coordinates and the associated areas of the target object in the world coordinate system according to the installation position of the target object and a preset coordinate algorithm comprises the following steps:
determining initial coordinates of the target object in the world coordinate system according to the installation position of the target object and the preset coordinate point algorithm;
and determining the association area of the target object in the world coordinate system according to the initial coordinates and the preset area algorithm.
3. The method of claim 1, the determining an associated image associated with the reference object in the target image based on the initial coordinates and the associated region, and the acquisition coordinates of the associated image, comprising:
acquiring an initial image set which takes the initial coordinates as a starting point and is determined in the association area;
and selecting an associated image associated with the reference object in the target image in the initial image set, and determining acquisition coordinates of the associated image.
4. A method according to claim 3, said selecting an associated image in said initial set of images that is associated with a reference object in said target image, comprising:
determining an initial feature vector of an initial image in the initial image set, and determining a target feature vector of the target image;
and comparing the initial feature vector of the initial image with the target feature vector of the target image, and selecting an associated image associated with a reference object in the target image according to a comparison result.
5. The method according to claim 4, wherein selecting the associated image associated with the reference object in the target image according to the comparison result comprises:
Determining candidate images in the initial image according to the comparison result;
determining candidate feature points in the candidate image and reference feature points of the reference object in the target image;
and using the candidate image with the characteristic point superposition relation with the reference characteristic point as a related image related to the reference object in the target image.
6. The method of claim 5, further comprising, after said determining candidate feature points in said candidate image and reference feature points of said reference object in said target image:
determining target candidate images with characteristic point coincidence relation with the reference characteristic points in the candidate images, and determining the quantity of coincident characteristic points in the target candidate images;
and under the condition that the number of the coincident characteristic points is larger than a preset number threshold, the target candidate image is used as an associated image associated with a reference object in the target image.
7. The method of claim 1, wherein there are at least two associated images;
correspondingly, the determining the position information of the current position of the target object according to the initial coordinates of the target object and the acquisition coordinates of the associated image includes:
Determining target position information of the target object and each associated image and associated position information between any two associated images according to the initial coordinates of the target object and the acquisition coordinates of the associated images;
and determining the position information of the current position of the target object according to the target position information of the target object and each associated image and the associated position information between any two associated images.
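A hedged sketch of how position information between two views could be obtained in practice, assuming relative pose is recovered from matched pixel coordinates with a known camera intrinsic matrix K; the claim does not prescribe this particular technique, and the function name is illustrative.

```python
import cv2
import numpy as np


def relative_position_info(pts_a: np.ndarray, pts_b: np.ndarray, K: np.ndarray):
    """Recover the relative rotation R and unit-length translation direction t
    between two views from matched pixel coordinates (N x 2 float arrays)."""
    E, _ = cv2.findEssentialMat(pts_a, pts_b, K,
                                method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K)
    # The metric scale is not observable here; it can be fixed afterwards using
    # the known acquisition coordinates of the associated images.
    return R, t
```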
8. The method according to claim 7, wherein determining the position information of the current position of the target object according to the target position information of the target object and each associated image and the associated position information between any two associated images includes:
constructing an image coordinate system according to the current position, the acquisition coordinates, the target position information between the target object and each associated image and the associated position information between any two associated images;
mapping the image coordinate system to the world coordinate system, and determining position information of the current position of the target object in the world coordinate system.
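A minimal sketch of claim 8 under the assumption that the constructed image coordinate system is related to the world coordinate system by a 2-D similarity transform fitted from the acquisition coordinates of the associated images (an Umeyama-style closed form); the function names are illustrative assumptions.

```python
import numpy as np


def fit_similarity_2d(local_pts: np.ndarray, world_pts: np.ndarray):
    """Closed-form fit of scale s, rotation R and translation t such that
    world ≈ s * R @ local + t, from corresponding 2-D points."""
    mu_l, mu_w = local_pts.mean(axis=0), world_pts.mean(axis=0)
    L, W = local_pts - mu_l, world_pts - mu_w
    cov = W.T @ L / len(local_pts)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    var_l = (L ** 2).sum() / len(local_pts)
    s = np.trace(np.diag(S) @ D) / var_l
    t = mu_w - s * R @ mu_l
    return s, R, t


def map_to_world(local_position: np.ndarray, s: float, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map the target object's position from the constructed image coordinate
    system into the world coordinate system."""
    return s * R @ local_position + t
```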
9. The method of claim 1, wherein determining the current position of the target object and obtaining the target image of the reference object comprises:
determining the current position of the target object and obtaining a target video;
and taking an image frame containing the reference object in the target video as the target image.
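An illustrative sketch of claim 9, assuming OpenCV video decoding and a caller-supplied detector `contains_reference` (hypothetical) that decides whether a frame contains the reference object.

```python
import cv2


def target_image_from_video(video_path, contains_reference):
    """Scan the target video frame by frame and return the first frame in which
    the (hypothetical) detector contains_reference finds the reference object."""
    cap = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # end of video: no frame contained the reference object
                return None
            if contains_reference(frame):
                return frame
    finally:
        cap.release()
```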
10. A location information determining method, comprising:
determining the current position of the shooting device and obtaining a target image of a reference object;
according to the installation position of the shooting device and a preset coordinate algorithm, determining initial coordinates and an associated area of the shooting device in a world coordinate system;
determining an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
and determining the position information of the current position of the shooting device according to the initial coordinates of the shooting device and the acquisition coordinates of the associated image.
11. A position information determining method, applied to a city management platform, comprising:
determining the current position of the image acquisition equipment and obtaining a target image of a reference object;
determining initial coordinates and associated areas of the image acquisition equipment in a world coordinate system according to the installation position of the image acquisition equipment and a preset coordinate algorithm;
determining an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
and determining the position information of the current position of the image acquisition equipment according to the initial coordinates of the image acquisition equipment and the acquisition coordinates of the associated image.
12. A position information determining apparatus comprising:
a target image acquisition module configured to determine the current position of the target object and acquire a target image of a reference object;
an information determining module configured to determine initial coordinates and an associated region of the target object in a world coordinate system according to the installation position of the target object and a preset coordinate algorithm;
an associated image determining module configured to determine an associated image associated with the reference object in the target image and acquisition coordinates of the associated image according to the initial coordinates and the associated region;
and a position information determining module configured to determine the position information of the current position of the target object according to the initial coordinates of the target object and the acquisition coordinates of the associated image.
13. A computing device, comprising:
a memory and a processor;
wherein the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, and the computer-executable instructions, when executed by the processor, implement the steps of the position information determining method of any one of claims 1 to 11.
14. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of the position information determining method of any one of claims 1 to 11.
CN202310559838.3A 2023-05-15 2023-05-15 Position information determining method and device Active CN116758150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310559838.3A CN116758150B (en) 2023-05-15 2023-05-15 Position information determining method and device

Publications (2)

Publication Number Publication Date
CN116758150A true CN116758150A (en) 2023-09-15
CN116758150B CN116758150B (en) 2024-04-30

Family

ID=87946821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310559838.3A Active CN116758150B (en) 2023-05-15 2023-05-15 Position information determining method and device

Country Status (1)

Country Link
CN (1) CN116758150B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106960453A (en) * 2017-03-22 2017-07-18 海南职业技术学院 Photograph taking fixing by gross bearings method and device
CN108965687A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 Shooting direction recognition methods, server and monitoring method, system and picture pick-up device
CN110517325A (en) * 2019-08-29 2019-11-29 的卢技术有限公司 The vehicle body surrounding objects localization method and system of a kind of coordinate transform and coordinate transform
EP3922005A1 (en) * 2020-01-22 2021-12-15 Audi AG Method for generating reproducible perspectives of photographs of an object, and mobile device with an integrated camera
US20230040051A1 (en) * 2020-04-07 2023-02-09 Huawei Technologies Co., Ltd. Positioning method and system, and apparatus
CN113409405A (en) * 2021-07-19 2021-09-17 北京百度网讯科技有限公司 Method, device, equipment and storage medium for evaluating camera calibration position
CN114119389A (en) * 2021-10-18 2022-03-01 中国人民解放军陆军炮兵防空兵学院 Image restoration method, system and storage module
CN115222819A (en) * 2022-06-30 2022-10-21 北京航空航天大学 Camera self-calibration and target tracking method based on multi-mode information reference in airport large-range scene
CN115965694A (en) * 2022-12-13 2023-04-14 宁德卓宁科技有限公司 Double-camera positioning method
CN116012428A (en) * 2022-12-23 2023-04-25 北京信息科技大学 Method, device and storage medium for combining and positioning thunder and vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIDONG LIU et al.: "Non-scanning measurement of position and attitude using two linear cameras", Optics and Lasers in Engineering, vol. 112, pages 46-52, XP085500746, DOI: 10.1016/j.optlaseng.2018.08.019 *
MAI BUI et al.: "Scene Coordinate and Correspondence Learning for Image-Based Localization", arXiv - Computer Vision and Pattern Recognition, pages 1-8 *
WANG WEI et al.: "Vehicle spatial localization method in cross-camera traffic scenes", Journal of Computer-Aided Design & Computer Graphics, vol. 33, no. 6, pages 873-882 *
LU YANAN et al.: "A method for solving the positional relationship between cameras without a common field of view", Journal of Applied Optics, vol. 38, no. 3, pages 400-405 *

Also Published As

Publication number Publication date
CN116758150B (en) 2024-04-30

Similar Documents

Publication Publication Date Title
US20200401617A1 (en) Visual positioning system
EP3502621B1 (en) Visual localisation
CN109520500B (en) Accurate positioning and street view library acquisition method based on terminal shooting image matching
Chen et al. City-scale landmark identification on mobile devices
US10043097B2 (en) Image abstraction system
CN111652934A (en) Positioning method, map construction method, device, equipment and storage medium
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
CN109596121B (en) Automatic target detection and space positioning method for mobile station
CN102959946A (en) Augmenting image data based on related 3d point cloud data
JP2013025799A (en) Image search method, system, and program
US20170039450A1 (en) Identifying Entities to be Investigated Using Storefront Recognition
Son et al. A multi-vision sensor-based fast localization system with image matching for challenging outdoor environments
CN113920263A (en) Map construction method, map construction device, map construction equipment and storage medium
EP3746744A1 (en) Methods and systems for determining geographic orientation based on imagery
CN113673400A (en) Real scene three-dimensional semantic reconstruction method and device based on deep learning and storage medium
CN116485856A (en) Unmanned aerial vehicle image geographic registration method based on semantic segmentation and related equipment
Park et al. Estimating the camera direction of a geotagged image using reference images
CN116758150B (en) Position information determining method and device
CN113673288A (en) Idle parking space detection method and device, computer equipment and storage medium
CN116823966A (en) Internal reference calibration method and device for camera, computer equipment and storage medium
KR20220062709A (en) System for detecting disaster situation by clustering of spatial information based an image of a mobile device and method therefor
CN112818866A (en) Vehicle positioning method and device and electronic equipment
KR102249380B1 (en) System for generating spatial information of CCTV device using reference image information
de Vries Lentsch et al. SliceMatch: Geometry-Guided Aggregation for Cross-View Pose Estimation.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant