CN113192139A - Positioning method and device, electronic equipment and storage medium - Google Patents

Positioning method and device, electronic equipment and storage medium

Info

Publication number: CN113192139A
Authority: CN (China)
Prior art keywords: target object, acquisition devices, information, acquisition, position information
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202110528927.2A
Other languages: Chinese (zh)
Inventors: 许文航, 吴佳飞, 张广程, 闫俊杰
Current Assignee: Zhejiang Shangtang Technology Development Co., Ltd. (also rendered Zhejiang Sensetime Technology Development Co., Ltd.; the listed assignees may be inaccurate)
Original Assignee: Zhejiang Shangtang Technology Development Co., Ltd.
Application filed by Zhejiang Shangtang Technology Development Co., Ltd.
Priority: CN202110528927.2A
Publication: CN113192139A
Related priority filings: KR1020227020483A; PCT/CN2021/125018 (WO2022237071A1)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/587: Retrieval using geographical or spatial information, e.g. location
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Abstract

The disclosure relates to a positioning method and apparatus, an electronic device, and a storage medium. The method includes: acquiring the relative positional relationship between a target object and each of a plurality of acquisition devices, and acquiring position information of the plurality of acquisition devices, where the acquisition devices are configured to capture images of the target object; and positioning the target object according to the relative positional relationships and the position information to obtain the position information of the target object. Embodiments of the disclosure can position the target object and improve positioning accuracy.

Description

Positioning method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a positioning method and apparatus, an electronic device, and a storage medium.
Background
With the development of deep learning, object detection based on computer vision has advanced considerably. Object detection can identify a target object of interest in an image, determine the category of the target object, and determine its image position. Object detection can also serve as the basis for target localization tasks.
At present, target localization based on object detection generally relies on binocular cameras with a fixed baseline, but this approach has a relatively large positioning error.
Disclosure of Invention
The present disclosure provides a positioning technical solution.
According to an aspect of the present disclosure, a positioning method is provided, applied in a service node, including: acquiring the relative positional relationship between a target object and each of a plurality of acquisition devices, and acquiring position information of the plurality of acquisition devices, where the acquisition devices are configured to capture images of the target object; and positioning the target object according to the relative positional relationships and the position information to obtain the position information of the target object.
In one or more possible implementations, the method further includes: sending a target feature to the plurality of acquisition devices in the camera network, so that the plurality of acquisition devices determine the target object in their acquired images according to the target feature.
In one or more possible implementation manners, the positioning the target object according to the relative position relationship and the position information to obtain the position information of the target object includes: selecting two acquisition devices from the plurality of acquisition devices, wherein the two acquisition devices form a triangle with the target object; and positioning the target object according to the relative position relationship between the target object and the two acquisition devices and the position information of the two acquisition devices to obtain the position information of the target object.
In one or more possible implementations, positioning the target object according to the relative positional relationships between the target object and the two acquisition devices and the position information of the two acquisition devices to obtain the position information of the target object includes: determining the distance between the two acquisition devices according to the position information of the two acquisition devices; and obtaining the position information of the target object according to the distance and the deflection angle information between the target object and the two acquisition devices, where the deflection angle information represents the deflection angle of the target object relative to the reference direction of the acquisition device.
In one or more possible implementations, obtaining the position information of the target object according to the distance and the deflection angle information between the target object and the two acquisition devices includes: acquiring orientation information of each of the two acquisition devices; determining a first internal angle and a second internal angle in the triangle according to the orientation information of each acquisition device and the deflection angle information between the target object and each acquisition device, where one side of the first internal angle and one side of the second internal angle lie along the line connecting the two acquisition devices; and obtaining the position information of the target object according to the first internal angle, the second internal angle, and the distance.
In one or more possible implementations, the method further includes: marking the trajectory of the target object on the electronic map according to the position information of the target object.
In one or more possible implementations, obtaining the relative positional relationships between the target object and the plurality of acquisition devices includes: obtaining the acquired images of the plurality of acquisition devices; determining the image position of the target object in the acquired image of each acquisition device; and obtaining the relative positional relationship between the target object and each acquisition device based on the image position.
In one or more possible implementations, the position information includes longitude and latitude coordinates; the service node is a control device of the camera network, or the service node is any acquisition device in the camera network.
According to an aspect of the present disclosure, there is provided a positioning method applied to an acquisition device, including: obtaining an acquired image; determining the relative positional relationship between a target object and the acquisition device based on the acquired image; and sending the relative positional relationship between the target object and the acquisition device to a service node, where the service node is configured to position the target object according to the relative positional relationships between the target object and a plurality of acquisition devices and the position information of the plurality of acquisition devices, to obtain the position information of the target object.
In one or more possible implementations, the determining the relative positional relationship between the target object and the acquisition device based on the acquired image includes: carrying out target detection on the acquired image to obtain the image position of the target object in the acquired image; based on the image position of the target object, deflection angle information between the target object and the acquisition device is determined, wherein the deflection angle information represents a deflection angle of the target object relative to a reference direction of the acquisition device.
In one or more possible implementations, the method further includes: receiving a target feature sent by the service node, and determining the target object in the acquired image according to the target feature; or acquiring annotation information input by a user, and determining the target object in the acquired image according to the annotation information.
According to an aspect of the present disclosure, there is provided a positioning apparatus applied in a service node, including:
the acquisition module is used for acquiring the relative position relation between a target object and a plurality of acquisition devices respectively and acquiring the position information of the acquisition devices, wherein the acquisition devices are used for acquiring images of the target object;
and the positioning module is used for positioning the target object according to the relative position relation and the position information to obtain the position information of the target object.
In one or more possible implementations, the apparatus further includes:
a sending module, configured to send a target feature to the plurality of acquisition devices in the camera network, so that the plurality of acquisition devices determine the target object in their acquired images according to the target feature.
In one or more possible implementations, the positioning module is configured to select two acquisition devices from the plurality of acquisition devices, wherein the two acquisition devices form a triangle with the target object; and positioning the target object according to the relative position relationship between the target object and the two acquisition devices and the position information of the two acquisition devices to obtain the position information of the target object.
In one or more possible implementations, the relative positional relationship includes deflection angle information, and the positioning module is configured to determine the distance between the two acquisition devices according to the position information of the two acquisition devices, and to obtain the position information of the target object according to the distance and the deflection angle information between the target object and the two acquisition devices, where the deflection angle information represents the deflection angle of the target object relative to the reference direction of the acquisition device.
In one or more possible implementations, the positioning module is configured to obtain orientation information of each of the two acquisition devices; determine a first internal angle and a second internal angle in the triangle according to the orientation information of each acquisition device and the deflection angle information between the target object and each acquisition device, where one side of the first internal angle and one side of the second internal angle lie along the line connecting the two acquisition devices; and obtain the position information of the target object according to the first internal angle, the second internal angle, and the distance.
In one or more possible implementations, the apparatus further includes: a marking module, configured to mark the trajectory of the target object on the electronic map according to the position information of the target object.
In one or more possible implementations, the acquiring module is configured to obtain the acquired images of the plurality of acquisition devices; determine the image position of the target object in the acquired image of each acquisition device; and obtain the relative positional relationship between the target object and each acquisition device based on the image position.
In one or more possible implementations, the position information includes longitude and latitude coordinates; the service node is a control device of the camera network, or the service node is any acquisition device in the camera network.
According to an aspect of the present disclosure, there is provided a positioning apparatus including:
the acquisition module is used for acquiring an acquired image;
a determination module that determines a relative positional relationship between a target object and the acquisition device based on the acquired image;
and the sending module is configured to send the relative positional relationship between the target object and the acquisition device to a service node, where the service node is configured to position the target object according to the relative positional relationships between the target object and a plurality of acquisition devices and the position information of the plurality of acquisition devices, to obtain the position information of the target object.
In one or more possible implementation manners, the relative position relationship includes deflection angle information, and the determining module is configured to perform target detection on the acquired image to obtain an image position of the target object in the acquired image; based on the image position of the target object, deflection angle information between the target object and the acquisition device is determined, wherein the deflection angle information represents a deflection angle of the target object relative to a reference direction of the acquisition device.
In one or more possible implementations, the determining module is further configured to receive a target feature sent by the service node and determine the target object in the acquired image according to the target feature; or to acquire annotation information input by a user and determine the target object in the acquired image according to the annotation information.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, the relative positional relationships between the target object and the plurality of acquisition devices and the position information of the plurality of acquisition devices may be obtained, and the target object may then be positioned according to those relative positional relationships and that position information to obtain the position information of the target object. In this way, the target object can be positioned through a camera network formed by a plurality of acquisition devices, improving positioning accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a scenario diagram of a service node interacting with multiple capture devices according to an embodiment of the disclosure.
Fig. 2 shows a flow chart of a positioning method according to an embodiment of the present disclosure.
Fig. 3 shows a flow chart of a positioning method according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of camera networking positioning, according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of a positioning device according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of a positioning device according to an embodiment of the present disclosure.
Fig. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 8 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The positioning scheme provided by the embodiments of the disclosure can be applied to scenarios such as positioning systems, multi-camera networks, and edge nodes. For example, in large scenes such as squares, campuses, and classrooms, cameras may be deployed at multiple points to form a camera network; the position information of a target object can then be obtained in real time through the camera network and marked on an electronic map in real time. In security scenarios, a multi-camera network can provide target tracking and positioning, and the scheme is applicable to target positioning in different scenes.
Fig. 1 illustrates a scenario diagram of a service node interacting with multiple capture devices according to an embodiment of the disclosure. In the embodiments of the present disclosure, the acquisition devices may be edge devices, and the service node may be a cloud device, or a central or control device among a plurality of edge devices. The service node can exchange information with each acquisition device and aggregate information from multiple acquisition devices, such as the orientation information and/or position information of the acquisition devices (acquisition device 1 through acquisition device 5). In some implementations, the acquisition devices have processing capabilities: each can determine the relative positional relationship between the target object and itself based on its acquired images, and the service node can determine the position of the target object from the relative positional relationships sent by the acquisition devices. In some implementations, where the processing capability of the acquisition devices is insufficient, the service node may obtain the acquired images of the plurality of acquisition devices and determine the position of the target object from those images. In this way, the impact of limited perception capability at the edge devices can be reduced, information from the individual edge devices can be effectively associated, and the target object can be positioned.
The positioning method provided by the embodiments of the present disclosure may be performed by a terminal device, a server, or another type of electronic device, where the terminal device may be User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the positioning method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server.
Fig. 2 is a flowchart illustrating a positioning method according to an embodiment of the present disclosure. As shown in Fig. 2, the positioning method is applied in a service node and includes:
step S11, acquiring relative positional relationships between the target object and the plurality of acquisition devices, respectively, and acquiring positional information of the plurality of acquisition devices.
In the embodiments of the disclosure, a plurality of acquisition devices can form a camera network in which different acquisition devices communicate with one another and share information. An acquisition device may be any apparatus with an image acquisition function, for example a terminal device with a photographing function, an edge device, or a server. The acquisition devices in the camera network capture images of target objects within their fields of view, and in some implementations an acquisition device can determine the relative positional relationship between the target object and itself from its acquired images. The target object may be a person or an object. The service node can obtain the relative positional relationships determined by the plurality of acquisition devices in the camera network, with each acquisition device corresponding to one relative positional relationship. The service node may further obtain the position information of the plurality of acquisition devices, for example position information stored in advance in a storage unit, or position information transmitted by the acquisition devices themselves.
Here, the relative positional relationship may be understood as the position of the target object relative to the acquisition device, taking the acquisition device as the reference, for example distance information and/or deflection angle information of the target object with respect to the acquisition device. The deflection angle information indicates the angle by which the target object is deflected from the reference direction of the acquisition device. The reference directions of different acquisition devices may be the same or different and can be set according to the actual application scenario. For example, due north may be taken as 0°, with the horizontal plane divided into 360° clockwise; if the reference direction of every acquisition device is uniformly set to due north, the deflection angle information can be regarded as the azimuth of the target object relative to the acquisition device. For another example, the reference direction of an acquisition device may be set to the azimuth of the device's own orientation, in which case the deflection angle information can be regarded as the deflection angle between the target object and the orientation of the acquisition device.
Here, the service node may be a control device of the camera network, for example a server or a control terminal; the service node can collect information from the plurality of acquisition devices and issue control instructions to them. In some implementations, the service node may be any one acquisition device in the camera network, so that information from the plurality of acquisition devices is collected through one device in the network, which also controls the other devices; this suits a variety of application scenarios.
And step S12, positioning the target object according to the relative position relation and the position information to obtain the position information of the target object.
In the embodiments of the present disclosure, the service node may position the target object according to the relative positional relationships between the target object and the plurality of acquisition devices and the position information of the plurality of acquisition devices. For example, two acquisition devices may be selected arbitrarily from the plurality of acquisition devices, or the two acquisition devices whose relative positional relationships were obtained first may be selected, and the position information of the target object may be calculated from the position information of the two devices and the relative positional relationships of the target object with respect to them. For another example, position estimates of the target object may be obtained from each pair of acquisition devices: for instance, with three acquisition devices, the devices can be combined in pairs into several groups, and one position estimate of the target object can be determined from the relative positional relationships between the target object and the two devices in each group together with those devices' position information. The multiple position estimates of the target object can then be averaged, and the resulting mean taken as the final position information of the target object.
For example, each acquisition device may be taken as a vertex, and a ray emitted from that vertex may be determined according to the deflection angle information between the target object and that acquisition device; the two rays emitted from the vertices of two acquisition devices then intersect at a point, and the position of that intersection can be regarded as the position of the target object, so that the position information of the target object is obtained and the target object is positioned. For another example, each acquisition device may be taken as a center of a circle whose radius is determined by the distance information between the target object and that acquisition device; the circles centered on the two acquisition devices then intersect, and the position of the intersection can be regarded as the position of the target object, so that the position information of the target object is obtained and the target object is positioned.
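As an illustration of the ray-intersection approach described above, the following is a minimal Python sketch, not part of the original disclosure, assuming a planar local coordinate system and azimuths measured clockwise from north; the function name and conventions are hypothetical:

```python
import numpy as np

def locate_by_ray_intersection(p1, theta1, p2, theta2):
    """Intersect two bearing rays to estimate the target position.

    p1, p2         -- (x, y) coordinates of the two acquisition devices
    theta1, theta2 -- deflection angles in degrees, measured clockwise
                      from a shared reference direction (here: north = 0)
    Returns the (x, y) intersection point, or None if the rays are
    (nearly) parallel and no triangle is formed.
    """
    # Unit direction vectors; north = +y, east = +x for a clockwise azimuth.
    d1 = np.array([np.sin(np.radians(theta1)), np.cos(np.radians(theta1))])
    d2 = np.array([np.sin(np.radians(theta2)), np.cos(np.radians(theta2))])
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 and t2.
    A = np.column_stack([d1, -d2])
    b = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:   # rays parallel: degenerate case
        return None
    t1, _ = np.linalg.solve(A, b)
    return tuple(np.asarray(p1, dtype=float) + t1 * d1)

# Example: a device at (0, 0) sees the target at azimuth 45 degrees and a
# device at (10, 0) sees it at azimuth 315 degrees, giving a fix at (5, 5).
print(locate_by_ray_intersection((0, 0), 45.0, (10, 0), 315.0))
```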
In the embodiments of the disclosure, the target object can thus be positioned by a camera network formed from a plurality of acquisition devices. The acquisition devices can serve as edge nodes in edge-device scenarios, so that the information of the individual edge devices is effectively associated; compared with related-art schemes that struggle to make effective use of edge-device information because of the devices' limited perception capability, the effective use of edge-device information is strengthened.
In some implementations, the service node may also obtain the acquired images of the plurality of acquisition devices, determine the image position of the target object in the acquired image of each acquisition device, and then obtain the relative positional relationship between the target object and each acquisition device based on that image position.
For the acquired image of each acquisition device, the service node may perform target detection on the acquired image, for example using target detection algorithms such as YOLO or SSD, to obtain the image position of the target object in the acquired image. The relative positional relationship between the target object and the acquisition device may then be determined from the image position of the target object. For example, a correspondence between each pixel position in the acquired image and a deflection angle relative to the reference direction may be preset; based on this correspondence, the deflection angle corresponding to the pixel position of the center of the target object, that is, the deflection angle information of the target object, can be obtained. For another example, the acquired image may be a depth image, and the distance information between the target object and the acquisition device may be determined according to the image depth at the image position of the target object. In this way, the service node can quickly determine the relative positional relationship between the target object and each acquisition device.
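One hedged illustration of such a pixel-to-angle correspondence follows; it assumes an ideal pinhole camera with a known horizontal field of view and a detector that outputs bounding boxes, whereas a real system might instead use a calibrated lookup table:

```python
import math

def deflection_from_bbox(bbox, image_width, horizontal_fov_deg):
    """Map a detected bounding box to a deflection angle.

    bbox               -- (x_min, y_min, x_max, y_max) in pixels
    image_width        -- width of the acquired image in pixels
    horizontal_fov_deg -- camera's horizontal field of view

    Returns the angle (degrees) of the box center relative to the optical
    axis; negative means left of center, positive means right. The pinhole
    model with the principal point at the image center stands in for the
    preset pixel-to-angle correspondence described above.
    """
    x_center = (bbox[0] + bbox[2]) / 2.0
    # Focal length in pixels, derived from the horizontal field of view.
    f = (image_width / 2.0) / math.tan(math.radians(horizontal_fov_deg) / 2.0)
    return math.degrees(math.atan((x_center - image_width / 2.0) / f))

# A box centered at pixel column 1600 in a 1920-px-wide image, 90-degree FOV:
print(deflection_from_bbox((1550, 400, 1650, 900), 1920, 90.0))
```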
In some implementations, when the target object is positioned according to the relative positional relationships between the target object and the plurality of acquisition devices and the position information of the plurality of acquisition devices, two acquisition devices that form a triangle with the target object may be selected from the plurality of acquisition devices, and the target object may then be positioned according to the relative positional relationships between the target object and the two acquisition devices and the position information of the two acquisition devices, to obtain the position information of the target object. For example, two acquisition devices may be selected at random, and whether they form a triangle with the target object can be judged from the relative positional relationships and the position information: a line connecting the two acquisition devices may be established from their position information, and, according to the deflection angle information of the target object relative to each acquisition device, it can be checked whether the angle between the direction toward the target object and this connecting line is nonzero. If that angle is nonzero, the two acquisition devices and the target object can be regarded as forming a triangle in the same plane; otherwise they cannot. When the two acquisition devices and the target object form a triangle, the position information of the target object can be further calculated from the relative positional relationships between the target object and the two acquisition devices and the position information of the two acquisition devices, so that the target object is accurately positioned.
In some examples, the relative positional relationship includes deflection angle information. When positioning the target object, the distance between the two acquisition devices may be determined from their position information, and the position information of the target object may then be obtained from that distance and the deflection angle information between the target object and the two acquisition devices. For example, a distance formula may be used to compute the distance between the two acquisition devices from their position information, and a triangulation formula may then be used to compute the position coordinates of the target object from the deflection angle information of the target object relative to the two acquisition devices and the distance between them. The position coordinates of the target object can thus be obtained with simple computation, improving positioning efficiency; at the same time, the camera network is less constrained by the scene, and the positioning accuracy can be improved. In some examples, the deflection angle information between the target object and the two acquisition devices may be the azimuth of the target object, so two internal angles of the triangle can be determined directly from the deflection angle information, and the target object can then be positioned by triangulation from the two internal angles and the distance between the two acquisition devices.
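For instance, when the position information is given as longitude and latitude, the distance between the two acquisition devices might be computed as in the following sketch; the equirectangular approximation and function name are assumptions, since the patent does not fix a particular distance formula:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def baseline_distance_m(lon1, lat1, lon2, lat2):
    """Approximate ground distance between two acquisition devices.

    Uses an equirectangular approximation, which is adequate for the
    short baselines of a camera network; other distance formulas
    (e.g., haversine) would serve equally well.
    """
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return EARTH_RADIUS_M * math.hypot(x, y)

# Two cameras roughly 110 m apart on a square (illustrative coordinates):
print(baseline_distance_m(120.1500, 30.2800, 120.1510, 30.2805))
```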
In one example, the deflection angle information between the target object and an acquisition device may be the deflection angle between the target object and the orientation of that acquisition device. In that case, the orientation information of each of the two acquisition devices may be acquired, and the first internal angle and the second internal angle of the triangle may be determined from the orientation information of each acquisition device and the deflection angle information between the target object and that acquisition device. The position information of the target object can then be obtained from the first internal angle, the second internal angle, and the distance between the two acquisition devices, for example by substituting them into a triangulation formula. One side of each of the first and second internal angles lies along the line connecting the two acquisition devices. In this way, the target object can be positioned quickly and accurately.
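A minimal sketch of one possible triangulation step is given below, working in a local frame whose x-axis runs along the line connecting the two acquisition devices; these conventions are assumptions rather than the patent's exact formula:

```python
import math

def triangulate(baseline_m, angle_a_deg, angle_b_deg):
    """Locate the target from two internal angles and the baseline.

    Works in a local frame whose x-axis runs from device A (the origin)
    to device B (at x = baseline_m). angle_a_deg and angle_b_deg are the
    internal angles of the triangle at A and B respectively.
    """
    a = math.radians(angle_a_deg)
    b = math.radians(angle_b_deg)
    if a + b >= math.pi or min(a, b) <= 0:
        raise ValueError("angles do not form a triangle")  # degenerate case
    # Law of sines: |AT| / sin(B) = baseline / sin(pi - A - B).
    dist_a = baseline_m * math.sin(b) / math.sin(a + b)
    return (dist_a * math.cos(a), dist_a * math.sin(a))  # target (x, y)

# Baseline 100 m with internal angles of 45 degrees each puts the
# target at (50, 50) in the local frame:
print(triangulate(100.0, 45.0, 45.0))
```

The local (x, y) result can then be converted back to longitude and latitude around one device's coordinates using the same small-area approximation as the distance computation.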
In some implementations, a Global Navigation Satellite System (GNSS) receiver and an electronic compass sensor may be provided in the acquisition device, so that the acquisition device can sense its own position and orientation. The acquisition device can obtain high-precision geographic coordinates through differential GNSS or static positioning algorithms. In some implementations, the positions of the acquisition devices in the camera network no longer change after installation, so once an acquisition device is installed, its position information may be stored in the acquisition device or in the service node.
In some implementations, the position information may include longitude and latitude coordinates. Accordingly, the longitude and latitude coordinates of the target object can be obtained from the deflection angle information of the target object relative to the plurality of acquisition devices and the longitude and latitude coordinates of the plurality of acquisition devices. The positioning scheme provided by the embodiments of the disclosure can therefore yield accurate geographic coordinates of the target object, so that the position information obtained by positioning is expressed in a unified coordinate system. This facilitates use by other devices, reduces the difficulty of sharing positions caused by inconsistent coordinate systems, avoids coordinate conversion during position sharing, and improves the efficiency of sharing position information.
In the embodiments of the disclosure, a plurality of acquisition devices in the camera network can identify the same target object, enabling target tracking and real-time positioning of that object. In some implementations, the service node may send a target feature to the plurality of acquisition devices in the camera network so that they determine the target object in their acquired images according to the target feature. The target feature indicates the target object; after receiving it, the acquisition devices can match the target feature against the image features of the objects in the acquired image to determine which of them is the target object, and target tracking and positioning can then be performed for it.
In some examples, the target feature may be a feature value of the target object. The service node may obtain the feature value of the target object from a feature database, where the feature value may be extracted using a deep learning network. For example, the service node may obtain, according to a user instruction, the target feature corresponding to that instruction from the feature database. For another example, the user may click an image or an image region displayed in the service node's display interface, and the service node may perform feature extraction on the clicked image or image region with the deep learning network to obtain the target feature. Alternatively, when the service node itself serves as an acquisition device, it may perform feature extraction on one or more objects appearing in its current frame to obtain the target feature.
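As a hedged illustration of extracting such a feature value with a deep learning network: the patent names no specific network, so the ResNet-18 backbone below is only a stand-in, and in practice a re-identification model trained for the target category would typically be used:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Stand-in backbone for "a deep learning network"; the classifier head is
# replaced with an identity so the network outputs an embedding.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_target_feature(image_path: str) -> torch.Tensor:
    """Return an L2-normalized 512-d embedding for a cropped target image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        feat = backbone(preprocess(img).unsqueeze(0)).squeeze(0)
    return feat / feat.norm()
```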
In some implementations, after step S12, the trajectory of the target object may be marked on the electronic map according to the position information of the target object; for example, the positions of the target object at multiple times may be linked in chronological order on the electronic map to obtain its trajectory. In some implementations, the trajectory may be annotated with distinctively colored markers, line segments, symbols, and the like, so that the position or motion trajectory of the target object is reflected visually on the electronic map for subsequent analysis by the user. In some implementations, the trajectories of the target object may be integrated: both the historical and the current motion trajectory can be marked on the electronic map, and positions the target passes through repeatedly can be highlighted in color, further aiding the user's analysis.
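One possible way to link timestamped fixes into a map-ready trajectory is sketched below; GeoJSON is an illustrative choice, since the patent does not prescribe a map format:

```python
import json

def trajectory_geojson(timestamped_positions):
    """Link timestamped (lon, lat) fixes into a drawable trajectory.

    timestamped_positions -- list of (timestamp, lon, lat) tuples.
    Returns a GeoJSON LineString, a common interchange format that most
    electronic-map frontends can render directly.
    """
    ordered = sorted(timestamped_positions)  # chronological order
    return json.dumps({
        "type": "Feature",
        "properties": {"times": [t for t, _, _ in ordered]},
        "geometry": {
            "type": "LineString",
            "coordinates": [[lon, lat] for _, lon, lat in ordered],
        },
    })

fixes = [(1, 120.1500, 30.2800), (3, 120.1504, 30.2803), (2, 120.1502, 30.2801)]
print(trajectory_geojson(fixes))
```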
Fig. 3 shows a flowchart of a positioning method according to an embodiment of the present disclosure. As shown in Fig. 3, the positioning method is applied to an acquisition device in a camera network and includes:
step S21, a captured image is acquired.
In the embodiments of the present disclosure, a camera network may include a plurality of acquisition devices, each used to capture images of the scene within its own field of view. The position of each acquisition device can be set according to the actual application scenario or requirements; for example, each acquisition device may serve as a vertex of a polygon, so that the plurality of acquisition devices form a polygon. Any two acquisition devices can communicate with each other over a wireless or wired network, and to keep communication between any two devices reliable, the acquisition devices may be placed at equal intervals. Each acquisition device may treat an object entering its field of view as a target object and capture images of it to obtain acquired images.
Step S22, determining a relative positional relationship between the target object and the acquisition device based on the acquired image.
In the embodiments of the disclosure, the acquisition device may perform target detection on the acquired image to obtain the image position of the target object in the acquired image, and may then determine the relative positional relationship between the target object and the acquisition device from that image position. In some implementations, target detection algorithms such as YOLO or SSD may be used to detect the target object in the acquired image and obtain its image position.
In some implementations, the relative positional relationship may include deflection angle information, which represents the deflection angle of the target object relative to the reference direction of the acquisition device. To determine the deflection angle information between the target object and the acquisition device, a correspondence between each pixel position in the acquired image and a deflection angle relative to the reference direction may be preset. The pixel position corresponding to the center point of the target object can then be determined from the image position of the target object, and the deflection angle corresponding to that pixel position can be looked up in the preset correspondence, yielding the deflection angle information of the target object.
In some implementations, the relative positional relationship may also include distance information, and the acquired image may be a depth image. The distance information between the target object and the acquisition device can then be determined from the image depth at the image position of the target object.
In some implementations, the acquisition device can rotate within a range of angles, that is, its orientation can change within that range, increasing its field of view for image acquisition. When the reference direction of the acquisition device is determined from its orientation information, the deflection angle information of the target object relative to the acquisition device may be determined based on both the image position of the target object and the orientation of the acquisition device. For example, the offset angle of each pixel position of the acquired image relative to the center pixel position may be preset, where the center pixel position corresponds to the direction the acquisition device is facing. The pixel position corresponding to the center point of the target object is then determined from the image position of the target object, its offset angle relative to the center pixel position is looked up, and the deflection angle information of the target object is obtained by adding the orientation of the acquisition device to that offset angle. The acquisition device may be provided with an electronic compass sensor through which it obtains its own orientation. In this way, the acquisition device can quickly and accurately determine the deflection angle information of the target object, which suits application scenarios in which the acquisition device can rotate.
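The final "orientation plus pixel offset" step might look like the following sketch; the names are illustrative:

```python
def absolute_deflection_deg(offset_from_center_deg, compass_heading_deg):
    """Combine a pixel offset angle with the camera's compass heading.

    offset_from_center_deg -- angle of the target's center pixel relative
                              to the image center (from a preset table or
                              a pinhole model, as sketched earlier)
    compass_heading_deg    -- camera orientation from the electronic
                              compass, clockwise from north

    Returns the target's azimuth in [0, 360).
    """
    return (compass_heading_deg + offset_from_center_deg) % 360.0

# Camera facing 80 degrees (roughly east), target 15 degrees left of center:
print(absolute_deflection_deg(-15.0, 80.0))  # prints 65.0
```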
Step S23, sending the relative position relationship between the target object and the acquisition device to a service node.
In the embodiments of the present disclosure, after determining the relative positional relationship between the target object and itself, the acquisition device may send that relative positional relationship to the service node. The service node can receive the relative positional relationships sent by the plurality of acquisition devices and then position the target object according to those relationships and the position information of the plurality of acquisition devices, obtaining the position information of the target object.
The embodiments of the disclosure can use the acquisition devices of a camera network to capture images of the target object and position it; the positioning method is simple and applicable to various application scenarios. For example, in large scenes such as squares, campuses, and classrooms, the geographic coordinates of the target object can be acquired in real time through the camera network and annotated on the electronic map in real time.
In some implementations, the acquisition device may also receive the target feature sent by the service node and then determine the target object in the acquired image according to that feature. For example, the service node may send the target feature to the plurality of acquisition devices in the camera network over a wired or wireless network, and each acquisition device may match the target feature against the image feature of at least one object in its acquired image to determine the target object corresponding to the target feature. Here, the service node may determine the target feature from an image it acquired itself or from a user instruction. In this way, the plurality of acquisition devices can all identify the same target object from the target feature issued by the service node, thereby tracking the target object.
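A minimal sketch of such feature matching using cosine similarity is given below; the threshold value is illustrative, not from the patent:

```python
import numpy as np

def match_target(target_feature, detected_features, threshold=0.6):
    """Pick the detected object whose feature best matches the target.

    target_feature    -- embedding received from the service node
    detected_features -- list of embeddings, one per detected object
    threshold         -- minimum cosine similarity to accept a match
                         (0.6 is an assumed, illustrative value)

    Returns the index of the matching object, or None if nothing matches.
    """
    t = np.asarray(target_feature, dtype=float)
    t /= np.linalg.norm(t)
    best_idx, best_sim = None, threshold
    for i, f in enumerate(detected_features):
        f = np.asarray(f, dtype=float)
        sim = float(t @ (f / np.linalg.norm(f)))  # cosine similarity
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```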
In some implementations, the acquisition device can also acquire annotation information input by a user and then determine the target object in the acquired image according to that annotation information. For example, a user may input annotation information indicating the target object into at least two acquisition devices, and those devices may then determine the target object indicated by the annotation information in their acquired images. The annotation information may include an image or image features of the target object; the acquisition device may match the acquired image against the image contained in the annotation information, or match the image features contained in the annotation information against those of the acquired image, to determine the target object. In some examples, the acquisition device may also take an image or image region selected by the user in its display interface as annotation information indicating the target object. In this way, the plurality of acquisition devices can identify the same target object from user-input annotation information, thereby tracking the target object.
It should be noted that the target object may appear in the fields of view of only some of the acquisition devices in the camera network, and only those devices can recognize the target object in their acquired images. Because of their orientation or because of occlusion, other acquisition devices may have no target object within their field of view and therefore cannot recognize it; such devices can keep capturing the scene within their field of view so as to acquire scene information in real time.
The following describes the positioning scheme provided by the embodiments of the present disclosure through an example. Fig. 4 shows a schematic diagram of camera-network positioning according to an embodiment of the present disclosure. As shown in Fig. 4, in this example a camera network may be deployed in a square. The network may include four cameras (acquisition devices), each equipped with GNSS and an electronic compass sensor through which it can acquire its own geographic coordinates and orientation information. The geographic coordinates of camera 1 are (x1, y1), those of camera 2 are (x2, y2), those of camera 3 are (x3, y3), and those of camera 4 are (x4, y4).
The service node of the camera network may send the feature value of the target object to the cameras, and the cameras in the network then automatically identify and track the target object. Alternatively, at least two cameras in the network may receive annotation information input by a user and identify and track the target object according to it. Camera 3 cannot capture images of the target object because of occlusion. Cameras 1, 2, and 4 can capture images of the target object, determine the image position of the target object in the acquired images, and then compute the deflection angle information of the target object relative to themselves from that image position.
The service node obtains, over the network, the deflection angle information computed by cameras 1, 2, and 4 together with the cameras' geographic coordinates and orientation information. It may then select any two cameras that form a triangle with the target object, for example cameras 1 and 2, and calculate the position information of the target object from the azimuth a11 of the target object relative to camera 1 (its deflection angle information), the azimuth a21 of the target object relative to camera 2, and the geographic coordinates of cameras 1 and 2, obtaining the longitude and latitude coordinates (x, y) of the target object. Here, the reference direction of camera 1 may be due east and that of camera 2 due west, so the deflection angle information obtained by cameras 1 and 2 directly gives two internal angles of the triangle, and the position information of the target object can be calculated through trigonometric relationships. The position information or motion trajectory of the target object can further be annotated on the electronic map, as shown by the curve extending from the target object in the figure. The cameras may be edge nodes deployed at the edges of the square.
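A worked numeric instance of this two-camera computation follows, reusing the triangulation sketched earlier; the values are assumed for illustration, since the patent gives only symbols:

```python
import math

# Assumed numbers for the Fig. 4 setup: cameras 1 and 2 sit 60 m apart on
# an east-west baseline, and a11 / a21 are the internal angles at each
# camera between the baseline and the ray toward the target.
baseline_m = 60.0
a11_deg, a21_deg = 50.0, 70.0

a, b = math.radians(a11_deg), math.radians(a21_deg)
# Law of sines gives the range from camera 1, then project into the
# baseline frame (camera 1 at the origin, x-axis toward camera 2).
range_from_cam1 = baseline_m * math.sin(b) / math.sin(a + b)
x = range_from_cam1 * math.cos(a)
y = range_from_cam1 * math.sin(a)
print(f"target offset from camera 1: ({x:.1f} m, {y:.1f} m)")
# Converting this local offset back to longitude and latitude around
# camera 1's coordinates yields the (x, y) fix marked on the map.
```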
This example, combining the usage scenario of edge nodes, provides a positioning method based on multi-camera interconnection. It can obtain the geographic coordinates (longitude and latitude) of the target object, unifying coordinates in geographic space, which facilitates integration with other devices or platforms and automatic position annotation on the electronic map. The positioning method is simple and suitable for target positioning in different scenes.
It can be understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic; details are omitted here for brevity. Those skilled in the art can appreciate that, in the above methods of the specific embodiments, the specific execution order of the steps should be determined by their functions and possible internal logic.
In addition, the present disclosure further provides a positioning apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any positioning method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding parts of the method sections, which are not repeated here.
Fig. 5 shows a block diagram of a positioning apparatus according to an embodiment of the present disclosure, as shown in fig. 5, the apparatus comprising:
the acquiring module 31 is configured to acquire relative position relationships between a target object and a plurality of acquiring devices respectively, and acquire position information of the acquiring devices, where the acquiring devices are configured to acquire an image of the target object;
and the positioning module 32 is configured to position the target object according to the relative position relationship and the position information, so as to obtain the position information of the target object.
In one or more possible implementations, the apparatus further includes:
a sending module, configured to send a target feature to the plurality of acquisition devices in the camera network, so that the plurality of acquisition devices determine the target object in their acquired images according to the target feature.
In one or more possible implementations, the positioning module 32 is configured to select two acquisition devices from the plurality of acquisition devices, where the two acquisition devices form a triangle with the target object; and positioning the target object according to the relative position relationship between the target object and the two acquisition devices and the position information of the two acquisition devices to obtain the position information of the target object.
In one or more possible implementations, the relative position relationship includes deflection angle information, and the positioning module 32 is configured to: determine the distance between the two acquisition devices according to the position information of the two acquisition devices; and obtain the position information of the target object according to the distance and the deflection angle information between the target object and the two acquisition devices, where the deflection angle information represents the deflection angle of the target object relative to the reference direction of the acquisition device.
In one or more possible implementations, the positioning module 32 is configured to: acquire the orientation information of each of the two acquisition devices; determine a first internal angle and a second internal angle of the triangle according to the orientation information of each acquisition device and the deflection angle information between the target object and that acquisition device, where one side of each of the first internal angle and the second internal angle lies along the line connecting the two acquisition devices; and obtain the position information of the target object according to the first internal angle, the second internal angle and the distance.
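As an illustrative note (the symbols A, B, T, d, α, β and θ_A are introduced here for exposition and are not taken from the disclosure), the step from the two internal angles and the distance to the target position can be read through the law of sines:

\[
  \frac{|AT|}{\sin\beta} = \frac{d}{\sin(\pi-\alpha-\beta)}
  \quad\Longrightarrow\quad
  |AT| = \frac{d\,\sin\beta}{\sin(\alpha+\beta)},
  \qquad
  T = A + |AT|\,(\cos\theta_A,\ \sin\theta_A),
\]

where A and B are the two acquisition devices, T is the target object, d is the distance between A and B, α and β are the first and second internal angles, and θ_A is the absolute bearing from A to the target (the reference direction of A plus the deflection angle measured by A). This is the same relation that the triangulation sketch above realizes by ray intersection.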
In one or more possible implementations, the apparatus further includes: an annotation module, configured to annotate the track of the target object on the electronic map according to the position information of the target object.
In one or more possible implementations, the obtaining module 31 is configured to: obtain the captured images of the plurality of acquisition devices; determine the image position of the target object in the captured image of each acquisition device; and acquire the relative position relationship between the target object and each acquisition device based on the image position.
In one or more possible implementations, the position information includes longitude and latitude coordinates; and the service node is a control device of the camera network, or the service node is any acquisition device in the camera network.
Fig. 6 shows a block diagram of a positioning apparatus according to an embodiment of the present disclosure. As shown in Fig. 6, the apparatus includes:
an obtaining module 41, configured to obtain a collected image;
a determination module 42 for determining a relative positional relationship between the target object and the acquisition device based on the acquired image;
a sending module 43, configured to send the relative position relationship between the target object and the acquisition device to a service node, where the service node is configured to position the target object according to the relative position relationships between the target object and a plurality of acquisition devices and the position information of the plurality of acquisition devices, to obtain the position information of the target object.
In one or more possible implementations, the relative position relationship includes deflection angle information, and the determining module 42 is configured to: perform target detection on the acquired image to obtain the image position of the target object in the acquired image; and determine, based on the image position of the target object, the deflection angle information between the target object and the acquisition device, where the deflection angle information represents the deflection angle of the target object relative to the reference direction of the acquisition device.
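A minimal sketch of this step, under the assumption of an undistorted pinhole camera whose horizontal field of view is known (the disclosure does not fix a particular camera model, and the function name and values below are hypothetical):

import math

def deflection_from_pixel(u, image_width, hfov_deg):
    # u: horizontal pixel coordinate of the detected target (e.g. the centre
    # of its bounding box); image_width: image width in pixels; hfov_deg:
    # horizontal field of view of the camera in degrees.
    # Returns the signed deflection from the optical axis in degrees;
    # positive values mean the target lies to the right of the axis.
    half_w = image_width / 2.0
    focal_px = half_w / math.tan(math.radians(hfov_deg) / 2.0)
    return math.degrees(math.atan((u - half_w) / focal_px))

print(deflection_from_pixel(1500, 1920, 90.0))  # ~29.4 degrees

Adding this deflection to the acquisition device's reference direction yields the absolute bearing that the service node uses for triangulation.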
In one or more possible implementations, the determining module 42 is further configured to: receive the target features sent by the service node, and determine the target object in the acquired image according to the target features; or, acquire annotation information input by a user, and determine the target object in the acquired image according to the annotation information.
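The disclosure does not specify how a detection in the acquired image is matched to the received target features; one plausible sketch, under the assumption that the target features form an embedding vector compared by cosine similarity (the function name and the 0.6 threshold are hypothetical):

import numpy as np

def match_target(target_feature, detection_features, threshold=0.6):
    # Returns the index of the detection whose feature vector is most similar
    # to the target feature, or None if no detection clears the threshold.
    t = np.asarray(target_feature, dtype=float)
    t = t / np.linalg.norm(t)
    best_idx, best_sim = None, threshold
    for i, f in enumerate(detection_features):
        f = np.asarray(f, dtype=float)
        sim = float(np.dot(t, f / np.linalg.norm(f)))
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx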
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The embodiments of the present disclosure also provide a computer program product, which includes computer readable code, and when the computer readable code runs on a device, a processor in the device executes instructions for implementing the positioning method provided in any of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the positioning method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or other such terminal.
Referring to fig. 7, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as Wi-Fi, second-generation (2G) or third-generation (3G) mobile communication technology, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 8 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to Fig. 8, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and may execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (15)

1. A positioning method, applied to a service node, comprising:
acquiring relative position relations between a target object and a plurality of acquisition devices respectively, and acquiring position information of the acquisition devices, wherein the acquisition devices are used for acquiring images of the target object;
and positioning the target object according to the relative position relation and the position information to obtain the position information of the target object.
2. The method of claim 1, further comprising:
sending target features to the plurality of acquisition devices in the camera network, so that the plurality of acquisition devices determine the target object in the acquired images according to the target features.
3. The method according to claim 1 or 2, wherein the locating the target object according to the relative position relationship and the position information to obtain the position information of the target object comprises:
selecting two acquisition devices from the plurality of acquisition devices, wherein the two acquisition devices form a triangle with the target object;
and positioning the target object according to the relative position relationship between the target object and the two acquisition devices and the position information of the two acquisition devices to obtain the position information of the target object.
4. The method according to claim 3, wherein the relative position relationship comprises deflection angle information, and the locating the target object according to the relative position relationship between the target object and the two acquisition devices and the position information of the two acquisition devices to obtain the position information of the target object comprises:
determining the distance between the two acquisition devices according to the position information of the two acquisition devices;
and obtaining the position information of the target object according to the distance and the deflection angle information between the target object and the two acquisition devices, wherein the deflection angle information represents the deflection angle of the target object relative to the reference direction of the acquisition device.
5. The method of claim 4, wherein obtaining the position information of the target object according to the distance and the deflection angle information between the target object and the two acquisition devices comprises:
acquiring azimuth information of each acquisition device in the two acquisition devices;
determining a first internal angle and a second internal angle of the triangle according to the azimuth information of each acquisition device and the deflection angle information between the target object and each acquisition device, wherein one side of each of the first internal angle and the second internal angle lies along the line connecting the two acquisition devices;
and obtaining the position information of the target object according to the first internal angle, the second internal angle and the distance.
6. The method according to any one of claims 1 to 5, further comprising:
and marking the track of the target object on the electronic map according to the position information of the target object.
7. The method according to any one of claims 1 to 6, wherein the acquiring of the relative position relationship between the target object and the plurality of acquisition devices respectively comprises:
acquiring acquired images of the plurality of acquisition devices;
determining an image position of the target object in the acquired image of each acquisition device;
and acquiring the relative position relation between the target object and each acquisition device based on the image position.
8. The method of any one of claims 1 to 7, wherein the position information comprises longitude and latitude coordinates; and the service node is a control device of the camera network, or the service node is any acquisition device in the camera network.
9. A positioning method, applied to an acquisition device, comprising:
acquiring a collected image;
determining a relative positional relationship between a target object and the acquisition device based on the acquired image;
and sending the relative position relationship between the target object and the acquisition device to a service node, wherein the service node is used for positioning the target object according to the relative position relationships between the target object and a plurality of acquisition devices and the position information of the plurality of acquisition devices, to obtain the position information of the target object.
10. The method of claim 9, wherein the relative position relationship comprises deflection angle information, and the determining the relative position relationship between the target object and the acquisition device based on the acquired image comprises:
carrying out target detection on the acquired image to obtain the image position of the target object in the acquired image;
based on the image position of the target object, deflection angle information between the target object and the acquisition device is determined, wherein the deflection angle information represents a deflection angle of the target object relative to a reference direction of the acquisition device.
11. The method according to claim 9 or 10, characterized in that the method further comprises:
receiving target features sent by the service node; and determining the target object in the acquired image according to the target features;
or,
acquiring annotation information input by a user; and determining the target object in the acquired image according to the annotation information.
12. A positioning device, comprising:
the acquisition module is used for acquiring the relative position relationships between a target object and each of a plurality of acquisition devices, and acquiring the position information of the acquisition devices, wherein the acquisition devices are used for capturing images of the target object;
and the positioning module is used for positioning the target object according to the relative position relation and the position information to obtain the position information of the target object.
13. A positioning device, comprising:
the acquisition module is used for acquiring an acquired image;
the determining module is used for determining the relative position relationship between a target object and the acquisition device based on the acquired image;
and the sending module is used for sending the relative position relationship between the target object and the acquisition device to a service node, wherein the service node is used for positioning the target object according to the relative position relationships between the target object and a plurality of acquisition devices and the position information of the plurality of acquisition devices, to obtain the position information of the target object.
14. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 8 or to perform the method of any one of claims 9 to 11.
15. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 8 or the method of any one of claims 9 to 11.
CN202110528927.2A 2021-05-14 2021-05-14 Positioning method and device, electronic equipment and storage medium Pending CN113192139A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110528927.2A CN113192139A (en) 2021-05-14 2021-05-14 Positioning method and device, electronic equipment and storage medium
KR1020227020483A KR20220155421A (en) 2021-05-14 2021-10-20 Positioning method and device, electronic device, storage medium and computer program
PCT/CN2021/125018 WO2022237071A1 (en) 2021-05-14 2021-10-20 Locating method and apparatus, and electronic device, storage medium and computer program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110528927.2A CN113192139A (en) 2021-05-14 2021-05-14 Positioning method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113192139A true CN113192139A (en) 2021-07-30

Family

ID=76981777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110528927.2A Pending CN113192139A (en) 2021-05-14 2021-05-14 Positioning method and device, electronic equipment and storage medium

Country Status (3)

Country Link
KR (1) KR20220155421A (en)
CN (1) CN113192139A (en)
WO (1) WO2022237071A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022237071A1 (en) * 2021-05-14 2022-11-17 浙江商汤科技开发有限公司 Locating method and apparatus, and electronic device, storage medium and computer program


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465843A (en) * 2020-12-22 2021-03-09 深圳市慧鲤科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN113192139A (en) * 2021-05-14 2021-07-30 浙江商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110595443A (en) * 2019-08-22 2019-12-20 苏州佳世达光电有限公司 Projection device
CN110928627A (en) * 2019-11-22 2020-03-27 北京市商汤科技开发有限公司 Interface display method and device, electronic equipment and storage medium
CN111524185A (en) * 2020-04-21 2020-08-11 上海商汤临港智能科技有限公司 Positioning method and device, electronic equipment and storage medium
CN112771576A (en) * 2020-05-06 2021-05-07 深圳市大疆创新科技有限公司 Position information acquisition method, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵宗贵 (ZHAO Zonggui) et al.: "信息融合工程实践技术与方法" ["Information Fusion Engineering Practice Techniques and Methods"], 31 August 2015 *


Also Published As

Publication number Publication date
WO2022237071A1 (en) 2022-11-17
KR20220155421A (en) 2022-11-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40049280)
RJ01 Rejection of invention patent application after publication (Application publication date: 20210730)