CN111553947A - Target object positioning method and device - Google Patents

Target object positioning method and device

Info

Publication number
CN111553947A
CN111553947A
Authority
CN
China
Prior art keywords
target object
positioning
image
camera
coordinate
Prior art date
Legal status
Pending
Application number
CN202010305824.5A
Other languages
Chinese (zh)
Inventor
刘恒进
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010305824.5A
Publication of CN111553947A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a target object positioning method and device. The method comprises the following steps: acquiring a first image which is acquired by a first camera and contains a target object; calculating a first positioning coordinate of the target object in the first image and a positioning confidence corresponding to the first positioning coordinate, wherein the positioning confidence is used for representing the positioning accuracy of the first positioning coordinate; when the positioning confidence is smaller than a predetermined confidence threshold, matching a second camera to obtain, through the second camera, a second image containing the target object; and calculating a second positioning coordinate of the target object in the second image and taking the second positioning coordinate as the target positioning coordinate. According to the technical scheme, the accuracy of positioning the target object can be improved.

Description

Target object positioning method and device
Technical Field
The present application relates to the field of computer and communication technologies, and in particular, to a method and an apparatus for positioning a target object.
Background
In object positioning scenarios, such as positioning a target object in a traffic road area, a common camera is usually used to photograph the target object at an intersection to obtain the positioning coordinates of the target object. However, due to limitations of camera hardware capabilities and the physical field-of-view environment, the positioning accuracy for a target object at the edge of the picture or in an occluded position is low. Therefore, how to improve the accuracy of positioning a target object is a technical problem to be solved urgently.
Disclosure of Invention
Embodiments of the present application provide a target object positioning method, an apparatus, a computer-readable medium, and an electronic device, so that the accuracy of positioning a target object can be improved at least to a certain extent.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided a target object positioning method, including: acquiring a first image which is acquired by a first camera and contains a target object; calculating a first positioning coordinate of the target object in the first image and a positioning confidence corresponding to the first positioning coordinate, wherein the positioning confidence is used for representing the positioning accuracy of the first positioning coordinate; when the positioning confidence is smaller than a predetermined confidence threshold, matching a second camera to obtain, through the second camera, a second image containing the target object; and calculating a second positioning coordinate of the target object in the second image and taking the second positioning coordinate as the target positioning coordinate.
According to an aspect of the embodiments of the present application, there is provided an apparatus for locating a target object, the apparatus including: an acquisition unit configured to acquire a first image including a target object captured by a first camera; a first calculation unit configured to calculate a first positioning coordinate of the target object in the first image and a positioning confidence corresponding to the first positioning coordinate, wherein the positioning confidence is used for representing the positioning accuracy of the first positioning coordinate; a matching unit configured to match a second camera when the positioning confidence is smaller than a predetermined confidence threshold, so as to acquire a second image containing the target object through the second camera; and a second calculation unit configured to calculate a second positioning coordinate of the target object in the second image and to take the second positioning coordinate as the target positioning coordinate.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes: the identification unit is used for identifying a target object in the first image to obtain a target object attribute of the target object; and the generating unit is used for generating an attribute label for the target object according to the target object attribute of the target object, wherein the attribute label is used for identifying the target object.
In some embodiments of the present application, based on the foregoing solution, the first computing unit is configured to: acquiring an image parameter corresponding to the first image; and calculating a first positioning coordinate of the target object in the first image based on the image parameter.
In some embodiments of the present application, based on the foregoing solution, the image parameter corresponding to the first image includes a positioning coordinate of the first camera, and the first computing unit is configured to: acquiring the position of the target object in the first image; calculating a positioning coordinate difference between the target object and a first camera according to the position of the target object in the first image; and calculating a first positioning coordinate of the target object in the first image according to the positioning coordinate of the first camera and the positioning coordinate difference between the target object and the first camera.
In some embodiments of the present application, based on the foregoing solution, the first computing unit is configured to: acquiring the position of the target object in the first image; and calculating a positioning confidence degree corresponding to the first positioning coordinate according to the position of the target object in the first image, wherein the positioning confidence degree is inversely proportional to the length of the position of the target object in the first image from the central position of the image.
In some embodiments of the present application, based on the foregoing solution, the matching unit is configured to: determining positioning coordinate intervals corresponding to the vision areas of other cameras except the first camera; and matching a second camera in other cameras except the first camera based on the positioning coordinate interval.
In some embodiments of the present application, based on the foregoing solution, the matching unit is configured to: detecting whether the first positioning coordinate falls into the positioning coordinate interval or not; and when the first positioning coordinate falls into the positioning coordinate interval, determining the camera corresponding to the positioning coordinate interval as a matched second camera.
In some embodiments of the present application, based on the foregoing solution, the matching unit is configured to: sending an instant state request message to other cameras except the first camera, wherein the instant state request message is used for requesting instant state information of the other cameras; acquiring the instant state information fed back by the other cameras; and matching a second camera in other cameras except the first camera based on the instant state information.
In some embodiments of the present application, based on the foregoing solution, the second calculating unit is configured to: when the number of the matched second cameras is at least two, acquiring at least two second images which are acquired by the at least two second cameras and contain the target object; calculating at least two third positioning coordinates of the target object in the at least two second images; and carrying out mathematical statistics on the at least two third positioning coordinates to obtain the second positioning coordinates.
According to an aspect of embodiments of the present application, there is provided a computer-readable medium, on which a computer program is stored, which, when being executed by a processor, implements the method for locating a target object as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of locating a target object as described in the embodiments above.
In the technical solutions provided in some embodiments of the present application, a first positioning coordinate of a target object in a first image acquired by a first camera and a positioning confidence corresponding to the first positioning coordinate are calculated; when the positioning confidence is smaller than a predetermined confidence threshold, a second camera is matched to obtain a second image containing the target object, a second positioning coordinate of the target object in the second image is calculated, and the second positioning coordinate is used as the target positioning coordinate. Since the positioning confidence can represent the positioning accuracy of the first positioning coordinate, a target object whose image needs to be acquired again and whose positioning coordinate needs to be recalculated can be screened out through the predetermined confidence threshold; a second image of the target object is then acquired through the matched second camera, and the second positioning coordinate of the target object can be calculated from the second image. Therefore, the technical scheme provided by some embodiments of the application can improve the accuracy of positioning the target object.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present application may be applied;
FIG. 2 illustrates an application scenario diagram implementing a method of positioning a target object according to an embodiment of the present application;
FIG. 3 shows a flow chart of a method of positioning a target object according to an embodiment of the present application;
FIG. 4 illustrates a flow chart before calculating a first positioning coordinate of a target object in the first image and a positioning confidence corresponding to the first positioning coordinate, according to an embodiment of the present application;
FIG. 5 illustrates a detailed flow chart for calculating first positioning coordinates of a target object in the first image according to one embodiment of the present application;
FIG. 6 illustrates a schematic diagram of similar-triangle-based monocular visual positioning and ranging according to one embodiment of the present application;
FIG. 7 illustrates a presentation of a target object in a first image captured by a camera according to one embodiment of the present application;
FIG. 8 shows a flow chart of matching a second camera according to an embodiment of the present application;
FIG. 9 shows a detailed flow diagram of matching a second camera according to one embodiment of the present application;
FIG. 10 illustrates a flow chart of matching a second camera according to one embodiment of the present application;
FIG. 11 illustrates a flow chart for calculating second positioning coordinates of a target object in the second image according to an embodiment of the present application;
FIG. 12 illustrates an overall flow diagram for locating a target object in an intersection area based on an MEC in accordance with one embodiment of the present application;
FIG. 13 illustrates an overall flow diagram for locating a target object in an intersection region based on an MEC in accordance with one embodiment of the present application;
FIG. 14 shows a block diagram of a target object locating device according to an embodiment of the present application;
FIG. 15 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture may include terminal devices (e.g., one or more of the smartphone 101, the tablet 102, and the portable computer 103 shown in fig. 1; other terminal devices may also be used), a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices and the server 105. Network 104 may include various connection types, such as wired communication links, wireless communication links, and so forth.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
It should be noted that the target object positioning method provided in the embodiment of the present application is generally executed by the server 105, and accordingly, the target object positioning device is generally disposed in the server 105. However, in other embodiments of the present application, the terminal device may also have a similar function as the server, so as to execute the positioning scheme of the target object provided in the embodiments of the present application.
It should be further noted that, in addition to being executed by the aforementioned server 105 or terminal device, the target object positioning method provided in the embodiment of the present application may also be executed by a cloud server having a cloud computing function.
Specifically, the cloud computing (cloud computing) is a computing mode, which distributes computing tasks on a resource pool formed by a large number of computers, so that various application systems can acquire computing power, storage space and information services as required. The network that provides the resources is referred to as the "cloud". Resources in the cloud can be infinitely expanded to users, and can be acquired at any time, used as required and expanded at any time. The cloud computing resource pool mainly comprises computing equipment (which is a virtualization machine and comprises an operating system), storage equipment and network equipment.
In an embodiment of the present application, an application scenario of the target object locating method may be a scenario of locating a target object in an intersection area as shown in fig. 2.
Referring to fig. 2, an application scenario diagram implementing a positioning method of a target object according to an embodiment of the present application is shown. Specifically, in the intersection area 200 shown in fig. 2, at least two cameras (e.g., a first camera and a second camera) and at least two target objects (e.g., a target object 1 and a target object 2) are included, where the target object 1 is an automobile and the target object 2 is a pedestrian.
In a specific implementation of this embodiment, first, a first image containing the automobile (target object 1) and the pedestrian (target object 2) collected by the first camera may be obtained; then, the first positioning coordinates of the automobile and the pedestrian in the first image and the positioning confidences corresponding to these first positioning coordinates may be calculated; when a positioning confidence is smaller than the predetermined confidence threshold, a second camera (such as the second camera shown in fig. 2) may be matched among the cameras other than the first camera; then, a second image containing the automobile and the pedestrian collected by the second camera may be obtained, and the second positioning coordinates of the automobile and the pedestrian in the second image are calculated, so that the second positioning coordinates are used as the target positioning coordinates.
It is noted that the terms first, second and the like in the description and claims of the present application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the objects so used are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than those illustrated or described herein.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
according to a first aspect of the present disclosure, a method for locating a target object is provided.
Referring to fig. 3, a flow chart of a method of locating a target object according to an embodiment of the present application is shown. The target object positioning method may be performed by a device having a computing processing function, such as the server 105 shown in fig. 1, the terminal device shown in fig. 1, or a cloud server having a cloud computing function.
As shown in fig. 3, the method for locating a target object at least includes steps 310 to 370:
in step 310, a first image including a target object captured by a first camera is obtained.
Step 330, calculating a first positioning coordinate of the target object in the first image and a positioning confidence corresponding to the first positioning coordinate, where the positioning confidence is used to represent the positioning accuracy of the first positioning coordinate.
Step 350, when the positioning confidence is smaller than a predetermined confidence threshold, matching a second camera to obtain a second image containing the target object through the second camera.
Step 370, calculating a second positioning coordinate of the target object in the second image, so as to use the second positioning coordinate as a target positioning coordinate.
In this application, the first camera and the second camera may refer to a common camera device having a photographing or shooting function, and the first image, the second image, and the like may be a video or an image shot by the common camera device.
In this application, the first positioning coordinate and the second positioning coordinate may refer to a three-dimensional coordinate of the target object in an area, or may refer to a longitude and latitude and a height coordinate of the target object.
In this application, it should be noted that when the positioning confidence is not less than the predetermined confidence threshold, the first positioning coordinate is used as the target positioning coordinate of the target object.
The above steps will be described in detail below:
in step 310, a first image captured by a first camera and containing a target object is acquired.
In this application, a first camera may take a picture of a target object in a certain area or record a video to obtain a first image (picture or video stream) including the target object. For example, the first camera takes a picture of a target object in the intersection area, which may be a pedestrian, an electric vehicle, a pet, an automobile, or the like.
In this application, when the first camera acquires the first image containing the target object, the acquisition time of the image may also be recorded, i.e., the image acquisition timestamps t1, t2, ..., tn.
In the present application, the first camera may also have positioning coordinates, and it should be understood to those skilled in the art that other cameras than the first camera may also have corresponding positioning coordinates.
With reference to fig. 3, in step 330, a first positioning coordinate of the target object in the first image and a positioning confidence corresponding to the first positioning coordinate are calculated, where the positioning confidence is used to represent a positioning accuracy of the first positioning coordinate.
In an embodiment of the present application, before calculating the first positioning coordinate of the target object in the first image and the positioning confidence corresponding to the first positioning coordinate, the method shown in fig. 4 may further be implemented.
Referring to fig. 4, a flowchart illustrating a process before calculating a first positioning coordinate of a target object in the first image and a positioning confidence corresponding to the first positioning coordinate according to an embodiment of the present application is shown, which may specifically include steps 321 to 322:
step 321, identifying a target object in the first image, and obtaining a target object attribute of the target object.
In the present application, the target object in the first image may be identified by a visual recognition algorithm, and the visual recognition algorithm may be trained in advance in a supervised or unsupervised manner based on the attribute features (e.g., shape profile, size, color, etc.) of various types of target objects (e.g., pedestrians, electric vehicles, pets, automobiles, etc.). Further, the trained visual recognition algorithm may recognize the target object in the first image and obtain the attributes of the target object (e.g., classification, shape contour, size, color, etc. of the target object).
In order to make the technical principle of identifying the target object in the first image more understandable to those skilled in the art, the following briefly describes a visual recognition algorithm in conjunction with the embodiments of the present application:
the first step is as follows: a number of candidate regions are extracted from the first image (image) using a Selective Search algorithm.
Specifically, the Selective Search algorithm may include 5 steps: (1) determining a region set R containing a plurality of initialized regions in the first image, and calculating the similarity S = {s1, s2, ...} of each pair of adjacent regions in the region set R; (2) finding the pair of adjacent regions with the highest similarity, merging them into a new region, and adding the new region to R; (3) removing from S all entries related to the pair of adjacent regions merged in step (2); (4) calculating the similarity between the new region and its adjacent regions; (5) jumping back to step (2) until S is empty.
In the above (1), the similarity between adjacent regions i and j in the region set R can be calculated by the following formula:

s(r_i, r_j) = a_1·s_colour(r_i, r_j) + a_2·s_texture(r_i, r_j) + a_3·s_size(r_i, r_j) + a_4·s_fill(r_i, r_j)

wherein s_colour(r_i, r_j), s_texture(r_i, r_j), s_size(r_i, r_j) and s_fill(r_i, r_j) respectively represent the color similarity, the texture similarity, the size similarity and the spatial overlap similarity between adjacent regions i and j, and a_1 to a_4 are weighting coefficients. The details are as follows:
Color similarity: the color space of the first image is converted into three color channels (i.e., hue, saturation and brightness), and a histogram with 25 bins is calculated for each channel, so that the color histogram of each region in the region set R has 25 × 3 = 75 bins. After normalizing the histograms by the region size, the color similarity between adjacent regions i and j is calculated using the following equation:

s_colour(r_i, r_j) = Σ_k min(c_i^k, c_j^k)

wherein c_i^k and c_j^k respectively represent the color histograms of region i and region j in the k-th interval.
Texture similarity: gradient statistics are performed in 8 directions using a Gaussian distribution with variance 1, and a histogram with 10 bins is then calculated from the statistics (computed over the pixels of each region), so that the number of intervals of the histogram is 8 × 3 × 10 = 240 (using an RGB color space). The texture similarity between neighboring regions i and j is calculated by the following equation:

s_texture(r_i, r_j) = Σ_k min(t_i^k, t_j^k)

wherein t_i^k and t_j^k respectively represent the texture histograms of region i and region j in the k-th interval.
Size similarity: in order to keep the scale of the region merging operation uniform, the following formula is used to calculate the size similarity between adjacent regions i and j; the aim is to merge small regions preferentially:

s_size(r_i, r_j) = 1 - (size(r_i) + size(r_j)) / size(im)

wherein size(im) represents the size (in pixels) of the image, and size(r_i) and size(r_j) represent the sizes (in pixels) of regions i and j.
Spatial overlapping similarity: to measure the overlap ratio of two regions, the spatial overlap similarity between adjacent regions i and j can be calculated by the following formula:
Figure BDA0002455749690000101
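For illustration only, the similarity terms above can be sketched in Python roughly as follows, assuming each region is represented by a pre-computed normalized colour histogram, texture histogram, pixel size and bounding box; the data layout and the default weights are assumptions made for this sketch, not values defined by the application.

```python
import numpy as np

def s_colour(hist_i, hist_j):
    # Histogram intersection of the normalized 75-bin colour histograms.
    return float(np.minimum(hist_i, hist_j).sum())

def s_texture(tex_i, tex_j):
    # Histogram intersection of the normalized 240-bin texture histograms.
    return float(np.minimum(tex_i, tex_j).sum())

def s_size(size_i, size_j, size_im):
    # Encourages small regions to be merged first.
    return 1.0 - (size_i + size_j) / size_im

def s_fill(size_i, size_j, bbox_i, bbox_j, size_im):
    # Measures how well two regions fill their joint bounding box.
    x0, y0 = min(bbox_i[0], bbox_j[0]), min(bbox_i[1], bbox_j[1])
    x1, y1 = max(bbox_i[2], bbox_j[2]), max(bbox_i[3], bbox_j[3])
    bbox_size = (x1 - x0) * (y1 - y0)
    return 1.0 - (bbox_size - size_i - size_j) / size_im

def similarity(ri, rj, size_im, a=(1.0, 1.0, 1.0, 1.0)):
    # ri / rj: dicts with keys 'colour', 'texture', 'size', 'bbox' (x0, y0, x1, y1).
    return (a[0] * s_colour(ri['colour'], rj['colour'])
            + a[1] * s_texture(ri['texture'], rj['texture'])
            + a[2] * s_size(ri['size'], rj['size'], size_im)
            + a[3] * s_fill(ri['size'], rj['size'], ri['bbox'], rj['bbox'], size_im))
```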
the second step is that: all candidate areas are scaled to a fixed size, for example using a fixed size of 227 x 227.
The third step: features of the candidate region image are extracted using (neural network) CNN (including 5 convolutional layers and 2 fully-connected layers) to obtain fixed-length feature vectors.
The fourth step: and inputting the feature vector into a pre-trained SVM classifier, judging the input class, and identifying the target object. In the present application, by the above-mentioned embodiments, the target object in the first image can be identified more accurately.
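As a rough illustration of the overall recognition pipeline (candidate regions, fixed-size crops, CNN features, SVM classification), a Python sketch is given below; `extract_features` and `classifier` are placeholders standing in for the pre-trained CNN and SVM, and the helper `resize_to` exists only to keep the sketch self-contained.

```python
import numpy as np

def recognize_targets(image, candidate_regions, extract_features, classifier):
    """Classify each candidate region of the first image.

    candidate_regions: list of (x0, y0, x1, y1) boxes from Selective Search.
    extract_features:  callable mapping a 227x227 crop to a fixed-length vector
                       (stands in for the 5-conv / 2-fc CNN).
    classifier:        object with a predict() method (stands in for the SVM).
    """
    results = []
    for (x0, y0, x1, y1) in candidate_regions:
        crop = image[y0:y1, x0:x1]
        crop = resize_to(crop, (227, 227))          # scale to the fixed input size
        feat = extract_features(crop)
        label = classifier.predict(feat.reshape(1, -1))[0]
        results.append(((x0, y0, x1, y1), label))
    return results

def resize_to(crop, shape):
    # Nearest-neighbour resize, used only to keep this sketch self-contained.
    ys = np.linspace(0, crop.shape[0] - 1, shape[0]).astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, shape[1]).astype(int)
    return crop[np.ix_(ys, xs)]
```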
Step 322, generating an attribute tag for the target object according to the target object attribute of the target object, where the attribute tag is used to identify the target object.
Specifically, for example, if the attributes of the target object are "pedestrian, medium height, short hair, red jacket, black trousers, white shoes", then the attribute tag generated for the target object may be a text tag such as "pedestrian-medium height-short hair-red jacket-black trousers-white shoes", or a digital tag such as "001-…".
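A minimal sketch of generating such attribute tags from the recognized attributes might look as follows; the code table mapping attribute values to digits is a hypothetical example, not part of the application.

```python
# Hypothetical code table mapping attribute values to digit codes.
CODE_TABLE = {"pedestrian": "001", "medium height": "002", "short hair": "003",
              "red jacket": "004", "black trousers": "005", "white shoes": "006"}

def make_attribute_tag(attributes, numeric=False):
    """Build a text tag ('pedestrian-medium height-...') or a digital tag."""
    if numeric:
        return "-".join(CODE_TABLE.get(a, "000") for a in attributes)
    return "-".join(attributes)

# Example usage:
attrs = ["pedestrian", "medium height", "short hair",
         "red jacket", "black trousers", "white shoes"]
print(make_attribute_tag(attrs))                # text tag
print(make_attribute_tag(attrs, numeric=True))  # digital tag
```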
In an embodiment of the present application, calculating the first positioning coordinates of the target object in the first image may be implemented by the steps shown in fig. 5.
Referring to fig. 5, a detailed flowchart illustrating a process of calculating the first positioning coordinates of the target object in the first image according to an embodiment of the present application is shown, which may specifically include steps 331 to 332:
in step 331, image parameters corresponding to the first image are obtained.
Step 332, calculating a first positioning coordinate of the target object in the first image based on the image parameter.
In a specific implementation of an embodiment, the image parameter corresponding to the first image may include a positioning coordinate of a first camera that captures the first image.
The specific process of calculating the first positioning coordinate of the target object in the first image may be as follows: firstly, the position of the target object in the first image is obtained from the first image. Then, according to the position of the target object in the first image, the positioning coordinate difference between the target object and the first camera is calculated. Finally, the first positioning coordinate of the target object is calculated from the positioning coordinate of the first camera and the positioning coordinate difference between the target object and the first camera. For example, if the positioning coordinate difference between the target object and the first camera is (ΔX, ΔY, ΔZ) = (1, 1, 1), and the positioning coordinate of the first camera is (X1, Y1, Z1) = (2, 2, 2), then the first positioning coordinate of the target object is (X1+ΔX, Y1+ΔY, Z1+ΔZ) = (3, 3, 3).
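The arithmetic of this example can be shown in a few lines of Python; the function name and the tuple representation of coordinates are illustrative assumptions.

```python
def first_positioning_coordinate(camera_coord, coord_diff):
    """Add the camera's positioning coordinate and the target-camera coordinate difference."""
    return tuple(c + d for c, d in zip(camera_coord, coord_diff))

# Example from the text: camera at (2, 2, 2), difference (1, 1, 1) -> (3, 3, 3).
print(first_positioning_coordinate((2, 2, 2), (1, 1, 1)))
```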
It should be noted that, in the present application, calculating the positioning coordinate difference between the target object and the first camera according to the position of the target object in the first image may be implemented based on a visual positioning algorithm, and the principle of the visual positioning algorithm will be briefly described with reference to fig. 6 below:
referring to fig. 6, a schematic diagram of similar triangle based monocular visual positioning according to one embodiment of the present application is shown.
In fig. 6, from the geometric relationship of similar triangles, it can be seen that:

EG/BD = AG/AD
GF/DC = AG/AD
EG/EB = AE/AB

From the transitivity of these equations,

EG/BD = GF/DC = AE/AB
furthermore, the positioning coordinate difference between the target object corresponding to each pixel point and the camera can be obtained by measuring equidistant array points (such as a calibration plate) at a higher distance, interpolating and then carrying out equal-proportion amplification.
In a specific implementation of an embodiment, the image parameter corresponding to the first image may also include an image capturing time when the first camera captures the first image.
In this application, the image capturing time of the first image may be used to identify the sequence of the images, and in addition, may also be used to represent a time characteristic when the target object is at the first location coordinate, that is, it may be known when the target object is at the first location coordinate.
In a specific implementation of an embodiment, the image parameters corresponding to the first image may further include an image type identifier of the first image (e.g., MP4 format video, JPG format picture) and an image resolution (e.g., 1920 × 1080) of the first image.
In an embodiment of the present application, the positioning confidence corresponding to the first positioning coordinate may be calculated as follows: acquiring the position of the target object in the first image; and calculating the positioning confidence corresponding to the first positioning coordinate according to the position of the target object in the first image, wherein the positioning confidence is inversely proportional to the distance between the position of the target object in the first image and the image center.
In an embodiment of the present application, the positioning confidence corresponding to the first positioning coordinate may also be calculated as follows: acquiring the resolution of the first image; and calculating the positioning confidence corresponding to the first positioning coordinate according to the resolution of the first image, wherein the positioning confidence is proportional to the resolution of the first image, that is, the higher the resolution of the image, the higher the positioning confidence.
In an embodiment of the present application, the positioning confidence corresponding to the first positioning coordinate may further be calculated as follows: acquiring the position of the target object in the first image and the resolution of the first image; and calculating the positioning confidence corresponding to the first positioning coordinate according to both, wherein the positioning confidence is negatively correlated with the distance between the position of the target object in the first image and the image center, and positively correlated with the resolution of the first image.
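Since the application only fixes the direction of these correlations (inverse with the distance from the image center, direct with the resolution), the exact formula below is an assumption; it is a minimal sketch of one possible positioning confidence.

```python
import math

def positioning_confidence(target_xy, image_size, ref_resolution=(1920, 1080)):
    """Confidence in [0, 1]: higher near the image center, higher for higher resolution.

    target_xy:  (x, y) position of the target object in the image.
    image_size: (width, height) of the image.
    """
    w, h = image_size
    cx, cy = w / 2.0, h / 2.0
    # Distance from the image center, normalized by the half-diagonal.
    dist = math.hypot(target_xy[0] - cx, target_xy[1] - cy)
    max_dist = math.hypot(cx, cy)
    center_term = 1.0 - dist / max_dist                        # inverse with the distance
    resolution_term = min(1.0, (w * h) / (ref_resolution[0] * ref_resolution[1]))
    return center_term * resolution_term
```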
For a better understanding of the principle of calculating the first positioning coordinate of the target object in the first image and the positioning confidence corresponding to the first positioning coordinate in the present application, the following explanation is made with reference to fig. 7:
referring to fig. 7, a presentation of a target object in a first image captured by a camera is shown, according to one embodiment of the present application.
As shown in fig. 7, the first image is an image collected by a camera and including a target object 1 and a target object 2, in this application, there is a relative position relationship between the camera and the target object, and the relative position relationship includes a distance, a relative angle, an orientation, and the like between the camera and the target object.
Since the first image in fig. 7 is captured by the first camera, the first image reflects the relative position relationship between the first camera and the target object, so the positioning coordinate difference between the target object and the first camera can be calculated from the relative position relationship represented in the first image, and further, the first positioning coordinate of the target object can be calculated from this positioning coordinate difference and the positioning coordinate of the first camera.
In the present application, the positioning confidence corresponding to the first positioning coordinate of the target object may be obtained according to the position of the target object in the first image. As shown in fig. 7, due to the influence of the camera's physical field-of-view environment, target object 2, which is close to the image center of the first image, can generally be positioned with high precision, while the positioning accuracy for target object 1, located at the edge of the first image, is low. Therefore, the positioning confidence corresponding to the first positioning coordinate may be calculated according to the position of the target object in the first image, where the positioning confidence is inversely proportional to the distance between the position of the target object in the first image and the image center.
It should be noted that the positioning confidence is used to characterize the degree of positioning accuracy, for example 0.1 for low accuracy and 0.99 for very high accuracy. In addition to a specific number, the positioning confidence may also be expressed as a confidence level. For example, on a scale of 1 to 5, 5 indicates the highest positioning accuracy and 1 indicates the lowest.
With continued reference to fig. 3, in step 350, when the positioning confidence is smaller than the predetermined confidence threshold, a second camera is matched to obtain a second image containing the target object through the second camera.
In this application, first positioning coordinates with low confidence are screened out by setting a confidence threshold; a low confidence means poor positioning accuracy, so secondary positioning is needed to improve the accuracy of positioning the target object.
In one embodiment of the present application, matching the second camera may be achieved by the steps shown in fig. 8.
Referring to fig. 8, a flowchart of matching a second camera according to an embodiment of the present application is shown, which may specifically include steps 351 to 352:
step 351, determining a positioning coordinate interval corresponding to the visual field areas of other cameras except the first camera.
In the present application, there may be at least two cameras including the first camera, for example 4 cameras in the intersection area shown in fig. 2. Each camera corresponds to a certain view area, further, the view area of each camera corresponds to a positioning coordinate interval, and in step 351, the positioning coordinate intervals corresponding to the view areas of the other cameras except the first camera are determined.
And step 352, matching a second camera in other cameras except the first camera based on the positioning coordinate interval.
In a specific implementation of an embodiment, matching a second camera among other cameras than the first camera based on the positioning coordinate interval may be implemented by the steps shown in fig. 9.
Referring to fig. 9, a detailed flowchart of matching the second camera according to an embodiment of the present application is shown, which may specifically include steps 3521 to 3522:
step 3521, detecting whether the first positioning coordinate falls into the positioning coordinate interval.
Step 3522, when the first positioning coordinate falls into the positioning coordinate interval, determining the camera corresponding to the positioning coordinate interval as a matched second camera.
Further, in the above embodiment, the field-of-view region of the other cameras may be a partial field-of-view region, where the partial field-of-view region is the middle portion of the camera's entire field of view. The benefit of this is that a target object in the middle of a camera's field of view is close to the center of the corresponding image, so higher-precision positioning coordinates of the target object can be obtained. Therefore, a better camera can be matched in this way, further improving the accuracy of positioning the target object.
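Steps 3521 to 3522 (optionally restricted to the middle portion of each camera's field of view) might be sketched as follows; the camera record fields are assumptions made for illustration.

```python
def match_second_camera(first_coord, cameras, first_camera_id):
    """Return a camera (other than the first one) whose positioning coordinate
    interval contains the first positioning coordinate.

    cameras: list of dicts such as
        {"id": "cam2", "interval": ((x_min, y_min), (x_max, y_max))}
      where the interval may describe only the middle portion of the field of view.
    """
    x, y = first_coord[0], first_coord[1]
    for cam in cameras:
        if cam["id"] == first_camera_id:
            continue
        (x_min, y_min), (x_max, y_max) = cam["interval"]
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return cam
    return None
```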
In one embodiment of the present application, matching the second camera may be achieved by the steps shown in fig. 10.
Referring to fig. 10, a flowchart of matching a second camera according to an embodiment of the present application is shown, which may specifically include steps 353 to 355:
and 353, sending an instant state request message to other cameras except the first camera, wherein the instant state request message is used for requesting instant state information of the other cameras.
And step 354, acquiring the instant state information fed back by the other cameras.
And step 355, matching a second camera in other cameras except the first camera based on the instant state information.
In this application, the instant status information may include a camera identifier, an instant view picture of the camera, and a timestamp corresponding to the instant view picture. Further, the instant status information may further include camera parameter information such as a downward inclination angle of the camera, whether the camera can rotate, and the like.
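A rough sketch of the request/response interaction of steps 353 to 355 is shown below; the transport (a simple callable) and the selection rule are assumptions, since the application does not fix how the instant status information is used for matching.

```python
import time

def request_instant_status(camera_ids, send):
    """Send an instant-status request to every camera except the first one.

    send: callable taking (camera_id, message) and returning the camera's reply,
          standing in for the actual wired/wireless transport.
    """
    request = {"type": "instant_status_request", "timestamp": time.time()}
    return [send(cam_id, request) for cam_id in camera_ids]

def match_by_instant_status(status_list, covers_target):
    """Pick a second camera from the fed-back instant status information.

    status_list:   list of dicts like {"camera_id": ..., "instant_view": ...,
                   "timestamp": ..., "tilt_angle": ..., "can_rotate": ...}.
    covers_target: callable deciding, from one status record, whether that
                   camera's instant view covers the target (the selection rule
                   is an assumption, not specified by the application).
    """
    for status in status_list:
        if covers_target(status):
            return status["camera_id"]
    return None
```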
In an embodiment of the present application, before the second camera acquires the second image containing the target object, the following scheme may be further implemented: and sending a control message for acquiring a second image of the target object to the matched second camera.
Specifically, the control message may include a camera identifier (for identifying a camera for performing secondary photographing), a first positioning coordinate of the target object, the number of the target objects, an attribute tag of the target object (that is, a basic feature of the target object, which is used to assist the camera to perform photographing, such as focusing), a return period (which is used to indicate a period in which the camera returns second image data), and an image acquisition time of the first image of the target object (which is used to perform motion estimation on the camera, so that the camera motion is known).
Further, the control message may further include an image format for capturing a second image of the target object, an image resolution, and the like.
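The control message fields listed above could be assembled, for example, as a simple dictionary; the field names below are illustrative and not a wire format defined by the application.

```python
def build_control_message(camera_id, first_coord, num_targets, attribute_tag,
                          return_period_s, first_capture_time,
                          image_format="JPG", resolution=(1920, 1080)):
    """Assemble a control message asking the matched second camera to capture
    a second image of the target object."""
    return {
        "camera_id": camera_id,                           # camera doing the secondary capture
        "first_positioning_coordinate": first_coord,
        "target_count": num_targets,
        "attribute_tag": attribute_tag,                   # helps the camera focus on the target
        "return_period_s": return_period_s,               # how often second image data is returned
        "first_image_capture_time": first_capture_time,   # used for motion estimation
        "image_format": image_format,
        "image_resolution": resolution,
    }
```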
With continued reference to fig. 3, in step 370, second positioning coordinates of the target object in the second image are calculated to use the second positioning coordinates as target positioning coordinates.
In an embodiment of the present application, when the number of the matched second cameras is at least two, the calculating of the second positioning coordinates of the target object in the second image may be implemented by the steps shown in fig. 11.
Referring to fig. 11, a flowchart of calculating second positioning coordinates of the target object in the second image according to an embodiment of the present application is shown, which may specifically include steps 371 to 373:
step 371, at least two second images including the target object collected by the at least two second cameras are obtained.
Step 372, calculating at least two third positioning coordinates of the target object in the at least two second images.
Step 373, performing mathematical statistics on the at least two third positioning coordinates to obtain the second positioning coordinates.
In a specific implementation of an embodiment, the performing mathematical statistics on the at least two third positioning coordinates may be averaging the at least two third positioning coordinates to obtain the second positioning coordinates.
Specifically, for example, when the number of matched second cameras is 3, the 3 second images containing the target object acquired by the 3 second cameras are obtained, and the 3 third positioning coordinates of the target object in the 3 second images are calculated as (1.1, 1.1, 1.1), (1.2, 1.2, 1.2) and (1.3, 1.3, 1.3); averaging the 3 third positioning coordinates gives the second positioning coordinate (1.2, 1.2, 1.2).
In a specific implementation of an embodiment, before performing mathematical statistics on the at least two third positioning coordinates, the positioning confidences corresponding to the at least two third positioning coordinates may also be calculated. Further, the performing of mathematical statistics on the at least two third positioning coordinates may be determining the third positioning coordinate with the highest positioning confidence among the at least two third positioning coordinates as the second positioning coordinate.
Specifically, for example, when the number of matched second cameras is 3, the 3 second images containing the target object acquired by the 3 second cameras are obtained, and the 3 third positioning coordinates of the target object in the 3 second images and their corresponding positioning confidences are calculated as (1.1, 1.1, 1.1) with confidence 0.91, (1.2, 1.2, 1.2) with confidence 0.92, and (1.3, 1.3, 1.3) with confidence 0.93; the third positioning coordinate with the highest confidence is taken, so the second positioning coordinate is (1.3, 1.3, 1.3).
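Both aggregation strategies described above (averaging the third positioning coordinates, or keeping the one with the highest positioning confidence) are sketched below; they reproduce the numerical examples from the text.

```python
def average_coordinates(third_coords):
    """Element-wise mean of the third positioning coordinates."""
    n = len(third_coords)
    return tuple(sum(c[k] for c in third_coords) / n for k in range(len(third_coords[0])))

def highest_confidence_coordinate(third_coords, confidences):
    """Third positioning coordinate whose positioning confidence is highest."""
    best = max(range(len(third_coords)), key=lambda k: confidences[k])
    return third_coords[best]

# Examples from the text:
coords = [(1.1, 1.1, 1.1), (1.2, 1.2, 1.2), (1.3, 1.3, 1.3)]
print(average_coordinates(coords))                                # approx. (1.2, 1.2, 1.2)
print(highest_confidence_coordinate(coords, [0.91, 0.92, 0.93]))  # (1.3, 1.3, 1.3)
```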
In an embodiment of the present application, when the number of the second cameras matched is one, the calculated positioning coordinates may be directly determined as the second positioning coordinates.
In order to help those skilled in the art better understand the present application, the technical solution of the present application is described below from the perspective of the overall process, in combination with the application scenario of MEC-based target object positioning in an intersection area.
First, the concepts "MEC", "camera", and "target object" need to be explained:
target object: including objects such as pedestrians, electric vehicles, and vehicles.
A camera: the system is deployed at the intersection and used for monitoring the pedestrian flow and the vehicle flow at the intersection in real time. And sending the shot picture or video stream to the MEC for target detection and positioning. The control instruction of the MEC can be received, and secondary photographing and positioning of the specified target are achieved.
MEC: in this application, MEC refers to a mobile edge computing platform, which effectively integrates wireless network and internet technologies, adds functions such as computation, storage and processing on the wireless network side, builds an open platform into which applications can be embedded, opens up information interaction between the wireless network and service servers through wireless APIs, integrates the wireless network with services, and upgrades the traditional wireless base station into an intelligent base station. In the embodiment of the application, the MEC can be deployed at a location such as an intersection; it connects to and receives pictures or video streams from the cameras at the intersection in a wired or wireless manner, and runs a visual algorithm to position intersection targets. It then judges the positioning confidence of each target; for a target with low confidence, which indicates poor positioning accuracy, it selects a camera with a suitable field of view and sends a control instruction to perform secondary photographing and positioning of that target.
Referring to fig. 12, an overall flowchart of locating a target object in an intersection region based on MEC according to an embodiment of the present application is shown, where a specific process includes the following steps:
Step 1: the first camera 1201 collects image data 1 of an intersection target object.
Step 2: the first camera 1201 transmits the acquired image data 1 to the MEC.
Step 3: the MEC 1202 runs a visual recognition and positioning algorithm to analyze image data 1.
Step 4: the MEC 1202 matches a second camera and sends a control instruction to acquire image data of the target object.
Step 5: the second camera 1203 acquires image data 2 of the target object according to the control instruction.
Step 6: the second camera 1203 transmits the acquired image data 2 to the MEC.
Step 7: the MEC 1202 runs the visual recognition and positioning algorithm to analyze image data 2 and obtain the target positioning coordinates of the target object.
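The seven steps above can be summarized in a Python sketch; every callable here is a placeholder for the corresponding component in the flow (camera capture, the MEC's visual recognition and positioning algorithm, and the control channel) and is an assumption rather than an interface specified by the application.

```python
def mec_locate_target(capture_1, analyze, match_camera, send_control, capture_2,
                      confidence_threshold=0.9):
    """MEC-side flow of FIG. 12, with each step injected as a callable.

    capture_1/capture_2: acquire image data from the first/second camera.
    analyze:             run the visual recognition and positioning algorithm,
                         returning (positioning_coordinate, positioning_confidence).
    match_camera:        pick a second camera for secondary capture.
    send_control:        send the control instruction to the matched camera.
    """
    image_1 = capture_1()                         # steps 1-2: first camera -> MEC
    coord_1, conf_1 = analyze(image_1)            # step 3
    if conf_1 >= confidence_threshold:
        return coord_1                            # first coordinate is accurate enough
    cam_2 = match_camera(coord_1)                 # step 4: match a second camera
    send_control(cam_2, coord_1)                  # step 4: control instruction
    image_2 = capture_2(cam_2)                    # steps 5-6: second camera -> MEC
    coord_2, _ = analyze(image_2)                 # step 7
    return coord_2
```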
Referring to fig. 13, an overall flowchart of locating a target object in an intersection region based on MEC according to an embodiment of the present application is shown, where a specific process includes the following steps:
Step 1: the first camera 1301 acquires image data 1 of an intersection target object.
Step 2: the first camera 1301 transmits the acquired image data 1 to the MEC.
Step 3: the MEC 1302 runs a visual recognition and positioning algorithm to analyze image data 1.
Step 4: the MEC 1302 requests and obtains the instant status information of the other cameras.
Step 5: the MEC 1302 matches a second camera based on the instant status information and sends a control instruction to collect image data of the target object.
Step 6: the second camera 1303 acquires image data 2 of the target object according to the control instruction.
Step 7: the second camera 1303 sends the acquired image data 2 to the MEC.
Step 8: the MEC 1302 runs the visual recognition and positioning algorithm to analyze image data 2 and obtain the target positioning coordinates of the target object.
In a scene of an intersection area, the high-precision positioning coordinates of the target objects such as automobiles, pedestrians and electric vehicles can be obtained through the method, so that the traffic order can be optimized, for example, traffic lights can be dispatched by using the high-precision positioning coordinates of the target objects, and for example, a driver can be reminded of the presence of people behind the driver, and the driver can pay attention to safety.
In the technical solutions provided in some embodiments of the present application, a first positioning coordinate of a target object in a first image acquired by a first camera and a positioning confidence corresponding to the first positioning coordinate are calculated; when the positioning confidence is smaller than a predetermined confidence threshold, a second camera is matched to obtain a second image containing the target object, a second positioning coordinate of the target object in the second image is calculated, and the second positioning coordinate is used as the target positioning coordinate. Since the positioning confidence can represent the positioning accuracy of the first positioning coordinate, a target object whose image needs to be acquired again and whose positioning coordinate needs to be recalculated can be screened out through the predetermined confidence threshold; a second image of the target object is then acquired through the matched second camera, and the second positioning coordinate of the target object can be calculated from the second image. Therefore, the technical scheme provided by some embodiments of the application can improve the accuracy of positioning the target object.
Embodiments of the apparatus of the present application are described below, which may be used to perform the method for locating a target object in the above-described embodiments of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the target object positioning method described above in the present application.
FIG. 14 shows a block diagram of a target object locating device according to an embodiment of the present application.
Referring to fig. 14, there is shown a target object locating apparatus 1400 according to an embodiment of the present application, including: an acquisition unit 1401, a first calculation unit 1402, a matching unit 1403, and a second calculation unit 1404.
The acquiring unit 1401 is configured to acquire a first image including a target object and acquired by a first camera; the first calculating unit 1402 is configured to calculate a first positioning coordinate of the target object in the first image and a positioning confidence corresponding to the first positioning coordinate, where the positioning confidence is used to represent the positioning accuracy of the first positioning coordinate; the matching unit 1403 is configured to match a second camera when the positioning confidence is smaller than a predetermined confidence threshold, so as to obtain a second image including the target object through the second camera; the second calculating unit 1404 is configured to calculate second positioning coordinates of the target object in the second image, so as to use the second positioning coordinates as the target positioning coordinates.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes: the identification unit is used for identifying a target object in the first image to obtain a target object attribute of the target object; and the generating unit is used for generating an attribute label for the target object according to the target object attribute of the target object, wherein the attribute label is used for identifying the target object.
In some embodiments of the present application, based on the foregoing solution, the first computing unit 1402 is configured to: acquiring an image parameter corresponding to the first image; and calculating a first positioning coordinate of the target object in the first image based on the image parameter.
In some embodiments of the present application, based on the foregoing solution, the image parameter corresponding to the first image includes a positioning coordinate of the first camera, and the first calculating unit 1402 is configured to: acquiring the position of the target object in the first image; calculating a positioning coordinate difference between the target object and a first camera according to the position of the target object in the first image; and calculating a first positioning coordinate of the target object in the first image according to the positioning coordinate of the first camera and the positioning coordinate difference between the target object and the first camera.
In some embodiments of the present application, based on the foregoing solution, the first computing unit 1402 is configured to: acquiring the position of the target object in the first image; and calculating a positioning confidence degree corresponding to the first positioning coordinate according to the position of the target object in the first image, wherein the positioning confidence degree is inversely proportional to the length of the position of the target object in the first image from the central position of the image.
In some embodiments of the present application, based on the foregoing scheme, the matching unit 1403 is configured to: determining positioning coordinate intervals corresponding to the vision areas of other cameras except the first camera; and matching a second camera in other cameras except the first camera based on the positioning coordinate interval.
In some embodiments of the present application, based on the foregoing scheme, the matching unit 1403 is configured to: detecting whether the first positioning coordinate falls into the positioning coordinate interval or not; and when the first positioning coordinate falls into the positioning coordinate interval, determining the camera corresponding to the positioning coordinate interval as a matched second camera.
In some embodiments of the present application, based on the foregoing scheme, the matching unit 1403 is configured to: send an instant state request message to cameras other than the first camera, wherein the instant state request message is used for requesting instant state information of the other cameras; acquire the instant state information fed back by the other cameras; and match a second camera among the other cameras based on the instant state information.
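A sketch of this instant-state-based matching follows; the message transport and the online/load fields of the fed-back state are assumptions made only for illustration.

```python
def match_by_instant_state(other_cameras, request_instant_state):
    candidates = []
    for camera in other_cameras:
        # Send the instant state request message and collect the fed-back state;
        # the "online" and "load" fields are assumed purely for this sketch.
        state = request_instant_state(camera)
        if state.get("online", False):
            candidates.append((state.get("load", 1.0), camera))
    if not candidates:
        return None
    # Prefer the online camera reporting the lowest load.
    candidates.sort(key=lambda entry: entry[0])
    return candidates[0][1]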
In some embodiments of the present application, based on the foregoing scheme, the second calculating unit 1404 is configured to: when the number of the matched second cameras is at least two, acquiring at least two second images which are acquired by the at least two second cameras and contain the target object; calculating at least two third positioning coordinates of the target object in the at least two second images; and carrying out mathematical statistics on the at least two third positioning coordinates to obtain the second positioning coordinates.
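The text does not specify which statistic is applied to the third positioning coordinates; a plain arithmetic mean, as sketched below, is one plausible reading (a median would be an equally reasonable choice when outliers are expected).

```python
def fuse_third_coordinates(third_coords):
    # Arithmetic mean over the third positioning coordinates obtained
    # from the at least two second images.
    count = len(third_coords)
    mean_x = sum(coord[0] for coord in third_coords) / count
    mean_y = sum(coord[1] for coord in third_coords) / count
    return (mean_x, mean_y)
```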
FIG. 15 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 1500 of the electronic device shown in fig. 15 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 15, the computer system 1500 includes a Central Processing Unit (CPU) 1501, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1502 or a program loaded from a storage section 1508 into a Random Access Memory (RAM) 1503. In the RAM 1503, various programs and data necessary for system operation are also stored. The CPU 1501, the ROM 1502, and the RAM 1503 are connected to each other by a bus 1504. An Input/Output (I/O) interface 1505 is also connected to the bus 1504.
The following components are connected to the I/O interface 1505: an input portion 1506 including a keyboard, a mouse, and the like; an output section 1507 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage portion 1508 including a hard disk and the like; and a communication section 1509 including a network interface card such as a Local Area Network (LAN) card, a modem, or the like. The communication section 1509 performs communication processing via a network such as the Internet. A drive 1510 is also connected to the I/O interface 1505 as needed. A removable medium 1511 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 1510 as needed, so that a computer program read therefrom can be installed into the storage section 1508 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1509, and/or installed from the removable medium 1511. When the computer program is executed by the Central Processing Unit (CPU) 1501, various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not, in any case, limit the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present application, the features and functionality of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by two or more modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method for locating a target object, the method comprising:
acquiring a first image which is acquired by a first camera and contains a target object;
calculating a first positioning coordinate of a target object in the first image and a positioning confidence corresponding to the first positioning coordinate, wherein the positioning confidence is used for representing the positioning precision of the first positioning coordinate;
when the positioning confidence is smaller than a preset confidence threshold, matching a second camera to obtain a second image containing the target object through the second camera;
and calculating second positioning coordinates of the target object in the second image to take the second positioning coordinates as target positioning coordinates.
2. The method of claim 1, wherein prior to calculating the first positioning coordinate of the target object in the first image and the positioning confidence corresponding to the first positioning coordinate, the method further comprises:
identifying a target object in the first image to obtain a target object attribute of the target object;
and generating an attribute label for the target object according to the target object attribute of the target object, wherein the attribute label is used for identifying the target object.
3. The method of claim 1, wherein the calculating the first location coordinates of the target object in the first image comprises:
acquiring an image parameter corresponding to the first image;
and calculating a first positioning coordinate of the target object in the first image based on the image parameter.
4. The method of claim 3, wherein the image parameters corresponding to the first image comprise positioning coordinates of the first camera, and wherein calculating the first positioning coordinates of the target object in the first image based on the image parameters comprises:
acquiring the position of the target object in the first image;
calculating a positioning coordinate difference between the target object and a first camera according to the position of the target object in the first image;
and calculating a first positioning coordinate of the target object in the first image according to the positioning coordinate of the first camera and the positioning coordinate difference between the target object and the first camera.
5. The method of claim 1, wherein calculating the positioning confidence corresponding to the first positioning coordinate comprises:
acquiring the position of the target object in the first image;
and calculating the positioning confidence corresponding to the first positioning coordinate according to the position of the target object in the first image, wherein the positioning confidence is inversely proportional to the distance between the position of the target object in the first image and the center of the image.
6. The method of claim 1, wherein the matching a second camera comprises:
determining positioning coordinate intervals corresponding to the visual field areas of other cameras except the first camera;
and matching a second camera in other cameras except the first camera based on the positioning coordinate interval.
7. The method of claim 6, wherein matching a second camera among cameras other than a first camera based on the location coordinate interval comprises:
detecting whether the first positioning coordinate falls into the positioning coordinate interval or not;
and when the first positioning coordinate falls into the positioning coordinate interval, determining the camera corresponding to the positioning coordinate interval as a matched second camera.
8. The method of claim 1, wherein the matching a second camera comprises:
sending an instant state request message to other cameras except the first camera, wherein the instant state request message is used for requesting instant state information of the other cameras;
acquiring instant state information fed back by the other cameras;
and matching a second camera in other cameras except the first camera based on the instant state information.
9. The method according to any one of claims 1 to 8, wherein, when at least two second cameras are matched, calculating the second positioning coordinates of the target object in the second image comprises:
acquiring at least two second images which are acquired by at least two second cameras and contain a target object;
calculating at least two third positioning coordinates of the target object in the at least two second images;
and carrying out mathematical statistics on the at least two third positioning coordinates to obtain the second positioning coordinates.
10. An apparatus for locating a target object, the apparatus comprising:
an acquisition unit configured to acquire a first image including a target object captured by a first camera;
the first calculation unit is used for calculating a first positioning coordinate of the target object in the first image and a positioning confidence corresponding to the first positioning coordinate, wherein the positioning confidence is used for representing the positioning accuracy of the first positioning coordinate;
the matching unit is used for matching a second camera when the positioning confidence is smaller than a preset confidence threshold, so as to acquire a second image containing the target object through the second camera;
and the second calculation unit is used for calculating second positioning coordinates of the target object in the second image so as to take the second positioning coordinates as target positioning coordinates.
CN202010305824.5A 2020-04-17 2020-04-17 Target object positioning method and device Pending CN111553947A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010305824.5A CN111553947A (en) 2020-04-17 2020-04-17 Target object positioning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010305824.5A CN111553947A (en) 2020-04-17 2020-04-17 Target object positioning method and device

Publications (1)

Publication Number Publication Date
CN111553947A true CN111553947A (en) 2020-08-18

Family

ID=72000013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010305824.5A Pending CN111553947A (en) 2020-04-17 2020-04-17 Target object positioning method and device

Country Status (1)

Country Link
CN (1) CN111553947A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188393A (en) * 2020-09-30 2021-01-05 南京鹰视星大数据科技有限公司 Non-inductive video positioning method and system
CN112417977A (en) * 2020-10-26 2021-02-26 青岛聚好联科技有限公司 Target object searching method and terminal
CN112417977B (en) * 2020-10-26 2023-01-17 青岛聚好联科技有限公司 Target object searching method and terminal
CN113115216A (en) * 2021-02-22 2021-07-13 浙江大华技术股份有限公司 Indoor positioning method, service management server and computer storage medium
CN113115216B (en) * 2021-02-22 2022-09-06 浙江大华技术股份有限公司 Indoor positioning method, service management server and computer storage medium
CN113160144A (en) * 2021-03-25 2021-07-23 平安科技(深圳)有限公司 Target detection method and device, electronic equipment and storage medium
CN113160144B (en) * 2021-03-25 2023-05-26 平安科技(深圳)有限公司 Target object detection method, target object detection device, electronic equipment and storage medium
CN113286086A (en) * 2021-05-26 2021-08-20 南京领行科技股份有限公司 Camera use control method and device, electronic equipment and storage medium
CN113286086B (en) * 2021-05-26 2022-02-18 南京领行科技股份有限公司 Camera use control method and device, electronic equipment and storage medium
CN113467451A (en) * 2021-07-01 2021-10-01 美智纵横科技有限责任公司 Robot recharging method and device, electronic equipment and readable storage medium
CN114445661A (en) * 2022-01-24 2022-05-06 电子科技大学 Embedded image identification method based on edge calculation
CN114445661B (en) * 2022-01-24 2023-08-18 电子科技大学 Embedded image recognition method based on edge calculation

Similar Documents

Publication Publication Date Title
CN111553947A (en) Target object positioning method and device
Tsakanikas et al. Video surveillance systems-current status and future trends
US10740964B2 (en) Three-dimensional environment modeling based on a multi-camera convolver system
US9646212B2 (en) Methods, devices and systems for detecting objects in a video
US9426449B2 (en) Depth map generation from a monoscopic image based on combined depth cues
CN108229419B (en) Method and apparatus for clustering images
WO2018059408A1 (en) Cross-line counting method, and neural network training method and apparatus, and electronic device
CN111242097A (en) Face recognition method and device, computer readable medium and electronic equipment
CN111444744A (en) Living body detection method, living body detection device, and storage medium
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
Ciampi et al. Multi-camera vehicle counting using edge-AI
CN114267041B (en) Method and device for identifying object in scene
CN112085534B (en) Attention analysis method, system and storage medium
CN111325107B (en) Detection model training method, device, electronic equipment and readable storage medium
CN114926766A (en) Identification method and device, equipment and computer readable storage medium
CN111563398A (en) Method and device for determining information of target object
CN113034586A (en) Road inclination angle detection method and detection system
WO2023279799A1 (en) Object identification method and apparatus, and electronic system
CN111753766A (en) Image processing method, device, equipment and medium
CN114550117A (en) Image detection method and device
CN111310595B (en) Method and device for generating information
CN110781730B (en) Intelligent driving sensing method and sensing device
CN116363628A (en) Mark detection method and device, nonvolatile storage medium and computer equipment
CN114463685B (en) Behavior recognition method, behavior recognition device, electronic equipment and storage medium
CN112819859B (en) Multi-target tracking method and device applied to intelligent security

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination