WO2022228321A1 - Method and apparatus for identifying and positioning object within large range in video - Google Patents


Info

Publication number
WO2022228321A1
WO2022228321A1 (PCT/CN2022/088672)
Authority
WO
WIPO (PCT)
Prior art keywords
moving object
lens
video
latitude
position information
Prior art date
Application number
PCT/CN2022/088672
Other languages
French (fr)
Chinese (zh)
Inventor
何佳林
Original Assignee
何佳林
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 何佳林 filed Critical 何佳林
Publication of WO2022228321A1 publication Critical patent/WO2022228321A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • The present invention relates to the technical field of moving object identification and processing, and in particular to a method and device for identifying and positioning objects within a large range in video.
  • Existing cameras can record video over a wide range of areas. However, the user cannot know the position of a moving object in the video relative to the camera, or the absolute position of the moving object. In some scenarios, this location information is very important to the user.
  • An embodiment of the present invention provides a method for recognizing and locating objects within a large range in video, which is used to know the position of a moving object in the video relative to the camera and the absolute position of the moving object.
  • The method includes: acquiring a video image; identifying the video image to determine the position information of a moving object in the image; acquiring the state parameters of the camera device; and determining the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image.
  • An embodiment of the present invention also provides a device for identifying and positioning objects within a large range in video, which is used to know the position of a moving object in the video relative to the camera and the absolute position of the moving object.
  • The device includes:
  • a video image acquisition module for acquiring video images;
  • a video image recognition module for recognizing the video image and determining the position information of the moving object in the image;
  • a state parameter acquisition module for acquiring the state parameters of the camera device; and
  • a relative position information determination module for determining the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image.
  • An embodiment of the present invention also provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, it implements the above method for identifying and positioning objects within a large range in video.
  • Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the steps of the above-mentioned method for recognizing and locating objects in a large range of video.
  • A video image is acquired; the video image is identified to determine the position information of the moving object in the image; the state parameters of the camera device are obtained; and the position information of the moving object relative to the camera device is determined based on the state parameters and the position information of the moving object in the image, so that the user can easily obtain this position information and clearly grasp the situation of the video shooting area.
  • FIG. 1 is a flowchart (1) of a method for recognizing and locating objects within a large range of video in an embodiment of the present invention
  • FIG. 2 is a flowchart (2) of a method for recognizing and locating objects within a large range of video in an embodiment of the present invention
  • FIG. 3 is a flowchart (3) of a method for recognizing and locating objects within a large range of video in an embodiment of the present invention
  • FIG. 4 is a side view of a lens and a shooting range in an embodiment of the present invention.
  • FIG. 5 is a top view of a lens and a shooting range in an embodiment of the present invention.
  • FIG. 6 is a perspective view of a lens and a shooting range in an embodiment of the present invention.
  • FIG. 7 is a top view of an area where lenses are sequentially shot in an embodiment of the present invention.
  • FIG. 8 is a top view of an effective monitoring area in an embodiment of the present invention.
  • FIG. 9 is the image displayed on the screen when the program runs in an embodiment of the present invention;
  • FIG. 10 is an image captured by the lens in an embodiment of the present invention;
  • FIG. 11 is a schematic diagram of the program's running and solving process in an embodiment of the present invention;
  • FIG. 12 is a flowchart (4) of a method for identifying and locating objects within a large range of video in an embodiment of the present invention
  • FIG. 13 is a flowchart (5) of a method for recognizing and locating objects within a large range of video in an embodiment of the present invention
  • FIG. 14 is a structural block diagram (1) of a device for identifying and locating objects within a large range of video according to an embodiment of the present invention
  • FIG. 15 is a structural block diagram (2) of a device for recognizing and positioning objects within a large range of video according to an embodiment of the present invention;
  • FIG. 16 is a structural block diagram (3) of a device for recognizing and positioning objects in a large range of video according to an embodiment of the present invention.
  • FIG. 1 is a flowchart (1) of a method for identifying and locating objects in a large range of videos in an embodiment of the present invention. As shown in FIG. 1 , the method includes:
  • Step 101: Acquire a video image;
  • Step 102: Identify the video image, and determine the position information (i.e., pixel position) of the moving object in the image;
  • Step 103: Acquire the state parameters of the camera device;
  • Step 104: Determine the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image.
  • the state parameters of the imaging device include the height of the lens, the depression angle of the lens, the zoom factor of the lens (corresponding to the horizontal field of view of the lens and the vertical field of view of the lens), and the facing direction of the lens.
  • The flying height of the drone (that is, the height of the lens above the ground) can be set to 50 meters.
  • the H20T lens is set to 4x zoom, and the corresponding horizontal and vertical field angles of the lens are 8 degrees and 5.6 degrees, respectively.
  • A side view of the lens and the shooting range is shown in FIG. 4, a top view of the lens and the shooting range is shown in FIG. 5, and an oblique view of the lens and the shooting range is shown in FIG. 6.
  • A represents the position of the lens
  • B represents the center of the shooting range of the lens.
  • Figures 4 to 6 also describe the positional relationship between A and B.
  • FIG. 7 is a schematic diagram of each shooting area in a shooting cycle. The above time varies according to the machine parameters of different infrared lenses.
  • step 104 determines the position information of the moving object relative to the camera device based on the state parameter and the position information of the moving object in the image, specifically including:
  • Step 1041 Match the corresponding coordinate database according to the height of the lens, the depression angle of the lens and the zoom factor of the lens;
  • Step 1042 According to the position information of the moving object in the image, match the position of the moving object relative to the facing direction of the lens from the corresponding coordinate database;
  • Step 1043 Determine the position of the moving object relative to the imaging device according to the facing direction of the lens and the position of the matched moving object relative to the facing direction of the lens.
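The three matching steps above amount to a table lookup followed by a rotation by the lens's facing direction. The sketch below is a hypothetical illustration: the dictionary layout and function name are assumptions, not taken from the patent text.

```python
def locate_relative_to_camera(databases, height_m, depression_deg, zoom,
                              facing_deg, pixel):
    """Steps 1041-1043: database match, pixel lookup, azimuth rotation."""
    # Step 1041: select the coordinate database matching the lens height,
    # depression angle, and zoom factor.
    table = databases[(height_m, depression_deg, zoom)]
    # Step 1042: look up (L, alpha) -- the distance and the azimuth of the
    # object relative to the facing direction of the lens -- for this pixel.
    distance_m, alpha_deg = table[pixel]
    # Step 1043: add the facing direction of the lens to obtain the azimuth
    # relative to the camera device.
    beta_deg = facing_deg + alpha_deg
    return distance_m, beta_deg
```

With the single database entry quoted in the first worked example (pixel (-240PX, +156PX) mapping to (453.4 m, -3.0°)) and a facing direction of -16°, this returns (453.4, -19.0), matching the example.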
  • the polar coordinate database is taken as an example for description below.
  • the polar coordinate database of the moving object corresponding to the pixel point (that is, the position information) relative to the facing direction of the lens is obtained through the preliminary measurement.
  • the polar axis of this polar coordinate system is the positive direction of the Y axis, and the positive direction of the angle is the clockwise direction.
  • the database contains polar coordinate data of the moving object photographed in the image corresponding to each pixel point relative to the facing direction of the lens.
  • The azimuth angle is defined as negative on the left side of the lens axis and positive on the right side, and is represented by α; the distance is represented by L.
  • This set of databases is stored in the computer. That is to say, when a lens of the same specification is used with the same lens height, lens depression angle, and zoom factor as the three parameters used during measurement, the polar coordinate data of the moving object corresponding to each pixel point relative to the facing direction of the lens is the same as that obtained during measurement.
  • the height of the lens, the depression angle of the lens, and the zoom factor are sent back to the computer during the identification and positioning process.
  • The corresponding (L, α) is obtained by matching in the corresponding polar coordinate database.
  • The computer obtains the facing direction θ of the lens at the moment the moving target is captured.
  • The azimuth angle β of the moving object relative to the lens is calculated as β = θ + α.
  • The polar coordinate position (L, β) of the moving object relative to the lens is thus obtained.
  • the method further includes:
  • Step 105 obtain the latitude and longitude coordinates of the camera device
  • Step 106 Determine the longitude and latitude coordinates of the moving object based on the longitude and latitude coordinates of the camera device and the position information of the moving object relative to the camera device.
  • step 106 determines the longitude and latitude coordinates of the moving object based on the longitude and latitude coordinates of the camera device and the position information of the moving object relative to the camera device, including:
  • Step 1061 Convert the polar coordinate position of the moving object relative to the imaging device into the rectangular coordinate position of the moving object relative to the imaging device;
  • Step 1062 Determine the longitude and latitude coordinates of the moving object based on the longitude and latitude coordinates of the camera device and the rectangular coordinate position of the moving object relative to the camera device.
  • the computer obtains the latitude and longitude coordinates (A, B) returned by the camera device.
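Steps 1061 and 1062 can be sketched with a flat-earth approximation. The metres-per-degree constant is my assumption, not the patent's, and β is taken clockwise from north to match the polar axis convention stated earlier; a production implementation would use a proper geodesic library.

```python
import math

M_PER_DEG_LAT = 111320.0  # approximate metres per degree of latitude (assumption)

def polar_to_latlon(cam_lon, cam_lat, distance_m, beta_deg):
    # Step 1061: polar (L, beta) -> rectangular offsets, x east / y north,
    # with beta measured clockwise from north.
    x = distance_m * math.sin(math.radians(beta_deg))
    y = distance_m * math.cos(math.radians(beta_deg))
    # Step 1062: offset the camera's longitude/latitude (A, B) by the
    # rectangular position, shrinking longitude by cos(latitude).
    lat = cam_lat + y / M_PER_DEG_LAT
    lon = cam_lon + x / (M_PER_DEG_LAT * math.cos(math.radians(cam_lat)))
    return lon, lat
```

For example, an object 111320 m due north of a camera at (0°, 0°) comes out one degree of latitude further north.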
  • the method further includes:
  • Step 201 Obtain the latitude and longitude coordinates of the existing object
  • Step 202 Compare the latitude and longitude coordinates of the moving object with the latitude and longitude coordinates of the existing object, and differentiate the moving objects based on the comparison result.
  • If the latitude and longitude coordinates returned for a moving object are consistent with the latitude and longitude calculated for a moving object recognized by the camera, the two are the same moving object.
  • step 202 distinguishes the moving objects based on the comparison result, including:
  • different labeling forms may be, for example, different colors, underlines in different formats, and the like.
  • If the latitude and longitude coordinates of the moving object are the same as those of an existing object, a first color is used to mark the position of the moving object on the display screen;
  • otherwise, a second color is used to mark the position of the moving object on the display screen.
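A minimal sketch of the comparison in step 202. The coordinate tolerance is an assumption (the patent only says the coordinates are compared), and the colour names follow the red/blue example given later.

```python
def mark_colour(moving, known_positions, tol_deg=1e-4):
    """Return the marker colour for a moving object's (lon, lat)."""
    lon, lat = moving
    for k_lon, k_lat in known_positions:
        # Treat coordinates within the tolerance as the same object.
        if abs(lon - k_lon) <= tol_deg and abs(lat - k_lat) <= tol_deg:
            return "red"   # matches an existing (known) object
    return "blue"          # unmatched: a different moving object
```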
  • the flying height of the drone is 50 meters above the ground.
  • the H20T lens is set to 4x zoom, and the corresponding horizontal and vertical field angles of the lens are 8 degrees and 5.6 degrees, respectively.
  • A stands for the lens position
  • B stands for the center of the shot range of the lens.
  • the positional relationship between the two is shown in Figure 4, Figure 5, and Figure 6.
  • the range taken is a trapezoid with an upper base of 37 meters, a lower base of 77 meters and a height of 287 meters.
  • the horizontal distance from the center of the ground shot by the lens to the lens is 356 meters, of which the farthest shooting distance is 549 meters, and the closest shooting distance is 262 meters.
  • FIG. 7 is a schematic diagram of each shooting area in a shooting cycle. The whole video is transmitted to the computer in real time, and the present invention processes the video.
  • The user enters the following parameters: the lens height of the drone is 50 meters; the latitude and longitude of the drone (that is, of lens A) are (XXX.5000000E, XXX.5000000N); the gimbal pitch angle is 8 degrees below horizontal; and the zoom factor is 4x.
  • the direction the camera is facing is transmitted back by the drone in real time.
  • The schematic diagram of each shooting area shown in FIG. 8 can thus be obtained, wherein MN is the border, with the outside to the north and the inside to the south.
  • To pass through the shaded area, a target must travel at least 287 meters. If a person tries to cross the border from north to south at a speed of 6 km/h, that is, 100 meters per minute, then the person appears in the shooting area at least 2 times during one large cycle. That is to say, one drone and one computer using the present invention can monitor a 936-meter stretch of national border.
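The coverage claim can be checked with simple arithmetic. The numbers are the ones quoted above; the scan-cycle length is left as a parameter because the patent says it varies with the infrared lens.

```python
strip_depth_m = 287            # depth of the monitored strip, metres
speed_m_per_min = 6000 / 60    # 6 km/h = 100 metres per minute

# Time a person spends inside the strip while crossing it.
crossing_time_min = strip_depth_m / speed_m_per_min   # 2.87 minutes

def min_sightings(cycle_min):
    """Guaranteed number of captures while the target is in the strip."""
    return int(crossing_time_min // cycle_min)
```

For example, with a full scan cycle of at most about 1.4 minutes (an assumed value), `min_sightings(1.4)` gives 2, matching the "at least 2 times" statement.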
  • Step (1): When the lens rotates to the 10th position, a moving object P appears in the captured image; P is identified, and its pixel position in the image is (-240PX, +156PX), as shown in FIG. 10.
  • Step (2): The database whose three parameters correspond to a lens height of 50 meters, a lens depression angle of 8 degrees, and a zoom factor of 4x is matched. Matching in this database yields the polar coordinate position (453.4 m, -3.0°) of the moving object P relative to the facing direction of the lens.
  • Step (3): The facing direction of -16° when the lens captures point P is retrieved, and the polar coordinate position (453.4 m, -19.0°) of the moving object P relative to the lens is calculated.
  • Step (4): The latitude and longitude coordinates of the lens are retrieved, and the latitude and longitude of the moving object P are calculated as (XXX.4980943E, XXX.5038578N).
  • Step (5): The position and position parameters of P are displayed on the user's display screen, as shown in FIG. 9.
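Chaining the steps above approximately reproduces the quoted latitude. This is a hedged sanity check, not the patent's method: the metres-per-degree constant is my assumption, and only the latitude offset is checked because the camera's elided longitude and latitude (the XXX digits) are unknown.

```python
import math

# Step (3): azimuth relative to the camera.
L, alpha, facing = 453.4, -3.0, -16.0
beta = facing + alpha                        # -19.0 degrees

# Step (4): northward offset in metres and the resulting latitude
# increment, using an approximate 111320 m per degree of latitude.
north_m = L * math.cos(math.radians(beta))   # about 428.7 m
dlat = north_m / 111320.0                    # about 0.00385 degrees

# Quoted result: camera at XXX.5000000N, object P at XXX.5038578N.
quoted_dlat = 0.5038578 - 0.5000000
assert abs(dlat - quoted_dlat) < 5e-5        # agrees to within ~1e-5 degrees
```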
  • the flying height of the drone is 50 meters above the ground.
  • the H20T lens is set to 4x zoom, and the corresponding horizontal and vertical field angles of the lens are 8 degrees and 5.6 degrees, respectively.
  • A represents the position of the lens
  • B represents the center of the shooting range of the lens.
  • the positional relationship between the two is shown in Figure 4, Figure 5, and Figure 6.
  • the range taken is a trapezoid with an upper base of 37 meters, a lower base of 77 meters and a height of 287 meters.
  • the horizontal distance from the center of the ground shot by the lens to the lens is 356 meters, of which the farthest shooting distance is 549 meters, and the closest shooting distance is 262 meters.
  • FIG. 7 is a schematic diagram of each shooting area in a shooting cycle. The whole video is transmitted to the computer in real time, and the present invention processes the video.
  • The user enters the following parameters: the lens height of the drone is 50 meters; the latitude and longitude of the lens are (XXX.5000000E, XXX.5000000N); the gimbal pitch angle is 8 degrees below horizontal; and the zoom factor is 4x. The direction the camera faces is transmitted back by the drone in real time.
  • Our personnel wear receivers of the satellite positioning system, which send their positions (that is, known latitude and longitude coordinates) to the computer in real time.
  • If the longitude and latitude of a moving target identified by the computer are consistent with the longitude and latitude sent back, a red border is displayed on the screen around the moving target.
  • A moving target that does not match displays a blue border on the screen.
  • Step (1): When the lens rotates to the 14th position, moving objects appear in the captured image and are identified; their pixel positions in the image are C (50PX, 50PX) and E (80PX, 80PX).
  • Step (2): The database whose three parameters correspond to a lens height of 50 meters, a lens depression angle of 8 degrees, and a zoom factor of 4x is matched.
  • The polar coordinate position (382.7 m, 0.6°) of the moving object C relative to the facing direction of the lens is obtained by matching;
  • the polar coordinate position (400.3 m, 1.0°) of the moving object E relative to the facing direction of the lens is obtained by matching.
  • Step (3): The facing direction of 16.0° when the moving object C is captured by the lens is retrieved, and the polar coordinate position (382.7 m, 16.6°) of the moving object C relative to the lens is calculated.
  • The facing direction of 24.0° when the moving object E is captured by the lens is retrieved, and the polar coordinate position (400.3 m, 25.0°) of the moving object E relative to the lens is calculated.
  • Step (4) The latitude and longitude coordinates of the lens are retrieved, and the latitude and longitude C (XXX.5014125E, XXX.5032996N) and E (XXX.5021824E, XXX.5032643N) of the moving object are calculated respectively.
  • Step (5): The calculated latitude and longitude coordinates are compared with those returned by the satellite positioning receivers worn by our personnel, to distinguish the different moving objects.
  • The moving object C matches the latitude and longitude returned by our personnel, while the object E does not.
  • Therefore, the moving object C is one of our personnel.
  • Step (6) On the user display screen, the position and position parameters of C are displayed in red, and the position and position parameters of E are displayed in blue. As shown in Figure 9.
  • The moving object in the above examples is a single person; in other examples, such as detection over a larger range, the moving object may also be a vehicle, etc., which should also fall within the protection scope of the present invention.
  • Embodiments of the present invention also provide a device for recognizing and locating objects within a large range in video, as described in the following embodiments. Since the principle by which the device solves the problem is similar to that of the method for recognizing and locating objects within a large range in video, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted.
  • FIG. 14 is a structural block diagram (1) of a device for identifying and positioning objects within a large range in video in an embodiment of the present invention. As shown in FIG. 14, the device includes:
  • a video image acquisition module 02 for acquiring video images
  • the video image recognition module 04 is used for recognizing the video image and determining the position information of the moving object in the image;
  • a state parameter acquisition module 06 used to acquire the state parameters of the camera device
  • the relative position information determination module 08 of the moving object is configured to determine the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image.
  • the device further includes:
  • the latitude and longitude coordinate acquisition module 10 of the camera device is used to obtain the longitude and latitude coordinates of the camera device;
  • the moving object latitude and longitude coordinate determination module 12 is configured to determine the longitude and latitude coordinates of the moving object based on the longitude and latitude coordinates of the camera device and the position information of the moving object relative to the camera device.
  • the state parameters of the imaging device include the height of the lens, the depression angle of the lens, the zoom factor of the lens, and the facing direction of the lens.
  • the relative position information determination module 08 of the moving object is specifically used for:
  • the position of the moving object relative to the facing direction of the lens is matched from the corresponding coordinate database
  • the position of the moving object relative to the imaging device is determined.
  • the latitude and longitude coordinate determination module 12 of the moving object is specifically used for:
  • the latitude and longitude coordinates of the moving object are determined based on the latitude and longitude coordinates of the imaging device and the rectangular coordinate position of the moving object relative to the imaging device.
  • the device further includes:
  • the latitude and longitude coordinate obtaining module 14 of the existing object is used to obtain the latitude and longitude coordinates of the existing object;
  • the comparison and differentiation module 16 is configured to compare the latitude and longitude coordinates of the moving object with the latitude and longitude coordinates of the existing object, and differentiate the moving object based on the comparison result.
  • the comparison and differentiation module 16 is specifically used for:
  • if the latitude and longitude coordinates of the moving object are the same as those of the existing object, a first color is used to mark the position of the moving object on the display screen;
  • otherwise, a second color is used to mark the position of the moving object on the display screen.
  • An embodiment of the present invention also provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, it implements the above method for identifying and positioning objects within a large range in video.
  • Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the steps of the above-mentioned method for recognizing and locating objects in a large range of video.
  • By acquiring a video image; identifying the video image to determine the position information of the moving object in the image; obtaining the state parameters of the camera device; determining the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image; obtaining the longitude and latitude coordinates of the camera device; and determining the latitude and longitude coordinates of the moving object based on the longitude and latitude coordinates of the camera device and the position information of the moving object relative to the camera device, the user can easily obtain this position information and clearly grasp the situation of the video shooting area.
  • embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

A method and apparatus for identifying and positioning an object within a large range in a video. The method comprises: acquiring a video image; identifying the video image, so as to determine position information of a moving object in the image; acquiring state parameters of a photographic device; and on the basis of the state parameters and the position information of the moving object in the image, determining position information of the moving object relative to the photographic device. According to the present invention, a moving part identified from a video acquired by a lens is processed, and the relative position of the moving part relative to the lens is solved, such that a user can clearly grasp the situation of a video recording area.

Description

Object recognition and positioning method and device within a large range in video

Technical Field

The present invention relates to the technical field of moving object identification and processing, and in particular to a method and device for object identification and positioning within a large range in video.

Background

This section is intended to provide a background or context to the embodiments of the invention recited in the claims. The descriptions herein are not admitted to be prior art by inclusion in this section.

Existing cameras can record video over a wide range of areas. However, the user cannot know the position of a moving object in the video relative to the camera, or the absolute position of the moving object. In some scenarios, this location information is very important to the user.
Summary of the Invention

An embodiment of the present invention provides a method for recognizing and locating objects within a large range in video, which is used to know the position of a moving object in the video relative to the camera and the absolute position of the moving object. The method includes:

acquiring a video image;

identifying the video image to determine the position information of a moving object in the image;

acquiring the state parameters of the camera device; and

determining the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image.

An embodiment of the present invention also provides a device for identifying and positioning objects within a large range in video, which is used to know the position of a moving object in the video relative to the camera and the absolute position of the moving object. The device includes:

a video image acquisition module for acquiring video images;

a video image recognition module for recognizing the video image and determining the position information of the moving object in the image;

a state parameter acquisition module for acquiring the state parameters of the camera device; and

a relative position information determination module for determining the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image.

An embodiment of the present invention also provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, it implements the above method for identifying and positioning objects within a large range in video.

Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the above method for identifying and positioning objects within a large range in video are implemented.

In the embodiments of the present invention, compared with prior-art solutions in which the user cannot know the position of a moving object in the video relative to the camera or the absolute position of the moving object, a video image is acquired; the video image is identified to determine the position information of the moving object in the image; the state parameters of the camera device are obtained; and the position information of the moving object relative to the camera device is determined based on the state parameters and the position information of the moving object in the image, so that the user can easily obtain this position information and clearly grasp the situation of the video shooting area.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort. In the drawings:
Fig. 1 is a flowchart (I) of the method for identifying and locating objects within a large range of video in an embodiment of the present invention;
Fig. 2 is a flowchart (II) of the method for identifying and locating objects within a large range of video in an embodiment of the present invention;
Fig. 3 is a flowchart (III) of the method for identifying and locating objects within a large range of video in an embodiment of the present invention;
Fig. 4 is a side view of the lens and the shooting range in an embodiment of the present invention;
Fig. 5 is a top view of the lens and the shooting range in an embodiment of the present invention;
Fig. 6 is an oblique view of the lens and the shooting range in an embodiment of the present invention;
Fig. 7 is a top view of the areas shot by the lens in sequence in an embodiment of the present invention;
Fig. 8 is a top view of the effective monitoring area in an embodiment of the present invention;
Fig. 9 is the image displayed on the screen while the program is running in an embodiment of the present invention;
Fig. 10 is an image captured by the lens in an embodiment of the present invention;
Fig. 11 is a schematic diagram of the solving process during program operation in an embodiment of the present invention;
Fig. 12 is a flowchart (IV) of the method for identifying and locating objects within a large range of video in an embodiment of the present invention;
Fig. 13 is a flowchart (V) of the method for identifying and locating objects within a large range of video in an embodiment of the present invention;
Fig. 14 is a structural block diagram (I) of the apparatus for identifying and locating objects within a large range of video in an embodiment of the present invention;
Fig. 15 is a structural block diagram (II) of the apparatus for identifying and locating objects within a large range of video in an embodiment of the present invention;
Fig. 16 is a structural block diagram (III) of the apparatus for identifying and locating objects within a large range of video in an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments are described in further detail below with reference to the drawings. The exemplary embodiments of the present invention and their descriptions are used to explain the present invention and are not intended to limit it.
Fig. 1 is a flowchart (I) of the method for identifying and locating objects within a large range of video in an embodiment of the present invention. As shown in Fig. 1, the method includes:
Step 101: acquiring a video image;
Step 102: identifying the video image and determining the position information (i.e., the pixel position) of a moving object in the image;
Step 103: acquiring the state parameters of the camera device;
Step 104: determining, based on the state parameters and the position information of the moving object in the image, the position information of the moving object relative to the camera device.
In the embodiments of the present invention, the state parameters of the camera device include the lens height, the lens depression angle, the lens zoom factor (which corresponds to the horizontal and vertical fields of view of the lens) and the lens heading. For example, a DJI Matrice 300 drone carrying an H20T infrared lens may be used to acquire the video. The flight altitude of the drone, i.e. the height of the lens above the ground, may be set to 50 meters. The H20T lens is set to 4x zoom, corresponding to a horizontal field of view of 8 degrees and a vertical field of view of 5.6 degrees. The gimbal is set to pitch 8 degrees downward from the horizontal. These state parameters can be set to different values depending on the situation. A side view of the lens and the shooting range is shown in Fig. 4, a top view in Fig. 5, and an oblique view in Fig. 6, where A denotes the lens position and B the center of the shooting range; Figs. 4 to 6 also show the positional relationship between A and B.
A video may be shot under the set state parameters of the camera device as follows:
After the drone lifts off, the infrared lens is started. Area 1 is shot first for 2 seconds; the lens then rotates 8 degrees clockwise, taking 1 second. Once in place, area 2 is shot for 2 seconds, followed by another 8-degree clockwise rotation, and so on until areas 1 to 23 have been shot in sequence. After area 23 is shot, the lens rotates counterclockwise back to area 1, taking 3 seconds, and the above procedure is repeated. One full cycle, from starting to shoot area 1 until the lens has rotated back to area 1 ready to shoot again, takes 71 seconds, of which 46 seconds yield stable video of the individual areas. Fig. 7 is a schematic diagram of the shooting areas in one shooting cycle. The above times vary with the machine parameters of different infrared lenses.
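As an illustrative sketch only (not part of the claimed embodiment), the cycle timing above can be checked from the stated figures:

```python
# Illustrative timing check. Values from the text: 23 areas, 2 s of stable
# video per area, 1 s per 8-degree clockwise step, 3 s counterclockwise return.
AREAS, DWELL_S, STEP_S, RETURN_S = 23, 2, 1, 3

stable_video_s = AREAS * DWELL_S                            # 46 s of usable video
cycle_s = stable_video_s + (AREAS - 1) * STEP_S + RETURN_S  # 71 s per full cycle

print(stable_video_s, cycle_s)  # 46 71
```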
In an embodiment of the present invention, as shown in Fig. 12, step 104 of determining, based on the state parameters and the position information of the moving object in the image, the position information of the moving object relative to the camera device specifically includes:
Step 1041: matching the corresponding coordinate database according to the lens height, the lens depression angle and the lens zoom factor;
Step 1042: matching, from the corresponding coordinate database and according to the position information of the moving object in the image, the position of the moving object relative to the lens heading;
Step 1043: determining the position of the moving object relative to the camera device according to the lens heading and the matched position of the moving object relative to the lens heading.
A polar coordinate database is taken as an example below.
For a lens of a given model, and for given values of the lens height, the lens depression angle and the zoom factor, a polar coordinate database is obtained in the computer through prior measurement, mapping each pixel (i.e., each item of position information) to the polar coordinates, relative to the lens heading, of a moving object imaged at that pixel. The polar axis of this polar coordinate system is the positive direction of the Y axis, and the positive angular direction is clockwise. The database contains, for every pixel of the image, the polar coordinate data of a moving object captured at that pixel relative to the lens heading. The azimuth angle, denoted α, is defined as negative to the left of the lens axis and positive to the right of it; the distance is denoted L. Different databases are measured for different lens heights, lens depression angles and zoom factors, and this set of databases is stored in the computer. In other words, for a lens of the same specification, as long as the lens height, lens depression angle and zoom factor in use match the three parameters used during measurement, the polar coordinate data associated with each pixel in use is identical to the polar coordinate data measured for that pixel.
In use, during the identification and localization process, the lens height, lens depression angle and zoom factor are transmitted back to the computer. The corresponding polar coordinate database is matched first, and the corresponding (L, α) is then matched within that database. The computer obtains the lens heading β at the moment the moving target is captured and calculates the azimuth γ of the moving object relative to the lens as γ = α + β, thereby obtaining the polar position (L, γ) of the moving object relative to the lens.
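As a minimal sketch of steps 1041-1043 and the relation γ = α + β (the database keys and the single table entry below are hypothetical placeholders; real tables come from the prior calibration measurements):

```python
# Hypothetical lookup structure: (lens height m, depression deg, zoom) ->
# {(pixel_x, pixel_y): (distance L in m, azimuth alpha in deg)}.
coordinate_databases = {
    (50, 8, 4): {(-240, 156): (453.4, -3.0)},  # sample entry from Embodiment 1
}

def locate_relative_to_camera(height, pitch, zoom, pixel, beta_deg):
    """Return the polar position (L, gamma) of the object relative to the
    camera, where gamma = alpha + beta and beta is the lens heading."""
    table = coordinate_databases[(height, pitch, zoom)]  # step 1041
    L, alpha = table[pixel]                              # step 1042
    return L, alpha + beta_deg                           # step 1043

print(locate_relative_to_camera(50, 8, 4, (-240, 156), -16.0))  # (453.4, -19.0)
```

With the Embodiment 1 values (α = -3.0°, β = -16°) this reproduces the polar position (453.4 m, -19.0°) solved in the text.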
In an embodiment of the present invention, as shown in Fig. 2, the method further includes:
Step 105: obtaining the latitude and longitude coordinates of the camera device;
Step 106: determining the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the camera device and the position information of the moving object relative to the camera device.
In an embodiment of the present invention, as shown in Fig. 13, step 106 of determining the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the camera device and the position information of the moving object relative to the camera device includes:
Step 1061: converting the polar position of the moving object relative to the camera device into a rectangular coordinate position of the moving object relative to the camera device;
Step 1062: determining the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the camera device and the rectangular coordinate position of the moving object relative to the camera device.
Specifically, the computer converts the polar position (L, γ) of the moving object relative to the lens into the rectangular position (x, y) through the coordinate transformation formulas x = L·sin γ and y = L·cos γ.
The computer obtains the latitude and longitude coordinates (A, B) transmitted back by the camera device. From the rectangular position of the moving object relative to the lens, the latitude and longitude coordinates (C, D) of the moving object are calculated by the formulas C = A + x/(cos B · 111120) and D = B + y/111120.
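The two conversion stages above can be sketched as one routine. The camera coordinates used in the usage line are hypothetical stand-ins, since the actual values are elided in the text (XXX…); the constant 111120 m per degree follows the formulas above.

```python
import math

def polar_to_latlon(L, gamma_deg, cam_lon, cam_lat):
    """Convert the polar position (L, gamma) of an object relative to the
    camera into the object's (longitude, latitude), per the formulas above."""
    g = math.radians(gamma_deg)
    x = L * math.sin(g)   # eastward offset in metres
    y = L * math.cos(g)   # northward offset in metres
    lon = cam_lon + x / (math.cos(math.radians(cam_lat)) * 111120)
    lat = cam_lat + y / 111120
    return lon, lat

# Hypothetical camera position (the text elides the real coordinates),
# with the polar position (453.4 m, -19.0 deg) from Embodiment 1.
print(polar_to_latlon(453.4, -19.0, 120.5, 45.5))
```

The northward shift, y/111120 ≈ 0.003858°, matches the latitude offset solved in Embodiment 1, step (4).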
In an embodiment of the present invention, as shown in Fig. 3, the method further includes:
Step 201: obtaining the latitude and longitude coordinates of an existing object;
Step 202: comparing the latitude and longitude coordinates of the moving object with the latitude and longitude coordinates of the existing object, and distinguishing the moving object based on the comparison result.
Specifically, the latitude and longitude coordinates transmitted back by a moving object are obtained and compared with the latitude and longitude coordinates of the moving object calculated in step 106. If the two coordinate pairs agree, the moving object that transmitted its coordinates and the moving object identified through the lens, whose latitude and longitude were calculated, are the same moving object.
In an embodiment of the present invention, step 202 of distinguishing the moving object based on the comparison result includes:
distinguishing moving objects with different annotation forms based on the comparison result.
Specifically, the different annotation forms may be, for example, different colors, underlines of different styles, and so on.
Specifically, when different colors are used for distinction:
if the two latitude and longitude coordinate pairs agree, the position of the moving object is marked on the display screen in a first color;
if the two latitude and longitude coordinate pairs do not agree, the position of the moving object is marked on the display screen in a second color.
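As a hypothetical sketch of steps 201-202 (the matching tolerance and the sample coordinates are illustrative assumptions; the text only requires the two coordinate pairs to agree):

```python
def label_color(detected, reported_positions, tol_deg=1e-6):
    """Return 'first_color' (e.g. red) if the detected (lon, lat) matches a
    self-reported position within tolerance, otherwise 'second_color' (blue)."""
    for lon, lat in reported_positions:
        if abs(detected[0] - lon) <= tol_deg and abs(detected[1] - lat) <= tol_deg:
            return "first_color"
    return "second_color"

# Hypothetical self-reported position of our own personnel.
friendly = [(120.5014125, 45.5032996)]

print(label_color((120.5014125, 45.5032996), friendly))  # first_color
print(label_color((120.5021824, 45.5032643), friendly))  # second_color
```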
The method for identifying and locating objects within a large range of video proposed by the present invention is described below with examples.
Embodiment 1
A DJI Matrice 300 drone carrying an H20T infrared lens is used to acquire the video. The flight altitude of the drone, i.e. the height of the lens above the ground, is 50 meters. The H20T lens is set to 4x zoom, corresponding to a horizontal field of view of 8 degrees and a vertical field of view of 5.6 degrees. The gimbal is set to pitch 8 degrees downward from the horizontal. A denotes the lens position and B the center of the shooting range; their positional relationship is shown in Figs. 4, 5 and 6. The imaged footprint is a trapezoid with an upper base of 37 meters, a lower base of 77 meters and a height of 287 meters. The horizontal distance from the center of the imaged ground to the lens is 356 meters; the farthest shooting distance is 549 meters and the nearest is 262 meters.
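The footprint dimensions above follow from the stated lens height, depression angle and fields of view; an illustrative check (not part of the embodiment):

```python
import math

# Lens 50 m above ground, 8 degree depression, 8 x 5.6 degree field of view.
H, PITCH, HFOV, VFOV = 50.0, 8.0, 8.0, 5.6

far    = H / math.tan(math.radians(PITCH - VFOV / 2))  # farthest ground distance, ~549 m
near   = H / math.tan(math.radians(PITCH + VFOV / 2))  # nearest ground distance, ~262 m
depth  = far - near                                    # trapezoid height, ~287 m
center = H / math.tan(math.radians(PITCH))             # distance to frame center, ~356 m
w_far  = 2 * far * math.tan(math.radians(HFOV / 2))    # lower base, ~77 m
w_near = 2 * near * math.tan(math.radians(HFOV / 2))   # upper base, ~37 m
```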
After the drone lifts off, the infrared lens is started: area 1 is shot first for 2 seconds, and the lens then rotates 8 degrees clockwise, taking 1 second. Once in place, area 2 is shot for 2 seconds, followed by another 8-degree clockwise rotation, and so on for areas 1 to 23 in sequence. After area 23 is shot, the lens rotates counterclockwise back to area 1, taking 3 seconds, and the procedure repeats. One full cycle, from starting to shoot area 1 until rotating back to area 1 ready to shoot, takes 71 seconds, of which 46 seconds yield stable video of the individual areas. Fig. 7 is a schematic diagram of the shooting areas in one shooting cycle. The entire video is transmitted to the computer in real time and processed by the present invention. In the input interface the user enters the following parameters: drone (i.e. lens) height 50 meters, drone (i.e. lens) latitude and longitude A (XXX.5000000E, XXX.5000000N), gimbal pitch 8 degrees downward from horizontal, and zoom factor 4x. The lens heading is transmitted back by the drone in real time.
Assuming the present invention is used to monitor a national border, the schematic diagram of the shooting areas shown in Fig. 8 is obtained, where MN is the border line, the north side is foreign territory and the south side is domestic territory. A target moving from north to south must pass through the shaded area shown in Fig. 8, a crossing of at least 287 meters. If a person tries to cross the border from north to south at 6 kilometers per hour, i.e. 100 meters per minute, that person will appear in the shooting areas at least twice before completing the crossing, since the crossing spans more than two full cycles. In other words, with one drone and one computer, the present invention can monitor a 936-meter stretch of border.
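The arithmetic behind the claim above, as an illustrative sketch (not part of the embodiment):

```python
# A person walking at 6 km/h covers 100 m per minute, so crossing the 287 m
# deep shaded strip takes about 172 s, i.e. more than two full 71 s cycles.
speed_m_per_s = 6000 / 3600            # 6 km/h
crossing_time_s = 287 / speed_m_per_s  # ~172 s
cycles_spanned = crossing_time_s / 71  # ~2.4 shooting cycles
```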
Step (1): When the lens rotates to position 10, a moving object P appears in the captured image. The moving object P is identified, and its pixel position in the image is (-240PX, +156PX), as shown in Fig. 10.
Step (2): The three parameters lens height 50 meters, lens depression angle 8 degrees and zoom factor 4x are retrieved. Within the set of databases, the database corresponding to these three parameters is matched, and within that database the polar position of the moving object P relative to the lens heading is matched as (453.4 m, -3.0°).
Step (3): The lens heading at the moment P was captured, -16°, is retrieved, and the polar position of the moving object P relative to the lens is solved as (453.4 m, -19.0°).
Step (4): The latitude and longitude coordinates of the lens are retrieved, and the latitude and longitude of the moving object P are solved as (XXX.4980943E, XXX.5038578N).
Step (5): The position and position parameters of P are displayed on the user's screen, as shown in Fig. 9.
The operating logic of the software is shown in Fig. 11.
Embodiment 2
A DJI Matrice 300 drone carrying an H20T infrared lens is used to acquire the video. The flight altitude of the drone, i.e. the height of the lens above the ground, is 50 meters. The H20T lens is set to 4x zoom, corresponding to a horizontal field of view of 8 degrees and a vertical field of view of 5.6 degrees. The gimbal is set to pitch 8 degrees downward from the horizontal. In the figures, A denotes the lens position and B the center of the shooting range; their positional relationship is shown in Figs. 4, 5 and 6. The imaged footprint is a trapezoid with an upper base of 37 meters, a lower base of 77 meters and a height of 287 meters. The horizontal distance from the center of the imaged ground to the lens is 356 meters; the farthest shooting distance is 549 meters and the nearest is 262 meters.
After the drone lifts off, the infrared lens is started: area 1 is shot first for 2 seconds, and the lens then rotates 8 degrees clockwise, taking 1 second. Once in place, area 2 is shot for 2 seconds, followed by another 8-degree clockwise rotation, and so on for areas 1 to 23 in sequence. After area 23 is shot, the lens rotates counterclockwise back to area 1, taking 3 seconds, and the procedure repeats. One full cycle, from starting to shoot area 1 until rotating back to area 1 ready to shoot, takes 71 seconds, of which 46 seconds yield stable video of the individual areas. Fig. 7 is a schematic diagram of the shooting areas in one shooting cycle. The entire video is transmitted to the computer in real time and processed by the present invention. In the input interface the user enters the following parameters: drone (i.e. lens) height 50 meters, drone (i.e. lens) latitude and longitude (XXX.5000000E, XXX.5000000N), gimbal pitch 8 degrees downward from horizontal, and zoom factor 4x. The lens heading is transmitted back by the drone in real time.
In this area, our personnel wear satellite positioning system receivers and send their positions (i.e., known latitude and longitude coordinates) to the computer in real time. When the latitude and longitude of a moving target identified by the computer agree with the transmitted coordinates, a red border is displayed on the screen around the moving target; when they do not agree, a blue border is displayed on the screen around the moving target.
Step (1): When the lens rotates to position 14, a moving object appears in the captured image and is identified; its pixel position in the image is C (50PX, 50PX). When the lens rotates to position 15, another moving object appears in the captured image and is identified; its pixel position in the image is E (80PX, 80PX), as shown in Fig. 10.
Step (2): The three parameters lens height 50 meters, lens depression angle 8 degrees and zoom factor 4x are retrieved, and the database corresponding to these three parameters is matched within the set of databases. Within that database, the polar position of moving object C relative to the lens heading is matched as (382.7 m, 0.6°), and that of moving object E as (400.3 m, 1.0°).
Step (3): The lens heading at the moment moving object C was captured, 16.0°, is retrieved, and the polar position of moving object C relative to the lens is solved as (382.7 m, 16.6°). The lens heading at the moment moving object E was captured, 24.0°, is retrieved, and the polar position of moving object E relative to the lens is solved as (400.3 m, 25.0°).
Step (4): The latitude and longitude coordinates of the lens are retrieved, and the latitudes and longitudes of the moving objects are solved respectively as C (XXX.5014125E, XXX.5032996N) and E (XXX.5021824E, XXX.5032643N).
Step (5): The latitude and longitude coordinates transmitted back by the receivers of our personnel's satellite positioning systems are compared, distinguishing the different moving objects. Moving object C matches the coordinates transmitted back by our personnel, while object E does not, which shows that moving object C is one of our personnel.
Step (6): On the user's screen, the position and position parameters of C are displayed in red, and the position and position parameters of E are displayed in blue, as shown in Fig. 9.
The operating logic of the software is shown in Fig. 11.
The moving objects in the above examples are individual persons. In other examples covering larger detection ranges, the moving objects may also be vehicles and the like, which likewise fall within the protection scope of the present invention.
An embodiment of the present invention further provides an apparatus for identifying and locating objects within a large range of video, as described in the following embodiments. Since the principle by which the apparatus solves the problem is similar to that of the method for identifying and locating objects within a large range of video, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Fig. 14 is a structural block diagram (I) of the apparatus for identifying and locating objects within a large range of video in an embodiment of the present invention. As shown in Fig. 14, the apparatus includes:
a video image acquisition module 02, configured to acquire a video image;
a video image identification module 04, configured to identify the video image and determine the position information of a moving object in the image;
a state parameter acquisition module 06, configured to acquire the state parameters of the camera device;
a moving object relative position information determination module 08, configured to determine the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image.
In an embodiment of the present invention, as shown in Fig. 15, the apparatus further includes:
a camera device latitude and longitude coordinate acquisition module 10, configured to obtain the latitude and longitude coordinates of the camera device;
a moving object latitude and longitude coordinate determination module 12, configured to determine the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the camera device and the position information of the moving object relative to the camera device.
In an embodiment of the present invention, the state parameters of the camera device include the lens height, the lens depression angle, the lens zoom factor and the lens heading.
In an embodiment of the present invention, the moving object relative position information determination module 08 is specifically configured to:
match the corresponding coordinate database according to the lens height, the lens depression angle and the lens zoom factor;
match, from the corresponding coordinate database and according to the position information of the moving object in the image, the position of the moving object relative to the lens heading;
determine the position of the moving object relative to the camera device according to the lens heading and the matched position of the moving object relative to the lens heading.
In an embodiment of the present invention, the moving object latitude and longitude coordinate determination module 12 is specifically configured to:
convert the polar position of the moving object relative to the camera device into a rectangular coordinate position of the moving object relative to the camera device;
determine the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the camera device and the rectangular coordinate position of the moving object relative to the camera device.
In an embodiment of the present invention, as shown in Fig. 16, the apparatus further includes:
an existing object latitude and longitude coordinate acquisition module 14, configured to obtain the latitude and longitude coordinates of an existing object;
a comparison and differentiation module 16, configured to compare the latitude and longitude coordinates of the moving object with the latitude and longitude coordinates of the existing object and to distinguish the moving object based on the comparison result.
In this embodiment of the present invention, the comparison and differentiation module 16 is specifically configured to:
differentiate moving objects by using different annotation forms based on the comparison result.
In this embodiment of the present invention, the comparison and differentiation module 16 is specifically configured to:
mark the position of the moving object on the display screen in a first color if the latitude and longitude coordinates of the moving object are the same as those of an existing object; and
mark the position of the moving object on the display screen in a second color if the latitude and longitude coordinates of the moving object are different from those of the existing objects.
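In practice two measured coordinates are rarely identical to the last decimal place, so "the same coordinates" is naturally implemented as "within a tolerance". A sketch of the comparison, where the tolerance value and the color choices are illustrative assumptions rather than anything specified by the embodiment:

```python
# Hypothetical tolerance: two coordinates closer than this (in degrees of
# latitude/longitude) are treated as the same object.
MATCH_TOLERANCE_DEG = 1e-4

FIRST_COLOR = "green"   # moving object matches a known existing object
SECOND_COLOR = "red"    # moving object does not match any existing object

def annotation_color(moving_coord, existing_coords):
    """Return the display color for a moving object by comparing its
    latitude/longitude against the known-object coordinate list."""
    lat, lon = moving_coord
    for ex_lat, ex_lon in existing_coords:
        if (abs(lat - ex_lat) <= MATCH_TOLERANCE_DEG
                and abs(lon - ex_lon) <= MATCH_TOLERANCE_DEG):
            return FIRST_COLOR
    return SECOND_COLOR
```

The caller would then draw the object's marker on the display in the returned color.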
An embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the above method for identifying and positioning an object within a large range in a video.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the above method for identifying and positioning an object within a large range in a video.
In the embodiments of the present invention, in contrast to prior-art solutions in which the user can know neither the position of a moving object in the video relative to the camera nor the absolute position of the moving object, the method acquires a video image; identifies the video image to determine position information of the moving object in the image; acquires state parameters of the imaging device; determines position information of the moving object relative to the imaging device based on the state parameters and the position information of the moving object in the image; obtains latitude and longitude coordinates of the imaging device; and determines latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the imaging device and the position information of the moving object relative to the imaging device. The user can thus conveniently obtain this position information and clearly grasp the situation in the video-capture area.
Those skilled in the art will appreciate that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The specific embodiments described above further explain the objectives, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit its scope of protection; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

  1. A method for identifying and positioning an object within a large range in a video, comprising:
    acquiring a video image;
    identifying the video image to determine position information of a moving object in the image;
    acquiring state parameters of an imaging device; and
    determining position information of the moving object relative to the imaging device based on the state parameters and the position information of the moving object in the image.
  2. The method for identifying and positioning an object within a large range in a video according to claim 1, further comprising:
    obtaining latitude and longitude coordinates of the imaging device; and
    determining latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the imaging device and the position information of the moving object relative to the imaging device.
  3. The method for identifying and positioning an object within a large range in a video according to claim 1, wherein the state parameters of the imaging device comprise a lens height, a lens depression angle, a lens zoom factor, and a lens facing direction.
  4. The method for identifying and positioning an object within a large range in a video according to claim 3, wherein determining the position information of the moving object relative to the imaging device based on the state parameters and the position information of the moving object in the image comprises:
    matching a corresponding coordinate database according to the lens height, the lens depression angle, and the lens zoom factor;
    matching, from the corresponding coordinate database and according to the position information of the moving object in the image, the position of the moving object relative to the lens facing direction; and
    determining the position of the moving object relative to the imaging device according to the lens facing direction and the matched position of the moving object relative to the lens facing direction.
  5. The method for identifying and positioning an object within a large range in a video according to claim 2, wherein determining the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the imaging device and the position information of the moving object relative to the imaging device comprises:
    converting a polar-coordinate position of the moving object relative to the imaging device into a rectangular-coordinate position of the moving object relative to the imaging device; and
    determining the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the imaging device and the rectangular-coordinate position of the moving object relative to the imaging device.
  6. The method for identifying and positioning an object within a large range in a video according to claim 2, further comprising:
    obtaining latitude and longitude coordinates of existing objects; and
    comparing the latitude and longitude coordinates of the moving object with those of the existing objects, and differentiating the moving object based on the comparison result.
  7. The method for identifying and positioning an object within a large range in a video according to claim 6, wherein differentiating the moving object based on the comparison result comprises:
    differentiating moving objects by using different annotation forms based on the comparison result.
  8. An apparatus for identifying and positioning an object within a large range in a video, comprising:
    a video image acquisition module, configured to acquire a video image;
    a video image identification module, configured to identify the video image and determine position information of a moving object in the image;
    a state parameter acquisition module, configured to acquire state parameters of an imaging device; and
    a moving-object relative position information determination module, configured to determine position information of the moving object relative to the imaging device based on the state parameters and the position information of the moving object in the image.
  9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for identifying and positioning an object within a large range in a video according to any one of claims 1 to 7.
  10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method for identifying and positioning an object within a large range in a video according to any one of claims 1 to 7.
PCT/CN2022/088672 2021-04-25 2022-04-24 Method and apparatus for identifying and positioning object within large range in video WO2022228321A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110446325.2A CN113518179A (en) 2021-04-25 2021-04-25 Method and device for identifying and positioning objects in large range of video
CN202110446325.2 2021-04-25

Publications (1)

Publication Number Publication Date
WO2022228321A1 true WO2022228321A1 (en) 2022-11-03

Family

ID=78062782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/088672 WO2022228321A1 (en) 2021-04-25 2022-04-24 Method and apparatus for identifying and positioning object within large range in video

Country Status (2)

Country Link
CN (1) CN113518179A (en)
WO (1) WO2022228321A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113518179A (en) * 2021-04-25 2021-10-19 何佳林 Method and device for identifying and positioning objects in large range of video

Citations (7)

Publication number Priority date Publication date Assignee Title
CN104034316A (en) * 2013-03-06 2014-09-10 深圳先进技术研究院 Video analysis-based space positioning method
CN109782786A (en) * 2019-02-12 2019-05-21 上海戴世智能科技有限公司 A kind of localization method and unmanned plane based on image procossing
CN111385467A (en) * 2019-10-25 2020-07-07 视云融聚(广州)科技有限公司 System and method for calculating longitude and latitude of any position of video picture of camera
KR102166784B1 (en) * 2020-05-22 2020-10-16 주식회사 서경산업 System for cctv monitoring and managing on bicycle road
CN111953937A (en) * 2020-07-31 2020-11-17 云洲(盐城)创新科技有限公司 Drowning person lifesaving system and drowning person lifesaving method
CN113223087A (en) * 2021-07-08 2021-08-06 武大吉奥信息技术有限公司 Target object geographic coordinate positioning method and device based on video monitoring
CN113518179A (en) * 2021-04-25 2021-10-19 何佳林 Method and device for identifying and positioning objects in large range of video

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN107493457A (en) * 2017-09-06 2017-12-19 天津飞眼无人机科技有限公司 A kind of unmanned plane monitoring system
CN107749957A (en) * 2017-11-07 2018-03-02 高域(北京)智能科技研究院有限公司 Unmanned plane image display system and method
CN108981670B (en) * 2018-09-07 2021-05-11 成都川江信息技术有限公司 Method for automatically positioning coordinates of scene in real-time video
CN109558809B (en) * 2018-11-12 2021-02-23 沈阳世纪高通科技有限公司 Image processing method and device
CN111402324B (en) * 2019-01-02 2023-08-18 中国移动通信有限公司研究院 Target measurement method, electronic equipment and computer storage medium
CN110806198A (en) * 2019-10-25 2020-02-18 北京前沿探索深空科技有限公司 Target positioning method and device based on remote sensing image, controller and medium
CN111046762A (en) * 2019-11-29 2020-04-21 腾讯科技(深圳)有限公司 Object positioning method, device electronic equipment and storage medium
CN111354046A (en) * 2020-03-30 2020-06-30 北京芯龙德大数据科技有限公司 Indoor camera positioning method and positioning system
CN111652072A (en) * 2020-05-08 2020-09-11 北京嘀嘀无限科技发展有限公司 Track acquisition method, track acquisition device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN113518179A (en) 2021-10-19

Similar Documents

Publication Publication Date Title
CN109887040B (en) Moving target active sensing method and system for video monitoring
JP4926817B2 (en) Index arrangement information measuring apparatus and method
US10410089B2 (en) Training assistance using synthetic images
CN104125372B (en) Target photoelectric search and detection method
WO2020093436A1 (en) Three-dimensional reconstruction method for inner wall of pipe
WO2017167282A1 (en) Target tracking method, electronic device, and computer storage medium
CN107357286A (en) Vision positioning guider and its method
Momeni-k et al. Height estimation from a single camera view
WO2019127518A1 (en) Obstacle avoidance method and device and movable platform
CN106370160A (en) Robot indoor positioning system and method
CN109739239A (en) A kind of planing method of the uninterrupted Meter recognition for crusing robot
CN110910460A (en) Method and device for acquiring position information and calibration equipment
WO2022228321A1 (en) Method and apparatus for identifying and positioning object within large range in video
CN110796032A (en) Video fence based on human body posture assessment and early warning method
WO2016183954A1 (en) Calculation method and apparatus for movement locus, and terminal
WO2023070312A1 (en) Image processing method
CN111242988A (en) Method for tracking target by using double pan-tilt coupled by wide-angle camera and long-focus camera
WO2023173950A1 (en) Obstacle detection method, mobile robot, and machine readable storage medium
CN104330075B (en) Rasterizing polar coordinate system object localization method
WO2021238070A1 (en) Three-dimensional image generation method and apparatus, and computer device
JP7035272B2 (en) Shooting system
WO2022052409A1 (en) Automatic control method and system for multi-camera filming
CN112073640B (en) Panoramic information acquisition pose acquisition method, device and system
US20230224576A1 (en) System for generating a three-dimensional scene of a physical environment
Junejo Using pedestrians walking on uneven terrains for camera calibration

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794792

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22794792

Country of ref document: EP

Kind code of ref document: A1