WO2022228321A1 - Method and apparatus for identifying and positioning an object within a large range in a video - Google Patents

Method and apparatus for identifying and positioning an object within a large range in a video

Info

Publication number
WO2022228321A1
Authority
WO
WIPO (PCT)
Prior art keywords
moving object
lens
video
latitude
position information
Prior art date
Application number
PCT/CN2022/088672
Other languages
English (en)
Chinese (zh)
Inventor
何佳林
Original Assignee
何佳林
Priority date
Filing date
Publication date
Application filed by 何佳林
Publication of WO2022228321A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • The present invention relates to the technical field of moving object identification and processing, and in particular to a method and device for identifying and positioning objects within a large range of video.
  • Existing cameras can record video over a wide range of areas. However, the user cannot know the position of a moving object in the video relative to the camera, nor the absolute position of the moving object. In some scenarios, this location information is very important to the user.
  • An embodiment of the present invention provides a method for identifying and positioning objects within a large range of video, which is used to determine the position of a moving object in the video relative to the camera and the absolute position of the moving object.
  • The method includes:
  • acquiring a video image;
  • identifying the video image, and determining position information of a moving object in the image;
  • acquiring state parameters of the camera device; and
  • determining the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image.
  • An embodiment of the present invention also provides a device for identifying and positioning objects within a large range of video, which is used to determine the position of a moving object in the video relative to the camera and the absolute position of the moving object.
  • the device includes:
  • a video image acquisition module for acquiring a video image;
  • a video image recognition module for recognizing the video image and determining position information of a moving object in the image;
  • a state parameter acquisition module for acquiring state parameters of the camera device; and
  • a moving object relative position information determination module configured to determine the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image.
  • An embodiment of the present invention also provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the above-mentioned method for identifying and positioning objects within a large range of video.
  • Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the above-mentioned method for identifying and positioning objects within a large range of video are implemented.
  • A video image is acquired; the video image is identified to determine the position information of a moving object in the image; the state parameters of the camera device are obtained; and the position information of the moving object relative to the camera device is determined based on the state parameters and the position information of the moving object in the image. In this way, the user can easily obtain the position information and clearly grasp the situation of the video shooting area.
  • FIG. 1 is a flowchart (1) of a method for recognizing and locating objects within a large range of video in an embodiment of the present invention
  • FIG. 2 is a flowchart (2) of a method for recognizing and locating objects within a large range of video in an embodiment of the present invention
  • FIG. 3 is a flowchart (3) of a method for recognizing and locating objects within a large range of video in an embodiment of the present invention
  • FIG. 4 is a side view of a lens and a shooting range in an embodiment of the present invention.
  • FIG. 5 is a top view of a lens and a shooting range in an embodiment of the present invention.
  • FIG. 6 is a perspective view of a lens and a shooting range in an embodiment of the present invention.
  • FIG. 7 is a top view of an area where lenses are sequentially shot in an embodiment of the present invention.
  • FIG. 8 is a top view of an effective monitoring area in an embodiment of the present invention.
  • FIG. 9 is the image displayed on the screen when the program is running in an embodiment of the present invention;
  • FIG. 10 is a schematic diagram of a captured image containing the moving object P and its pixel position in an embodiment of the present invention;
  • FIG. 11 is a schematic diagram of a program running and solving process in an embodiment of the present invention;
  • FIG. 12 is a flowchart (4) of a method for identifying and locating objects within a large range of video in an embodiment of the present invention
  • FIG. 13 is a flowchart (5) of a method for recognizing and locating objects within a large range of video in an embodiment of the present invention
  • FIG. 14 is a structural block diagram (1) of a device for identifying and locating objects within a large range of video according to an embodiment of the present invention
  • FIG. 15 is a structural block diagram (2) of a device for recognizing and positioning objects within a large range of video according to an embodiment of the present invention;
  • FIG. 16 is a structural block diagram (3) of a device for recognizing and positioning objects in a large range of video according to an embodiment of the present invention.
  • FIG. 1 is a flowchart (1) of a method for identifying and positioning objects within a large range of video in an embodiment of the present invention. As shown in FIG. 1, the method includes:
  • Step 101: Acquire a video image;
  • Step 102: Identify the video image, and determine the position information (i.e., pixel position) of the moving object in the image;
  • Step 103: Acquire the state parameters of the camera device;
  • Step 104: Based on the state parameters and the position information of the moving object in the image, determine the position information of the moving object relative to the camera device.
  • The state parameters of the camera device include the height of the lens, the depression angle of the lens, the zoom factor of the lens (which determines the horizontal and vertical fields of view of the lens), and the facing direction of the lens.
  • For example, the flying height of the drone, that is, the height of the lens above the ground, can be set to 50 meters.
  • The H20T lens is set to 4x zoom, and the corresponding horizontal and vertical field angles of the lens are 8 degrees and 5.6 degrees, respectively.
  • A side view of the lens and the shooting range is shown in FIG. 4, a top view in FIG. 5, and a perspective view in FIG. 6.
  • A represents the position of the lens;
  • B represents the center of the shooting range of the lens.
  • FIGS. 4 to 6 also show the positional relationship between A and B.
  • FIG. 7 is a schematic diagram of each shooting area in a shooting cycle. The above time varies according to the machine parameters of different infrared lenses.
  • Step 104, determining the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image, specifically includes:
  • Step 1041: Match the corresponding coordinate database according to the height of the lens, the depression angle of the lens, and the zoom factor of the lens;
  • Step 1042: According to the position information of the moving object in the image, match the position of the moving object relative to the facing direction of the lens from the corresponding coordinate database;
  • Step 1043: Determine the position of the moving object relative to the camera device according to the facing direction of the lens and the matched position of the moving object relative to the facing direction of the lens.
  • The polar coordinate database is taken as an example for description below.
  • Through preliminary measurement, a polar coordinate database is obtained that maps each pixel point (that is, the position information) to the polar coordinates of the moving object relative to the facing direction of the lens.
  • The polar axis of this polar coordinate system is the positive direction of the Y axis, and the positive direction of the angle is clockwise.
  • The database contains, for each pixel point, the polar coordinate data of the moving object photographed at that pixel relative to the facing direction of the lens.
  • The azimuth angle, denoted θ, is defined as negative on the left side of the lens axis and positive on the right side; the distance is denoted L.
  • This set of databases is stored in the computer. That is, when a lens of the same specification is used with the same three parameters as in the measurement (lens height, lens depression angle, and zoom factor), the polar coordinate data of the moving object corresponding to each pixel relative to the facing direction of the lens is the same as that obtained during measurement.
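  • As an illustration of what such a database might look like in software (an assumption for exposition; the patent does not specify a storage format), the pre-measured data can be kept as nested mappings keyed by the three lens parameters:

```python
# Illustrative storage layout for the pre-measured polar coordinate databases.
# Outer key: (lens height in m, lens depression angle in deg, zoom factor).
# Inner key: pixel position (x, y); value: (distance L in m, angle theta in deg,
# negative to the left of the lens axis, positive to the right).
databases = {
    (50.0, 8.0, 4.0): {
        (-240, 156): (453.4, -3.0),   # example pixel/polar pairs taken from
        (50, 50):    (382.7,  0.6),   # the embodiments described below
        (80, 80):    (400.3,  1.0),
        # ... one entry per pixel, filled in by preliminary measurement
    },
}
```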
  • the height of the lens, the depression angle of the lens, and the zoom factor are sent back to the computer during the identification and positioning process.
  • The corresponding (L, θ) is obtained by matching in the corresponding polar coordinate database.
  • The computer obtains the facing direction α of the lens when the moving target is captured.
  • The azimuth angle β of the moving object relative to the lens is calculated as β = α + θ.
  • The polar coordinate position (L, β) of the moving object relative to the lens is thus obtained.
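  • Continuing the sketch above (the function name and data layout remain illustrative, not from the patent), steps 1041 to 1043 reduce to a dictionary lookup plus one addition:

```python
def relative_polar_position(databases, height_m, depression_deg, zoom,
                            pixel, facing_deg):
    db = databases[(height_m, depression_deg, zoom)]   # step 1041: pick database
    distance_m, theta_deg = db[pixel]                  # step 1042: pixel -> (L, theta)
    beta_deg = facing_deg + theta_deg                  # step 1043: beta = alpha + theta
    return distance_m, beta_deg

# First worked example below: pixel (-240, 156) with facing direction -16 deg
print(relative_polar_position(databases, 50.0, 8.0, 4.0, (-240, 156), -16.0))
# -> (453.4, -19.0)
```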
  • the method further includes:
  • Step 105: Obtain the latitude and longitude coordinates of the camera device;
  • Step 106: Determine the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the camera device and the position information of the moving object relative to the camera device.
  • Step 106, determining the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the camera device and the position information of the moving object relative to the camera device, includes:
  • Step 1061: Convert the polar coordinate position of the moving object relative to the camera device into the rectangular coordinate position of the moving object relative to the camera device;
  • Step 1062: Determine the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the camera device and the rectangular coordinate position of the moving object relative to the camera device.
  • the computer obtains the latitude and longitude coordinates (A, B) returned by the camera device.
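  • The patent does not spell out the conversion formulas; a common small-offset (equirectangular) approximation, shown here as an assumed sketch, converts the polar position (L, β) into east/north offsets and then into latitude and longitude:

```python
import math

M_PER_DEG_LAT = 111_320.0  # rough meters per degree on a spherical earth

def polar_to_lat_lon(distance_m, beta_deg, cam_lon_deg, cam_lat_deg):
    """Steps 1061-1062 as a sketch. beta is measured clockwise from the polar
    axis (the positive Y axis, i.e. north), so east = L*sin(beta) and
    north = L*cos(beta); the degree conversion is an assumed approximation."""
    east_m = distance_m * math.sin(math.radians(beta_deg))    # step 1061:
    north_m = distance_m * math.cos(math.radians(beta_deg))   # polar -> rectangular
    lon = cam_lon_deg + east_m / (M_PER_DEG_LAT * math.cos(math.radians(cam_lat_deg)))
    lat = cam_lat_deg + north_m / M_PER_DEG_LAT               # step 1062
    return lon, lat
```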
  • the method further includes:
  • Step 201: Obtain the latitude and longitude coordinates of the existing object;
  • Step 202: Compare the latitude and longitude coordinates of the moving object with the latitude and longitude coordinates of the existing object, and differentiate the moving objects based on the comparison result.
  • The existing object sends back its latitude and longitude coordinates, while the moving object is recognized by the camera and its latitude and longitude are calculated.
  • If the returned coordinates are consistent with the calculated coordinates, the existing object and the recognized moving object are the same moving object.
  • Step 202, differentiating the moving objects based on the comparison result, includes:
  • different labeling forms, which may be, for example, different colors, underlines in different formats, and the like;
  • if the latitude and longitude coordinates of the moving object are the same as those of an existing object, a first color is used to mark the position of the moving object on the display screen;
  • otherwise, a second color is used to mark the position of the moving object on the display screen.
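  • A small sketch of this comparison (the tolerance value and names are illustrative assumptions; the patent only says the coordinates are compared, and the red/blue colors are taken from the embodiment below):

```python
MATCH_TOL_DEG = 1e-4  # ~10 m; how close two coordinates must be to "match"

def marker_color(obj_lon, obj_lat, known_positions):
    """Return the first color for a moving object whose latitude and longitude
    match a known existing object, and the second color otherwise."""
    for lon, lat in known_positions:
        if abs(obj_lon - lon) <= MATCH_TOL_DEG and abs(obj_lat - lat) <= MATCH_TOL_DEG:
            return "red"    # first color: matches an existing object
    return "blue"           # second color: unmatched moving object
```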
  • the flying height of the drone is 50 meters above the ground.
  • the H20T lens is set to 4x zoom, and the corresponding horizontal and vertical field angles of the lens are 8 degrees and 5.6 degrees, respectively.
  • A stands for the lens position
  • B stands for the center of the shot range of the lens.
  • the positional relationship between the two is shown in Figure 4, Figure 5, and Figure 6.
  • the range taken is a trapezoid with an upper base of 37 meters, a lower base of 77 meters and a height of 287 meters.
  • the horizontal distance from the center of the ground shot by the lens to the lens is 356 meters, of which the farthest shooting distance is 549 meters, and the closest shooting distance is 262 meters.
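  • These dimensions follow from simple trigonometry under a flat-ground, pinhole-lens assumption; the following sketch (an added check, not part of the patent) reproduces them from the state parameters:

```python
import math

def ground_footprint(height_m, depression_deg, hfov_deg, vfov_deg):
    """Reconstruct the shooting range from lens height, depression angle,
    and fields of view, assuming flat ground."""
    near = height_m / math.tan(math.radians(depression_deg + vfov_deg / 2))
    far = height_m / math.tan(math.radians(depression_deg - vfov_deg / 2))
    center = height_m / math.tan(math.radians(depression_deg))
    # widths of the near and far edges of the trapezoid
    near_w = 2 * (height_m / math.sin(math.radians(depression_deg + vfov_deg / 2))) \
               * math.tan(math.radians(hfov_deg / 2))
    far_w = 2 * (height_m / math.sin(math.radians(depression_deg - vfov_deg / 2))) \
              * math.tan(math.radians(hfov_deg / 2))
    return near, far, center, near_w, far_w

# 50 m height, 8 deg depression, 8 x 5.6 deg fields of view give roughly
# near 262 m, far 549 m, center 356 m, edge widths 37 m and 77 m, so the
# trapezoid is 549 - 262 = 287 m deep, matching the figures above.
print(ground_footprint(50, 8, 8, 5.6))
```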
  • FIG. 7 is a schematic diagram of each shooting area in a shooting cycle. The whole video is transmitted to the computer in real time, and the present invention processes the video.
  • The user enters the following parameters: the drone flight height, that is, the lens height, of 50 meters; the drone position, that is, the latitude and longitude of lens A, (XXX.5000000E, XXX.5000000N); the gimbal pitch angle of 8 degrees below horizontal; and the zoom factor of 4x.
  • the direction the camera is facing is transmitted back by the drone in real time.
  • The schematic diagram of each shooting area shown in FIG. 8 can thus be obtained, wherein MN is the border, with the outside to the north and the inside to the south.
  • To pass through the shaded area, a target must travel at least 287 meters. If a person tries to cross the border from north to south at a speed of 6 km/h, that is, 100 meters per minute, the crossing takes nearly 3 minutes, so the person appears in the shooting area at least twice during one large cycle. That is to say, one drone and one computer using the present invention can monitor a 936-meter stretch of the national border.
  • Step (1): When the lens rotates to the 10th position, a moving object P appears in the captured image; the moving object P is identified, and the pixel position of P in the image is (-240PX, +156PX), as shown in FIG. 10.
  • Step (2): The lens height of 50 meters, the lens depression angle of 8 degrees, and the zoom factor of 4x are selected.
  • The database whose three parameters correspond to a lens height of 50 meters, a lens depression angle of 8 degrees, and a zoom factor of 4x is matched. Matching in this database yields the polar coordinate position (453.4m, -3.0°) of the moving object P relative to the facing direction of the lens.
  • Step (3): The facing direction of -16° when the lens captured point P is retrieved, and the polar coordinate position (453.4m, -19.0°) of the moving object P relative to the lens is calculated.
  • Step (4): The latitude and longitude coordinates of the lens are retrieved, and the latitude and longitude of the moving object P is calculated as (XXX.4980943E, XXX.5038578N).
  • Step (5): The position and position parameters of P are displayed on the user display screen, as shown in FIG. 9.
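  • As a sanity check on this example (an addition for exposition, since the integer degrees of the coordinates are masked), the northward component of the polar position can be compared against the published latitude offset, which does not depend on the masked digits:

```python
import math

# Northward component of P's polar position (453.4 m at -19 deg from north):
north_m = 453.4 * math.cos(math.radians(-19.0))   # ~428.7 m
# Published latitude offset: XXX.5038578N - XXX.5000000N = 0.0038578 deg,
# which at roughly 111,320 m per degree of latitude is:
print(north_m, 0.0038578 * 111_320)               # ~428.7 m vs ~429.4 m
```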
  • the flying height of the drone is 50 meters above the ground.
  • the H20T lens is set to 4x zoom, and the corresponding horizontal and vertical field angles of the lens are 8 degrees and 5.6 degrees, respectively.
  • A represents the position of the lens
  • B represents the center of the shooting range of the lens.
  • the positional relationship between the two is shown in Figure 4, Figure 5, and Figure 6.
  • the range taken is a trapezoid with an upper base of 37 meters, a lower base of 77 meters and a height of 287 meters.
  • the horizontal distance from the center of the ground shot by the lens to the lens is 356 meters, of which the farthest shooting distance is 549 meters, and the closest shooting distance is 262 meters.
  • FIG. 7 is a schematic diagram of each shooting area in a shooting cycle. The whole video is transmitted to the computer in real time, and the present invention processes the video.
  • The user enters the following parameters: the drone flight height, that is, the lens height, of 50 meters; the drone position, that is, the latitude and longitude of the lens, (XXX.5000000E, XXX.5000000N); the gimbal pitch angle of 8 degrees below horizontal; and the zoom factor of 4x. The direction the camera is facing is transmitted back by the drone in real time.
  • Our personnel wear receivers of the satellite positioning system and send their positions (that is, known latitude and longitude coordinates) to the computer in real time.
  • If the longitude and latitude of a moving target identified by the computer are consistent with the longitude and latitude sent back, a red border is displayed on the screen around the moving target.
  • Otherwise, the moving target is displayed with a blue border on the screen.
  • Step (1): When the lens rotates to the 14th position, two moving objects appear in the captured image and are identified; their pixel positions in the image are:
  • C (50PX, 50PX)
  • E (80PX, 80PX)
  • Step (2): The lens height of 50 meters, the lens depression angle of 8 degrees, and the zoom factor of 4x are selected.
  • The database whose three parameters correspond to a lens height of 50 meters, a lens depression angle of 8 degrees, and a zoom factor of 4x is matched.
  • The polar coordinate position (382.7m, 0.6°) of the moving object C relative to the facing direction of the lens is obtained by matching;
  • the polar coordinate position (400.3m, 1.0°) of the moving object E relative to the facing direction of the lens is obtained by matching.
  • Step (3): The facing direction of 16.0° when the lens captured the moving object C is retrieved, and the polar coordinate position (382.7m, 16.6°) of the moving object C relative to the lens is calculated.
  • The facing direction of 24.0° when the lens captured the moving object E is retrieved, and the polar coordinate position (400.3m, 25.0°) of the moving object E relative to the lens is calculated.
  • Step (4): The latitude and longitude coordinates of the lens are retrieved, and the latitudes and longitudes of the moving objects are calculated as C (XXX.5014125E, XXX.5032996N) and E (XXX.5021824E, XXX.5032643N), respectively.
  • Step (5): The latitude and longitude coordinates returned by the receivers of our personnel's satellite positioning system are compared to distinguish the different moving objects.
  • The moving object C matches the latitude and longitude returned by our personnel, and the object E does not.
  • Therefore, the moving object C is our personnel.
  • Step (6): On the user display screen, the position and position parameters of C are displayed in red, and the position and position parameters of E are displayed in blue, as shown in FIG. 9.
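  • The same latitude-offset sanity check used in the first example (again an addition, not from the patent) also holds for C and E:

```python
import math

# Compare each object's northward component against its published latitude
# offset (in degrees), at roughly 111,320 m per degree of latitude.
for name, dist_m, beta_deg, lat_off_deg in [("C", 382.7, 16.6, 0.0032996),
                                            ("E", 400.3, 25.0, 0.0032643)]:
    north_m = dist_m * math.cos(math.radians(beta_deg))
    print(name, round(north_m, 1), round(lat_off_deg * 111_320, 1))
# C: ~366.7 m vs ~367.3 m; E: ~362.8 m vs ~363.4 m -- consistent.
```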
  • In the above example the moving object is a single person; in other examples, detection over a larger range, where the moving object may also be a vehicle or the like, also falls within the protection scope of the present invention.
  • Embodiments of the present invention also provide a device for recognizing and locating objects in a large range of video, as described in the following embodiments. Since the principle of the device for solving the problem is similar to the method for recognizing and locating objects in a large range of videos, the implementation of the device can refer to the implementation of the method for recognizing and locating objects in a large range of videos, and the repetition will not be repeated.
  • FIG. 14 is a structural block diagram (1) of a device for identifying and positioning objects within a large range of video in an embodiment of the present invention. As shown in FIG. 14, the device includes:
  • a video image acquisition module 02 for acquiring a video image;
  • a video image recognition module 04 for recognizing the video image and determining the position information of the moving object in the image;
  • a state parameter acquisition module 06 for acquiring the state parameters of the camera device; and
  • a moving object relative position information determination module 08 configured to determine the position information of the moving object relative to the camera device based on the state parameters and the position information of the moving object in the image.
  • the device further includes:
  • the latitude and longitude coordinate acquisition module 10 of the camera device is used to obtain the longitude and latitude coordinates of the camera device;
  • the moving object latitude and longitude coordinate determination module 12 is configured to determine the longitude and latitude coordinates of the moving object based on the longitude and latitude coordinates of the camera device and the position information of the moving object relative to the camera device.
  • the state parameters of the imaging device include the height of the lens, the depression angle of the lens, the zoom factor of the lens, and the facing direction of the lens.
  • the relative position information determination module 08 of the moving object is specifically used for:
  • the position of the moving object relative to the facing direction of the lens is matched from the corresponding coordinate database
  • the position of the moving object relative to the imaging device is determined.
  • The moving object latitude and longitude coordinate determination module 12 is specifically configured to:
  • convert the polar coordinate position of the moving object relative to the camera device into a rectangular coordinate position; and
  • determine the latitude and longitude coordinates of the moving object based on the latitude and longitude coordinates of the camera device and the rectangular coordinate position of the moving object relative to the camera device.
  • the device further includes:
  • the latitude and longitude coordinate obtaining module 14 of the existing object is used to obtain the latitude and longitude coordinates of the existing object;
  • the comparison and differentiation module 16 is configured to compare the latitude and longitude coordinates of the moving object with the latitude and longitude coordinates of the existing object, and differentiate the moving object based on the comparison result.
  • The comparison and differentiation module 16 is specifically configured to:
  • if the latitude and longitude coordinates of the moving object are the same as the latitude and longitude coordinates of an existing object, use a first color to mark the position of the moving object on the display screen;
  • otherwise, use a second color to mark the position of the moving object on the display screen.
  • An embodiment of the present invention also provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the above-mentioned method for identifying and positioning objects within a large range of video.
  • Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the above-mentioned method for identifying and positioning objects within a large range of video are implemented.
  • A video image is acquired; the video image is identified to determine the position information of a moving object in the image; the state parameters of the camera device are obtained; the position information of the moving object relative to the camera device is determined based on the state parameters and the position information of the moving object in the image; the latitude and longitude coordinates of the camera device are obtained; and the latitude and longitude coordinates of the moving object are determined based on the latitude and longitude coordinates of the camera device and the position information of the moving object relative to the camera device. In this way, the user can easily obtain the position information and clearly grasp the situation of the video shooting area.
  • embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and apparatus for identifying and positioning an object within a large range in a video. The method comprises the following steps: acquiring a video image; identifying the video image so as to determine position information of a moving object in the image; acquiring state parameters of a camera device; and, on the basis of the state parameters and the position information of the moving object in the image, determining position information of the moving object relative to the camera device. According to the present invention, a moving part identified in a video acquired by a lens is processed and its position relative to the lens is solved, so that a user can clearly understand the situation of the video recording area.
PCT/CN2022/088672 2021-04-25 2022-04-24 Method and apparatus for identifying and positioning an object within a large range in a video WO2022228321A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110446325.2 2021-04-25
CN202110446325.2A CN113518179A (zh) 2021-04-25 2021-04-25 Method and device for identifying and positioning objects within a large range of video

Publications (1)

Publication Number Publication Date
WO2022228321A1 true WO2022228321A1 (fr) 2022-11-03

Family

ID=78062782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/088672 WO2022228321A1 (fr) 2021-04-25 2022-04-24 Method and apparatus for identifying and positioning an object within a large range in a video

Country Status (2)

Country Link
CN (1) CN113518179A (fr)
WO (1) WO2022228321A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113518179A (zh) * 2021-04-25 2021-10-19 何佳林 Method and device for identifying and positioning objects within a large range of video

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104034316A (zh) * 2013-03-06 2014-09-10 深圳先进技术研究院 A spatial positioning method based on video analysis
CN109782786A (zh) * 2019-02-12 2019-05-21 上海戴世智能科技有限公司 A positioning method based on image processing, and an unmanned aerial vehicle
CN111385467A (zh) * 2019-10-25 2020-07-07 视云融聚(广州)科技有限公司 A system and method for calculating the latitude and longitude of any position in a camera video image
KR102166784B1 (ko) * 2020-05-22 2020-10-16 주식회사 서경산업 CCTV enforcement management system for bicycle-only lanes
CN111953937A (zh) * 2020-07-31 2020-11-17 云洲(盐城)创新科技有限公司 Lifesaving system and lifesaving method for persons who have fallen into water
CN113223087A (zh) * 2021-07-08 2021-08-06 武大吉奥信息技术有限公司 A method and device for locating the geographic coordinates of a target object based on video surveillance
CN113518179A (zh) * 2021-04-25 2021-10-19 何佳林 Method and device for identifying and positioning objects within a large range of video

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107493457A (zh) * 2017-09-06 2017-12-19 天津飞眼无人机科技有限公司 An unmanned aerial vehicle surveillance system
CN107749957A (zh) * 2017-11-07 2018-03-02 高域(北京)智能科技研究院有限公司 UAV aerial image display system and method
CN108981670B (zh) * 2018-09-07 2021-05-11 成都川江信息技术有限公司 A method for automatically locating the coordinates of scenes in real-time video
CN109558809B (zh) * 2018-11-12 2021-02-23 沈阳世纪高通科技有限公司 An image processing method and device
CN111402324B (zh) * 2019-01-02 2023-08-18 中国移动通信有限公司研究院 A target measurement method, electronic device, and computer storage medium
CN110806198A (zh) * 2019-10-25 2020-02-18 北京前沿探索深空科技有限公司 Target positioning method and device based on remote sensing images, and controller and medium
CN111046762A (zh) * 2019-11-29 2020-04-21 腾讯科技(深圳)有限公司 An object positioning method and device, electronic device, and storage medium
CN111354046A (zh) * 2020-03-30 2020-06-30 北京芯龙德大数据科技有限公司 Indoor camera positioning method and positioning system
CN111652072A (zh) * 2020-05-08 2020-09-11 北京嘀嘀无限科技发展有限公司 Trajectory acquisition method, trajectory acquisition device, storage medium, and electronic device


Also Published As

Publication number Publication date
CN113518179A (zh) 2021-10-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22794792; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22794792; Country of ref document: EP; Kind code of ref document: A1)