WO2023155357A1 - Aiming method, device, terminal and storage medium - Google Patents

Aiming method, device, terminal and storage medium

Info

Publication number
WO2023155357A1
WO2023155357A1 (PCT/CN2022/100995; CN2022100995W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
image
impact point
images
motion characteristic
Prior art date
Application number
PCT/CN2022/100995
Other languages
English (en)
French (fr)
Inventor
黄立
黄晟
周宇
方磊
严倩倩
Original Assignee
武汉高德智感科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 武汉高德智感科技有限公司 filed Critical 武汉高德智感科技有限公司
Publication of WO2023155357A1 publication Critical patent/WO2023155357A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41WEAPONS
    • F41GWEAPON SIGHTS; AIMING
    • F41G1/00Sighting devices
    • F41G1/46Sighting devices for particular applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • the invention relates to the technical field of automatic aiming, in particular to an aiming method, device, terminal and storage medium.
  • the aiming device is an indispensable and important part of the gun.
  • the performance of the aiming device directly affects the shooting accuracy.
  • at present, however, the shooter searches for the target by eye and fires only after calculating corrections from a ballistic table.
  • the current aiming approach demands a high level of skill in tracking and searching for targets in different environments, and different weapons are fitted with different types of scopes, so the shooter must spend considerable time and money on matching training, which is inefficient and yields poor results.
  • the present invention proposes an aiming method, device, terminal and storage medium to realize automatic aiming and solve the problems in the prior art.
  • the present invention proposes the following specific embodiments:
  • the embodiment of the present invention proposes a method for aiming, which is applied to guns, and the method includes:
  • acquiring real-time images, the images being obtained by photographing the shooting area; processing each of the images to determine the target in the image and the position of the target's centroid; and determining motion characteristic data of the target based on the centroid positions of the target in a plurality of the images within a preset time period;
  • the initial impact point is corrected based on the motion characteristic data to obtain a final impact point, and shooting guidance is performed based on the final impact point.
  • the processing each of the images to determine the target in the images includes:
  • compressing the dynamic range of each of the images based on the image features of the target; segmenting the compressed images with a preset adaptive threshold; performing morphological filtering on the segmented images to determine suspected target areas in the images; and identifying the suspected target areas with a deep neural network to determine the target in the images.
  • the image features of the target are obtained based on multi-angle recognition of the target selected by the user.
  • the deep neural network is obtained by training on more than a certain number of sample images; each sample image carries a label, the label being either a first label indicating that the target is present or a second label indicating that the target is absent;
  • the method also includes:
  • Multiple deep neural networks are pre-trained, and different deep neural networks are used to identify different targets.
  • the initial impact point is calculated from the centroid position of the target in the current image, current environment data, gun attribute data, the muzzle pose and a ballistic table; correcting the initial impact point based on the motion characteristic data to obtain the final impact point includes:
  • determining the moving direction and moving distance of the target based on the motion characteristic data and the bullet's flight time; and correcting the initial impact point based on the moving direction and moving distance of the target to obtain the final impact point.
  • the shooting guidance based on the final impact point includes:
  • displaying the final impact point in the current image; determining the current impact point from current environment data, gun attribute data, the muzzle pose and the ballistic table and displaying it in the current image; and generating a guidance prompt based on the difference between the final impact point and the current impact point, so as to guide the user to adjust the muzzle pose until the current impact point coincides with the final impact point.
  • the embodiment of the present invention also proposes an aiming device, which is applied to guns, and the device includes:
  • the acquisition module is used to acquire real-time images, the images being obtained by photographing the shooting area;
  • a position module configured to process each of the images, determine the target in the image, and determine the position of the center of mass of the target;
  • a motion characteristic module configured to determine motion characteristic data of the target based on the centroid positions of the target in a plurality of images of a preset time period
  • a correction module configured to correct the initial impact point based on the motion characteristic data to obtain a final impact point, and perform shooting guidance based on the final impact point.
  • the location module is used for:
  • compress the dynamic range of each of the images based on the image features of the target; segment the compressed images with a preset adaptive threshold; perform morphological filtering on the segmented images to determine suspected target areas in the images; and identify the suspected target areas with a deep neural network to determine the target in the images.
  • An embodiment of the present invention proposes a terminal, including a memory and a processor, where a computer program is stored in the memory; and the above targeting method is implemented when the processor executes the computer program.
  • An embodiment of the present invention provides a storage medium, in which a computer program is stored; when the computer program is executed, the above targeting method is realized.
  • the embodiment of the present invention proposes a method, device, terminal and storage medium for aiming, which are applied to firearms.
  • the method includes: acquiring real-time images obtained by photographing the shooting area; processing each image to determine the target in the image and the position of the target's centroid; determining motion characteristic data of the target from the centroid positions of the target in a plurality of images within a preset time period; and correcting an initial impact point based on the motion characteristic data to obtain a final impact point, with shooting guidance performed based on the final impact point.
  • in this solution the target can be identified and its motion characteristic data determined; the target can be locked in real time based on that data, its behaviour can be predicted, and the user can be guided to adjust the trajectory based on the prediction, so as to obtain the best firing plan and aiming point and reduce the risk of a missed shot.
  • FIG. 1 shows a schematic flowchart of a targeting method proposed by an embodiment of the present invention
  • Fig. 2 shows a schematic diagram used in the description of an aiming method proposed by an embodiment of the present invention;
  • Fig. 3 shows a schematic structural diagram of an aiming device proposed by an embodiment of the present invention
  • Fig. 4 shows another schematic structural diagram of an aiming device proposed by an embodiment of the present invention.
  • 201 - acquisition module; 202 - position module; 203 - motion characteristic module; 204 - correction module; 205 - training module.
  • Embodiment 1 of the present invention discloses a method for aiming, which is applied to guns, as shown in Figure 1, the method includes the following steps:
  • Step S101, acquiring a real-time image, which is obtained by photographing the shooting area;
  • specifically, this solution is applied to a gun on which an image sensor, such as a video camera or a still camera, can be mounted; the image sensor continuously photographs the shooting area and the area in front of the muzzle, so that images are obtained continuously.
  • Step S102 processing each of the images, determining the target in the image, and determining the position of the center of mass of the target;
  • processing each of the images described in step S102 to determine the target in the image includes:
  • based on the image features of the target, the dynamic range of each of the images is compressed;
  • the compressed images are segmented with a preset adaptive threshold;
  • morphological filtering is performed on the segmented images to determine suspected target areas in the images;
  • the suspected target areas are identified with a deep neural network to determine the target in the image.
  • specifically, after an image is acquired it needs to be processed to identify the target in it; the identification may first pre-process the image, compressing the dynamic range of each image based on the image features of the target, which suppresses background information and highlights the target. Further, the image features of the target are obtained by multi-angle recognition of the target selected by the user.
  • the target may be selected by the user; for example, the user may select a goat as the target, as shown in FIG. 2. The target may also be any other object and can be user-defined.
  • after dynamic range compression, the compressed image may be segmented with an adaptive threshold, and morphological filtering may be applied to the segmented image to extract suspected target areas and narrow the scope of recognition; a trained neural network model then identifies the suspected target areas and determines the target.
  • further, the deep neural network is obtained by training on more than a certain number of sample images; each sample image carries a label, the label being either a first label indicating that the target is present or a second label indicating that the target is absent;
  • for example, there may be 100,000 sample images; in general, the more sample images, the better.
  • each sample image carries a label indicating whether the target is present in that image.
  • this method also includes:
  • a plurality of deep neural networks are pre-trained, and different deep neural networks are used to identify different targets.
  • training can be performed based on commonly used aiming targets, so that different deep neural networks can be obtained to meet the aiming needs of different targets.
  • the center of mass position of the target can be calculated by the centroid method.
  • Step S103 determining the motion characteristic data of the target based on the positions of the center of mass of the target in the plurality of images in a preset time period;
  • under the control described above, images are captured continuously, so the latest images are obtained continuously.
  • by analysing the centroid positions of the target in a plurality of images within the preset time period, the motion characteristic data of the target can be determined; for example, the motion characteristic may be moving to the left at a constant speed of 2 km/h.
  • as for the preset time period, if the current time is 9:00, the preset time period may be from 8:50 to 9:00.
  • Step S104, correcting the initial impact point based on the motion characteristic data to obtain a final impact point, and conducting shooting guidance based on the final impact point; the initial impact point is calculated from the centroid position of the target in the current image, current environment data, gun attribute data, the muzzle pose and a ballistic table.
  • the correction of the initial impact point based on the motion characteristic data in step S104 to obtain the final impact point includes:
  • determining the moving direction and moving distance of the target based on the motion characteristic data and the bullet's flight time; and correcting the initial impact point based on the moving direction and moving distance of the target to obtain the final impact point.
  • first, as shown in Fig. 2, the initial impact point is determined from the centroid position of the target in the current image (from which the distance from the target to the muzzle can be derived), the current environment data (such as wind speed and temperature), the gun attribute data (such as the gun type, chamber pressure and muzzle velocity) and the ballistic table; the ballistic table records the ballistic elements of the bullet under different firing conditions and corresponds to the specific gun, so combining it with the environment data and the target-to-muzzle distance yields the impact point, i.e. the initial impact point, shown as point A1 in Fig. 2.
  • since the target may be moving and the bullet needs a certain amount of time to reach it, firing at the initial impact point may be inaccurate.
  • in this case, the motion of the target is predicted from the motion characteristic data, and the initial impact point is corrected based on the prediction to obtain the final impact point, shown as point A2 in Fig. 2.
  • the shooting guidance based on the final impact point in step S104 includes:
  • displaying the final impact point in the current image; determining the current impact point from current environment data, gun attribute data, the muzzle pose and the ballistic table and displaying it in the current image; and generating a guidance prompt based on the difference between the final impact point and the current impact point, so as to guide the user to adjust the muzzle pose until the current impact point coincides with the final impact point.
  • specifically, the muzzle pose can be adjusted: if the current impact point is A1, the corresponding muzzle pose is S1, and the current final impact point is A2, the user needs to adjust the muzzle pose to S2. To make aiming easier, the final impact point (which is updated in real time) can be marked on the gun's image or aiming interface, for example as a bright red dot, while the current impact point corresponding to the current muzzle pose can be shown with a cursor; when the user adjusts the muzzle pose, the movement of the cursor provides feedback on whether the adjustment is correct, so the current impact point can quickly be brought to coincide with the final impact point and aiming is completed.
  • Embodiment 2 of the present invention also discloses an aiming device, which is applied to guns, as shown in Figure 3, the device includes:
  • an acquisition module 201 configured to acquire a real-time image, which is obtained by photographing the shooting area;
  • a position module 202 configured to process each of the images, determine the target in the image, and determine the position of the center of mass of the target;
  • a motion characteristic module 203 configured to determine the motion characteristic data of the target based on the centroid positions of the target in a plurality of the images in a preset time period;
  • the correction module 204 is configured to correct the initial impact point based on the motion characteristic data to obtain a final impact point, and perform shooting guidance based on the final impact point.
  • the location module 202 is used for:
  • compress the dynamic range of each of the images based on the image features of the target; segment the compressed images with a preset adaptive threshold; perform morphological filtering on the segmented images to determine suspected target areas in the images; and identify the suspected target areas with a deep neural network to determine the target in the images.
  • the image features of the target are obtained based on multi-angle recognition of the target selected by the user.
  • the deep neural network is obtained by training on more than a certain number of sample images; each sample image carries a label, the label being either a first label indicating that the target is present or a second label indicating that the target is absent;
  • as shown in Fig. 4, the device further includes a training module 205 configured to pre-train multiple deep neural networks, different deep neural networks being used to identify different targets.
  • the initial impact point is calculated from the centroid position of the target in the current image, current environment data, gun attribute data, the muzzle pose and a ballistic table; the correction of the initial impact point by the correction module 204 based on the motion characteristic data to obtain the final impact point includes:
  • determining the moving direction and moving distance of the target based on the motion characteristic data and the bullet's flight time; and correcting the initial impact point based on the moving direction and moving distance of the target to obtain the final impact point.
  • correction module 204 performs shooting guidance based on the final impact point, including:
  • displaying the final impact point in the current image; determining the current impact point from current environment data, gun attribute data, the muzzle pose and the ballistic table and displaying it in the current image; and generating a guidance prompt based on the difference between the final impact point and the current impact point, so as to guide the user to adjust the muzzle pose until the current impact point coincides with the final impact point.
  • Embodiment 3 of the present invention also discloses a terminal, including a memory and a processor, wherein a computer program is stored in the memory; and the targeting method described in Embodiment 1 is implemented when the processor executes the computer program.
  • the terminal in this solution may be an aiming device, and further may be an aiming device equipped with an image sensor and capable of image processing.
  • the terminal in this solution may also be a gun equipped with the aiming device.
  • Embodiment 4 of the present invention also discloses a storage medium, in which a computer program is stored; when the computer program is executed, the aiming method described in Embodiment 1 is implemented.
  • the embodiment of the present invention proposes a method, device, terminal and storage medium for aiming, which are applied to firearms.
  • the method includes: acquiring real-time images obtained by photographing the shooting area; processing each image to determine the target in the image and the position of the target's centroid; determining motion characteristic data of the target from the centroid positions of the target in a plurality of images within a preset time period; and correcting an initial impact point based on the motion characteristic data to obtain a final impact point, with shooting guidance performed based on the final impact point.
  • in this solution the target can be identified and its motion characteristic data determined; the target can be locked in real time based on that data, its behaviour can be predicted, and the user can be guided to adjust the trajectory based on the prediction, so as to obtain the best firing plan and aiming point and reduce the risk of a missed shot.
  • each block in a flowchart or block diagram may represent a module, program segment, or part of code that contains one or more executable instructions for implementing the specified logical function.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • each functional module or unit in each embodiment of the present invention can be integrated together to form an independent part, or each module can exist independently, or two or more modules can be integrated to form an independent part.
  • if the functions are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the essence of the technical solution of the present invention, or the part that contributes to the prior art, or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a smartphone, a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)

Abstract

An embodiment of the present invention discloses an aiming method, device, terminal and storage medium, applied to a gun. The method includes: acquiring real-time images; processing each image to determine the target in the image and the position of the target's centroid; determining motion characteristic data of the target from the centroid positions of the target in a plurality of images within a preset time period; and correcting an initial impact point based on the motion characteristic data to obtain a final impact point, with shooting guidance performed based on the final impact point. In this solution the target can be identified and its motion characteristic data determined; the target can be locked in real time based on that data, its behaviour can be predicted, and the user can be guided to adjust the trajectory based on the prediction, so as to obtain the best firing plan and aiming point and reduce the risk of a missed shot.

Description

Aiming method, device, terminal and storage medium
Technical Field
The present invention relates to the technical field of automatic aiming, and in particular to an aiming method, device, terminal and storage medium.
Background Art
The aiming device is an indispensable part of a gun, and its performance directly affects shooting accuracy. At present, however, the shooter searches for the target by eye and fires only after calculating corrections from a ballistic table. The current aiming approach demands a high level of skill in tracking and searching for targets in different environments, and different weapons are fitted with different types of scopes, so the shooter must spend considerable time and money on matching training, which is inefficient and yields poor results.
A better solution to these problems in the prior art is therefore needed.
Summary of the Invention
In view of this, the present invention proposes an aiming method, device, terminal and storage medium that realize automatic aiming, so as to solve the problems in the prior art.
Specifically, the present invention proposes the following specific embodiments:
An embodiment of the present invention proposes an aiming method applied to a gun, the method including:
acquiring real-time images, the images being obtained by photographing the shooting area;
processing each of the images to determine the target in the image and the position of the target's centroid;
determining motion characteristic data of the target based on the centroid positions of the target in a plurality of the images within a preset time period;
correcting an initial impact point based on the motion characteristic data to obtain a final impact point, and performing shooting guidance based on the final impact point.
In a specific embodiment, processing each of the images to determine the target in the image includes:
compressing the dynamic range of each of the images based on the image features of the target;
segmenting the compressed images with a preset adaptive threshold;
performing morphological filtering on the segmented images to determine suspected target areas in the images;
identifying the suspected target areas with a deep neural network to determine the target in the images.
In a specific embodiment, the image features of the target are obtained by multi-angle recognition of the target selected by the user.
In a specific embodiment, the deep neural network is obtained by training on more than a certain number of sample images; each sample image carries a label, the label being either a first label indicating that the target is present or a second label indicating that the target is absent;
the method further includes:
pre-training a plurality of deep neural networks, different deep neural networks being used to identify different targets.
In a specific embodiment, the initial impact point is calculated from the centroid position of the target in the current image, current environment data, gun attribute data, the muzzle pose and a ballistic table; correcting the initial impact point based on the motion characteristic data to obtain the final impact point includes:
determining the moving direction and moving distance of the target based on the motion characteristic data and the bullet's flight time;
correcting the initial impact point based on the moving direction and moving distance of the target to obtain the final impact point.
In a specific embodiment, performing shooting guidance based on the final impact point includes:
displaying the final impact point in the current image;
determining the current impact point from the current environment data, gun attribute data, muzzle pose and the ballistic table, and displaying it in the current image;
generating a guidance prompt based on the difference between the final impact point and the current impact point, so as to guide the user to adjust the muzzle pose until the current impact point coincides with the final impact point.
An embodiment of the present invention further proposes an aiming device applied to a gun, the device including:
an acquisition module configured to acquire real-time images, the images being obtained by photographing the shooting area;
a position module configured to process each of the images to determine the target in the image and the position of the target's centroid;
a motion characteristic module configured to determine motion characteristic data of the target based on the centroid positions of the target in a plurality of the images within a preset time period;
a correction module configured to correct an initial impact point based on the motion characteristic data to obtain a final impact point, and to perform shooting guidance based on the final impact point.
In a specific embodiment, the position module is configured to:
compress the dynamic range of each of the images based on the image features of the target;
segment the compressed images with a preset adaptive threshold;
perform morphological filtering on the segmented images to determine suspected target areas in the images;
identify the suspected target areas with a deep neural network to determine the target in the images.
An embodiment of the present invention proposes a terminal including a memory and a processor, the memory storing a computer program; the aiming method described above is implemented when the processor executes the computer program.
An embodiment of the present invention proposes a storage medium storing a computer program; the aiming method described above is implemented when the computer program is executed.
Thus, embodiments of the present invention propose an aiming method, device, terminal and storage medium applied to a gun. The method includes: acquiring real-time images obtained by photographing the shooting area; processing each image to determine the target in the image and the position of the target's centroid; determining motion characteristic data of the target from the centroid positions of the target in a plurality of images within a preset time period; and correcting an initial impact point based on the motion characteristic data to obtain a final impact point, with shooting guidance performed based on the final impact point. In this solution the target can be identified and its motion characteristic data determined; the target can be locked in real time based on that data, its behaviour can be predicted, and the user can be guided to adjust the trajectory based on the prediction, so as to obtain the best firing plan and aiming point and reduce the risk of a missed shot.
Brief Description of the Drawings
To explain the technical solution of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention and should therefore not be regarded as limiting its scope. In the drawings, similar components are given similar reference numerals.
Fig. 1 is a schematic flowchart of an aiming method proposed by an embodiment of the present invention;
Fig. 2 is a schematic diagram used in the description of an aiming method proposed by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an aiming device proposed by an embodiment of the present invention;
Fig. 4 is another schematic structural diagram of an aiming device proposed by an embodiment of the present invention.
Legend:
201 - acquisition module; 202 - position module; 203 - motion characteristic module; 204 - correction module; 205 - training module.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.
The components of the embodiments of the present invention generally described and shown in the drawings herein can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
In the following, the terms "include", "have" and their cognates, as used in the various embodiments of the present invention, are intended only to denote particular features, numbers, steps, operations, elements, components or combinations thereof, and should not be understood as excluding the existence of, or the possibility of adding, one or more other features, numbers, steps, operations, elements, components or combinations thereof.
In addition, the terms "first", "second", "third", etc. are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
Unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by a person of ordinary skill in the art to which the various embodiments of the present invention belong. Such terms (such as those defined in commonly used dictionaries) are to be interpreted as having the same meaning as their contextual meaning in the relevant technical field and are not to be interpreted in an idealized or overly formal sense unless clearly so defined in the various embodiments of the present invention.
Embodiment 1
Embodiment 1 of the present invention discloses an aiming method applied to a gun. As shown in Fig. 1, the method includes the following steps:
Step S101, acquiring a real-time image, the image being obtained by photographing the shooting area;
Specifically, this solution is applied to a gun on which an image sensor, such as a video camera or a still camera, can be mounted; the image sensor continuously photographs the shooting area and the area in front of the muzzle, so that images are obtained continuously.
Step S102, processing each of the images to determine the target in the image and the position of the target's centroid;
In a specific embodiment, processing each of the images in step S102 to determine the target in the image includes:
compressing the dynamic range of each of the images based on the image features of the target;
segmenting the compressed images with a preset adaptive threshold;
performing morphological filtering on the segmented images to determine suspected target areas in the images;
identifying the suspected target areas with a deep neural network to determine the target in the images.
Specifically, after an image is acquired it needs to be processed to identify the target in it. The identification may first pre-process the image: the dynamic range of each image is compressed based on the image features of the target, which suppresses background information and highlights the target. Further, the image features of the target are obtained by multi-angle recognition of the target selected by the user. The target may be selected by the user; for example, the user may select a goat as the target, as shown in Fig. 2. The target may also be any other object and can be user-defined.
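For illustration only, and not as part of the original disclosure, the sketch below shows one plausible way to implement such target-oriented dynamic range compression in Python with NumPy; the intensity window `target_lo`/`target_hi` is an assumed input derived from prior analysis of the selected target.

```python
import numpy as np

def compress_dynamic_range(img, target_lo, target_hi):
    """Map the intensity window expected to contain the target onto the full
    8-bit range; values outside the window are clipped, which suppresses
    background detail and highlights the target."""
    img = img.astype(np.float32)
    span = max(float(target_hi - target_lo), 1e-6)
    out = (np.clip(img, target_lo, target_hi) - target_lo) / span
    return (out * 255.0).astype(np.uint8)
```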
After dynamic range compression, the compressed image may be segmented with an adaptive threshold, and morphological filtering may be applied to the segmented image to extract suspected target areas and narrow the scope of recognition; a trained neural network model then identifies the suspected target areas and determines the target.
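Again purely as an illustrative sketch under assumed parameter values (block size, structuring-element size, minimum region area are not specified in the disclosure), the adaptive-threshold segmentation and morphological filtering described above could be written with OpenCV as follows:

```python
import cv2

def extract_candidate_regions(gray_u8, min_area=50):
    """Segment the compressed 8-bit image with an adaptive threshold, clean the
    mask with morphological opening and closing, and return bounding boxes of
    suspected target regions."""
    binary = cv2.adaptiveThreshold(
        gray_u8, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, blockSize=31, C=-5)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```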
Further, the deep neural network is obtained by training on more than a certain number of sample images; each sample image carries a label, the label being either a first label indicating that the target is present or a second label indicating that the target is absent;
Specifically, there may be, for example, 100,000 sample images; in general, the more sample images, the better. Each sample image carries a label indicating whether the target is present in that image.
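The disclosure does not specify a network architecture, only that a deep neural network is trained on labelled sample images. The following PyTorch sketch is a deliberately minimal, hypothetical present/absent classifier for candidate-region crops; every architectural and training choice here is an assumption.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class TargetClassifier(nn.Module):
    """Tiny CNN labelling a single-channel crop as target absent (0) / present (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train(model, dataset, epochs=10, lr=1e-3):
    """`dataset` yields (crop, label) pairs: a 1xHxW float tensor and an int label."""
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for crops, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(crops), labels)
            loss.backward()
            opt.step()
    return model
```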
In addition, since the target is user-defined and there may be more than one, the method further includes:
pre-training a plurality of deep neural networks, different deep neural networks being used to identify different targets.
Specifically, training can be carried out on commonly used aiming targets, so that different deep neural networks are obtained to meet the aiming needs of different targets.
After the target has been determined, the position of the target's centroid can be calculated by the centroid method.
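A minimal sketch of the centroid method, assuming the target has been reduced to a binary mask by the steps above (first-order moments divided by the zeroth-order moment):

```python
import numpy as np

def centroid(mask):
    """Centre of mass of a binary target mask, returned as (x, y) in pixels,
    or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```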
Step S103, determining motion characteristic data of the target based on the centroid positions of the target in a plurality of the images within a preset time period;
Specifically, under the control described above, images are captured continuously and the latest images are obtained continuously. By analysing the centroid positions of the target in a plurality of images within the preset time period, the motion characteristic data of the target can be determined; for example, the motion characteristic may be moving to the left at a constant speed of 2 km/h. As for the preset time period, if the current time is 9:00, the preset time period may be from 8:50 to 9:00.
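As an illustrative sketch only, the motion characteristic data could be estimated with a least-squares fit of the centroid track collected over the preset time window; the units (pixels per second here), the sampling scheme and the conversion from image motion to ground speed are assumptions not taken from the disclosure.

```python
import numpy as np

def estimate_motion(track):
    """Estimate target velocity from (timestamp_s, x, y) centroid samples.
    Returns (vx, vy, speed) in pixels per second via a linear least-squares fit."""
    t = np.array([s[0] for s in track], dtype=np.float64)
    x = np.array([s[1] for s in track], dtype=np.float64)
    y = np.array([s[2] for s in track], dtype=np.float64)
    vx = np.polyfit(t, x, 1)[0]   # slope of x over time
    vy = np.polyfit(t, y, 1)[0]   # slope of y over time
    return vx, vy, float(np.hypot(vx, vy))
```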
Step S104, correcting an initial impact point based on the motion characteristic data to obtain a final impact point, and performing shooting guidance based on the final impact point; the initial impact point is calculated from the centroid position of the target in the current image, current environment data, gun attribute data, the muzzle pose and a ballistic table.
In a specific embodiment, correcting the initial impact point based on the motion characteristic data in step S104 to obtain the final impact point includes:
determining the moving direction and moving distance of the target based on the motion characteristic data and the bullet's flight time;
correcting the initial impact point based on the moving direction and moving distance of the target to obtain the final impact point.
First, as shown in Fig. 2, the initial impact point is determined from the centroid position of the target in the current image (from which the distance from the target to the muzzle can be derived), the current environment data (such as wind speed and temperature), the gun attribute data (such as the gun type, chamber pressure and muzzle velocity) and the ballistic table. The ballistic table records the ballistic elements of the bullet under different firing conditions and corresponds to the specific gun; combining the ballistic table with the environment data and the target-to-muzzle distance yields the impact point, i.e. the initial impact point, shown as point A1 in Fig. 2.
However, since the target may be moving and the bullet needs a certain amount of time to reach it, firing at the initial impact point may be inaccurate. In this case, the motion of the target is predicted from the motion characteristic data, and the initial impact point is corrected based on the prediction to obtain the final impact point, shown as point A2 in Fig. 2.
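The following sketch illustrates this correction step under the simplifying assumption that the target velocity and the impact points are expressed in the same planar coordinates; the ballistic-table lookup that supplies the initial impact point and the bullet's time of flight is assumed to exist elsewhere and is not part of the disclosure.

```python
def correct_impact_point(initial_point, velocity, time_of_flight):
    """Shift the ballistic-table impact point by the distance the target is
    expected to travel while the bullet is in flight.

    initial_point  -- (x, y) impact point A1 from the ballistic table
    velocity       -- (vx, vy) estimated target velocity in the same coordinates
    time_of_flight -- bullet flight time in seconds from the ballistic table
    Returns the corrected final impact point A2 as (x, y).
    """
    x0, y0 = initial_point
    vx, vy = velocity
    return (x0 + vx * time_of_flight, y0 + vy * time_of_flight)
```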
In a specific embodiment, performing shooting guidance based on the final impact point in step S104 includes:
displaying the final impact point in the current image;
determining the current impact point from the current environment data, gun attribute data, muzzle pose and the ballistic table, and displaying it in the current image;
generating a guidance prompt based on the difference between the final impact point and the current impact point, so as to guide the user to adjust the muzzle pose until the current impact point coincides with the final impact point.
Specifically, as shown in Fig. 2, the muzzle pose can be adjusted: if the current impact point is A1, the corresponding muzzle pose is S1, and the current final impact point is A2, the user needs to adjust the muzzle pose to S2. To make aiming easier, the final impact point (which is updated in real time) can be marked on the gun's image or aiming interface, for example as a bright red dot, while the current impact point corresponding to the current muzzle pose can be shown with a cursor; when the user adjusts the muzzle pose, the movement of the cursor provides feedback on whether the adjustment is correct, so the current impact point can quickly be brought to coincide with the final impact point and aiming is completed.
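Purely as an illustration of how such a guidance prompt might be generated, with the tolerance, wording and image-coordinate convention (x to the right, y downwards) all assumed rather than specified:

```python
def guidance_prompt(current_point, final_point, tol=2.0):
    """Return a coarse adjustment hint from the offset between the impact point
    implied by the current muzzle pose and the desired final impact point."""
    dx = final_point[0] - current_point[0]
    dy = final_point[1] - current_point[1]
    if abs(dx) <= tol and abs(dy) <= tol:
        return "on target"
    parts = []
    if abs(dx) > tol:
        parts.append(f"{'right' if dx > 0 else 'left'} {abs(dx):.1f} px")
    if abs(dy) > tol:
        parts.append(f"{'down' if dy > 0 else 'up'} {abs(dy):.1f} px")
    return "adjust muzzle " + ", ".join(parts)
```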
Embodiment 2
In view of this, Embodiment 2 of the present invention further discloses an aiming device applied to a gun. As shown in Fig. 3, the device includes:
an acquisition module 201 configured to acquire real-time images, the images being obtained by photographing the shooting area;
a position module 202 configured to process each of the images to determine the target in the image and the position of the target's centroid;
a motion characteristic module 203 configured to determine motion characteristic data of the target based on the centroid positions of the target in a plurality of the images within a preset time period;
a correction module 204 configured to correct an initial impact point based on the motion characteristic data to obtain a final impact point, and to perform shooting guidance based on the final impact point.
Further, the position module 202 is configured to:
compress the dynamic range of each of the images based on the image features of the target;
segment the compressed images with a preset adaptive threshold;
perform morphological filtering on the segmented images to determine suspected target areas in the images;
identify the suspected target areas with a deep neural network to determine the target in the images.
In a specific embodiment, the image features of the target are obtained by multi-angle recognition of the target selected by the user.
In a specific embodiment, the deep neural network is obtained by training on more than a certain number of sample images; each sample image carries a label, the label being either a first label indicating that the target is present or a second label indicating that the target is absent;
As shown in Fig. 4, the device further includes a training module 205 configured to pre-train a plurality of deep neural networks, different deep neural networks being used to identify different targets.
In a specific embodiment, the initial impact point is calculated from the centroid position of the target in the current image, current environment data, gun attribute data, the muzzle pose and a ballistic table; the correction of the initial impact point by the correction module 204 based on the motion characteristic data to obtain the final impact point includes:
determining the moving direction and moving distance of the target based on the motion characteristic data and the bullet's flight time;
correcting the initial impact point based on the moving direction and moving distance of the target to obtain the final impact point.
Further, the shooting guidance performed by the correction module 204 based on the final impact point includes:
displaying the final impact point in the current image;
determining the current impact point from the current environment data, gun attribute data, muzzle pose and the ballistic table, and displaying it in the current image;
generating a guidance prompt based on the difference between the final impact point and the current impact point, so as to guide the user to adjust the muzzle pose until the current impact point coincides with the final impact point.
Embodiment 3
Embodiment 3 of the present invention further discloses a terminal including a memory and a processor, the memory storing a computer program; the aiming method described in Embodiment 1 is implemented when the processor executes the computer program.
Specifically, the terminal in this solution may be an aiming device, and further may be an aiming device provided with an image sensor and capable of image processing; in addition, the terminal in this solution may also be a gun provided with the aiming device.
Embodiment 4
Embodiment 4 of the present invention further discloses a storage medium storing a computer program; the aiming method described in Embodiment 1 is implemented when the computer program is executed.
Thus, embodiments of the present invention propose an aiming method, device, terminal and storage medium applied to a gun. The method includes: acquiring real-time images obtained by photographing the shooting area; processing each image to determine the target in the image and the position of the target's centroid; determining motion characteristic data of the target from the centroid positions of the target in a plurality of images within a preset time period; and correcting an initial impact point based on the motion characteristic data to obtain a final impact point, with shooting guidance performed based on the final impact point. In this solution the target can be identified and its motion characteristic data determined; the target can be locked in real time based on that data, its behaviour can be predicted, and the user can be guided to adjust the trajectory based on the prediction, so as to obtain the best firing plan and aiming point and reduce the risk of a missed shot.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may also be implemented in other ways. The device embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the drawings show the possible architectures, functions and operations of devices, methods and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, program segment or part of code that contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings; for example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules or units in the embodiments of the present invention may be integrated to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part that contributes to the prior art, or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a smartphone, a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any change or substitution that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention.

Claims (10)

  1. An aiming method, applied to a gun, the method comprising:
    acquiring real-time images, the images being obtained by photographing the shooting area;
    processing each of the images to determine the target in the image, and determining the position of the target's centroid;
    determining motion characteristic data of the target based on the centroid positions of the target in a plurality of the images within a preset time period;
    correcting an initial impact point based on the motion characteristic data to obtain a final impact point, and performing shooting guidance based on the final impact point.
  2. The method according to claim 1, wherein processing each of the images to determine the target in the image comprises:
    compressing the dynamic range of each of the images based on the image features of the target;
    segmenting the compressed images with a preset adaptive threshold;
    performing morphological filtering on the segmented images to determine suspected target areas in the images;
    identifying the suspected target areas with a deep neural network to determine the target in the images.
  3. The method according to claim 2, wherein the image features of the target are obtained by multi-angle recognition of the target selected by the user.
  4. The method according to claim 2, wherein the deep neural network is obtained by training on more than a certain number of sample images; each sample image carries a label, the label being either a first label indicating that the target is present or a second label indicating that the target is absent;
    the method further comprising:
    pre-training a plurality of deep neural networks, different deep neural networks being used to identify different targets.
  5. The method according to claim 1, wherein the initial impact point is calculated from the centroid position of the target in the current image, current environment data, gun attribute data, the muzzle pose and a ballistic table;
    correcting the initial impact point based on the motion characteristic data to obtain the final impact point comprises:
    determining the moving direction and moving distance of the target based on the motion characteristic data and the bullet's flight time;
    correcting the initial impact point based on the moving direction and moving distance of the target to obtain the final impact point.
  6. The method according to claim 1, wherein performing shooting guidance based on the final impact point comprises:
    displaying the final impact point in the current image;
    determining the current impact point from current environment data, gun attribute data, the muzzle pose and the ballistic table, and displaying it in the current image;
    generating a guidance prompt based on the difference between the final impact point and the current impact point, so as to guide the user to adjust the muzzle pose until the current impact point coincides with the final impact point.
  7. An aiming device, applied to a gun, the device comprising:
    an acquisition module configured to acquire real-time images, the images being obtained by photographing the shooting area;
    a position module configured to process each of the images to determine the target in the image and the position of the target's centroid;
    a motion characteristic module configured to determine motion characteristic data of the target based on the centroid positions of the target in a plurality of the images within a preset time period;
    a correction module configured to correct an initial impact point based on the motion characteristic data to obtain a final impact point, and to perform shooting guidance based on the final impact point.
  8. The device according to claim 7, wherein the position module is configured to:
    compress the dynamic range of each of the images based on the image features of the target;
    segment the compressed images with a preset adaptive threshold;
    perform morphological filtering on the segmented images to determine suspected target areas in the images;
    identify the suspected target areas with a deep neural network to determine the target in the images.
  9. A terminal, comprising a memory and a processor, the memory storing a computer program; the aiming method according to any one of claims 1 to 6 being implemented when the processor executes the computer program.
  10. A storage medium, storing a computer program; the aiming method according to any one of claims 1 to 6 being implemented when the computer program is executed.
PCT/CN2022/100995 2022-02-15 2022-06-24 Aiming method, device, terminal and storage medium WO2023155357A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210136755.9A CN114494353A (zh) 2022-02-15 2022-02-15 Aiming method, device, terminal and storage medium
CN202210136755.9 2022-02-15

Publications (1)

Publication Number Publication Date
WO2023155357A1 true WO2023155357A1 (zh) 2023-08-24

Family

ID=81480082

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/100995 WO2023155357A1 (zh) 2022-02-15 2022-06-24 Aiming method, device, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN114494353A (zh)
WO (1) WO2023155357A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494353A (zh) * 2022-02-15 2022-05-13 武汉高德智感科技有限公司 一种瞄准的方法、装置、终端及存储介质


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050213962A1 (en) * 2000-03-29 2005-09-29 Gordon Terry J Firearm Scope Method and Apparatus for Improving Firing Accuracy
CN109821238A (zh) * 2019-03-29 2019-05-31 网易(杭州)网络有限公司 游戏中的瞄准方法及装置、存储介质、电子装置
CN113610896A (zh) * 2021-08-17 2021-11-05 北京波谱华光科技有限公司 一种简易火控瞄具中目标提前量测量方法及系统
CN114494353A (zh) * 2022-02-15 2022-05-13 武汉高德智感科技有限公司 一种瞄准的方法、装置、终端及存储介质

Also Published As

Publication number Publication date
CN114494353A (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
US10782096B2 (en) Skeet and bird tracker
US6260466B1 (en) Target aiming system
US20070040062A1 (en) Projectile tracking system
KR101119882B1 (ko) 사격제원 성능 분석 장치 및 그 동작 방법
SG193398A1 (en) Firearm, aiming system therefor, method of operating the firearm and method of reducing the probability of missing a target
WO2023155357A1 (zh) 一种瞄准的方法、装置、终端及存储介质
US6125308A (en) Method of passive determination of projectile miss distance
US20200200509A1 (en) Joint Firearm Training Systems and Methods
CN113008076A (zh) 影像枪、影像打靶系统、影像打靶方法及存储介质
CN109522890B (zh) 一种利用近红外闪烁光源识别坦克目标的方法
US20170199010A1 (en) System and Method for Tracking and Locating Targets for Shooting Applications
WO2023065962A1 (zh) 信息确定方法、装置、设备及存储介质
US11268790B2 (en) Firing-simulation scope
US20210048276A1 (en) Probabilistic low-power position and orientation
KR102449953B1 (ko) Gis 기반 대포병 탐지 레이더의 탐지오차 보정 장치 및 방법
US20230226454A1 (en) Method for managing and controlling target shooting session and system associated therewith
RU2698839C1 (ru) Стрелковый тренажер для компьютерных систем с цифровым фотоаппаратом
CN110772788B (zh) 用于显示设备对射击游戏的准星的校正方法
EP1580516A1 (en) Device and method for evaluating the aiming behaviour of a weapon
EP2746716A1 (en) Optical device including a mode for grouping shots for use with precision guided firearms
KR102151340B1 (ko) 비비탄용 사격 시스템의 탄착점 검출 방법
US20160018196A1 (en) Target scoring system and method
Boyd et al. Precision guided firearms: disruptive small arms technology
CN111729308B (zh) 一种射击类游戏的画面显示方法、装置及游戏终端
KR102433021B1 (ko) 유, 무인 복합체계의 화기 조준 보정 장치 및 그 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22926666

Country of ref document: EP

Kind code of ref document: A1