CN115823948A - Shooting sighting center correction method, electronic sighting telescope and computer readable storage medium - Google Patents


Info

Publication number
CN115823948A
CN115823948A
Authority
CN
China
Prior art keywords
frame
reticle
data
picture
center
Prior art date
Legal status
Pending
Application number
CN202211607343.5A
Other languages
Chinese (zh)
Inventor
王志科
Current Assignee
Shenzhen Prade Technology Co ltd
Original Assignee
Shenzhen Prade Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Prade Technology Co ltd filed Critical Shenzhen Prade Technology Co ltd
Priority claimed from CN202211607343.5A
Publication of CN115823948A

Landscapes

  • Viewfinders (AREA)

Abstract

The present application discloses a shooting sighting center correction method applied to an electronic sighting telescope, including: acquiring initial image data containing an impact point; generating data for an initial viewfinder picture from the initial image data and a preset viewfinder frame, wherein the initial viewfinder picture is smaller than the maximum picture that can be generated from the initial image data; displaying the initial viewfinder picture; acquiring translation data for the viewfinder frame, wherein the translation data is the difference between the impact point in the initial viewfinder picture and the center of a reticle, the center of the reticle being aligned with the center of a preset display area; generating a corrected viewfinder frame from the translation data and, from it, data for a corrected viewfinder picture in which the impact point coincides with the center of the reticle; and displaying the corrected viewfinder picture. The sighting center correction method provided by the application makes the center of the viewfinder picture, the center of the reticle, and the impact point coincide more quickly, improving the user experience.

Description

Shooting sighting center correction method, electronic sighting telescope and computer readable storage medium
Technical Field
The present application relates to the technical field of weapon aiming devices, and in particular, to a shooting sighting center correction method, an electronic sighting telescope, and a computer-readable storage medium.
Background
The electronic sighting telescope can be mounted on a firearm as an auxiliary accessory to improve the accuracy of aiming at a shooting target.
An existing electronic sighting telescope generally comprises an objective lens, a processor, a memory, an image sensor, a display, an eyepiece, and a power supply. The objective lens projects the processed light onto the image sensor; the resulting image data is processed by the processor and transmitted to the display screen, which shows an electronic image of the target object. Existing electronic sighting telescopes also display a translatable reticle as a graphical interface on the display screen to mark the sighting center. Such a telescope is used as follows:
After the electronic sighting telescope is installed for the first time, the user pre-shoots the target surface with the corresponding gun and then moves the reticle relative to the impact point in the electronic image until the center of the reticle (the intersection of the cross lines) coincides with the impact point. The reticle center then marks the sighting center, so in subsequent shooting the target can be hit simply by aiming with the reticle center.
However, in pre-shooting the impact point usually does not fall at the center of the electronic image, so the reticle center also deviates from the image center. This misaligned state causes two problems. First, it easily confuses the user: during aiming it is unclear whether the center of the electronic image or the center of the reticle should serve as the reference, and the user may suspect that the reticle has not been adjusted properly and needs to be corrected again. Second, with the sighting center off-center, the user cannot make full use of the display area or conveniently observe the surroundings of the target object.
Disclosure of Invention
In view of the above problems, the present invention provides a shooting sighting center correction method applied to an electronic sighting telescope, including:
acquiring initial image data containing the impact point;
generating data for an initial viewfinder picture from the initial image data and a preset viewfinder frame, wherein the initial viewfinder picture is smaller than the maximum picture that can be generated from the initial image data;
displaying the initial viewfinder picture;
acquiring translation data for the viewfinder frame, wherein the translation data is the difference between the impact point in the initial viewfinder picture and the center of a reticle, the center of the reticle being aligned with the center of a preset display area;
generating a corrected viewfinder frame from the translation data and, from it, data for a corrected viewfinder picture, the corrected viewfinder picture being smaller than the maximum picture that can be generated from the initial image data, the impact point in the corrected viewfinder picture coinciding with the center of the reticle;
and displaying the corrected viewfinder picture.
Preferably, the reticle is a graphical interface, and the method further comprises, before the step of acquiring the translation data for the viewfinder frame:
displaying the graphical interface of the reticle.
Preferably, the graphical interface of the reticle has a plurality of styles, and the method further comprises, before displaying the graphical interface of the reticle:
acquiring configuration data about the graphical interface style of the reticle;
the step of displaying the graphical interface of the reticle specifically includes:
displaying the graphical interface of the reticle in the style determined by the configuration data.
Preferably, between the step of acquiring initial image data containing the impact point and the step of acquiring translation data for the viewfinder frame, the method further comprises:
identifying the impact point in the initial image data and generating position coordinates of the impact point relative to the preset viewfinder frame;
and generating translation data for the viewfinder frame from the position coordinates of the impact point and the coordinates of the center of the reticle.
Preferably, the step of acquiring initial image data containing the impact point specifically includes: acquiring initial image data containing a plurality of impact points;
between the step of acquiring initial image data containing a plurality of impact points and the step of acquiring translation data for the viewfinder frame, the method further comprises:
identifying each impact point in the data of the initial viewfinder picture and generating the average position coordinates of the impact points relative to the viewfinder frame;
and generating translation data for the viewfinder frame from the average position coordinates of the impact points and the coordinates of the center of the reticle.
Preferably, between the step of acquiring the translation data for the viewfinder frame and the step of displaying the corrected viewfinder picture, the method further comprises:
judging whether the viewfinder frame, translated according to the translation data, exceeds the maximum picture that can be generated from the initial image data;
if not, translating the viewfinder frame according to the translation data and generating the data of the corrected viewfinder picture accordingly;
if yes, reducing the viewfinder frame according to the translation data and generating the data of the corrected viewfinder picture from the reduced viewfinder frame.
Preferably, after the step of displaying the corrected viewfinder picture, the method further includes:
detecting the installation state of the electronic sighting telescope;
when the installation state of the electronic sighting telescope changes, deleting the data of the corrected viewfinder picture.
The invention also provides an electronic sighting telescope, which comprises an objective lens, a processor, a memory, an image sensor, and a display, wherein
the image sensor is configured to generate initial image data containing the impact point from the light processed by the objective lens;
the processor is configured to generate data for an initial viewfinder picture from the initial image data and a preset viewfinder frame, the initial viewfinder picture being smaller than the maximum picture that can be generated from the initial image data;
the display is configured to display the initial viewfinder picture;
the processor is configured to acquire translation data for the viewfinder frame, the translation data being the difference between the impact point in the initial viewfinder picture and the center of a reticle, the center of the reticle being aligned with the center of a preset display area;
the processor is configured to generate a corrected viewfinder frame from the translation data and, from it, data for a corrected viewfinder picture, the corrected viewfinder picture being smaller than the maximum picture that can be generated from the initial image data, the impact point in the corrected viewfinder picture coinciding with the center of the reticle;
the display is configured to display the corrected viewfinder picture.
Preferably, the reticle is a graphical interface, and the display is further configured to display the graphical interface of the reticle.
Preferably, the electronic sighting telescope further comprises an input device, and the graphical interface of the reticle has a plurality of styles;
the input device is configured to acquire configuration data about the graphical interface style of the reticle;
the display is further configured to display the graphical interface of the reticle in the style determined by the configuration data.
Preferably, the processor is further configured to identify the impact point in the initial image data and generate position coordinates of the impact point relative to the preset viewfinder frame; and
to generate translation data for the viewfinder frame from the position coordinates of the impact point and the coordinates of the center of the reticle.
Preferably, the image sensor is configured to acquire initial image data containing a plurality of impact points;
the processor is configured to identify each impact point in the data of the initial viewfinder picture and generate the average position coordinates of the impact points relative to the viewfinder frame; and
to generate translation data for the viewfinder frame from the average position coordinates of the impact points and the coordinates of the center of the reticle.
Preferably, the processor is further configured to determine whether the viewfinder frame, translated according to the translation data, exceeds the maximum picture that can be generated from the initial image data;
if not, the processor translates the viewfinder frame according to the translation data and generates the data of the corrected viewfinder picture accordingly;
if yes, the processor reduces the viewfinder frame according to the translation data and generates the data of the corrected viewfinder picture accordingly.
Preferably, the electronic sighting telescope further comprises a position detection device for detecting the installation state of the electronic sighting telescope;
the processor is used for deleting data related to the corrected framing picture when the installation state of the electronic sighting telescope changes.
The present invention also proposes a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of the shooting sighting center correction method described above.
According to the shooting sighting center correction method described above, a non-full-width initial viewfinder picture is generated from the initial image data; during correction, the impact point establishes the center of a new viewfinder frame, which is then moved toward the center of the reticle so that the impact point coincides with it. The center of the corrected viewfinder picture therefore coincides with the center of the reticle, achieving three-point coincidence, so the user can aim with the correct sighting center and the user experience is improved. In addition, since the physical structure for translating a reticle is eliminated, the corresponding operation and mechanism are simplified.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of the steps of a first embodiment of the shooting sighting center correction method of the present application;
FIG. 2 is a flowchart of the steps of a second embodiment of the shooting sighting center correction method of the present application;
FIG. 3 is a flowchart of the steps of a third embodiment of the shooting sighting center correction method of the present application;
FIG. 4 is a flowchart of the steps of a fourth embodiment of the shooting sighting center correction method of the present application;
FIG. 5 is a flowchart of the steps of a fifth embodiment of the shooting sighting center correction method of the present application;
FIG. 6 is a flowchart of the steps of a sixth embodiment of the shooting sighting center correction method of the present application;
FIG. 7 is a flowchart of the steps of a seventh embodiment of the shooting sighting center correction method of the present application;
FIG. 8 is a schematic view of a correction interface of the shooting sighting center correction method of the present application;
fig. 9 is a schematic block diagram of an electronic sighting telescope provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present application and do not limit it. All other embodiments obtained by a person skilled in the art from the embodiments given here without creative effort fall within the protection scope of the present application.
It should be noted that the descriptions in this application referring to "first", "second", etc. are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," "in some embodiments," or "in some embodiments" or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In addition, the technical solutions of the various embodiments may be combined with one another, provided the combination can be realized by a person skilled in the art; when the combined solutions contradict each other or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Referring to fig. 1 and 8, fig. 1 is a flowchart of the steps of a first embodiment of the shooting sighting center correction method of the present application, and fig. 8 is a schematic diagram of its correction interface. The present application relates to a shooting sighting center correction method applied to an electronic sighting telescope. In the first embodiment, the method comprises the following steps:
S100: acquiring initial image data containing the impact point;
In this step, the shooter shoots at the target surface; the electronic sighting telescope images the target surface after the shot and generates initial image data. Referring to fig. 8, the impact point is O3, with coordinates (x3, y3).
S120: generating data related to an initial viewing picture according to the initial image data and a preset viewing frame, wherein the frame of the initial viewing picture is smaller than the maximum picture which can be generated according to the initial image data;
In this step, not all of the acquired initial image data is used to generate the display picture; a margin is left. The shape of the preset viewfinder frame may be similar to or different from that of the maximum picture; for example, the frame may be circular while the maximum picture is rectangular. Referring to fig. 8, in one example both the preset viewfinder frame and the maximum picture are rectangles: the center of the maximum picture is O1, with coordinates (x1, y1), and the center of the preset viewfinder frame is O2, with coordinates (x2, y2); the two need not coincide.
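As an illustration of the relationship between the maximum picture and the preset viewfinder frame, the following Python sketch models both as rectangles and checks that the frame leaves a margin. All names and pixel dimensions are hypothetical assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle: top-left corner plus size, in sensor pixels."""
    x: int
    y: int
    w: int
    h: int

    @property
    def center(self):
        """Center point of the rectangle (O1 for the maximum picture,
        O2 for the preset viewfinder frame, in the patent's notation)."""
        return (self.x + self.w / 2, self.y + self.h / 2)

# Hypothetical dimensions: a sensor whose maximum picture is 1920x1080
# and a smaller preset viewfinder frame that leaves a margin on all sides.
max_picture = Rect(0, 0, 1920, 1080)      # center O1
preset_frame = Rect(200, 100, 1280, 720)  # center O2

# The preset frame is strictly smaller than the maximum picture, and its
# center O2 need not coincide with the maximum picture's center O1.
assert preset_frame.w < max_picture.w and preset_frame.h < max_picture.h
assert preset_frame.center != max_picture.center
```

The margin is what later allows the frame to be translated without leaving the sensor data.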
S140: displaying the initial viewfinder picture;
In this step, the shooter can observe the electronic image of the target surface and the position of the impact point within it. In a manually adjusted embodiment, the shooter sets the translation data for the viewfinder frame against the electronic image, for example by operating up/down/left/right direction keys to move the frame; in an automatically adjusted embodiment, the image processing unit automatically recognizes the position of the impact point and computes its offset from the center of the preset viewfinder frame to generate the translation data required by the subsequent steps.
S160: acquiring translation data about the viewfinder, wherein the translation data is a difference value between the impact point in the initial viewfinder picture and the center of a reticle, and the center of the reticle is aligned to the center of a preset display area;
In this step, as explained above, the translation data may be manually input or generated automatically by a preset algorithm. The preset display area may be the entire display area of the display screen or only part of it, for example its middle-left region. Once the viewfinder picture, anchored at the impact point, is aligned with the center of the reticle, the reticle center serves as the sighting center. Referring again to fig. 8: initially the center O2 of the initial viewfinder frame is aligned with the center of the reticle, but the impact point O3 is offset from O2; to bring O3 to the reticle center, the required translation data is (x3 − x2, y3 − y2).
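The translation computed in this step can be sketched as follows; the function name and example coordinates are illustrative, not from the patent:

```python
def translation_data(impact, frame_center):
    """Difference between the impact point O3 = (x3, y3) and the frame
    center O2 = (x2, y2). Shifting the viewfinder frame by this vector
    brings the impact point to the frame center, and hence to the
    reticle center, which stays pinned to the display-area center."""
    (x3, y3), (x2, y2) = impact, frame_center
    return (x3 - x2, y3 - y2)

# Example: impact point at (960, 600), frame center at (900, 540)
# -> the frame must be shifted by (60, 60).
```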
S180: generating a corrected viewfinder frame from the translation data and accordingly generating data about a corrected viewfinder picture, the corrected viewfinder picture having a smaller frame size than a maximum picture that can be generated from the initial image data, the impact point in the corrected viewfinder picture coinciding with the center of the reticle;
In this step, the viewfinder frame may be defined in various ways: a rectangular frame can be defined by a diagonal, and a circular frame by its center. It will be appreciated that the corrected viewfinder frame may be generated with reference to other data in addition to the translation data.
S200: and displaying the corrected framing picture.
In this step, the corrected viewfinder picture is delimited by the corrected viewfinder frame; its center, the center of the reticle, and the impact point sampled by the pre-shot all coincide, so in subsequent shooting the target can be aimed at by referring to the reticle center in the corrected viewfinder picture.
According to the shooting sighting center correction method described above, a non-full-width initial viewfinder picture is generated from the initial image data; during correction, the impact point establishes the center of a new viewfinder frame, which is then moved toward the center of the reticle so that the impact point coincides with it. The center of the corrected viewfinder picture therefore coincides with the center of the reticle, achieving three-point coincidence, so the user can aim with the correct sighting center and the user experience is improved. In addition, since the physical structure for translating a reticle is eliminated, the corresponding operation and mechanism are simplified.
Further, referring to fig. 2, fig. 2 is a flowchart of the steps of a second embodiment of the shooting sighting center correction method of the present application. In the second embodiment, the reticle is a graphical interface, and the method further includes, before the step of acquiring the translation data for the viewfinder frame:
S142: displaying the graphical interface of the reticle.
In this step, the reticle is a graphical interface, which eliminates the physical reticle structure. The graphical interface of the reticle can be viewed as an element superimposed on the viewfinder picture. It is understood that the step of displaying the initial viewfinder picture and the step of displaying the graphical interface of the reticle may occur in either order.
Further, referring to fig. 3, fig. 3 is a flowchart of the steps of a third embodiment of the shooting sighting center correction method of the present application. In the third embodiment, the graphical interface of the reticle has a plurality of styles, and the method further includes, before displaying the graphical interface of the reticle:
S110: acquiring configuration data about the graphical interface style of the reticle;
In this step, the "style" of the graphical interface covers both combinations of different graphical elements and the same elements displayed at different scales. The configuration data for the style may be pre-loaded or set temporarily during use; for example, the shooter may operate a menu to select a reticle style to his liking. It will be appreciated that the different reticle styles share the same center, so the sighting center need not be re-corrected after the reticle graphical interface is changed.
The step of displaying the graphical interface of the reticle specifically comprises:
S144: displaying the graphical interface of the reticle in the style determined by the configuration data.
In this step, the shooter obtains a reticle in the style of his choice, which improves the user experience.
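A minimal sketch of such style selection might look like the following. The style table, keys, and fallback rule are hypothetical; the only property taken from the text is that every style shares the same center:

```python
# Hypothetical reticle style table. Every style is drawn about the same
# center, so switching styles never moves the calibrated sighting center.
RETICLE_STYLES = {
    "cross":    {"elements": ("h_line", "v_line"), "scale": 1.0},
    "mil_dot":  {"elements": ("h_line", "v_line", "dots"), "scale": 1.0},
    "cross_2x": {"elements": ("h_line", "v_line"), "scale": 2.0},
}

def select_reticle(config_key):
    """Return the style named by the stored configuration data,
    falling back to a plain cross when the key is unknown."""
    return RETICLE_STYLES.get(config_key, RETICLE_STYLES["cross"])
```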
Further, referring to fig. 4, fig. 4 is a flowchart of the steps of a fourth embodiment of the shooting sighting center correction method of the present application. In the fourth embodiment, to implement automatic sighting center correction, the method further includes, between the step of acquiring initial image data containing the impact point and the step of acquiring translation data for the viewfinder frame:
S130: identifying the impact point in the initial image data and generating position coordinates of the impact point relative to the preset viewfinder frame;
In this step, the impact point may be identified by a preset pattern recognition algorithm, which is not described here. The identified impact point can first be given absolute coordinates with reference to the maximum picture and then relative coordinates with reference to the preset viewfinder frame, establishing the positional relationship between the two.
S150: generating translation data for the viewfinder frame from the position coordinates of the impact point and the coordinates of the center of the reticle.
In this step, with the relative position of the impact point to the preset viewfinder frame known, the translation data is easily computed with reference to the center of the reticle.
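Steps S130 and S150 can be sketched together: convert the impact point's absolute coordinates (referenced to the maximum picture) into frame-relative coordinates, then take the difference from the reticle center. The function name and numbers are illustrative assumptions:

```python
def impact_to_translation(impact_abs, frame_origin, reticle_center_rel):
    """S130: express the impact point relative to the preset viewfinder
    frame by subtracting the frame's top-left corner (absolute coords).
    S150: the translation data is the offset of that relative position
    from the reticle center, given in the same frame-relative coords."""
    rel_x = impact_abs[0] - frame_origin[0]
    rel_y = impact_abs[1] - frame_origin[1]
    return (rel_x - reticle_center_rel[0], rel_y - reticle_center_rel[1])
```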
Further, different guns have different effective shooting distances, and at longer distances a single shot is a poor reference. To improve the average accuracy of the sighting center, referring to fig. 5, fig. 5 is a flowchart of the steps of a fifth embodiment of the shooting sighting center correction method of the present application. In the fifth embodiment, the step of acquiring initial image data containing the impact point specifically includes:
S102: acquiring initial image data containing a plurality of impact points;
In this step, the initial image data of each pre-shot is stored so that the statistics of the set of impact points can be computed.
Between the step of acquiring initial image data containing a plurality of impact points and the step of acquiring translation data for the viewfinder frame, the method further comprises:
S131: identifying each impact point in the data of the initial viewfinder picture and generating the average position coordinates of the impact points relative to the viewfinder frame;
In this step, the specific position of the impact point is identified after each shot's image data is obtained; the average position coordinates of the impact points as a whole relative to the viewfinder frame can then be calculated.
S151: generating translation data for the viewfinder frame from the average position coordinates of the impact points and the coordinates of the center of the reticle.
In this step, translation data generated from the average position coordinates improves the average accuracy of the reticle center as the sighting center in formal shooting.
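Averaging the identified impact points (step S131) is a plain centroid; this sketch assumes the points have already been identified and are given as frame-relative coordinates:

```python
def average_impact(points):
    """Centroid of several pre-shot impact points. Using the average
    instead of a single shot damps the dispersion seen at long range."""
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)
```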
Further, at greater shooting distances it cannot be ruled out that the impact point of the test shot falls outside the preset range. So that the shooting sighting center correction method of the present application can still be used in this scenario, referring to fig. 6, fig. 6 is a flowchart of the steps of a sixth embodiment of the method. In the sixth embodiment, between the step of acquiring the translation data for the viewfinder frame and the step of displaying the corrected viewfinder picture, the method further includes:
S170: judging whether the viewfinder frame, translated according to the translation data, exceeds the maximum picture that can be generated from the initial image data;
This step adds a check to determine whether the impact point exceeds the preset range.
If not, S180: translating the viewfinder frame according to the translation data and generating the data of the corrected viewfinder picture accordingly;
In this step, the corrected viewfinder picture keeps the same resolution as the initial viewfinder picture.
If yes, S190: reducing the viewfinder frame according to the translation data and generating the data of the corrected viewfinder picture accordingly.
In this step, the corrected viewfinder picture is displayed at a lower resolution, but the impact point is still guaranteed to coincide with the center of the reticle.
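The branch in steps S170/S180/S190 can be sketched as follows. Rectangles are (x, y, w, h) tuples in sensor pixels; the shrink-about-the-center policy and all names are assumptions for illustration, since the patent only requires that the reduced frame fit the maximum picture while keeping the impact point at its center:

```python
def correct_frame(frame, translation, max_pic):
    """S170: translate `frame` by `translation` and test whether it
    still fits inside `max_pic`. S180: if it fits, keep the size (same
    resolution). S190: otherwise shrink it about its new center, where
    the impact point now sits, preserving the aspect ratio."""
    fx, fy, fw, fh = frame
    mx, my, mw, mh = max_pic
    nx, ny = fx + translation[0], fy + translation[1]
    if mx <= nx and my <= ny and nx + fw <= mx + mw and ny + fh <= my + mh:
        return (nx, ny, fw, fh)            # S180: full resolution kept
    cx, cy = nx + fw / 2, ny + fh / 2      # new center = impact point
    half_w = min(cx - mx, mx + mw - cx)    # room left/right of center
    half_h = min(cy - my, my + mh - cy)    # room above/below center
    scale = min(half_w / (fw / 2), half_h / (fh / 2))
    w, h = int(fw * scale), int(fh * scale)
    return (int(cx - w / 2), int(cy - h / 2), w, h)  # S190: reduced
```

A frame shifted too far toward an edge comes back smaller but still centered on the impact point.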
Further, the sighting center corresponds to a specific installation state of the electronic sighting telescope. Once the telescope is reinstalled (for example, mounted on a new gun, or a different telescope is mounted on the same gun), the previously corrected sighting center no longer marks the aim point. To enter a new sighting center correction process automatically, referring to fig. 7, fig. 7 is a flowchart of the steps of a seventh embodiment of the shooting sighting center correction method of the present application. In the seventh embodiment, after the step of displaying the corrected viewfinder picture, the method further comprises:
S220: detecting the installation state of the electronic sighting telescope;
In this step, the installation state of the electronic sighting telescope with respect to a specific gun is detected, for example whether it is mounted in place or has been remounted.
S240: when the installation state of the electronic sighting telescope changes, deleting the data of the corrected viewfinder picture.
This step specifically includes sub-step S241: judging whether the installation state of the electronic sighting telescope has changed; and sub-step S242: if so, deleting the data of the corrected viewfinder picture. A new sighting center correction process is thereby entered automatically.
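The reset logic of sub-steps S241/S242 can be sketched as a comparison between the installation state recorded at correction time and the one currently detected; the state representation is a hypothetical opaque value:

```python
def check_mount_and_reset(recorded_state, detected_state, correction_data):
    """S241: compare the recorded and currently detected installation
    states. S242: if they differ, drop the corrected-picture data (here
    returned as None) so a fresh correction procedure begins; otherwise
    the stored correction remains valid."""
    if detected_state != recorded_state:
        return None
    return correction_data
```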
Referring to fig. 9, the present invention further provides an electronic sighting telescope comprising an objective lens, a processor, a memory, an image sensor, and a display,
the image sensor is used for generating initial image data containing an impact point according to the light processed by the objective lens;
the processor is used for generating data related to an initial framing picture according to the initial image data and a preset framing frame, and the frame of the initial framing picture is smaller than the maximum picture which can be generated according to the initial image data;
the display is used for displaying the initial framing picture;
the processor is used for acquiring translation data about the viewfinder, wherein the translation data is a difference value between the impact point in the initial viewfinder picture and the center of a reticle, and the center of the reticle is aligned with the center of a preset display area;
the processor is used for generating a corrected view frame according to the translation data and accordingly generating data related to a corrected view frame, the size of the corrected view frame is smaller than the maximum frame which can be generated according to the initial image data, and the impact point in the corrected view frame is overlapped with the center of the reticle;
the display is used for displaying the corrected framing picture.
In this embodiment, the processor and the memory are connected by a bus, such as an SPI (Serial Peripheral Interface) bus or an I2C (Inter-Integrated Circuit) bus. The image sensor and the display may additionally be connected by a video data transmission line.
Specifically, the processor may be a microcontroller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
Specifically, the memory may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk.
The shooter shoots at the target surface, and the image sensor of the electronic sighting telescope images the shot target surface and generates initial image data. Referring to fig. 8, the impact point is O3, with coordinates (x3, y3). The generated initial image data can be transmitted to the processor through the bus; the processor may be a general-purpose processor or a special-purpose image processor.
Not all of the initial image data acquired by the processor is used to generate the displayed picture; a margin is left. The shape of the preset viewfinder frame may be similar to or different from that of the maximum picture, for example, the viewfinder frame may be circular while the maximum picture is rectangular. Referring to fig. 8, in one example, the preset viewfinder frame and the maximum picture are both rectangles; the center of the maximum picture is O1, with coordinates (x1, y1), and the center of the preset viewfinder frame is O2, with coordinates (x2, y2); the two need not coincide.
The shooter can observe the electronic image of the target surface and the position of the impact point within it. It will be appreciated that in a manually adjusted embodiment, the shooter may set the translation data for the viewfinder frame with reference to the electronic image, for example by operating up, down, left, and right direction keys to move the viewfinder frame; in an automatically adjusted embodiment, an image processing unit may automatically recognize the position of the impact point and calculate its offset from the center of the preset viewfinder frame to generate the translation data required in the subsequent steps.
The translation data may thus be manually input or automatically generated according to a preset algorithm. The preset display area may be the entire display area of the display screen, or a partial display area, for example the middle-left area of the display screen. After the framing picture is aligned to the center of the reticle with the impact point as the anchor point, the center of the reticle can serve as the aiming center. Referring again to fig. 8, initially the center O2 of the initial viewfinder frame is aligned with the center of the reticle, but the impact point O3 is offset from O2; to align the impact point O3 with the center of the reticle, the required translation data is (δx, δy) = (x3 - x2, y3 - y2).
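The translation data of Fig. 8 is a plain coordinate difference and can be sketched in a few lines; the function name `translation_data` is an assumption for illustration, not taken from the patent.

```python
def translation_data(o2, o3):
    """Translation data (dx, dy) needed to move the viewfinder frame so
    that the impact point o3 lands on the reticle center, which is
    initially aligned with the frame center o2.

    o2: (x2, y2), center of the initial viewfinder frame.
    o3: (x3, y3), impact point.
    """
    (x2, y2), (x3, y3) = o2, o3
    return (x3 - x2, y3 - y2)
```

For example, an impact point at (62, 41) with a frame center at (50, 50) yields the translation data (12, -9).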
The viewfinder frame can be defined in various ways: a rectangular frame can be defined by a diagonal, and a circular frame by its center and radius. It is understood that the corrected viewfinder frame may be generated with reference to other data in addition to the translation data. The corrected framing picture is defined by the corrected viewfinder frame, and the center of the corrected framing picture, the center of the reticle, and the impact point of the test shot are brought into coincidence, so that the center of the reticle can be used for aiming in subsequent shooting operations.
In this electronic sighting telescope, a non-full initial framing picture is generated from the initial image data; during correction, the center of a new viewfinder frame is established at the impact point, and that center is then moved to the center of the reticle, so that the impact point coincides with the reticle center. The center of the resulting corrected framing picture thus coincides with the center of the reticle, achieving three-point coincidence, so the user can aim accurately and the use experience is improved. In addition, since a physically translated solid-structure reticle is omitted, the corresponding operations and structures are simplified.
Furthermore, the reticle is a graphical interface, so a solid-structure reticle and its corresponding mounting structure are omitted. The display is also used for displaying the graphical interface of the reticle, which may be viewed as an element superimposed on the framing picture.
Further, the electronic sighting telescope also comprises an input device, and the graphical interface of the reticle has a plurality of styles;
the input device is used for acquiring configuration data about a graphical interface style of the reticle;
the display is also used for displaying the graphical interface of the reticle in the style determined by the configuration data, so the shooter can obtain a reticle in his or her preferred style, which improves the use experience.
In this embodiment, the "style" of the graphical interface includes both combinations of different graphical elements and the same graphical elements displayed at different scales. The configuration data for the style of the graphical interface may be pre-loaded or set temporarily during use; for example, the shooter may operate a menu to select the reticle style of his or her preference. It will be appreciated that the graphical interfaces of the different reticle styles are centered on the same point, so the sighting center need not be re-corrected after the reticle's graphical interface is changed.
Further, to realize automatic sighting-center correction, the processor is further configured to identify the impact point in the initial image data and generate position coordinates of the impact point relative to the preset viewfinder frame; and
and generating translation data about the view frame according to the position coordinates of the impact point and the coordinates of the center of the reticle.
In this embodiment, the impact point may be identified using a preset pattern-recognition algorithm, which is not described here. The identified impact point can first be given "absolute coordinates" with reference to the maximum picture and then "relative coordinates" with reference to the preset viewfinder frame, thereby establishing the relative positional relationship between the two. Given the relative position of the impact point with respect to the preset viewfinder frame, the translation data can easily be calculated with reference to the center of the reticle.
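The absolute-to-relative conversion just described can be sketched as follows, assuming the viewfinder frame origin is its top-left corner and the reticle center coincides with the frame center. The pattern-recognition step itself is out of scope here, and all names are hypothetical.

```python
def frame_relative(point_abs, frame_origin):
    """Convert an absolute coordinate (in the maximum picture) to one
    relative to the viewfinder frame origin (top-left corner)."""
    return (point_abs[0] - frame_origin[0], point_abs[1] - frame_origin[1])

def auto_translation(point_abs, frame_origin, frame_size):
    """Translation data: frame-relative impact point minus the frame
    center, which is assumed aligned with the reticle center."""
    px, py = frame_relative(point_abs, frame_origin)
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    return (px - cx, py - cy)
```

For an impact point at absolute (130, 90) and a 100x100 frame whose top-left corner is at (30, 40), the relative position is (100, 50) and the translation data is (50.0, 0.0).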
Further, the effective shooting distances of different guns differ, and when the shooting distance is large, a single test shot has limited reference value. To improve the average accuracy of the shooting sighting center, the step in which the image sensor acquires initial image data containing the impact point specifically includes: the image sensor acquires initial image data containing a plurality of impact points, so that the corresponding initial image data can be stored for each test shot in order to compute statistics over the impact points;
the processor is used for identifying each impact point in the data of the initial framing picture and generating average position coordinates of each impact point relative to the framing frame; and
and generating translation data about the view frame according to the average position coordinate of the impact point and the coordinate of the center of the reticle.
In the present embodiment, after each acquisition of image data of a test shot, the specific position of the impact point is identified; the average position coordinates of the impact points as a whole, relative to the viewfinder frame, can then be calculated. The translation data is generated from these average position coordinates, so the average accuracy of the reticle center as the aiming point in formal shooting is improved.
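The multi-shot averaging step is a simple arithmetic mean over the identified impact points; the sketch below assumes frame-relative coordinates and illustrative names (`mean_impact_point`, `averaged_translation`), neither of which comes from the patent.

```python
def mean_impact_point(points):
    """Arithmetic mean of a list of (x, y) impact points, all given
    relative to the viewfinder frame."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def averaged_translation(points, reticle_center):
    """Translation data driven by the mean impact point instead of a
    single shot, improving accuracy at longer shooting distances."""
    mx, my = mean_impact_point(points)
    return (mx - reticle_center[0], my - reticle_center[1])
```

Three test shots at (10, 20), (14, 24), and (12, 22) average to (12.0, 22.0); with the reticle center at (10, 20), the translation data is (2.0, 2.0).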
Further, when the shooting distance increases, it cannot be ruled out that the impact point of a test shot falls outside the preset range. So that the shooting sighting-center correction method of the present application can still be applied in this situation, the processor is further configured to determine whether translating the viewfinder frame according to the translation data would exceed the maximum picture that can be generated from the initial image data;
if not, the processor is used for translating the viewfinder according to the translation data and generating data related to the corrected viewfinder picture according to the translation data;
if yes, the processor is used for reducing the viewfinder frame according to the translation data and generating data related to the corrected viewfinder picture according to the reduced viewfinder frame.
In this embodiment, a judgment step is added to determine whether the impact point exceeds the preset range. If the impact point is not out of range, the corrected framing picture can maintain the same resolution as the initial framing picture. If the impact point is out of range, the corrected framing picture is displayed at a relatively low resolution, but it is still ensured that the impact point coincides with the center of the reticle.
Further, the shooting sighting center corresponds to a specific installation state of the electronic sighting telescope. Once the electronic sighting telescope is reinstalled, for example, when the sighting telescope is mounted on a new gun or another electronic sighting telescope is mounted on the same gun, the previously corrected sighting center no longer serves its centering function. In order to automatically enter a new shooting sighting-center correction process, the electronic sighting telescope further comprises a position detection device for detecting the installation state of the electronic sighting telescope;
the processor is used for deleting data related to the corrected framing picture when the installation state of the electronic sighting telescope changes.
In this embodiment, the position detection device can detect the mounting state of the electronic sighting telescope relative to a specific gun, for example, whether the electronic sighting telescope is mounted in place, whether it has been remounted, and the like. The position detection device may use a laser, a Hall switch, or contact sensing to detect whether the electronic sighting telescope is mounted in place. If a change in the mounting state of the electronic sighting telescope is detected, the data about the corrected framing picture is deleted, and a new shooting sighting-center correction process is entered automatically.
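Substeps S241/S242 amount to invalidating the stored correction whenever the reported mounting state changes. The sketch below is one possible shape for that logic; the class, the idea of a mounting-state "fingerprint", and all names are assumptions for illustration only.

```python
class SightCorrectionStore:
    """Holds the corrected-framing-picture data and invalidates it when
    the position detection device reports a changed mounting state."""

    def __init__(self, initial_mount_id):
        self.mount_id = initial_mount_id   # fingerprint of the mounting state
        self.corrected_frame = None        # data about the corrected picture

    def save_correction(self, frame):
        self.corrected_frame = frame

    def on_mount_state(self, mount_id):
        # S241: has the installation state changed?
        if mount_id != self.mount_id:
            # S242: delete the corrected-frame data, forcing a new
            # sighting-center correction process for the new setup.
            self.corrected_frame = None
            self.mount_id = mount_id
```

Reporting the same mounting state leaves the correction intact; reporting a different one deletes it.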
Referring again to fig. 1 and 9, the present invention also proposes a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the shooting sighting center correction method described above. For example, the computer program is loaded by a processor and may perform the following steps:
S100: acquiring initial image data containing the impact point;
S120: generating data related to an initial framing picture according to the initial image data and a preset viewfinder frame, wherein the initial framing picture is smaller than the maximum picture that can be generated from the initial image data;
S140: displaying the initial framing picture;
S160: acquiring translation data about the viewfinder frame, wherein the translation data is the difference between the impact point in the initial framing picture and the center of the reticle, and the center of the reticle is aligned with the center of a preset display area;
S180: generating a corrected viewfinder frame from the translation data and accordingly generating data about the corrected framing picture, the corrected framing picture being smaller than the maximum picture that can be generated from the initial image data, the impact point in the corrected framing picture coinciding with the center of the reticle;
S200: displaying the corrected framing picture.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
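In the simple case where the translated frame stays inside the maximum picture, steps S100 through S200 reduce to a short sketch. The conventions here are assumptions for illustration: the viewfinder frame is a `(left, top, width, height)` tuple, the impact point is given in absolute coordinates of the maximum picture, and the reticle center coincides with the frame center.

```python
def correct_sighting(impact_abs, preset_frame):
    """One correction pass: compute the translation data (S160) and
    return the translated viewfinder frame (S180), whose center now
    coincides with the impact point and hence the reticle center."""
    left, top, w, h = preset_frame
    # S160: translation data = impact point minus current frame center.
    dx = impact_abs[0] - (left + w / 2)
    dy = impact_abs[1] - (top + h / 2)
    # S180: translate the frame so its center lands on the impact point.
    return (left + dx, top + dy, w, h)
```

For an impact point at (130, 90) and a preset frame (30, 40, 100, 100), the corrected frame is (80.0, 40.0, 100, 100), whose center is exactly the impact point.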
The computer readable storage medium may be an internal storage unit of the electronic sighting telescope of the foregoing embodiments, such as a hard disk or memory of the electronic sighting telescope. The computer readable storage medium may also be an external storage device of the electronic sighting telescope, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the electronic sighting telescope.
Since the computer program stored in the computer-readable storage medium can execute any shooting sighting-center correction method provided in the embodiments of the present application, it can achieve the beneficial effects achievable by any such method; these are detailed in the foregoing embodiments and are not repeated here.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A shooting sighting center correction method applied to an electronic sighting telescope, characterized by comprising the following steps:
acquiring initial image data containing the impact point;
generating data about an initial viewing picture according to the initial image data and a preset viewing frame, wherein the size of the initial viewing picture is smaller than the maximum picture which can be generated according to the initial image data;
displaying the initial framing picture;
acquiring translation data about the viewfinder, wherein the translation data is a difference value between the impact point in the initial viewfinder picture and the center of a reticle, and the center of the reticle is aligned to the center of a preset display area;
generating a corrected viewfinder frame from the translation data and accordingly generating data about a corrected viewfinder picture, the corrected viewfinder picture having a smaller frame size than a maximum picture that can be generated from the initial image data, the impact point in the corrected viewfinder picture coinciding with the center of the reticle;
and displaying the corrected framing picture.
2. The shooting sighting center correction method according to claim 1, wherein
the reticle is a graphical interface, and the step of obtaining translation data about the view finder further comprises, before the step of obtaining translation data about the view finder:
and displaying the graphical interface of the reticle.
3. The shooting sighting center correction method of claim 2, wherein the graphical interface of the reticle has a plurality of styles; before the step of displaying the graphical interface of the reticle, the method further comprises:
acquiring configuration data about a graphical interface style of a reticle;
the step of displaying the graphical interface of the reticle specifically includes:
and displaying the graphical interface of the reticle according to the style determined by the configuration data.
4. The shooting sighting center correction method according to claim 2 or 3, further comprising, between the step of acquiring initial image data containing the impact point and the step of acquiring translation data about the viewfinder frame:
identifying an impact point in the initial image data and generating position coordinates of the impact point relative to a preset viewing frame;
and generating translation data about the view frame according to the position coordinates of the impact point and the coordinates of the center of the reticle.
5. The shooting sighting center correction method of claim 4, wherein
the step of acquiring initial image data including the impact point specifically includes: acquiring initial image data containing a plurality of impact points;
between the step of acquiring initial image data including a plurality of impact points and the step of acquiring panning data with respect to the finder frame, further comprising:
identifying each impact point in the data of the initial viewing picture and generating average position coordinates of each impact point relative to the viewing frame;
and generating translation data about the view frame according to the average position coordinate of the impact point and the coordinate of the center of the reticle.
6. The shooting sighting center correction method according to claim 1, further comprising, between the step of acquiring translation data about the viewfinder frame and the step of displaying the corrected framing picture:
judging whether the framing frame translated according to the translation data exceeds a maximum picture which can be generated according to the initial image data;
if not, translating the view frame according to the translation data and generating data related to the corrected view frame according to the translation data;
if yes, the viewfinder frame is reduced according to the translation data, and data related to the corrected viewfinder picture is generated according to the reduced viewfinder frame.
7. The shooting sighting center correction method according to claim 1 or 6, further comprising, after the step of displaying the corrected framing picture:
detecting the installation state of the electronic sighting telescope;
when the attachment state of the electronic sighting telescope changes, data on a corrected framing picture is deleted.
8. An electronic sighting telescope comprises an objective lens, a processor, a memory, an image sensor and a display,
the image sensor is used for generating initial image data containing an impact point according to the light processed by the objective lens;
the processor is used for generating data related to an initial framing picture according to the initial image data and a preset framing frame, and the frame of the initial framing picture is smaller than the maximum picture which can be generated according to the initial image data;
the display is used for displaying the initial framing picture;
the processor is used for acquiring translation data about the viewfinder, wherein the translation data is a difference value between the impact point in the initial viewfinder picture and the center of a reticle, and the center of the reticle is aligned with the center of a preset display area;
the processor is used for generating a corrected view frame according to the translation data and accordingly generating data related to a corrected view frame, the size of the corrected view frame is smaller than the maximum frame which can be generated according to the initial image data, and the impact point in the corrected view frame is overlapped with the center of the reticle;
the display is used for displaying the corrected framing picture.
9. The electronic sight of claim 8,
the reticle is a graphical interface, and the display is further used for displaying the graphical interface of the reticle.
10. The electronic sight of claim 9, further comprising an input device, the reticle having a graphical interface with a plurality of patterns;
the input device is used for acquiring configuration data about a graphical interface style of the reticle;
the display is further configured to display a graphical interface of the reticle according to the style determined by the configuration data.
11. The electronic sight of claim 9 or 10,
the processor is further used for identifying the impact point in the initial image data and generating position coordinates of the impact point relative to the preset viewfinder frame; and
and generating translation data about the view frame according to the position coordinates of the impact point and the coordinates of the center of the reticle.
12. The electronic sight of claim 11,
the step of acquiring initial image data including the impact point by the image sensor specifically includes: the image sensor is used for acquiring initial image data comprising a plurality of impact points;
the processor is used for identifying each impact point in the data of the initial framing picture and generating average position coordinates of each impact point relative to the framing frame; and
and generating translation data about the view frame according to the average position coordinate of the impact point and the coordinate of the center of the reticle.
13. The electronic sight of claim 8, wherein the processor is further configured to determine whether translating the viewing frame in accordance with the translation data exceeds a maximum frame that can be generated in accordance with the initial image data;
if not, the processor is used for translating the viewfinder according to the translation data and generating data related to the corrected viewfinder picture according to the translation data;
if yes, the processor is used for reducing the viewfinder frame according to the translation data and generating data related to the corrected viewfinder picture according to the reduced viewfinder frame.
14. The electronic sight according to claim 8 or 13, further comprising position detection means for detecting a mounted state of the electronic sight;
the processor is used for deleting data related to the corrected framing picture when the installation state of the electronic sighting telescope changes.
15. A computer readable storage medium, characterized in that the computer readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the shooting sighting center correction method as claimed in any one of claims 1 to 7.
CN202211607343.5A 2022-12-14 2022-12-14 Shooting sighting center correction method, electronic sighting telescope and computer readable storage medium Pending CN115823948A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211607343.5A CN115823948A (en) 2022-12-14 2022-12-14 Shooting sighting center correction method, electronic sighting telescope and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115823948A true CN115823948A (en) 2023-03-21

Family

ID=85547314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211607343.5A Pending CN115823948A (en) 2022-12-14 2022-12-14 Shooting sighting center correction method, electronic sighting telescope and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115823948A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100236535A1 (en) * 2009-03-20 2010-09-23 Jerry Rucinski Electronic weapon site
CN101975530A (en) * 2010-10-19 2011-02-16 李丹韵 Electronic sighting device and method for regulating and determining graduation thereof
CN104613816A (en) * 2015-01-30 2015-05-13 杭州硕数信息技术有限公司 Digital optical sight and method for achieving target tracking, locking and precise shooting through same
CN109990657A (en) * 2019-05-07 2019-07-09 武汉高德红外股份有限公司 It is a kind of based on image registration without target single-shot school rifle method
CN110081778A (en) * 2019-05-07 2019-08-02 武汉高德红外股份有限公司 It is a kind of based on image procossing without target school rifle method

Similar Documents

Publication Publication Date Title
CA2569721C (en) Electronic sight for firearm, and method of operating same
RU2564217C2 (en) Electronic sighting device and method of its adjustment and calibration detection
US8656628B2 (en) System, method and computer program product for aiming target
US7292262B2 (en) Electronic firearm sight, and method of operating same
US9151570B2 (en) Synchronized elevation trajectory riflescope
US8998085B2 (en) Optical device configured to determine a prey score of antlered prey
KR101501594B1 (en) Method for verifying a surveying instrument's external orientation
US9612115B2 (en) Target-correlated electronic rangefinder
US20050117024A1 (en) Gradient displaying method of mobile terminal
JP2009290548A (en) Image processing apparatus, image processing program, image processing method and electronic device
CN111083458A (en) Brightness correction method, system, equipment and computer readable storage medium
CN110595275A (en) Digital image-based cannon correcting device and method thereof
KR101925289B1 (en) Method and apparatus for identifying location/angle of terminal
CN115823948A (en) Shooting sighting center correction method, electronic sighting telescope and computer readable storage medium
US10096260B2 (en) Golf play assisting system
CN112762763B (en) Visual perception system
CN110772788B (en) Method for correcting sight of shooting game by display equipment
CN111795673A (en) Azimuth angle display method and device
KR20210155931A (en) Method for Aiming Moving Target and Apparatus for Aiming them
JP2000258122A (en) Luminous position standardizing device
KR102202625B1 (en) Method and system for displaying launcher danger zone using argumented reality
KR101121807B1 (en) Method for measuring 3-dimensional coordinate of an object and portable terminal implementing the same
GB2622946A (en) Method of and apparatus for adding digital functionality to a scope
EP3730895A1 (en) Rifle scope adjustment
CN115937301A (en) Double-camera calibration method and double-camera positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination