CN113163112B - Fusion focus control method and system - Google Patents


Info

Publication number
CN113163112B
Authority
CN
China
Prior art keywords
image
video image
target image
focus control
gradient value
Prior art date
Legal status
Active
Application number
CN202110320069.2A
Other languages
Chinese (zh)
Other versions
CN113163112A (en)
Inventor
路新
程勇策
李楠
乔宇辰
袁北
刘文振
叶艳
Current Assignee
Third Research Institute Of China Electronics Technology Group Corp
Original Assignee
Third Research Institute Of China Electronics Technology Group Corp
Priority date
Filing date
Publication date
Application filed by Third Research Institute Of China Electronics Technology Group Corp
Priority to CN202110320069.2A
Publication of CN113163112A
Application granted
Publication of CN113163112B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The scheme discloses a fusion focus control method comprising the following steps: filtering the acquired video image data; obtaining the ratio of the target image size in the video image to the field-of-view range based on a deep learning algorithm; calculating the spatial gradient value of the target image in the video image; obtaining the sharpness of the video image based on the ratio of the target image size to the field-of-view range and on the spatial gradient value; and adjusting the focal length based on the sharpness of the video image. The method adapts its adjustment step size, achieves high precision, and can be widely applied in fields such as video surveillance, detection and forensics, and photoelectric guidance.

Description

Fusion focus control method and system
Technical Field
The invention relates to the technical field of lens focal length adjustment and control, and in particular to a fusion focus control method and system.
Background
When a current optoelectronic platform carrying a visible-light or infrared camera automatically tracks a target, the distance between the target and the camera changes continuously as the target moves. The control software must then be operated manually to perform focusing and zooming so that the target is imaged clearly and stably, making it easy to observe and extract target features. Because focusing and zooming must be adjusted repeatedly while the target is tracked, improper operation blurs the target image and can cause the tracked target to be lost.
In existing auto-focus schemes the adjustment step size is fixed. At long focal lengths the image sharpness function therefore changes only slightly per step while the depth of field is relatively small, so the video quality fluctuates repeatedly and it is hard to focus the target to a degree that satisfies the observer. If the step size is instead set small, the depth of field at short focal lengths is large, the adjustment time increases, and the system cannot respond quickly. Most existing automatic zoom techniques first specify a focal length position and then adjust the control voltage of the zoom motor by continuously detecting the difference between the current focal length and the specified value until the difference approaches zero. When tracking a target, however, the target's distance-change information is unknown and the required focal length changes from moment to moment, so specifying a fixed focal length cannot adapt to a changing scene.
Disclosure of Invention
One object of this scheme is to provide a fusion focus control method that, by computing the size and sharpness characteristics of a tracked target, completes focusing and zooming quickly and accurately. This avoids the low efficiency of manual operation and yields a clear, easily observed target throughout target-motion tracking.
A further object of this scheme is to provide a system for performing the above method.
To achieve these objects, the scheme is as follows:
A fusion focus control method, the method comprising:
S100, filtering the acquired video image data;
S200, obtaining the ratio of the target image size in the video image to the field-of-view range based on a deep learning algorithm;
S300, calculating the spatial gradient value of the target image in the video image;
S400, obtaining the sharpness of the video image based on the ratio of the target image size to the field-of-view range and on the spatial gradient value;
S500, adjusting the focal length based on the sharpness of the video image.
Preferably, the deep learning algorithm is the YOLOv3 algorithm.
Preferably, the ratio of the target image size to the field-of-view range is the ratio of the target image's width and height pixel values to the width and height pixel values of the whole frame image.
Preferably, the spatial gradient value of the target image is extracted with a two-dimensional Laplacian operator applied at the center of the target image (see the sketch following these preferences).
Preferably, adjusting the focal length comprises focusing or zooming.
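For illustration only (the following code is not from the patent), the two-dimensional Laplacian measure mentioned above can be computed by convolving a grayscale patch with the standard 3×3 Laplacian kernel and summing the absolute responses. The function name and the SciPy-based implementation are assumptions of this sketch:

```python
# Minimal sketch of a Laplacian spatial-gradient (sharpness) measure.
# The kernel and summation are standard; the names are illustrative.
import numpy as np
from scipy.signal import convolve2d

LAPLACIAN_KERNEL = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=np.float64)

def laplacian_gradient(patch: np.ndarray) -> float:
    """Return a sharpness value for a grayscale image patch."""
    response = convolve2d(patch.astype(np.float64), LAPLACIAN_KERNEL, mode="valid")
    return float(np.abs(response).sum())
```

A larger return value indicates more high-frequency edge content, i.e. a sharper patch; the patent uses such a value (n) as the focus feedback quantity.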
In a second aspect, there is provided a fusion focus control system, the system comprising:
the image acquisition unit is used for acquiring a video image;
the image processing unit, used for filtering the acquired video image data; obtaining the ratio of the target image size in the video image to the field-of-view range based on a deep learning algorithm; calculating the spatial gradient value of the target image in the video image; and obtaining the sharpness of the video image based on the ratio of the target image size to the field-of-view range and on the spatial gradient value;
and the control unit, which adjusts the focal length based on the sharpness of the video image.
Preferably, the control unit adjusts the focal length based on the sharpness of the video image to achieve focusing or zooming.
Preferably, the image acquisition unit comprises a camera.
Preferably, the camera is a visible light camera or an infrared thermal imaging camera.
In a third aspect, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by one or more computers, causes the one or more computers to perform the operations of any of the methods described above.
The scheme has the following beneficial effects:
compared with the prior art, the method has the advantages that when the photoelectric monitoring equipment tracks the target, the ratio of the size of the target to the field range and the airspace gradient value are calculated in real time, the focusing and zooming operations are finished quickly and accurately in a self-adaptive mode, the tracked target is clear, the use efficiency of the photoelectric monitoring equipment is improved, and the method has the advantages of self-adaptive adjustment of the step size and high precision.
Drawings
In order to illustrate the solution more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below relate to only some embodiments of the solution, and other drawings may be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a block flow diagram of the method of the present application;
FIG. 2 is a schematic diagram of the system architecture of the present application;
FIG. 3 is a schematic diagram of an image processing unit according to an embodiment;
FIG. 4 is a schematic diagram of the control unit of the embodiment;
FIG. 5 is a flowchart of an embodiment of an image processing unit process;
FIG. 6 is a flowchart illustrating an adaptive lens focusing and zooming process according to an embodiment.
Detailed Description
Embodiments of the present solution are described in further detail below with reference to the accompanying drawings. Clearly, the described embodiments are only some of the embodiments of the present solution, not an exhaustive list of all of them. It should be noted that, where no conflict arises, the embodiments and the features of the embodiments may be combined with each other.
The terms "first," "second," and the like in the description and in the claims, and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The inventors of this application observe that most existing automatic zoom techniques first specify a focal length position and then adjust the control voltage of the zoom motor by continuously detecting the difference between the current focal length and the specified value until the difference approaches zero. When tracking a target, however, the target's distance-change information is unknown and the required focal length changes from moment to moment, so specifying a fixed focal length cannot adapt to a changing scene. The inventors therefore propose a fusion focus control method and system.
As shown in fig. 1, a fusion focus control method comprises the following steps:
S100, filtering the acquired video image data;
S200, obtaining the ratio of the target image size in the video image to the field-of-view range based on a deep learning algorithm;
S300, calculating the spatial gradient value of the target image in the video image;
S400, obtaining the sharpness of the video image based on the ratio of the target image size to the field-of-view range and on the spatial gradient value;
S500, adjusting the focal length of the lens based on the sharpness of the video image.
In one embodiment, upon receiving an instruction the image processing unit begins tracking the target: it collects a frame of video through the video acquisition interface, applies convolution filtering to the data, computes the ratio of the target image size to the field-of-view range with the deep-learning YOLOv3 algorithm, extracts the spatial gradient value with a two-dimensional Laplacian operator at the center of the target image, and sends the values to the control unit over the CAN bus.
After receiving the focusing and zooming data, the control unit runs the adaptive focusing/zooming drive-voltage calculation routine, adaptively computes the focusing and zooming voltages from the received size ratio and spatial-domain gradient value, and drives the camera lens to focus and zoom until both voltages approach 0, completing the focusing and zooming operation.
As shown in fig. 2, a fusion focus control system includes an image acquisition unit 10, an image processing unit 20, and a control unit 30.
The image processing unit 20 obtains continuous video images from the image acquisition unit 10 (for example, from a visible-light camera or an infrared thermal imaging camera). It uses a deep learning algorithm to obtain the ratio of the target image's width and height pixel values to those of the whole frame, uses the spatial gradient as the evaluation criterion of image sharpness to detect the image's high-frequency edge information, and sends the results to the control unit 30. The control unit 30 receives the focusing and zooming control data from the image processing unit 20 and adaptively adjusts the control voltages of the focusing and zooming motors according to the magnitude and polarity of the control quantities until the voltages reach 0, realizing the focusing and zooming functions. The camera shoots the scene and outputs video in HD-SDI or PAL format. The method adapts its adjustment step size, achieves high precision, and can be widely applied in fields such as video surveillance, detection and forensics, and photoelectric guidance.
In one embodiment, as shown in fig. 3, the image processing unit 20 is an image processing circuit built around a TMS320C6416 digital signal processor and a Xilinx V5 FPGA. It runs the program shown in fig. 5 for obtaining the target image size in the video image and calculating the image spatial gradient, thereby obtaining the ratio of the target image size to the field-of-view range and the spatial gradient value. An SJA1000 is used as the CAN bus interface chip to send both results to the control unit 30.
the method for acquiring the size of a target image in a video image and calculating the spatial gradient of the image comprises the following steps:
(1) when tracking starts, acquiring a frame of image, filtering the obtained pixel value of the whole frame, and reducing noise;
(2) detecting a boundary frame of a target image frame by using a deep learning YOLOv3 algorithm to obtain wide and high pixel values of the target image;
(3) respectively calculating the ratio of the width and height pixel values of the target image to the width and height pixel values of the whole frame image;
(4) selecting the larger value of the width-height ratio as a zooming control value, and recording as m;
(5) selecting a region with the width of 0.8 × 0.8 at the center of the target image, and calculating a gradient value by using a two-dimensional Laplacian operator, and marking the gradient value as n;
(6) and sending the m and n values through a CAN bus.
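The following compact sketch illustrates steps (1) to (6); it is an assumption of this description, not code from the patent. In particular, detect_target() is a hypothetical stand-in for the YOLOv3 bounding-box detector, and laplacian_gradient() is the measure sketched earlier:

```python
# Hypothetical sketch of steps (1)-(6); detect_target() stands in for
# the YOLOv3 detector and is not part of the patent.
import cv2

def compute_control_values(frame_gray, detect_target):
    # (1) filter the whole frame to reduce noise
    frame = cv2.GaussianBlur(frame_gray, (3, 3), 0)
    # (2) bounding box of the target: (x, y, w, h) in pixels
    x, y, w, h = detect_target(frame)
    # (3)/(4) zoom control value m: larger of the width and height ratios
    frame_h, frame_w = frame.shape
    m = max(w / frame_w, h / frame_h)
    # (5) central region, 0.8x the target width and height
    cx, cy = x + w // 2, y + h // 2
    half_w, half_h = int(0.8 * w) // 2, int(0.8 * h) // 2
    patch = frame[cy - half_h:cy + half_h, cx - half_w:cx + half_w]
    n = laplacian_gradient(patch)  # sharpness measure from the earlier sketch
    # (6) m and n would then be sent to the control unit over the CAN bus
    return m, n
```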
In one embodiment, as shown in fig. 4, the control unit 30 uses a TMS320F2808 control chip and an L298P driver chip as its core devices. It runs the lens focusing/zooming adaptive drive-voltage calculation program shown in fig. 6, receives the focusing and zooming control data computed by the image processing unit 20 over the CAN bus, adjusts the drive voltages, and drives the focusing and zooming motors. It also receives the lens limit signals: if the focus-far limit signal is active, the focus motor can only move in the focus-near direction, and vice versa; zoom control is handled the same way.
The adaptive drive-voltage calculation for lens focusing and zooming proceeds as follows (a sketch of this control law is given after the list):
(1) to prevent the motors from moving too frequently and to suppress jitter, the m and n values must be received three consecutive times;
(2) when m < 0.6 and the zoom-far limit is not active, a zoom operation is performed with zoom voltage U1 = K(m - 0.5), where K is the adaptive adjustment gain, proportional to |m - 0.5|; when m < 0.6 and the zoom-far limit is active, U1 = 0 and zooming stops;
(3) when m > 0.7 and the zoom-near limit is not active, a zoom-out operation is performed with zoom voltage U1 = K(m - 0.5), where K is the adaptive adjustment gain, proportional to |m - 0.5|; when m > 0.7 and the zoom-near limit is active, U1 = 0 and zooming stops;
(4) when 0.6 <= m <= 0.7, U1 = 0 and no zoom operation is performed;
(5) take the current gradient value H(i) and the previous gradient value H(i-1), and record e(i) = H(i) - H(i-1);
(6) when |e(i)| exceeds the dead-zone threshold, e(i) is positive, and the focus-far limit is not active, a focus-far operation is performed; otherwise the focus voltage U2 = 0;
(7) when |e(i)| exceeds the dead-zone threshold, e(i) is negative, and the focus-near limit is not active, a focus-near operation is performed; otherwise the focus voltage U2 = 0;
(8) the focus voltage is U2 = KK × e(i), where KK is the adaptive adjustment gain, proportional to |e(i)|;
(9) apply the zoom voltage U1 and the focus voltage U2 as the zoom PWM and focus PWM outputs.
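A minimal sketch of this control law follows, under the simplifying assumption of constant gains K and KK and an illustrative dead-zone threshold (the patent makes both gains adaptive, proportional to |m - 0.5| and |e(i)| respectively):

```python
# Sketch of the drive-voltage law in steps (1)-(9). The constant gains and
# dead-zone value are illustrative placeholders, not values from the patent.
def drive_voltages(m, h_curr, h_prev, limits, K=1.0, KK=1.0, dead_zone=0.05):
    """Return (zoom voltage U1, focus voltage U2).

    limits: dict of booleans {"zoom_far", "zoom_near", "focus_far", "focus_near"}.
    """
    # Steps (2)-(4): zoom voltage from the size ratio m
    if m < 0.6 and not limits["zoom_far"]:
        u1 = K * (m - 0.5)   # target too small in frame: zoom toward telephoto
    elif m > 0.7 and not limits["zoom_near"]:
        u1 = K * (m - 0.5)   # target too large in frame: zoom toward wide angle
    else:
        u1 = 0.0             # 0.6 <= m <= 0.7, or a limit is active

    # Steps (5)-(8): focus voltage from the gradient change e(i)
    e = h_curr - h_prev
    u2 = 0.0
    if abs(e) > dead_zone:
        if e > 0 and not limits["focus_far"]:
            u2 = KK * e      # sharpness rising: focus far
        elif e < 0 and not limits["focus_near"]:
            u2 = KK * e      # sharpness falling: focus near
    # Step (9): U1 and U2 drive the zoom and focus PWM outputs
    return u1, u2
```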
Compared with the prior art, when the optoelectronic monitoring equipment tracks a target, the ratio of the target image size to the field-of-view range and the spatial-domain gradient value of the target image are calculated in real time, so focusing and zooming are completed quickly, accurately, and adaptively. The tracked target stays clear, the usage efficiency of the optoelectronic monitoring equipment is improved, and the method offers an adaptive adjustment step size and high precision.
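Putting the two sketches above together, the closed loop described in this embodiment can be outlined as follows; every helper name here is hypothetical, and the CAN transport between the two units is abstracted away:

```python
# Illustrative closed-loop outline built from the two sketches above.
# (The patent additionally requires receiving m and n three consecutive
# times before moving the motors; that check is omitted here.)
def track_and_focus(grab_frame, detect_target, apply_voltages, read_limits):
    h_prev = 0.0
    while True:
        m, n = compute_control_values(grab_frame(), detect_target)
        u1, u2 = drive_voltages(m, n, h_prev, read_limits())
        apply_voltages(u1, u2)        # drive the zoom and focus motors
        if abs(u1) < 1e-3 and abs(u2) < 1e-3:
            break                     # voltages approach 0: focus/zoom done
        h_prev = n
```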
The readable storage medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable storage medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Various changes and modifications will be apparent to those skilled in the art on the basis of the above description; it is neither necessary nor possible to exhaust all embodiments here, and obvious changes and modifications can still be made on the basis of the technical solutions of the present invention.

Claims (9)

1. A fusion focus control method, comprising:
S100, filtering the acquired video image data;
S200, obtaining the ratio of the target image size in the video image to the field-of-view range based on a deep learning algorithm;
S300, calculating a spatial gradient value of the target image in the video image, wherein the spatial gradient value of the target image is extracted with a two-dimensional Laplacian operator at the center of the target image;
S400, obtaining the sharpness of the video image based on the ratio of the target image size to the field-of-view range and on the spatial gradient value;
S500, adjusting the focal length based on the sharpness of the video image.
2. A fusion focus control method as recited in claim 1, wherein the deep learning algorithm is the YOLOv3 algorithm.
3. A fusion focus control method as recited in claim 1, wherein the ratio of the target image size to the field-of-view range is the ratio of the target image's width and height pixel values to the width and height pixel values of the whole frame image.
4. A fusion focus control method as in claim 1, wherein the adjusting the focal length comprises focusing or zooming.
5. A fusion focus control system, comprising:
an image acquisition unit, used for acquiring a video image;
an image processing unit, used for filtering the acquired video image data; obtaining the ratio of the target image size in the video image to the field-of-view range based on a deep learning algorithm; calculating the spatial gradient value of the target image in the video image, wherein the spatial gradient value of the target image is extracted with a two-dimensional Laplacian operator at the center of the target image; and obtaining the sharpness of the video image based on the ratio of the target image size to the field-of-view range and on the spatial gradient value;
and a control unit, which adjusts the focal length based on the sharpness of the video image.
6. A fusion focus control system as claimed in claim 5, wherein the control unit adjusts the focal length based on the sharpness of the video image to achieve focusing or zooming.
7. A fusion focus control system as recited in claim 5, wherein the image acquisition unit comprises a camera.
8. A fusion focus control system as claimed in claim 7 wherein the camera is a visible light camera or an infrared thermal imaging camera.
9. A computer-readable storage medium having stored thereon a computer program that, when executed by one or more computers, causes the one or more computers to perform the operations of the method of any one of claims 1 to 4.
CN202110320069.2A 2021-03-25 2021-03-25 Fusion focus control method and system Active CN113163112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110320069.2A CN113163112B (en) 2021-03-25 2021-03-25 Fusion focus control method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110320069.2A CN113163112B (en) 2021-03-25 2021-03-25 Fusion focus control method and system

Publications (2)

Publication Number Publication Date
CN113163112A CN113163112A (en) 2021-07-23
CN113163112B (en) 2022-12-13

Family

ID=76884762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110320069.2A Active CN113163112B (en) 2021-03-25 2021-03-25 Fusion focus control method and system

Country Status (1)

Country Link
CN (1) CN113163112B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117459830B (en) * 2023-12-19 2024-04-05 北京搜狐互联网信息服务有限公司 Automatic zooming method and system for mobile equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107240078A (en) * 2017-06-06 2017-10-10 广州优创电子有限公司 Lens articulation Method for Checking, device and electronic equipment
CN110278383A (en) * 2019-07-25 2019-09-24 浙江大华技术股份有限公司 Focus method, device and electronic equipment, storage medium
CN110785993A (en) * 2018-11-30 2020-02-11 深圳市大疆创新科技有限公司 Control method and device of shooting equipment, equipment and storage medium
CN111526286A (en) * 2020-04-20 2020-08-11 苏州智感电子科技有限公司 Method and system for controlling motor motion and terminal equipment
CN112333383A (en) * 2020-10-27 2021-02-05 浙江华创视讯科技有限公司 Automatic focusing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9720089B2 (en) * 2012-01-23 2017-08-01 Microsoft Technology Licensing, Llc 3D zoom imager


Also Published As

Publication number Publication date
CN113163112A (en) 2021-07-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant