CN108419018B - Focusing camera and method - Google Patents
- Authority
- CN
- China
- Prior art keywords
- dsp controller
- clamped
- motor driver
- lens
- transmission shaft
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The focusing camera comprises a lens barrel, in which a focus adjusting piece is arranged, with a lens arranged in the focus adjusting piece; a DSP controller is clamped in the lens barrel. The focus adjusting piece comprises an outer cylinder; a first linear motor is fixed in the outer cylinder, with a first transmission shaft at its end, and a second linear motor is fixed in the outer cylinder, with a second transmission shaft at its end. A sliding block is clamped at the end of the first transmission shaft, and a first propelling piece is clamped on the sliding block; a second propelling piece is clamped at the end of the second transmission shaft; a sliding rod penetrates the first propelling piece; a linear displacement sensor is clamped on the first propelling piece. The first linear motor is electrically connected to a first motor driver, and the second linear motor to a second motor driver. The DSP controller comprises a video people-number recognizer, which comprises a neural network design module, an image data enhancement module, a crowd density map module and a training module; the DSP controller controls the motors to realize focal length adjustment.
Description
Technical Field
The invention relates to the technical field of video processing, in particular to a focusing camera and a method.
Background
When taking a picture or capturing video, a traditional camera either lacks automatic focal length adjustment or adjusts the focal length poorly, so the captured image or video has low resolution and the picture quality cannot meet requirements.
the convolutional neural network is developed rapidly in image processing, neural networks with various architectures are diversified, and the convolutional neural network can be used for estimating the number of people in a high-density scene by designing a precise neural network structure. Public places such as railway stations, places with dense people flow such as gymnasiums can regulate and control people flow to real-time monitoring crowd number so as to avoid the occurrence of events threatening personal safety such as treading and the like, and has great significance for improving public safety.
Traditional crowd counting algorithms require complex image preprocessing in the early stage as well as manual feature design and extraction; they adapt poorly to different scenes, and severe occlusion and perspective distortion make them ineffective in high-density crowd scenes.
Deep learning, by designing a convolutional neural network, accepts pictures of different sizes directly, without preprocessing such as foreground segmentation or manual feature design and extraction. The network can be trained end to end, automatically learns high-level semantic features, and can alternately regress the crowd density and the total crowd count of image blocks to realize people-number estimation.
Therefore, it is desirable to provide a camera capable of automatically adjusting the focal length to capture high-quality video and effectively estimating the number of people in the video.
Disclosure of Invention
The invention aims to provide a focusing camera and a method, which are used for solving the problems that the focusing of the existing camera is inconvenient and the estimation of the number of people is inaccurate.
In order to achieve this purpose, the technical scheme of the invention is as follows.
A focusing camera comprises a lens barrel, wherein a focus adjusting piece is screwed in the inner thread of the lens barrel, and a lens is arranged in the focus adjusting piece in a sliding manner; a DSP controller is connected in the lens cone in a clamping manner and is electrically connected with the focal length adjusting piece; the focal length adjusting piece comprises a cylindrical outer cylinder;
a first linear motor is fixed on the inner wall of one end of the outer cylinder through a bolt, a first transmission shaft is screwed at the end part of the first linear motor, and the first transmission shaft extends along the axial direction of the outer cylinder; a second linear motor is fixed on the inner wall of the other end of the outer cylinder through a bolt, a second transmission shaft is screwed at the end part of the second linear motor, and the second transmission shaft extends along the axial direction of the outer cylinder; the first transmission shaft and the second transmission shaft extend into the outer barrel and are arranged oppositely;
a sliding block is clamped at the other end of the first transmission shaft, and a circular first propelling piece is clamped on the sliding block; the lens is slidably arranged in the first propelling piece in a penetrating way;
a second pushing piece is clamped at the other end of the second transmission shaft and is positioned on one side of the lens, which is far away from the first pushing piece, and is tightly pressed at the edge of the lens;
a sliding rod penetrates through the edge of the first propelling piece in a sliding mode, and the sliding rod is vertically clamped on the second propelling piece in a clamping mode;
a linear displacement sensor is clamped on the first propelling piece and is electrically connected with the DSP controller;
a first motor driver is electrically connected to the first linear motor, a second motor driver is electrically connected to the second linear motor, and the first motor driver and the second motor driver are both electrically connected to the DSP controller;
the DSP controller comprises a video people number recognizer, and the video people number recognizer comprises a neural network design module, an image data enhancement module, a crowd density map module and a training module.
Wherein the first pusher comprises a cylindrical housing; a telescopic hole is arranged in the shell along the axial direction of the shell in a penetrating way; a baffle ring is clamped at the end part of the shell, and a light hole is arranged in the baffle ring in a penetrating way; the telescopic hole is communicated with the light hole, and the diameter of the light hole is smaller than that of the telescopic hole;
a sliding guide groove extending along the axial direction of the shell is concavely arranged on the inner wall of the shell; a first spring is arranged in the sliding guide groove; one end of the first spring hooks the side wall of the baffle ring, and the other end of the first spring is pressed on the lens; and a stop block is clamped on the inner wall of the shell and is positioned at one end of the sliding guide groove far away from the stop ring.
The second propelling part comprises an arc-shaped baffle, and an adjusting groove is concavely arranged at the edge of the upper side of the baffle and extends along the arc direction of the baffle;
an adjusting seat is clamped in the adjusting groove, a rectangular integrated board is clamped at the top end of the adjusting seat, and a circular clamping ring is clamped on the side wall of the integrated board; triangular prism-shaped clamping teeth are uniformly clamped on the inner wall of the clamping ring; the clamping ring is matched with the sliding rod; the baffle is pressed on the end part of the lens.
The camera further comprises an optical filter clamped in the lens barrel and positioned between the DSP controller and the focal length adjusting piece; it further comprises an image sensor clamped in the lens barrel and positioned between the DSP controller and the optical filter.
The neural network design module is used for designing a convolutional neural network with complementary depth;
the neural network design module comprises a first row of deep layer network design units, a second row of shallow layer network design units, an alternative unit and a network fusion estimation unit;
the first column of deep network design unit is used for designing a first column of deep networks, the first column of deep networks comprise 13 convolutional layers, the sizes of convolutional kernels are 3 multiplied by 3, and the convolutional layers are activated by using a linear correction unit function after being convolved;
the second row of shallow layer network design units are used for designing a second row of deep layer networks, each second row of deep layer networks comprises 3 convolutional layers, the sizes of convolutional kernels are 5 multiplied by 5, and the convolutional layers are activated by using linear correction unit functions after being convolved;
the alternating unit is used for inputting the output of the second row of shallow layer networks into the first row of deep layer networks, and outputting the output after processing through an average value pooling layer and a convolution layer;
the network fusion estimation unit is used for connecting the first row of deep networks and the second row of deep networks together and then performing 1 x 1 convolutional layer processing, so that the 1 x 1 convolutional layer is used for replacing a full connection layer, two rows of networks are fused, the networks become full convolutional networks, and input of pictures with various scales is received and estimated density maps are output.
The image data enhancement module is used for realizing image data enhancement by utilizing angle rotation of an image, multi-scale scaling of the image, image mirroring and cutting and scaling in an image pyramid mode.
The crowd density map module is used for obtaining a real crowd density map by processing an existing public data set through Gaussian kernel fuzzy normalization.
The training module is used for training the convolutional neural network with complementary depth by using the processed sample set to obtain a network model.
A focusing method for the focusing camera comprises the following steps:
Step 1: the DSP controller presets an image resolution and a total length of one-way movement;
Step 2: the DSP controller sends an acquisition command to the lens; the lens acquires an image and sends it to the DSP controller;
Step 3: the DSP controller analyses the resolution of the received image and compares the actual image resolution with the preset image resolution; if the actual image resolution is smaller than the preset image resolution, the DSP controller sends a first movement command to the first motor driver and the second motor driver; the DSP controller presets a linear moving distance;
Step 4: the first motor driver drives the first linear motor in one direction, and the second motor driver drives the second linear motor in the same direction; the first transmission shaft and the second transmission shaft move in that direction and drive the lens with them; the linear displacement sensor measures the actual moving distance of the lens in real time and sends it to the DSP controller;
Step 5: if the actual moving distance is smaller than the preset linear moving distance, the DSP controller sends a continue-moving command to the first motor driver and the second motor driver; if the actual moving distance equals the preset linear moving distance, the DSP controller sends a stop-moving command to the first motor driver and the second motor driver; the DSP controller accumulates the actual moving distance;
Step 6: if the accumulated actual moving distance is smaller than the total length of one-way movement, return to step 2; if it is greater than or equal to the total length of one-way movement, the DSP controller sends a reverse-movement command to the first motor driver and the second motor driver;
Step 7: the first motor driver drives the first linear motor in reverse, and the second motor driver drives the second linear motor in reverse; the first transmission shaft and the second transmission shaft move in the other direction and drive the lens; the linear displacement sensor measures the actual moving distance of the lens in real time and sends it to the DSP controller;
Step 8: if the actual moving distance equals the preset linear moving distance, execute step 2;
Step 9: if the actual image resolution is greater than or equal to the preset image resolution, the DSP controller sends a stop-moving command to the first motor driver and the second motor driver.
The invention has the following advantages:
the focusing camera comprises a lens barrel, wherein a focus adjusting piece is screwed in the inner thread of the lens barrel, and a lens is arranged in the focus adjusting piece in a sliding manner; a DSP controller is connected in the lens cone in a clamping manner and is electrically connected with the focal length adjusting piece; the focal length adjusting piece comprises a cylindrical outer cylinder;
a first linear motor is fixed on the inner wall of one end of the outer cylinder through a bolt, a first transmission shaft is screwed at the end part of the first linear motor, and the first transmission shaft extends along the axial direction of the outer cylinder; a second linear motor is fixed on the inner wall of the other end of the outer cylinder through a bolt, a second transmission shaft is screwed at the end part of the second linear motor, and the second transmission shaft extends along the axial direction of the outer cylinder; the first transmission shaft and the second transmission shaft extend into the outer barrel and are arranged oppositely;
a sliding block is clamped at the other end of the first transmission shaft, and a circular first propelling piece is clamped on the sliding block; the lens is slidably arranged in the first propelling piece in a penetrating way;
a second pushing piece is clamped at the other end of the second transmission shaft and is positioned on one side of the lens, which is far away from the first pushing piece, and is tightly pressed at the edge of the lens;
a sliding rod penetrates through the edge of the first propelling piece in a sliding mode, and the sliding rod is vertically clamped on the second propelling piece in a clamping mode;
a linear displacement sensor is clamped on the first propelling piece and is electrically connected with the DSP controller;
a first motor driver is electrically connected to the first linear motor, a second motor driver is electrically connected to the second linear motor, and the first motor driver and the second motor driver are both electrically connected to the DSP controller;
the DSP controller comprises a video people number recognizer, and the video people number recognizer comprises a neural network design module, an image data enhancement module, a crowd density map module and a training module;
the DSP controller drives the first linear motor and the second linear motor by sending a rotation command so as to drive the lens to move, so that the focal length adjustment is realized to shoot a high-resolution image;
the neural network design module designs a convolutional neural network with complementary depth; the image data enhancement module utilizes the image to realize image data enhancement by angle rotation, multi-scale scaling of the image, mirroring of the image and pyramid scaling of the image; the crowd density graph module utilizes Gaussian kernel fuzzy normalization to process an existing public data set to obtain a real crowd density graph; and the training module trains the depth-complementary convolutional neural network by using the processed sample set to obtain a network model.
Drawings
Fig. 1 is a schematic structural diagram of a focusing camera of the present invention.
Fig. 2 is a schematic structural view of a focus adjusting member of the present invention.
Fig. 3 is a schematic structural view of the first urging member of the present invention.
Fig. 4 is a schematic view of the structure of the second propulsion member of the present invention.
1-a lens barrel; 2-a lens; 3-a focus adjustment; 31-a first linear motor; 32-a first drive shaft; 33-an outer cylinder; 34-a slide block; 35-a slide bar; 36-a second drive shaft; 37-a second linear motor; 38-a second pusher; 381-an integrated board; 382-a snap ring; 383-a latch tooth; 384-an adjustment seat; 385-adjusting tank; 386-baffle plates; 39-a first pusher; 391-a housing; 392-a stop; 393-telescopic holes; 394-light hole; 395-a first spring; 396-sliding guide groove; 397-stop ring; 310-linear displacement sensor; 4-an optical filter; 5-an image sensor; 6-DSP controller.
Detailed Description
The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example 1
As shown in fig. 1 to 4, a focusing camera according to embodiment 1 of the present invention includes a lens barrel 1, a focal length adjusting member 3 screwed to an internal thread of the lens barrel 1, and a lens 2 slidably disposed in the focal length adjusting member 3; a DSP controller 6 is connected in the lens barrel 1 in a clamping manner, and the DSP controller 6 is electrically connected with the focal length adjusting part 3; the focus adjusting member 3 includes a cylindrical outer cylinder 33; a first linear motor 31 is fixed to an inner wall of one end of the outer cylinder 33 by bolts, a first transmission shaft 32 is screwed to an end of the first linear motor 31, and the first transmission shaft 32 extends in an axial direction of the outer cylinder 33; a second linear motor 37 is fixed on the inner wall of the other end of the outer cylinder 33 by bolts, a second transmission shaft 36 is screwed on the end part of the second linear motor 37, and the second transmission shaft 36 extends along the axial direction of the outer cylinder 33; the first transmission shaft 32 and the second transmission shaft 36 both extend into the outer cylinder 33 and are arranged oppositely;
a slide block 34 is clamped at the other end of the first transmission shaft 32, and a circular first propelling part 39 is clamped on the slide block 34; the lens 2 is slidably arranged in the first propelling part 39 in a penetrating way; a second pushing member 38 is clamped at the other end of the second transmission shaft 36, and the second pushing member 38 is located on one side of the lens 2 far away from the first pushing member 39 and is pressed at the edge of the lens 2; a sliding rod 35 is arranged at the edge of the first propelling part 39 in a sliding and penetrating manner, and the sliding rod 35 is vertically clamped on the second propelling part 38; a linear displacement sensor 310 is clamped on the first pushing element 39, and the linear displacement sensor 310 is electrically connected with the DSP controller 6; a first motor driver is electrically connected to the first linear motor 31, a second motor driver is electrically connected to the second linear motor 37, and both the first motor driver and the second motor driver are electrically connected to the DSP controller 6;
the DSP controller comprises a video people number recognizer, and the video people number recognizer comprises a neural network design module, an image data enhancement module, a crowd density map module and a training module.
The first pusher 39 comprises a cylindrical housing 391; a telescopic hole 393 is arranged in the shell 391 along the axial direction; a baffle ring 397 is clamped at the end part of the shell 391, and a light-transmitting hole 394 is arranged in the baffle ring 397 in a penetrating way; the telescopic hole 393 is communicated with the light transmitting hole 394, and the diameter of the light transmitting hole 394 is smaller than that of the telescopic hole 393; a slide guide groove 396 extending along the axial direction is concavely formed on the inner wall of the housing 391; a first spring 395 is arranged in the sliding guide groove 396; one end of the first spring 395 hooks the side wall of the baffle ring 397, and the other end of the first spring is pressed on the lens 2; a stopper 392 is clamped on the inner wall of the housing 391, and the stopper 392 is located at one end of the slide guide groove 396 far away from the stopper ring 397.
The second propelling part 38 comprises a circular arc-shaped baffle 386, wherein an adjusting groove 385 is concavely arranged at the upper edge of the baffle 386, and the adjusting groove 385 extends along the circular arc direction of the baffle 386; an adjusting seat 384 is clamped in the adjusting groove 385, a rectangular integrated board 381 is clamped at the top end of the adjusting seat 384, and a circular clamping ring 382 is clamped on the side wall of the integrated board 381; triangular prism-shaped clamping teeth 383 are uniformly clamped on the inner wall of the clamping ring 382; the snap ring 382 fits into the slide rod 35; the baffle 386 is pressed against the end of the lens 2.
The lens barrel further comprises an optical filter 4 clamped in the lens barrel 1 and positioned between the DSP controller 6 and the focal length adjusting piece 3; the lens barrel further comprises an image sensor 5 which is clamped in the lens barrel 1 and is positioned between the DSP controller 6 and the optical filter 4.
The DSP controller 6 drives the first linear motor 31 and the second linear motor 37 by sending a rotation command, so as to drive the lens 2 to move, thereby implementing a focal length adjustment to capture a high-resolution image;
example 2
Further, on the basis of example 1:
the neural network design module is used for designing a convolutional neural network with complementary depth;
the neural network design module comprises a first row of deep layer network design units, a second row of shallow layer network design units, an alternative unit and a network fusion estimation unit;
the first-row deep network design unit is used for designing a first-row deep network, the first-row deep network comprises 13 convolutional layers, the sizes of convolutional kernels are 3 multiplied by 3, the convolutional layers are activated by using a linear correction unit function after being convolved, the network is sparse, the parameter interdependence is reduced, and the over-fitting problem is relieved;
the second row of shallow layer network design units are used for designing a second row of deep layer networks, each second row of deep layer networks comprises 3 convolutional layers, the sizes of convolutional cores are 5 multiplied by 5, the convolutional layers are activated by using a linear correction unit function after being convolved, pooling processing is carried out after activation, the rows are processed by using average value pooling (AvgPool), the sizes of pooled windows are 5 multiplied by 5, and the step length is 1;
the alternating unit is used for inputting the output of the second row of shallow layer networks into the first row of deep layer networks, and outputting the output after processing through an average value pooling layer and a convolution layer;
the network fusion estimation unit is used for connecting the first row of deep networks and the second row of deep networks together and then performing 1 x 1 convolutional layer processing, so that the 1 x 1 convolutional layer is used for replacing a full connection layer, two rows of networks are fused, the networks become full convolutional networks, and input of pictures with various scales is received and estimated density maps are output.
The image data enhancement module is used for realizing image data enhancement through angle rotation of the image, multi-scale scaling of the image, image mirroring, and cropping and scaling in an image pyramid. The input image is rotated in 5° increments, 5° to the left and 5° to the right, expanding the image data to 3 times; scaling the input image by factors of 0.6, 0.9 and 1.4 expands the image data to 12 times; mirroring the input image expands the image data to 24 times; to make the network more robust to changes in the size of the input image, pyramid scaling is adopted over the range 0.6 to 1.3 times the original image at intervals of 0.1, giving 8 scales, so that 24 × 8 expands the image data to 192 times. With this processing, the size of a person in the image no longer greatly affects the network's recognition.
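The expansion factors compose multiplicatively; the small sketch below (with hypothetical variable names) reproduces the 3× → 12× → 24× → 192× arithmetic:

```python
# Each augmentation multiplies the number of training images.
base = 1
rotated = base * 3          # original, +5 deg, -5 deg rotations
scaled = rotated * 4        # x1.0 plus 0.6x, 0.9x, 1.4x rescales -> 12x
mirrored = scaled * 2       # horizontal mirror -> 24x
# Pyramid: scales 0.6, 0.7, ..., 1.3 in steps of 0.1 -> 8 scales
pyramid_scales = [round(0.6 + 0.1 * i, 1) for i in range(8)]
total = mirrored * len(pyramid_scales)   # 24 x 8 = 192
print(scaled, mirrored, len(pyramid_scales), total)  # 12 24 8 192
```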
The crowd density map module is used for processing the existing public data set by utilizing Gaussian kernel fuzzy normalization to obtain a real crowd density map.
Because the data sets are annotated manually, the labeling results of different annotators differ slightly, so a Gaussian kernel is used for blurred normalization. For an annotated image x, the real density map (the density map Ground Truth after Gaussian-kernel blurred normalization) is

F(x) = Σ_{i=1}^{M} δ(x − x_i) ∗ G_{σ_i}(x), with σ_i = β · d̄_i,

where M represents the number of people in image x, x represents the position of each pixel in the input image, x_i represents the annotated position of the i-th person, G_{σ_i} represents a Gaussian kernel with standard deviation σ_i, β is a constant (empirical value 0.298), and d̄_i represents the average distance between the annotated position of the i-th person and the annotated positions of the 10 persons nearest to it, i.e. d̄_i = (1/10) Σ_{j=1}^{10} d_{ij}, where d_{ij} is the distance from the i-th person to the j-th of those 10 nearest persons.
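A minimal pure-Python sketch of this geometry-adaptive density map follows. The grid size and head positions are hypothetical, `density_map`, `beta` and `k` are illustrative names, and each Gaussian is normalised over the discrete grid so that every person contributes exactly 1 and the map sums to the head count:

```python
import math

def density_map(points, h, w, beta=0.298, k=10):
    """Geometry-adaptive Gaussian density map: each annotated head point
    contributes a Gaussian whose sigma is beta times the mean distance to
    its k nearest neighbours; the map sums to the number of heads."""
    dmap = [[0.0] * w for _ in range(h)]
    for i, (xi, yi) in enumerate(points):
        dists = sorted(math.hypot(xi - xj, yi - yj)
                       for j, (xj, yj) in enumerate(points) if j != i)
        nearest = dists[:k]
        mean_d = sum(nearest) / len(nearest) if nearest else 1.0
        sigma = max(beta * mean_d, 1e-3)
        kern = [[math.exp(-((x - xi) ** 2 + (y - yi) ** 2) / (2 * sigma ** 2))
                 for x in range(w)] for y in range(h)]
        total = sum(map(sum, kern))
        for y in range(h):
            for x in range(w):
                dmap[y][x] += kern[y][x] / total   # each person integrates to 1
    return dmap

heads = [(5, 5), (20, 8), (12, 20)]        # hypothetical annotated positions
dm = density_map(heads, h=30, w=30)
count = sum(map(sum, dm))
print(round(count))  # 3
```

Summing the estimated density map is exactly how the people count is read out at inference time, which is why normalising each kernel to unit mass matters.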
Example 3
Further, on the basis of example 2:
a focusing method for the focusing camera comprises the following steps:
step 1: presetting image resolution and total length of unidirectional movement by the DSP controller 6;
step 2: the DSP controller 6 sends an acquisition command to the lens 2, and the lens 2 acquires an image and sends the image to the DSP controller 6;
and step 3: the DSP controller 6 analyzes the resolution of the received image and compares the actual image resolution obtained by analysis with a preset image resolution; if the actual image resolution is smaller than the preset image resolution, the DSP controller 6 sends a first movement command to the first motor driver and the second motor driver; the DSP controller 6 presets a linear moving distance;
and 4, step 4: the first motor driver drives the first linear motor 31 to rotate along one direction, and the second motor driver drives the second linear motor 37 to rotate along the same direction; the first transmission shaft 32 and the second transmission shaft 36 move along one direction and drive the lens 2 to move along the same direction; the linear displacement sensor 310 detects the actual moving distance of the lens 2 in real time and sends the actual moving distance to the DSP controller 6;
and 5: if the actual moving distance is smaller than the preset linear moving distance, the DSP controller 6 sends a continuous moving command to the first motor driver and the second motor driver; if the actual moving distance is equal to the preset linear moving distance, the DSP controller 6 sends a stop moving command to the first motor driver and the second motor driver; the DSP controller 6 accumulates the actual moving distance;
step 6: if the accumulated value of the actual moving distance is smaller than the total length of the one-way movement, returning to the step 2; if the accumulated value of the actual moving distance is larger than or equal to the total length of the one-way movement, the DSP controller 6 sends a reverse moving command to the first motor driver and the second motor driver;
and 7: the first motor driver drives the first linear motor 31 to rotate reversely, and the second motor driver drives the second linear motor 37 to rotate reversely; the first transmission shaft 32 and the second transmission shaft 36 move along the other direction and drive the lens 2 to move; the linear displacement sensor 310 detects the actual moving distance of the lens 2 in real time and sends the actual moving distance to the DSP controller 6;
and 8: if the actual moving distance is equal to the preset linear moving distance, executing the step 2;
and step 9: if the actual image resolution is greater than or equal to the preset image resolution, the DSP controller 6 sends a stop moving command to the first motor driver and the second motor driver.
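The control loop of steps 1 to 9 can be simulated in a few lines of Python. Here `sharpness` is a hypothetical stand-in for the resolution analysis performed by the DSP controller, and the step size, travel length and threshold are illustrative values, not values from the patent:

```python
def sharpness(pos, best=7.0):
    """Hypothetical stand-in for the analysed image resolution:
    it peaks when the lens sits at the in-focus position `best`."""
    return 100.0 - 4.0 * abs(pos - best)

def focus(step=1.0, travel=10.0, target=98.0, max_iters=100):
    """Sketch of steps 1-9: step the lens forward until the one-way travel
    is exhausted, then reverse, stopping as soon as the measured
    resolution reaches the preset threshold."""
    pos, moved, direction = 0.0, 0.0, +1
    for _ in range(max_iters):
        if sharpness(pos) >= target:          # step 9: threshold met -> stop
            return pos
        pos += direction * step               # steps 4/7: drive both motors
        moved += step
        if moved >= travel:                   # step 6: end of travel -> reverse
            direction, moved = -direction, 0.0
    return pos

print(focus())  # 7.0
```

The sweep-and-reverse structure means the method needs no model of the lens: it simply scans the travel range until the preset resolution is reached.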
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.
Claims (5)
1. A focusing camera, characterized by comprising a lens barrel (1), wherein a focal length adjusting piece (3) is screwed into the internal thread of the lens barrel (1), and a lens (2) is slidably arranged in the focal length adjusting piece (3); a DSP controller (6) is clamped in the lens barrel (1), and the DSP controller (6) is electrically connected with the focal length adjusting piece (3);
the focal length adjusting piece (3) comprises a cylindrical outer cylinder (33);
a first linear motor (31) is bolted to the inner wall of one end of the outer cylinder (33), a first transmission shaft (32) is screwed to the end of the first linear motor (31), and the first transmission shaft (32) extends along the axial direction of the outer cylinder (33); a second linear motor (37) is bolted to the inner wall of the other end of the outer cylinder (33), a second transmission shaft (36) is screwed to the end of the second linear motor (37), and the second transmission shaft (36) extends along the axial direction of the outer cylinder (33); the first transmission shaft (32) and the second transmission shaft (36) both extend into the outer cylinder (33) and are arranged opposite each other;
a sliding block (34) is clamped on the other end of the first transmission shaft (32), and a circular first propelling piece (39) is clamped on the sliding block (34); the lens (2) slidably passes through the first propelling piece (39);
a second propelling piece (38) is clamped on the other end of the second transmission shaft (36); the second propelling piece (38) is located on the side of the lens (2) away from the first propelling piece (39) and presses against the edge of the lens (2);
a sliding rod (35) slidably passes through the edge of the first propelling piece (39), and the sliding rod (35) is vertically clamped on the second propelling piece (38);
a linear displacement sensor (310) is clamped on the first propelling piece (39), and the linear displacement sensor (310) is electrically connected with the DSP controller (6);
a first motor driver is electrically connected to the first linear motor (31), a second motor driver is electrically connected to the second linear motor (37), and both motor drivers are electrically connected to the DSP controller (6);
the DSP controller (6) comprises a video people-number recognizer, and the video people-number recognizer comprises a neural network design module, an image data enhancement module, a crowd density map module and a training module;
the neural network design module comprises a first-column deep network design unit, a second-column shallow network design unit, an alternating unit and a network fusion estimation unit;
the first-column deep network design unit is used for designing a first-column deep network, wherein the first-column deep network comprises 13 convolutional layers, all convolution kernels are 3 × 3 in size, and each convolutional layer is activated by a rectified linear unit function after convolution;
the second-column shallow network design unit is used for designing a second-column shallow network, wherein the second-column shallow network comprises 3 convolutional layers, all convolution kernels are 5 × 5 in size, each convolutional layer is activated by a rectified linear unit function after convolution, and pooling is performed after activation;
the alternating unit is used for feeding the output of the second-column shallow network into the first-column deep network and, after processing through an average-pooling layer and a convolutional layer, producing the output;
the network fusion estimation unit is used for concatenating the first-column deep network and the second-column shallow network and then applying a 1 × 1 convolutional layer, so that the 1 × 1 convolutional layer replaces a fully connected layer and fuses the two columns; the network thus becomes a fully convolutional network that accepts input pictures of various scales and outputs an estimated density map;
the image data enhancement module is used for realizing image data enhancement by means of image rotation, multi-scale scaling of the image, image mirroring, and pyramid-style cropping and scaling of the image;
the crowd density map module is used for processing an existing public data set with Gaussian-kernel blurring and normalization to obtain a ground-truth crowd density map;
the training module is used for training the depth-complementary convolutional neural network with the processed sample set to obtain a network model.
2. The focusing camera according to claim 1, characterized in that the first propelling piece (39) comprises a cylindrical housing (391); a telescopic hole (393) is formed in the first propelling piece (39) along the axial direction of the housing (391); a baffle ring (397) is clamped on the end of the housing (391), and a light-transmitting hole (394) passes through the baffle ring (397); the telescopic hole (393) communicates with the light-transmitting hole (394), and the diameter of the light-transmitting hole (394) is smaller than that of the telescopic hole (393);
a sliding guide groove (396) extending along the axial direction is recessed into the inner wall of the housing (391); a first spring (395) is arranged in the sliding guide groove (396); one end of the first spring (395) hooks onto the side wall of the baffle ring (397) and the other end presses against the lens (2); a stop (392) is clamped on the inner wall of the housing (391), and the stop (392) is located at the end of the sliding guide groove (396) away from the baffle ring (397).
3. The focusing camera according to claim 2, wherein the second propelling piece (38) comprises a circular-arc-shaped baffle plate (386); an adjusting groove (385) is recessed into the upper side edge of the baffle plate (386), and the adjusting groove (385) extends along the circular-arc direction of the baffle plate (386);
an adjusting seat (384) is clamped in the adjusting groove (385), a rectangular integrated board (381) is clamped on the top end of the adjusting seat (384), and a circular snap ring (382) is clamped on the side wall of the integrated board (381); triangular-prism-shaped clamping teeth (383) are uniformly clamped on the inner wall of the snap ring (382); the snap ring (382) fits the sliding rod (35); the baffle plate (386) presses against the end of the lens (2).
4. The focusing camera according to claim 3, further comprising an optical filter (4) clamped in the lens barrel (1) and located between the DSP controller (6) and the focal length adjusting piece (3), and an image sensor (5) clamped in the lens barrel (1) and located between the DSP controller (6) and the optical filter (4).
5. A focusing method for use in the focusing camera according to any one of claims 1 to 4, comprising the steps of:
Step 1: presetting an image resolution and a total length of unidirectional movement through the DSP controller (6);
Step 2: the DSP controller (6) sends an acquisition command to the lens (2), and the lens (2) acquires an image and sends it to the DSP controller (6);
Step 3: the DSP controller (6) analyzes the resolution of the received image and compares the actual image resolution obtained by the analysis with the preset image resolution; if the actual image resolution is less than the preset image resolution, the DSP controller (6) sends a first movement command to the first motor driver and the second motor driver; the DSP controller (6) presets a linear moving distance;
Step 4: the first motor driver drives the first linear motor (31) to rotate in one direction, and the second motor driver drives the second linear motor (37) to rotate in the same direction; the first transmission shaft (32) and the second transmission shaft (36) move in one direction and drive the lens (2) in the same direction; the linear displacement sensor (310) detects the actual moving distance of the lens (2) in real time and sends it to the DSP controller (6);
Step 5: if the actual moving distance is smaller than the preset linear moving distance, the DSP controller (6) sends a continue-moving command to the first motor driver and the second motor driver; if the actual moving distance is equal to the preset linear moving distance, the DSP controller (6) sends a stop-moving command to the first motor driver and the second motor driver; the DSP controller (6) accumulates the actual moving distance;
Step 6: if the accumulated value of the actual moving distance is smaller than the total length of unidirectional movement, returning to step 2; if the accumulated value of the actual moving distance is greater than or equal to the total length of unidirectional movement, the DSP controller (6) sends a reverse movement command to the first motor driver and the second motor driver;
Step 7: the first motor driver drives the first linear motor (31) to rotate in reverse, and the second motor driver drives the second linear motor (37) to rotate in reverse; the first transmission shaft (32) and the second transmission shaft (36) move in the other direction and drive the lens (2) with them; the linear displacement sensor (310) detects the actual moving distance of the lens (2) in real time and sends it to the DSP controller (6);
Step 8: if the actual moving distance is equal to the preset linear moving distance, executing step 2;
Step 9: if the actual image resolution is greater than or equal to the preset image resolution, the DSP controller (6) sends a stop-moving command to the first motor driver and the second motor driver.
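Claim 1 specifies a two-column counting network: a deep column of 13 convolutional layers with 3 × 3 kernels, a shallow column of 3 convolutional layers with 5 × 5 kernels and pooling, and fusion through a 1 × 1 convolution in place of a fully connected layer, yielding a fully convolutional density-map estimator. The PyTorch sketch below is one possible reading of that claim, not the patented implementation: the layer widths, the pooling positions in the deep column, and the omission of the alternating-unit cross-connection are all assumptions the claim leaves open.

```python
import torch
import torch.nn as nn

class TwoColumnCrowdNet(nn.Module):
    """Hypothetical sketch of the claimed two-column density-map network."""
    def __init__(self):
        super().__init__()
        # Deep column: 13 conv layers, 3x3 kernels, ReLU after each conv.
        # Widths and pooling positions are assumed (VGG-like, /8 resolution).
        widths = [64, 64, 128, 128, 256, 256, 256,
                  512, 512, 512, 512, 512, 512]
        layers, in_ch = [], 3
        for i, out_ch in enumerate(widths):
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.ReLU(inplace=True)]
            if i in (1, 3, 6):          # assumed pooling points
                layers.append(nn.MaxPool2d(2))
            in_ch = out_ch
        self.deep = nn.Sequential(*layers)
        # Shallow column: 3 conv layers, 5x5 kernels, pooling after each ReLU.
        self.shallow = nn.Sequential(
            nn.Conv2d(3, 24, 5, padding=2), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(24, 32, 5, padding=2), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        # Fusion: concatenate the columns, then a 1x1 conv replaces the
        # fully connected layer and emits a single-channel density map.
        self.fuse = nn.Conv2d(512 + 32, 1, kernel_size=1)

    def forward(self, x):
        d = self.deep(x)      # both columns downsample by 8, so maps align
        s = self.shallow(x)
        return self.fuse(torch.cat([d, s], dim=1))
```

Because every layer is convolutional, the network accepts pictures of various scales, as the claim requires; the output map is 1/8 of the input resolution under the assumed pooling scheme.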
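The crowd density map module of claim 1 turns point annotations from a public data set into a ground-truth density map via Gaussian-kernel blurring with normalization. A minimal NumPy sketch of that idea follows; the fixed kernel width `sigma` is an assumed parameter, since the claim specifies only the Gaussian blurring and the normalization.

```python
import numpy as np

def density_map(shape, head_points, sigma=4.0):
    """Build a ground-truth crowd density map: one normalized 2-D
    Gaussian per annotated head position, so the whole map sums to the
    person count. `sigma` is assumed; the patent does not fix it."""
    h, w = shape
    dmap = np.zeros((h, w))
    yy, xx = np.mgrid[0:h, 0:w]
    for x, y in head_points:
        g = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2.0 * sigma ** 2))
        dmap += g / g.sum()   # normalize: each person contributes exactly 1
    return dmap
```

Normalizing each kernel after it is rendered (rather than using the analytic Gaussian constant) keeps the count exact even when a head lies near the image border and part of its kernel is cut off.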
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810411307.9A | 2018-05-02 | 2018-05-02 | Focusing camera and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108419018A CN108419018A (en) | 2018-08-17 |
CN108419018B (en) | 2020-08-18 |
Family
ID=63137441
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810411307.9A (CN108419018B, active) | Focusing camera and method | 2018-05-02 | 2018-05-02 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108419018B (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100582746B1 (en) * | 2003-12-19 | 2006-05-23 | Hysonic Co., Ltd. | Imaging Device |
CN101581819B (en) * | 2008-05-15 | 2013-06-05 | 鸿富锦精密工业(深圳)有限公司 | Lens module |
US8077412B2 (en) * | 2009-02-27 | 2011-12-13 | Panasonic Corporation | Lens barrel and imaging device |
CN103744161B (en) * | 2014-01-07 | 2016-08-24 | 中国科学院西安光学精密机械研究所 | A kind of High Precision Automatic adjustment image planes device and method of adjustment thereof |
CN107301387A (en) * | 2017-06-16 | 2017-10-27 | South China University of Technology | An image dense-crowd counting method based on deep learning |
- 2018-05-02: Application CN201810411307.9A filed in China; granted as CN108419018B (status: Active)
Similar Documents
Publication | Title
---|---
CN103929588B | Camera zoom fast automatic focusing method and system
CN109886225B | Image gesture action online detection and recognition method based on deep learning
CN104301601A | Coarse tuning and fine tuning combined infrared image automatic focusing method
EP1583022A2 | Process and apparatus for acquiring regions of interest of moving objects
EP2344980A1 | Device, method and computer program for detecting a gesture in an image, and said device, method and computer program for controlling a device
DE10344058A1 | Device and method for reducing image blur in a digital camera
Raghavendra et al. | Comparative evaluation of super-resolution techniques for multi-face recognition using light-field camera
CN103384998A | Imaging device, imaging method, program, and program storage medium
DE102005009626A1 | Camera for tracking objects, has processing unit with region-of-interest sampling unit, and tracking unit with particle filter, which are provided to determine tracking data of objects to be tracked on basis of image data
CN108600638B | Automatic focusing system and method for camera
CN107845145B | Three-dimensional reconstruction system and method under electron microscopic scene
DE102019133642A1 | Digital imaging system including optical plenoptic device and image data processing method for detecting vehicle obstacles and gestures
CN112861691A | Pedestrian re-identification method under occlusion scene based on part perception modeling
US8295605B2 | Method for identifying dimensions of shot subject
CN112987026A | Event field synthetic aperture imaging algorithm based on hybrid neural network
CN108694385A | A high-speed face recognition method, system and device
CN111027440B | Crowd abnormal behavior detection device and detection method based on neural network
CN201681056U | Industrial high-resolution observation device for X-ray negative films
CN108419018B | Focusing camera and method
CN110889868B | Monocular image depth estimation method combining gradient and texture features
DE102013224704A1 | Method for automatically focusing a camera
Chiu et al. | An efficient auto focus method for digital still camera based on focus value curve prediction model
CN108347577B | Imaging system and method
CN109218587A | An image acquisition method and system based on a binocular camera
CN106556958A | Auto-focusing method of a range-gated imager
Legal Events
Date | Code | Title | Description |
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
2022-09-21 | TR01 | Transfer of patent right | Patentee after: Guangzhou Elon Technology Co., Ltd., Shop 1308-G2, No. 3889 Huangpu East Road, Huangpu District, Guangzhou, Guangdong, 510725, China; patentee before: Guangzhou Feeyy Intelligent Technology Co., Ltd., A401, No. 471, Dashi Section, 105 National Road, Dashi Street, Panyu District, Guangzhou, Guangdong, 511430