CN116261044A - Intelligent focusing method and device for hundred million-level cameras - Google Patents


Info

Publication number
CN116261044A
CN116261044A (application CN202310271268.8A; granted publication CN116261044B)
Authority
CN
China
Prior art keywords
focusing, real-time image, image data, strategy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310271268.8A
Other languages
Chinese (zh)
Other versions
CN116261044B (en)
Inventor
袁潮
邓迪旻
路鹤松
程璐
温建伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202310271268.8A priority Critical patent/CN116261044B/en
Publication of CN116261044A publication Critical patent/CN116261044A/en
Application granted granted Critical
Publication of CN116261044B publication Critical patent/CN116261044B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an intelligent focusing method and device for a hundred million-level camera. The method comprises the following steps: acquiring real-time image data and lens focusing parameters; identifying the real-time image data through a real-time image recognition model to obtain focusing input parameters; obtaining a focusing strategy according to the focusing input parameters and the lens focusing parameters; and performing a focusing operation on the real-time image data according to the focusing strategy. The invention solves the technical problem that, in the prior art, the focal-length adjustment function only adjusts the current camera according to the needs of the image, and the stepping motor cannot restore the camera to its pre-adjustment state during or after adjustment, which greatly reduces the service life and convenience of the high-precision camera.

Description

Intelligent focusing method and device for hundred million-level cameras
Technical Field
The invention relates to the field of high-precision camera focal length processing, in particular to an intelligent focusing method and device for a hundred million-level camera.
Background
With the continuous development of intelligent technology, intelligent devices are used ever more widely in people's daily life, work, and study; intelligent technical means improve quality of life and increase learning and working efficiency.
At present, during focal-length adjustment in high-precision camera shooting, a stepping motor automatically adjusts the lens position according to the parameter requirements of the image, so that the focal length is changed by lens displacement and the technical effect of focus adjustment is achieved. However, the focal-length adjustment function in the prior art only focuses the current camera according to the needs of the image; during or after adjustment, the stepping motor cannot restore the camera to its pre-adjustment state, which greatly reduces the service life and convenience of the high-precision camera.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the invention provides an intelligent focusing method and device for a hundred million-level camera, which at least solve the technical problem that the current camera is focused only according to the needs of the image, the stepping motor cannot restore the camera to its pre-adjustment state during or after adjustment, and the service life and convenience of the high-precision camera are therefore greatly reduced.
According to an aspect of the embodiment of the present invention, there is provided an intelligent focusing method for a hundred million-class camera, including: acquiring real-time image data and lens focusing parameters; identifying the real-time image data through a real-time image identification model to obtain focusing input parameters; obtaining a focusing strategy according to the focusing input parameters and the lens focusing parameters; and carrying out focusing operation on the real-time image data according to the focusing strategy.
Optionally, the lens focusing parameters include: a focusing limit value, a focusing sensitivity, and a stepping speed.
Optionally, before the identifying the real-time image data through the real-time image identifying model to obtain the focusing input parameter, the method further includes: collecting historical focusing data similar to the real-time image data through a big data server; and training and generating the real-time image recognition model by utilizing the historical focusing data and a preset model.
Optionally, after the focusing strategy is obtained according to the focusing input parameters and the lens focusing parameters, the method further includes: extracting focal-length rolling data from the focusing strategy; and using the focal-length rolling data as a stepping-motor restore stepping parameter to generate a stepping-motor restore strategy.
According to another aspect of the embodiments of the present invention, there is also provided an intelligent focusing apparatus for a hundred million-class camera, including: the acquisition module is used for acquiring real-time image data and lens focusing parameters; the identification module is used for identifying the real-time image data through a real-time image identification model to obtain focusing input parameters; the calculation module is used for obtaining a focusing strategy according to the focusing input parameters and the lens focusing parameters; and the focusing module is used for carrying out focusing operation on the real-time image data according to the focusing strategy.
Optionally, the lens focusing parameters include: a focusing limit value, a focusing sensitivity, and a stepping speed.
Optionally, the apparatus further includes: the acquisition module is used for acquiring historical focusing data similar to the real-time image data through the big data server; and the training module is used for training and generating the real-time image recognition model by utilizing the historical focusing data and the preset model.
Optionally, the apparatus further includes: an extraction module configured to extract focal-length rolling data from the focusing strategy; and a generation module configured to use the focal-length rolling data as a stepping-motor restore stepping parameter to generate a stepping-motor restore strategy.
According to another aspect of the embodiment of the present invention, there is also provided a nonvolatile storage medium, where the nonvolatile storage medium includes a stored program, and when the program runs, the program controls a device in which the nonvolatile storage medium is located to execute an intelligent focusing method of a hundred million-class camera.
According to another aspect of the embodiment of the present invention, there is also provided an electronic device including a processor and a memory; the memory stores computer readable instructions, and the processor is configured to execute the computer readable instructions, where the computer readable instructions execute an intelligent focusing method for a hundred million-class camera when executed.
In the embodiment of the invention, real-time image data and lens focusing parameters are acquired; the real-time image data is identified through a real-time image recognition model to obtain focusing input parameters; a focusing strategy is obtained according to the focusing input parameters and the lens focusing parameters; and a focusing operation is performed on the real-time image data according to the focusing strategy. This solves the technical problem that, in the prior art, the focal-length adjustment function only adjusts the current camera according to the needs of the image, and the stepping motor cannot restore the camera to its pre-adjustment state during or after adjustment, which greatly reduces the service life and convenience of the high-precision camera.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a flow chart of a method for intelligent focusing of a hundred million-class camera in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram of an intelligent focusing apparatus for a hundred million-class camera according to an embodiment of the present invention;
FIG. 3 is a block diagram of a terminal device for performing the method, according to an embodiment of the invention;
FIG. 4 is a memory unit for holding or carrying program code implementing the method, according to an embodiment of the invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of an intelligent focusing method for a hundred million-level camera. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical sequence is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in a different order than shown here.
Example 1
Fig. 1 is a flowchart of an intelligent focusing method for a hundred million-level camera according to an embodiment of the present invention; as shown in Fig. 1, the method includes the following steps:
step S102, acquiring real-time image data and lens focusing parameters.
Specifically, in order to solve the prior-art problem that the focal-length adjustment function only focuses the current camera according to the needs of the image, and that the stepping motor cannot restore the camera to its pre-adjustment state during or after adjustment, which greatly reduces the service life and convenience of the high-precision camera, image data must first be acquired in real time from the equipment of the high-precision camera system. The image data is used for scene monitoring, scene recognition and the like at a given precision; at the same time, the focusing parameters of the camera are acquired from the camera array. Optionally, the lens focusing parameters include a focusing limit value, a focusing sensitivity, and a stepping speed, where the focusing limit value represents the maximum or minimum focus value of the camera during focusing and is used when generating the focusing strategy.
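As a minimal illustration of the inputs described above, the acquired frame and lens parameters can be modeled as plain records (all type names, field names, and the `camera` driver interface are assumptions for illustration, not part of the patent):

```python
from dataclasses import dataclass

@dataclass
class LensFocusParams:
    """The three lens focusing parameters named in the patent (values hypothetical)."""
    focus_limit: int      # maximum/minimum focus value reachable during focusing
    sensitivity: float    # focusing sensitivity
    step_speed: int       # stepping-motor speed, e.g. steps per second

@dataclass
class RealTimeImage:
    """One frame captured from the camera array."""
    pixels: bytes
    width: int
    height: int

def acquire_inputs(camera) -> tuple[RealTimeImage, LensFocusParams]:
    """Read one frame and the current lens parameters from a camera handle.

    `camera` is a hypothetical driver object exposing `read_frame()` and
    `read_lens_params()`; a real gigapixel-camera SDK will differ.
    """
    frame = camera.read_frame()
    params = camera.read_lens_params()
    return frame, params
```

In a real system these reads would come from the camera array's SDK; the point of the sketch is only that both the image data and the three lens parameters must be captured together before any strategy is computed.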
Step S104, identifying the real-time image data through a real-time image identification model to obtain focusing input parameters.
Specifically, in order to increase the efficiency and hit rate of image recognition, the embodiment of the invention may analyze and calculate the focusing requirement of the real-time image data using a DNN (deep neural network) model or a GAN (generative adversarial network) model, thereby obtaining the focusing input parameters. The focusing input parameters are used to generate a corresponding focusing strategy, so that a focusing operation is performed on the camera and the real-time image data becomes clearer.
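The patent does not fix a network architecture, so as a deliberately simple stand-in for the DNN/GAN recognizer, a classical gradient-based sharpness measure can illustrate what a "focusing input parameter" might capture (the function names, the target value, and the use of a gradient score instead of a learned model are all assumptions):

```python
def sharpness_score(pixels, width, height):
    """Mean absolute horizontal gradient of a grayscale frame.

    A classical focus measure, used here only as a stand-in for the
    real-time image recognition model described in the patent: sharper
    frames have larger pixel-to-pixel gradients.
    """
    total, count = 0, 0
    for y in range(height):
        row = y * width
        for x in range(width - 1):
            total += abs(pixels[row + x + 1] - pixels[row + x])
            count += 1
    return total / count if count else 0.0

def focusing_input_params(pixels, width, height, target=30.0):
    """Derive a hypothetical focusing input: how far sharpness is from a target."""
    score = sharpness_score(pixels, width, height)
    return {"sharpness": score, "defocus": target - score}
```

A trained DNN would replace `sharpness_score` with learned inference, but the output contract is the same: a number (or small set of numbers) describing how far the frame is from acceptable focus.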
Optionally, before the identifying the real-time image data through the real-time image identifying model to obtain the focusing input parameter, the method further includes: collecting historical focusing data similar to the real-time image data through a big data server; and training and generating the real-time image recognition model by utilizing the historical focusing data and a preset model.
Specifically, before the focusing input parameters are generated, the neural network model must be trained and refined on the historical data set held by the big-data platform, yielding a neural network model that can be used directly to generate focusing input parameters. That is, before the real-time image data is identified by the real-time image recognition model to obtain the focusing input parameters, the method further includes: collecting historical focusing data similar to the real-time image data through a big-data server; and training the real-time image recognition model from the historical focusing data and a preset model.
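One way to read "training a preset model on historical focusing data" is as supervised regression from an image feature to a focus adjustment. A deliberately tiny least-squares fit (standing in for the neural-network training the patent implies; the pairing of feature and focus step in `history` is an assumption) looks like:

```python
def fit_linear(history):
    """Fit y = a*x + b by least squares over (feature, focus_step) pairs.

    `history` stands in for the historical focusing data collected from
    the big-data server; a real system would train a neural model instead
    of this closed-form linear fit.
    """
    n = len(history)
    sx = sum(x for x, _ in history)
    sy = sum(y for _, y in history)
    sxx = sum(x * x for x, _ in history)
    sxy = sum(x * y for x, y in history)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def predict(model, feature):
    """Apply the fitted model to a new image feature."""
    a, b = model
    return a * feature + b
```

The workflow matches the patent's order: collect history first, train the model, and only then use it online to turn a live frame's feature into a focusing input parameter.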
Step S106, obtaining a focusing strategy according to the focusing input parameters and the lens focusing parameters.
Specifically, after the focusing input parameters are generated, the embodiment of the invention combines the lens focusing parameters with the focusing input parameters to obtain a focusing strategy for a given camera, where the focusing strategy specifies how focusing is to be performed and how many steps the stepping motor must execute for the focusing operation.
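A focusing strategy, as described, must at least say in which direction and for how many steps the motor moves. A hedged sketch, where the linear scaling of the defocus estimate by the sensitivity is an illustrative assumption and only the clamping by the focusing limit value comes from the patent's description of that parameter:

```python
def make_focus_strategy(defocus, focus_limit, sensitivity, step_speed):
    """Turn a defocus estimate into a clamped stepping-motor plan.

    The sign of the step count gives the direction and its magnitude the
    number of steps; `focus_limit` bounds the excursion, as the patent's
    focusing limit value suggests, and `step_speed` is passed through so
    the motor driver knows how fast to execute the move.
    """
    raw_steps = round(defocus * sensitivity)
    steps = max(-focus_limit, min(focus_limit, raw_steps))
    return {
        "direction": "in" if steps >= 0 else "out",
        "steps": abs(steps),
        "speed": step_speed,
    }
```

The clamp is the important part: whatever the model asks for, the strategy never commands the motor past the lens's focusing limit value.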
Optionally, after the focusing strategy is obtained according to the focusing input parameters and the lens focusing parameters, the method further includes: extracting focal-length rolling data from the focusing strategy; and using the focal-length rolling data as a stepping-motor restore stepping parameter to generate a stepping-motor restore strategy.
Specifically, after the focusing strategy is obtained, a conventional focusing process usually performs a purposeful focusing operation on the real-time image data, with a stepping motor driving the lens to change the optical distance and focus. However, after the focusing operation, the lens tends to stay at the focusing position, so the reference position for the next focusing changes and focusing becomes inaccurate. Therefore, after the focusing strategy is obtained according to the focusing input parameters and the lens focusing parameters, the method further extracts the focal-length rolling data from the focusing strategy and uses it as a stepping-motor restore stepping parameter to generate a stepping-motor restore strategy; the post-focusing process can thus be handled by retracing the travel of the stepping motor in preparation for subsequent focusing.
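The restore strategy just has to retrace the motor's travel. If the focal-length rolling data is treated as the signed step history recorded while focusing (an assumption about its representation, since the patent does not define the data format), the sketch is a negated replay in reverse order:

```python
def restore_strategy(rolling_data):
    """Build a stepping-motor restore plan from focal-length rolling data.

    `rolling_data` is assumed to be the signed step history recorded while
    focusing; replaying it negated and reversed returns the lens to its
    pre-adjustment position, which is what the patent's restore strategy
    is meant to achieve.
    """
    return [-step for step in reversed(rolling_data)]

def apply_steps(position, steps):
    """Apply a sequence of signed steps to a motor position (for illustration)."""
    for s in steps:
        position += s
    return position
```

Running the restore plan after the focusing plan leaves the motor exactly where it started, so the next focusing operation always begins from the same reference position.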
Step S108, performing a focusing operation on the real-time image data according to the focusing strategy.
Through this embodiment, the technical problem that the focal-length adjustment function in the prior art only focuses the current camera according to the needs of the image, and that the stepping motor cannot restore the camera to its pre-adjustment state during or after adjustment, which greatly reduces the service life and convenience of the high-precision camera, is solved.
Example two
Fig. 2 is a block diagram of an intelligent focusing apparatus for a hundred million-level camera according to an embodiment of the present invention; as shown in Fig. 2, the apparatus includes:
the acquiring module 20 is configured to acquire real-time image data and lens focusing parameters.
Specifically, in order to solve the prior-art problem that the focal-length adjustment function only focuses the current camera according to the needs of the image, and that the stepping motor cannot restore the camera to its pre-adjustment state during or after adjustment, which greatly reduces the service life and convenience of the high-precision camera, image data must first be acquired in real time from the equipment of the high-precision camera system. The image data is used for scene monitoring, scene recognition and the like at a given precision; at the same time, the focusing parameters of the camera are acquired from the camera array. Optionally, the lens focusing parameters include a focusing limit value, a focusing sensitivity, and a stepping speed, where the focusing limit value represents the maximum or minimum focus value of the camera during focusing and is used when generating the focusing strategy.
The recognition module 22 is configured to recognize the real-time image data through a real-time image recognition model, so as to obtain a focusing input parameter.
Specifically, in order to increase the efficiency and hit rate of image recognition, the embodiment of the invention may analyze and calculate the focusing requirement of the real-time image data using a DNN (deep neural network) model or a GAN (generative adversarial network) model, thereby obtaining the focusing input parameters. The focusing input parameters are used to generate a corresponding focusing strategy, so that a focusing operation is performed on the camera and the real-time image data becomes clearer.
Optionally, the apparatus further includes: the acquisition module is used for acquiring historical focusing data similar to the real-time image data through the big data server; and the training module is used for training and generating the real-time image recognition model by utilizing the historical focusing data and the preset model.
Specifically, before the focusing input parameters are generated, the neural network model must be trained and refined on the historical data set held by the big-data platform, yielding a neural network model that can be used directly to generate focusing input parameters. That is, before the real-time image data is identified by the real-time image recognition model to obtain the focusing input parameters, the method further includes: collecting historical focusing data similar to the real-time image data through a big-data server; and training the real-time image recognition model from the historical focusing data and a preset model.
The calculation module 24 is configured to obtain a focusing strategy according to the focusing input parameters and the lens focusing parameters.
Specifically, after the focusing input parameters are generated, the embodiment of the invention combines the lens focusing parameters with the focusing input parameters to obtain a focusing strategy for a given camera, where the focusing strategy specifies how focusing is to be performed and how many steps the stepping motor must execute for the focusing operation.
Optionally, the apparatus further includes: an extraction module configured to extract focal-length rolling data from the focusing strategy; and a generation module configured to use the focal-length rolling data as a stepping-motor restore stepping parameter to generate a stepping-motor restore strategy.
Specifically, after the focusing strategy is obtained, a conventional focusing process usually performs a purposeful focusing operation on the real-time image data, with a stepping motor driving the lens to change the optical distance and focus. However, after the focusing operation, the lens tends to stay at the focusing position, so the reference position for the next focusing changes and focusing becomes inaccurate. Therefore, after the focusing strategy is obtained according to the focusing input parameters and the lens focusing parameters, the apparatus further extracts the focal-length rolling data from the focusing strategy and uses it as a stepping-motor restore stepping parameter to generate a stepping-motor restore strategy; the post-focusing process can thus be handled by retracing the travel of the stepping motor in preparation for subsequent focusing.
The focusing module 26 is configured to perform a focusing operation on the real-time image data according to the focusing strategy.
Through this embodiment, the technical problem that the focal-length adjustment function in the prior art only focuses the current camera according to the needs of the image, and that the stepping motor cannot restore the camera to its pre-adjustment state during or after adjustment, which greatly reduces the service life and convenience of the high-precision camera, is solved.
According to another aspect of the embodiment of the present invention, there is also provided a nonvolatile storage medium, where the nonvolatile storage medium includes a stored program, and when the program runs, the program controls a device in which the nonvolatile storage medium is located to execute an intelligent focusing method of a hundred million-class camera.
Specifically, the method comprises the following steps: acquiring real-time image data and lens focusing parameters; identifying the real-time image data through a real-time image identification model to obtain focusing input parameters; obtaining a focusing strategy according to the focusing input parameters and the lens focusing parameters; and carrying out focusing operation on the real-time image data according to the focusing strategy. Optionally, the lens focusing parameters include: focus limit, focus sensitivity, stepping speed. Optionally, before the identifying the real-time image data through the real-time image identifying model to obtain the focusing input parameter, the method further includes: collecting historical focusing data similar to the real-time image data through a big data server; and training and generating the real-time image recognition model by utilizing the historical focusing data and a preset model. Optionally, after the focusing strategy is obtained according to the focusing input parameter and the lens focusing parameter, the method further includes: focal length rolling data in the focusing strategy are extracted; and taking the focal length rolling data as a stepping motor restoring stepping parameter to generate a stepping motor restoring strategy.
According to another aspect of the embodiment of the present invention, there is also provided an electronic device including a processor and a memory; the memory stores computer readable instructions, and the processor is configured to execute the computer readable instructions, where the computer readable instructions execute an intelligent focusing method for a hundred million-class camera when executed.
Specifically, the method comprises the following steps: acquiring real-time image data and lens focusing parameters; identifying the real-time image data through a real-time image identification model to obtain focusing input parameters; obtaining a focusing strategy according to the focusing input parameters and the lens focusing parameters; and carrying out focusing operation on the real-time image data according to the focusing strategy. Optionally, the lens focusing parameters include: focus limit, focus sensitivity, stepping speed. Optionally, before the identifying the real-time image data through the real-time image identifying model to obtain the focusing input parameter, the method further includes: collecting historical focusing data similar to the real-time image data through a big data server; and training and generating the real-time image recognition model by utilizing the historical focusing data and a preset model. Optionally, after the focusing strategy is obtained according to the focusing input parameter and the lens focusing parameter, the method further includes: focal length rolling data in the focusing strategy are extracted; and taking the focal length rolling data as a stepping motor restoring stepping parameter to generate a stepping motor restoring strategy.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, each embodiment is described with its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, fig. 3 is a schematic hardware structure of a terminal device according to an embodiment of the present application. As shown in fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to enable communication connections between the elements. The memory 33 may comprise a high-speed RAM memory or may further comprise a non-volatile memory NVM, such as at least one magnetic disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 31 may be implemented as, for example, a central processing unit (Central Processing Unit, abbreviated as CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through wired or wireless connections.
Alternatively, the input device 30 may include a variety of input devices, for example, may include at least one of a user-oriented user interface, a device-oriented device interface, a programmable interface of software, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware insertion interface (such as a USB interface, a serial port, etc.) for data transmission between devices; alternatively, the user-oriented user interface may be, for example, a user-oriented control key, a voice input device for receiving voice input, and a touch-sensitive device (e.g., a touch screen, a touch pad, etc. having touch-sensitive functionality) for receiving user touch input by a user; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, for example, an input pin interface or an input interface of a chip, etc.; optionally, the transceiver may be a radio frequency transceiver chip, a baseband processing chip, a transceiver antenna, etc. with a communication function. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, audio, or the like.
In this embodiment, the processor of the terminal device may include functions for executing each module of the data processing apparatus in each device, and specific functions and technical effects may be referred to the above embodiments and are not described herein again.
Fig. 4 is a schematic hardware structure of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of the implementation of fig. 3. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the methods of the above-described embodiments.
The memory 42 is configured to store various types of data to support operation at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, video, etc. The memory 42 may include a random access memory (random access memory, simply referred to as RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, a processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power supply component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The components and the like specifically included in the terminal device are set according to actual requirements, which are not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. The processing component 40 may include one or more processors 41 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 40 may include one or more modules that facilitate interactions between the processing component 40 and other components. For example, processing component 40 may include a multimedia module to facilitate interaction between multimedia component 45 and processing component 40.
The power supply assembly 44 provides power to the various components of the terminal device. Power supply components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for terminal devices.
The multimedia component 45 comprises a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a speech recognition mode. The received audio signals may be further stored in the memory 42 or transmitted via the communication component 43. In some embodiments, the audio component 46 further includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing assembly 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: volume button, start button and lock button.
The sensor component 48 includes one or more sensors for providing status assessments of various aspects of the terminal device. For example, the sensor component 48 may detect the open/closed state of the terminal device, the relative positioning of components, and the presence or absence of user contact with the terminal device. The sensor component 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor component 48 may also include a camera or the like.
The communication component 43 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log into a GPRS network and establish communication with a server through the Internet.
From the above, it will be appreciated that the communication component 43, the audio component 46, the input/output interface 47, and the sensor component 48 referred to in the embodiment of Fig. 4 may be implemented as the input device in the embodiment of Fig. 3.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely exemplary. For example, the division of the units may be a logical functional division; in actual implementation there may be another division manner. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An intelligent focusing method for a hundred million-level camera, characterized by comprising the following steps:
acquiring real-time image data and lens focusing parameters;
identifying the real-time image data through a real-time image identification model to obtain focusing input parameters;
obtaining a focusing strategy according to the focusing input parameters and the lens focusing parameters;
and carrying out focusing operation on the real-time image data according to the focusing strategy.
2. The method of claim 1, wherein the lens focusing parameters comprise: a focus limit, a focus sensitivity, and a stepping speed.
3. The method of claim 1, wherein prior to said identifying said real-time image data by a real-time image identification model to obtain a focus input parameter, the method further comprises:
collecting historical focusing data similar to the real-time image data through a big data server;
and training and generating the real-time image recognition model by utilizing the historical focusing data and a preset model.
4. The method of claim 1, wherein after the deriving a focus strategy from the focus input parameters and the lens focus parameters, the method further comprises:
extracting focal length rolling data from the focusing strategy;
and taking the focal length rolling data as a stepping motor restoring stepping parameter to generate a stepping motor restoring strategy.
5. An intelligent focusing device for a hundred million-level camera, characterized by comprising:
the acquisition module is used for acquiring real-time image data and lens focusing parameters;
the identification module is used for identifying the real-time image data through a real-time image identification model to obtain focusing input parameters;
the calculation module is used for obtaining a focusing strategy according to the focusing input parameters and the lens focusing parameters;
and the focusing module is used for carrying out focusing operation on the real-time image data according to the focusing strategy.
6. The apparatus of claim 5, wherein the lens focusing parameters comprise: a focus limit, a focus sensitivity, and a stepping speed.
7. The apparatus of claim 5, wherein the apparatus further comprises:
the acquisition module is used for acquiring historical focusing data similar to the real-time image data through the big data server;
and the training module is used for training and generating the real-time image recognition model by utilizing the historical focusing data and the preset model.
8. The apparatus of claim 5, wherein the apparatus further comprises:
the extraction module is used for extracting focal length rolling data from the focusing strategy;
and the generation module is used for taking the focal length rolling data as a stepping motor restoring stepping parameter to generate a stepping motor restoring strategy.
9. A non-volatile storage medium, characterized in that the non-volatile storage medium comprises a stored program, wherein the program, when run, controls a device in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; wherein the memory stores computer-readable instructions to be executed by the processor, and the computer-readable instructions, when executed, perform the method of any one of claims 1 to 4.
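The claimed focusing pipeline (claims 1 to 4) can be sketched in code as follows. This is an illustrative sketch only: the patent does not disclose an implementation, and every name, formula, and numeric choice below (the `recognize`, `derive_strategy`, and `restore_strategy` functions, the clamping and step-splitting logic) is an assumption, not content from the patent.

```python
# Hypothetical sketch of the claimed focusing pipeline; all names and
# formulas are assumptions for illustration, not the patented method.
from dataclasses import dataclass

@dataclass
class LensFocusParams:
    focus_limit: float        # claim 2: focus limit
    focus_sensitivity: float  # claim 2: focus sensitivity
    stepping_speed: float     # claim 2: stepping speed

@dataclass
class FocusStrategy:
    target_focus: float
    focal_length_roll: list   # "focal length rolling data" (claim 4)

def recognize(image, model):
    """Claim 1, step 2: run the real-time image recognition model on the
    real-time image data to obtain a focusing input parameter
    (here assumed to be a scalar defocus estimate)."""
    return model(image)

def derive_strategy(focus_input: float, lens: LensFocusParams) -> FocusStrategy:
    """Claim 1, step 3: combine the focusing input parameter with the lens
    focusing parameters to obtain a focusing strategy."""
    # Scale by sensitivity and clamp to the focus limit (assumed behavior).
    target = max(-lens.focus_limit,
                 min(lens.focus_limit, focus_input * lens.focus_sensitivity))
    # Split the move into equal steps and record each increment as the
    # "focal length rolling data".
    n_steps = max(1, int(abs(target) / lens.stepping_speed))
    roll = [target / n_steps] * n_steps
    return FocusStrategy(target_focus=target, focal_length_roll=roll)

def restore_strategy(strategy: FocusStrategy) -> list:
    """Claim 4: reuse the rolling data, reversed and negated, as the
    stepping-motor restoring parameters, so the lens can be driven back
    to its pre-adjustment state."""
    return [-step for step in reversed(strategy.focal_length_roll)]

# Usage with dummy values: pretend the recognition model output 1.5.
lens = LensFocusParams(focus_limit=10.0, focus_sensitivity=2.0, stepping_speed=0.5)
strategy = derive_strategy(1.5, lens)
restore = restore_strategy(strategy)
# The restore steps exactly cancel the focusing steps.
assert abs(sum(strategy.focal_length_roll) + sum(restore)) < 1e-9
```

The restore strategy is the point the abstract emphasizes: by logging every focal-length increment during focusing, the stepper motor can later be returned to its pre-adjustment position instead of being left at the adjusted one.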
CN202310271268.8A 2023-03-17 2023-03-17 Intelligent focusing method and device for hundred million-level cameras Active CN116261044B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310271268.8A CN116261044B (en) 2023-03-17 2023-03-17 Intelligent focusing method and device for hundred million-level cameras


Publications (2)

Publication Number Publication Date
CN116261044A true CN116261044A (en) 2023-06-13
CN116261044B CN116261044B (en) 2024-04-02

Family

ID=86680843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310271268.8A Active CN116261044B (en) 2023-03-17 2023-03-17 Intelligent focusing method and device for hundred million-level cameras

Country Status (1)

Country Link
CN (1) CN116261044B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100322182B1 (en) * 1994-07-13 2002-06-20 이중구 Device and method for compensating focus of zoom camera
JP2006235059A (en) * 2005-02-23 2006-09-07 Fuji Photo Film Co Ltd Photographing device
CN101907761A (en) * 2010-07-13 2010-12-08 惠州市百宏微动技术工业有限公司 Automatic zooming module device
US20140240578A1 (en) * 2013-02-22 2014-08-28 Lytro, Inc. Light-field based autofocus
CN109714530A (en) * 2018-12-25 2019-05-03 中国科学院长春光学精密机械与物理研究所 A kind of aerial camera image focus adjustment method
CN113556470A (en) * 2021-09-18 2021-10-26 浙江宇视科技有限公司 Lens focusing method and device, computer equipment and storage medium
CN113766139A (en) * 2021-09-29 2021-12-07 广东朝歌智慧互联科技有限公司 Focusing device and method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116723419A (en) * 2023-07-03 2023-09-08 北京拙河科技有限公司 Acquisition speed optimization method and device for billion-level high-precision camera
CN116723419B (en) * 2023-07-03 2024-03-22 北京拙河科技有限公司 Acquisition speed optimization method and device for billion-level high-precision camera

Also Published As

Publication number Publication date
CN116261044B (en) 2024-04-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant