CN117956280A - Method, device, storage medium and chip for controlling lens
- Publication number: CN117956280A
- Application number: CN202211328872.1A
- Authority: CN (China)
- Prior art keywords: target, motion state, data, lens, compensation
- Prior art date: 2022-10-27
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Adjustment Of Camera Lenses (AREA)
Abstract
The present disclosure relates to a method, an apparatus, a storage medium, and a chip for controlling a lens. The method acquires the motion state of a target lens at a target moment; when the motion state is a preset motion state, it performs data compensation on first shake data of the target lens at the target moment according to the preset motion state and target position information of the target lens at the target moment, so as to obtain second shake data. The preset motion state includes a first motion state in which the lens approaches a preset boundary or a second motion state in which the lens moves away from the preset boundary, the preset boundary being the boundary of the movable space of the target lens. The target lens is then controlled to move according to the second shake data.
Description
Technical Field
The present disclosure relates to the field of optical imaging, and in particular, to a method, an apparatus, a storage medium, and a chip for controlling a lens.
Background
With the continuous development of image acquisition technology, the quality of images captured by cameras keeps improving, which relies heavily on the OIS (Optical Image Stabilization) function of the camera module. The working principle of OIS is to cancel the camera motion caused by shake by moving the lens, thereby reducing image blur.
When OIS compensates for camera motion by translating the lens, the maximum movement range is limited by the hardware of the lens module; if the camera motion is too severe, the lens can move all the way to the edge of the lens module, and the so-called "edge collision" ("hit edge") phenomenon occurs.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method, apparatus, storage medium, and chip for controlling a lens.
According to a first aspect of an embodiment of the present disclosure, there is provided a method of controlling a lens, including: acquiring a motion state of a target lens at a target moment;
When the motion state is a preset motion state, performing data compensation on first shake data of the target lens at the target moment according to the preset motion state and target position information of the target lens at the target moment to obtain second shake data; the preset motion state comprises a first motion state approaching to a preset boundary or a second motion state far away from the preset boundary, and the preset boundary is a boundary corresponding to a movable space of the target lens;
And controlling the target lens to move according to the second shake data.
Optionally, the performing data compensation on the first shake data of the target lens at the target moment according to the preset motion state and the target position information of the target lens at the target moment, and obtaining the second shake data includes:
determining a target compensation amount determination model according to the preset motion state and the target position information;
determining jitter compensation data according to the target position information and the target compensation amount determination model;
And carrying out data compensation on the first jitter data according to the jitter compensation data to obtain the second jitter data.
Optionally, the determining the target compensation amount determining model according to the preset motion state and the target position information includes:
And if the preset motion state is the first motion state, determining the target compensation amount determining model from a plurality of first preset compensation amount determining models according to the target position information, wherein different first preset compensation amount determining models correspond to different position intervals, and the position intervals are intervals in which the position of the target lens is located.
Optionally, the determining the target compensation amount determining model according to the preset motion state and the target position information includes:
And if the preset motion state is the second motion state, taking a second preset compensation amount determining model as the target compensation amount determining model.
Optionally, the determining jitter compensation data according to the target position information and the target compensation amount determination model includes:
after the target position information is input into the target compensation quantity determining model, outputting a target compensation rate through the target compensation quantity determining model, wherein the target compensation rate represents the compensation force of the target lens movement at the target moment;
and determining the jitter compensation data according to the target compensation rate.
Optionally, the performing data compensation on the first jitter data according to the jitter compensation data, and obtaining the second jitter data includes:
If the preset motion state is the first motion state, taking the difference value between the first jitter data and the jitter compensation data as the second jitter data;
and if the preset motion state is the second motion state, taking the sum of the first jitter data and the jitter compensation data as the second jitter data.
Optionally, the controlling the target lens movement according to the second shake data includes:
determining a target movement amount of the target lens according to the second shake data;
and controlling the movement of the target lens according to the target movement amount.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for controlling a lens, including:
The acquisition module is configured to acquire the motion state of the target lens at the target moment;
The compensation module is configured to perform data compensation on the first shake data of the target lens at the target moment according to the preset motion state and the target position information of the target lens at the target moment when the motion state is the preset motion state, so as to obtain second shake data; the preset motion state comprises a first motion state approaching to a preset boundary or a second motion state far away from the preset boundary, and the preset boundary is a boundary corresponding to a movable space of the target lens;
and a control module configured to control the target lens to move according to the second shake data.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus for controlling a lens, including:
A processor;
A memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a motion state of a target lens at a target moment;
When the motion state is a preset motion state, performing data compensation on first shake data of the target lens at the target moment according to the preset motion state and target position information of the target lens at the target moment to obtain second shake data; the preset motion state comprises a first motion state approaching to a preset boundary or a second motion state far away from the preset boundary, and the preset boundary is a boundary corresponding to a movable space of the target lens;
And controlling the target lens to move according to the second shake data.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of controlling a lens provided by the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a chip comprising a processor and an interface; the processor is configured to read instructions to implement the steps of the method for controlling a lens provided in the first aspect of the present disclosure.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects. When the motion state of the target lens is a preset motion state, data compensation is performed on the first shake data of the target lens at the target moment according to the preset motion state and the target position information of the target lens at the target moment, so as to obtain second shake data. The preset motion state includes a first motion state approaching a preset boundary or a second motion state moving away from the preset boundary, the preset boundary being the boundary of the movable space of the target lens. When the motion state of the target lens at the target moment is the first motion state, compensating the first shake data reduces the shake data and yields smaller second shake data; controlling the target lens based on the second shake data lowers the probability of the target lens colliding with the edge, reduces edge collisions as much as possible, and improves the anti-shake effect and image quality. When the motion state of the target lens at the target moment is the second motion state, compensating the first shake data increases the shake data and yields larger second shake data; controlling the target lens based on the second shake data accelerates the movement of the lens back toward the middle area, which avoids long-lasting drift after an edge collision, solves the problem of picture movement, and improves the quality of the captured image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a diagram showing the processing effect of an OIS algorithm in the related art around an edge collision ("hit edge") and during the subsequent signal suppression;
FIG. 2 is a schematic diagram of the drift phenomenon after the lens separates from an edge collision in an OIS algorithm in the related art;
FIG. 3 is a schematic diagram of the lens position change after the lens separates from an edge collision in an OIS algorithm in the related art;
FIG. 4 is a flowchart illustrating a method of controlling a lens, according to an example embodiment;
FIG. 5 is a flowchart of step S402 according to the embodiment shown in FIG. 4;
FIGS. 6a and 6b are schematic diagrams illustrating the centering effect of the lens after it separates from an edge collision, according to an exemplary embodiment;
FIG. 7 is a block diagram of an apparatus for controlling a lens, according to an exemplary embodiment;
Fig. 8 is a block diagram of another apparatus for controlling a lens according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as recited in the appended claims.
It should be noted that all actions of acquiring signals, information, or data in the present application are performed in compliance with the corresponding data protection regulations and policies of the country where the device is located, and with the authorization given by the owner of the corresponding device.
The method and the device are mainly applied to image acquisition scenarios in which anti-shake control is implemented based on OIS technology. When OIS compensates for camera motion by translating the lens, it is limited by the hardware of the lens module, and the maximum compensation range on each coordinate axis is ±1°. If the camera moves too severely, the lens moves to the edge of the lens module and the "edge collision" phenomenon occurs. One OIS algorithm in the related art does not handle the edge collision of the lens at all. As a result, OIS stabilizes correctly in the first half of the exposure (i.e., before the lens hits the edge), but when the collision occurs before the exposure is finished, the second half of the exposure is not stabilized correctly, and the acquired image is half clear and half blurred.
Another OIS algorithm provided in the related art processes the signals with filters. It first filters the input gyroscope signal (which represents the angular velocity of the camera in the corresponding direction at each moment during camera shake) to remove low-frequency shake; it then integrates the filtered gyroscope signal to obtain the required lens movement angle (the movement angle of the lens at each moment is usually represented by the Hall signal); finally, it filters the obtained angle signal again to pull the signal back toward the zero point. In this way the signal is filtered throughout, but the signal obtained by differentiating the filtered Hall signal (the angular velocity corresponding to the lens movement speed) no longer matches the gyroscope signal, which degrades the actual anti-shake effect. For example, FIG. 1 shows the processing effect of this related-art OIS algorithm around the edge collision and during the subsequent signal suppression. As shown in FIG. 1, in the ideal anti-shake state the differential of the Hall signal closely matches the gyroscope signal (see the signals in the left box in FIG. 1). When the lens hits the edge, the OIS algorithm strongly suppresses the Hall signal; although this keeps the lens in the middle area for a subsequent period of time, the differential of the Hall signal and the gyroscope signal deviate greatly after the lens leaves the edge, which indicates that the anti-shake effect during this period is poor.
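For readers unfamiliar with this related-art pipeline, the Python sketch below illustrates the filter-integrate-filter flow described above. It is only an illustration of the related art, not the method of this disclosure; the sampling rate, filter orders, and cutoff frequencies are assumed values chosen purely for illustration.

```python
import numpy as np
from scipy.signal import butter, lfilter

def related_art_ois(gyro, fs=1000.0, shake_cutoff_hz=0.5, recenter_cutoff_hz=0.5):
    """Sketch of the filter-based related-art OIS pipeline.

    gyro: gyroscope samples (deg/s), the camera's angular velocity due to shake.
    Returns the commanded lens angle (deg) at each sample, i.e. the signal
    that the Hall sensor should follow.
    """
    # 1. High-pass filter the gyroscope signal to remove low-frequency shake.
    b, a = butter(2, shake_cutoff_hz / (fs / 2), btype="highpass")
    gyro_filtered = lfilter(b, a, gyro)

    # 2. Integrate the filtered angular velocity to obtain the lens movement angle.
    angle = np.cumsum(gyro_filtered) / fs

    # 3. Filter the angle again to pull it back toward the zero point (remove drift).
    b, a = butter(2, recenter_cutoff_hz / (fs / 2), btype="highpass")
    return lfilter(b, a, angle)
```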
In addition, as shown in FIG. 2, the excessively large Hall signal value during the edge collision affects the filter calculation, so the lens position can drift for a long time after the collision ends; as shown in FIG. 2, a drift of almost 1° appears after the shake ends, which causes obvious picture movement in the image.
Furthermore, if the lens cannot translate back to (or near) the central area of the lens module after leaving the edge, the subsequent anti-shake range is limited. As shown in FIG. 3, the curve corresponding to the Hall signal does not return to the middle in time after the lens leaves the edge and remains near 1°, so that when the lens shakes next time only a range of about 0.5° remains for anti-shake processing in the correct direction, which limits the subsequent anti-shake range.
In order to solve the above problems, the present disclosure provides a method, an apparatus, a storage medium, and a chip for controlling a lens. When the motion state of the target lens is a preset motion state, data compensation is performed on the first shake data of the target lens at the target moment according to the preset motion state and the target position information of the target lens at the target moment, so as to obtain second shake data. The preset motion state includes a first motion state approaching a preset boundary or a second motion state moving away from the preset boundary, the preset boundary being the boundary of the movable space of the target lens. When the motion state of the target lens at the target moment is the first motion state, compensating the first shake data reduces the shake data and yields smaller second shake data, so that controlling the target lens based on the second shake data lowers the probability of an edge collision, reduces edge collisions as much as possible, and improves the anti-shake effect and image quality. When the motion state of the target lens at the target moment is the second motion state, compensating the first shake data increases the shake data and yields larger second shake data, so that controlling the target lens based on the second shake data accelerates the movement of the lens back toward the middle area, avoids long-lasting drift after an edge collision, solves the problem of picture movement, and improves the quality of the captured image.
The following detailed description of specific embodiments of the present disclosure refers to the accompanying drawings.
Fig. 4 is a flowchart illustrating a method of controlling a lens, as shown in fig. 4, according to an exemplary embodiment, the method including the following steps.
In step S401, a motion state of the target lens at the target timing is acquired.
In a practical application scenario, when a user holds a camera to take a picture, the camera may shake due to external factors. If the shake is not handled, the captured image will be blurred and the image quality will be degraded.
In one possible implementation of the present disclosure, during the movement of the target lens, the motion state of the target lens may be divided into a first motion state, an "edge collision" state, and a second motion state. The first motion state refers to the process in which the target lens moves from the central area of the lens module toward the edge of the lens module; when the target lens has moved to the edge of the lens module, it is in the "edge collision" state; and the process in which the target lens, after separating from the edge, moves from the edge of the lens module back to the central area is referred to as the second motion state (also described as the motion state in which the target lens moves away from the edge of the lens module).
In step S402, when the motion state is a preset motion state, according to the preset motion state and the target position information of the target lens at the target time, data compensation is performed on the first shake data of the target lens at the target time, so as to obtain second shake data.
The preset motion state includes a first motion state approaching a preset boundary or a second motion state moving away from the preset boundary, where the preset boundary is the boundary of the movable space of the target lens; for example, the preset boundary may be the edge position of the lens module. In addition, the target moment may be the current moment, and the target position information may be the position information of the target lens at the current moment. For example, a coordinate system may be established in the plane where the target lens is located, with the origin of coordinates at the preset center point of that plane when the target lens is in the central area of the lens module; the target position information can then be represented by the angle of the current position of the center point of the target lens relative to the coordinate axes.
In addition, the first shake data is the angular velocity of the target lens moving in the corresponding direction at the target moment; for example, the first shake data can be represented by a gyroscope signal that characterizes the camera shake. The second shake data is the shake data obtained after the first shake data is compensated based on the calculated shake compensation data; the data compensation either increases the first shake data to obtain the second shake data or decreases the first shake data to obtain the second shake data. For example, in a practical scenario, if the current motion state of the target lens is the first motion state of approaching the edge of the lens module, the lens has not yet hit the edge; to reduce edge collisions as much as possible, the first shake data should be decreased so that the target lens decelerates as it moves toward the edge of the lens module. Conversely, if the current motion state of the target lens is the second motion state of moving away from the edge of the lens module, the lens has already separated from the edge, and the target lens should be controlled to "return to the center" (i.e., return to the central area of the lens module); in this case the first shake data can be increased through data compensation to obtain second shake data that makes the target lens return faster.
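The disclosure does not specify how the motion state itself is detected. The sketch below is one plausible way to do it, assuming the lens position and velocity along one axis are available (e.g. from the Hall signal and its derivative) and that the "near the boundary" threshold of 0.75*Bound is borrowed from the position intervals given later; all of these are assumptions for illustration, not requirements of the disclosure.

```python
NEAR_BOUNDARY_FRACTION = 0.75  # assumed threshold, taken from the first position interval

def classify_motion_state(position, velocity, bound):
    """Classify the preset motion state along one axis.

    position: current lens offset (deg), e.g. from the Hall signal.
    velocity: current lens velocity (deg/s), e.g. the Hall signal differentiated.
    bound:    maximum translatable angle of the lens (deg).

    Returns "first" (approaching the preset boundary), "second" (moving away
    from it), or None when no preset motion state applies.
    """
    if abs(position) < NEAR_BOUNDARY_FRACTION * bound:
        return None  # lens is still near the center; no compensation needed
    if position * velocity > 0:
        return "first"   # moving further toward the edge it is already close to
    if position * velocity < 0:
        return "second"  # moving back toward the central area
    return None
```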
In step S403, the target lens movement is controlled according to the second shake data.
In this step, a target movement amount of the target lens may be determined according to the second shake data, and the target lens is then controlled to move according to the target movement amount.
The target movement amount may include a movement angle of the target lens with respect to the coordinate axis, or an offset amount of the target lens with respect to the origin of coordinates in each coordinate axis direction.
In a possible implementation, the target movement amount may be obtained by integrating the second shake data, and the target lens is then controlled to move according to the target movement amount, thereby preventing image blur caused by camera shake.
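A minimal sketch of this step, assuming the second shake data is a per-axis angular-velocity sample and the target movement amount is a lens angle clipped to the movable range (the sample period dt and the clipping are assumptions, not stated in the disclosure):

```python
def update_target_angle(prev_angle, second_shake, dt, bound):
    """Integrate the second shake data (deg/s) into a target lens angle (deg),
    limited to the movable range [-bound, bound]."""
    angle = prev_angle + second_shake * dt
    return max(-bound, min(bound, angle))
```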
With this method, when the motion state of the target lens at the target moment is the first motion state, compensating the first shake data reduces the shake data and yields smaller second shake data; controlling the target lens based on the second shake data lowers the probability of the target lens colliding with the edge, reduces edge collisions as much as possible, and improves the anti-shake effect and image quality. When the motion state of the target lens at the target moment is the second motion state, compensating the first shake data increases the shake data and yields larger second shake data; controlling the target lens based on the second shake data accelerates the movement of the lens back toward the middle area, avoids long-lasting drift after an edge collision, solves the problem of picture movement, and improves the quality of the captured image.
Fig. 5 is a flowchart of step S402 according to the embodiment shown in Fig. 4. As shown in Fig. 5, step S402 includes the following steps:
in step S4021, a target compensation amount determination model is determined according to the preset motion state and the target position information.
The target compensation amount determination model is used to calculate a target compensation rate, and the target compensation rate characterizes the compensation strength applied to the movement of the target lens at the target moment.
In one possible implementation manner of this step, if the preset motion state is the first motion state, the target compensation amount determining model may be determined from a plurality of first preset compensation amount determining models according to the target position information, and different first preset compensation amount determining models correspond to different position intervals, where the position interval is an interval where the position of the target lens is located.
When the target lens is in the first motion state, the target lens is moving from the central area of the lens module toward its edge. In order to reduce the probability of an edge collision, different compensation amount determination models can be set for different position intervals, so that the first shake data is compensated differently depending on the interval in which the current position of the target lens falls.
For example, a first position interval and a second position interval may be set, where the first position interval may be set to [0.75*Bound, 0.9*Bound) and the second position interval to [0.9*Bound, Bound), with Bound representing the maximum translatable angle of the target lens. The first preset compensation amount determination model corresponding to the first position interval [0.75*Bound, 0.9*Bound) is formula (1):
The first preset compensation amount determination model corresponding to the second position interval [0.9*Bound, Bound) is formula (2):
In formulas (1) and (2), k represents the target compensation rate, which characterizes the compensation strength applied to the movement of the target lens at different moments, and h represents the target position information of the lens. The other parameters in formulas (1) and (2) are preset empirical values; in practice they can be set as needed, and the present disclosure does not limit them.
Thus, if the target lens is currently in the first motion state, the target compensation amount determination model is determined to be formula (1) when the angle between the current position of the target lens and the coordinate axis lies in the first position interval [0.75*Bound, 0.9*Bound), and it is determined to be formula (2) when that angle lies in the second position interval [0.9*Bound, Bound).
In addition, in this step, if the preset motion state is the second motion state, the target lens is moving from the edge of the lens module toward the central area, that is, it has separated from the edge collision. In order to let the target lens return to the center as soon as possible, the shake data needs to be increased, and in this case a second preset compensation amount determination model may be used as the target compensation amount determination model.
For example, if the preset motion state is the second motion state, the following second preset compensation amount determination model (i.e., equation 3) may be used as the target compensation amount determination model:
k=-0.5*cos(π*x)+0.5,{0≤x≤1} (3)
Wherein k represents a target compensation rate, and x represents a ratio of an offset angle of the target lens relative to the coordinate axis at the current moment to a maximum translatable angle Bound of the target lens.
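A sketch of the compensation-rate models is given below. Formula (3) is implemented as stated; formulas (1) and (2) appear only as images in the original publication and are not reproduced in this text, so the first-state branches use simple placeholder ramps over the stated intervals purely for illustration and are not the patented models.

```python
import math

def compensation_rate(state, h, bound):
    """Target compensation rate k for lens position h (deg) and maximum angle bound.

    state == "first":  approaching the edge; placeholder ramps stand in for
                       formulas (1) and (2), which are not reproduced here.
    state == "second": leaving the edge; formula (3) as given in the text.
    """
    x = min(abs(h) / bound, 1.0)
    if state == "first":
        if 0.75 <= x < 0.9:
            return 0.5 * (x - 0.75) / 0.15      # placeholder for formula (1)
        if x >= 0.9:
            return 0.5 + 0.5 * (x - 0.9) / 0.1  # placeholder for formula (2)
        return 0.0
    if state == "second":
        # Formula (3): k = -0.5*cos(pi*x) + 0.5, 0 <= x <= 1
        return -0.5 * math.cos(math.pi * x) + 0.5
    return 0.0
```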
In step S4022, shake compensation data is determined according to the target position information and the target compensation amount determination model.
In this step, the target position information is input into the target compensation amount determination model, and the target compensation amount determination model outputs a target compensation rate, where the target compensation rate characterizes the compensation strength applied to the movement of the target lens at the target moment; the shake compensation data is then determined according to the target compensation rate.
For example, the target compensation rate may be calculated by inputting the target position information of the target lens into one of the target compensation amount determination models in formulas (1)-(3), and the shake compensation data may then be determined from the target compensation rate by:
α=k*0.05*Bound (4)
Where α represents the shake compensation data and k represents the target compensation rate.
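Formula (4) translates directly into code, reusing the compensation rate k computed by the sketch above:

```python
def shake_compensation_data(k, bound):
    """Formula (4): alpha = k * 0.05 * Bound."""
    return k * 0.05 * bound
```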
In step S4023, data compensation is performed on the first shake data according to the shake compensation data, so as to obtain the second shake data.
In this step, if the preset motion state is the first motion state, the difference between the first shake data and the shake compensation data is taken as the second shake data; if the preset motion state is the second motion state, the sum of the first shake data and the shake compensation data is taken as the second shake data.
For example, the first shake data may be compensated by the following formula to obtain the second shake data:
g=g-α (5)
Wherein g on the left of the equal sign represents the compensated second shake data, g on the right of the equal sign represents the first shake data, and α represents the shake compensation data calculated based on formula (4).
It should be noted that, if the motion state of the target lens at the target moment is the first motion state, the target compensation rate may be calculated based on formula (1) or formula (2), and the shake compensation data is then calculated from the target compensation rate based on formula (4). In this case the target compensation rate is positive and the shake compensation data is also positive, so the second shake data obtained after compensating the first shake data based on formula (5) is smaller than the first shake data; the lens movement angle obtained by integrating the smaller second shake data is therefore also smaller than the angle calculated from the first shake data, which reduces the possibility that the target lens "hits the edge" during the anti-shake movement. If the motion state of the target lens at the target moment is the second motion state, the target compensation rate may be calculated based on formula (3), and the shake compensation data is then calculated based on formula (4). In this scenario the target compensation rate is negative and the shake compensation data is also negative, so the second shake data obtained after compensating the first shake data based on formula (5) is larger than the first shake data; the lens movement angle obtained by integrating the larger second shake data is therefore also larger than the angle calculated from the first shake data, so the target lens is accelerated toward the middle area after it separates from the edge, long-lasting drift after an edge collision is avoided, the problem of picture movement is solved, and the quality of the captured image is improved.
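The sketch below ties the pieces together, reusing the helpers from the earlier sketches. It applies the compensation in the difference/sum form of the description (subtract in the first motion state, add in the second), treating α as a non-negative magnitude and ignoring per-axis sign handling; the per-sample loop `ois_step` and its inputs are illustrative assumptions, not the disclosed implementation.

```python
def second_shake_data(first_shake, alpha, state):
    """Data compensation: difference in the first motion state, sum in the second."""
    if state == "first":
        return first_shake - alpha  # smaller shake data: decelerate toward the edge
    if state == "second":
        return first_shake + alpha  # larger shake data: faster return to the center
    return first_shake

def ois_step(gyro_sample, hall_position, hall_velocity, prev_angle, dt, bound):
    """One anti-shake update per axis, combining the earlier sketches."""
    state = classify_motion_state(hall_position, hall_velocity, bound)
    if state is None:
        second = gyro_sample
    else:
        k = compensation_rate(state, hall_position, bound)
        alpha = shake_compensation_data(k, bound)
        second = second_shake_data(gyro_sample, alpha, state)
    return update_target_angle(prev_angle, second, dt, bound)
```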
Fig. 6a and 6b are schematic diagrams illustrating the centering effect of the lens after it separates from the edge, according to an exemplary embodiment. As shown in Fig. 6a, after the lens separates from the edge, the Hall signal obtained with the compensated gyroscope signal remains substantially stable in the central area of the lens module compared with the Hall signal before compensation, and no drift occurs, which solves the problem of picture movement. As shown in Fig. 6b, after the lens separates from the edge, the Hall signal obtained with the compensated gyroscope signal likewise remains substantially stable in the central area of the lens module, providing a better centering effect and leaving a larger anti-shake range for subsequent shake of the lens.
Fig. 7 is a block diagram illustrating an apparatus for controlling a lens according to an exemplary embodiment. Referring to fig. 7, the apparatus includes:
An obtaining module 701 configured to obtain a motion state of a target lens at a target time;
The compensation module 702 is configured to perform data compensation on the first shake data of the target lens at the target moment according to the preset motion state and the target position information of the target lens at the target moment when the motion state is the preset motion state, so as to obtain second shake data; the preset motion state comprises a first motion state approaching to a preset boundary or a second motion state far away from the preset boundary, and the preset boundary is a boundary corresponding to a movable space of the target lens;
A control module 703 configured to control the target lens movement according to the second shake data.
Optionally, the compensation module 702 is configured to determine a target compensation amount determination model according to the preset motion state and the target position information; determining jitter compensation data according to the target position information and the target compensation amount determination model; and carrying out data compensation on the first jitter data according to the jitter compensation data to obtain the second jitter data.
Optionally, the compensation module 702 is configured to determine, if the preset motion state is the first motion state, the target compensation amount determination model from a plurality of first preset compensation amount determination models according to the target position information, where different first preset compensation amount determination models correspond to different position intervals, and the position interval is an interval where the position of the target lens is located.
Optionally, the compensation module 702 is configured to use a second preset compensation amount determination model as the target compensation amount determination model if the preset motion state is the second motion state.
Optionally, the compensation module 702 is configured to input the target position information into the target compensation amount determination model, and then output a target compensation rate through the target compensation amount determination model, where the target compensation rate characterizes a compensation force of the target lens movement at a target moment; and determining the jitter compensation data according to the target compensation rate.
Optionally, the compensation module 702 is configured to take a difference value between the first jitter data and the jitter compensation data as the second jitter data if the preset motion state is the first motion state; and if the preset motion state is the second motion state, taking the sum of the first jitter data and the jitter compensation data as the second jitter data.
Optionally, the control module 703 is configured to determine a target movement amount of the target lens according to the second shake data; and controlling the movement of the target lens according to the target movement amount.
The specific manner in which the respective modules perform their operations in the apparatus of the above embodiment has been described in detail in the embodiments of the method and will not be repeated here.
With this apparatus, when the motion state of the target lens at the target moment is the first motion state, compensating the first shake data reduces the shake data and yields smaller second shake data; controlling the target lens based on the second shake data lowers the probability of the target lens colliding with the edge, reduces edge collisions as much as possible, and improves the anti-shake effect and image quality. When the motion state of the target lens at the target moment is the second motion state, compensating the first shake data increases the shake data and yields larger second shake data; controlling the target lens based on the second shake data accelerates the movement of the lens back toward the middle area, avoids long-lasting drift after an edge collision, solves the problem of picture movement, and improves the quality of the captured image.
The present disclosure also provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of controlling a lens provided by the present disclosure.
Fig. 8 is a block diagram illustrating an apparatus 800 for controlling a lens according to an exemplary embodiment. For example, apparatus 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 8, apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the method of controlling a lens described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 800 is in an operating mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
Input/output interface 812 provides an interface between processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor assembly 814 may detect the on/off state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor assembly 814 may also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the above-described method of controlling a lens.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including instructions executable by processor 820 of apparatus 800 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
The apparatus may be a stand-alone electronic device or part of a stand-alone electronic device. For example, in one embodiment, the apparatus may be an integrated circuit (IC) or a chip, where the integrated circuit may be a single IC or a collection of ICs; the chip may include, but is not limited to, the following: a GPU (Graphics Processing Unit), a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an SoC (System on Chip), and the like. The integrated circuit or chip may be configured to execute executable instructions (or code) to implement the above method of controlling a lens. The executable instructions may be stored on the integrated circuit or chip, or may be obtained from another device or apparatus; for example, the integrated circuit or chip may include a processor, a memory, and an interface for communicating with other devices. The executable instructions may be stored in the memory, and when executed by the processor they implement the above method of controlling a lens; alternatively, the integrated circuit or chip may receive the executable instructions through the interface and transmit them to the processor for execution, so as to implement the above method of controlling a lens.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described method of controlling a lens when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (11)
1. A method of controlling a lens, comprising:
acquiring a motion state of a target lens at a target moment;
When the motion state is a preset motion state, performing data compensation on first shake data of the target lens at the target moment according to the preset motion state and target position information of the target lens at the target moment to obtain second shake data; the preset motion state comprises a first motion state approaching to a preset boundary or a second motion state far away from the preset boundary, and the preset boundary is a boundary corresponding to a movable space of the target lens;
And controlling the target lens to move according to the second shake data.
2. The method of claim 1, wherein the performing data compensation on the first shake data of the target lens at the target moment according to the preset motion state and the target position information of the target lens at the target moment, to obtain the second shake data includes:
determining a target compensation amount determination model according to the preset motion state and the target position information;
determining jitter compensation data according to the target position information and the target compensation amount determination model;
And carrying out data compensation on the first jitter data according to the jitter compensation data to obtain the second jitter data.
3. The method of claim 2, wherein the determining a target compensation amount determination model according to the preset motion state and the target position information comprises:
And if the preset motion state is the first motion state, determining the target compensation amount determining model from a plurality of first preset compensation amount determining models according to the target position information, wherein different first preset compensation amount determining models correspond to different position intervals, and the position intervals are intervals in which the position of the target lens is located.
4. The method of claim 2, wherein the determining a target compensation amount determination model according to the preset motion state and the target position information comprises:
And if the preset motion state is the second motion state, taking a second preset compensation amount determining model as the target compensation amount determining model.
5. The method of claim 2, wherein said determining jitter compensation data from said target location information and said target compensation amount determination model comprises:
after the target position information is input into the target compensation quantity determining model, outputting a target compensation rate through the target compensation quantity determining model, wherein the target compensation rate represents the compensation force of the target lens movement at the target moment;
and determining the jitter compensation data according to the target compensation rate.
6. The method of claim 2, wherein the performing data compensation on the first jitter data according to the jitter compensation data to obtain the second jitter data comprises:
If the preset motion state is the first motion state, taking the difference value between the first jitter data and the jitter compensation data as the second jitter data;
and if the preset motion state is the second motion state, taking the sum of the first jitter data and the jitter compensation data as the second jitter data.
7. The method of any of claims 1-6, wherein the controlling the target lens movement according to the second shake data comprises:
determining a target movement amount of the target lens according to the second shake data;
and controlling the movement of the target lens according to the target movement amount.
8. An apparatus for controlling a lens, comprising:
The acquisition module is configured to acquire the motion state of the target lens at the target moment;
The compensation module is configured to perform data compensation on the first shake data of the target lens at the target moment according to the preset motion state and the target position information of the target lens at the target moment when the motion state is the preset motion state, so as to obtain second shake data; the preset motion state comprises a first motion state approaching to a preset boundary or a second motion state far away from the preset boundary, and the preset boundary is a boundary corresponding to a movable space of the target lens;
and a control module configured to control the target lens to move according to the second shake data.
9. An apparatus for controlling a lens, comprising:
A processor;
A memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a motion state of a target lens at a target moment;
When the motion state is a preset motion state, performing data compensation on first shake data of the target lens at the target moment according to the preset motion state and target position information of the target lens at the target moment to obtain second shake data; the preset motion state comprises a first motion state approaching to a preset boundary or a second motion state far away from the preset boundary, and the preset boundary is a boundary corresponding to a movable space of the target lens;
And controlling the target lens to move according to the second shake data.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1 to 7.
11. A chip, comprising a processor and an interface; the processor is configured to read instructions to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---|
CN202211328872.1A | 2022-10-27 | 2022-10-27 | Method, device, storage medium and chip for controlling lens
Publications (1)
Publication Number | Publication Date
---|---|
CN117956280A | 2024-04-30
Family ID: 90798568
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination