CN115242967B - Imaging anti-shake method, imaging anti-shake apparatus, image pickup device, and readable storage medium - Google Patents
- Publication number
- CN115242967B CN115242967B CN202210645359.9A CN202210645359A CN115242967B CN 115242967 B CN115242967 B CN 115242967B CN 202210645359 A CN202210645359 A CN 202210645359A CN 115242967 B CN115242967 B CN 115242967B
- Authority
- CN
- China
- Prior art keywords
- sample
- jitter
- compensation
- shake
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The application discloses an imaging anti-shake method, an imaging anti-shake apparatus, an image pickup device, and a readable storage medium. The imaging anti-shake method includes: obtaining shake state data of the image pickup device at the current moment based on first shake data of the image pickup device at the current moment, wherein the shake state data includes the first shake data and at least one piece of second shake data acquired by the image pickup device before the current moment; performing compensation prediction based on the shake state data of the image pickup device at the current moment to obtain shake compensation data at the current moment; and controlling a shake compensator to perform shake compensation on an imaging element of the image pickup device based on the shake compensation data at the current moment. With this scheme, the shake trend can be predicted and shake can be suppressed in advance as accurately as possible.
Description
Technical Field
The present application relates to the field of imaging, and more particularly, to an imaging anti-shake method, an imaging anti-shake apparatus, an image pickup device, and a readable storage medium.
Background
With the rapid development of imaging technology, image pickup devices such as dome cameras and thermal imagers are widely used in production and daily life.
However, shake is unavoidable in the application environment of an image pickup device, so anti-shake control is particularly necessary in application scenarios that require clear images. Existing anti-shake methods for image pickup devices wait until a displacement deviation is already evident and then apply displacement compensation in the opposite direction; because they lack predictability, the anti-shake control lags behind the shake, which makes it inaccurate. In view of this, how to predict the shake trend, suppress shake in advance, and improve anti-shake accuracy is a problem to be solved.
Disclosure of Invention
The present application mainly addresses the technical problem of providing an imaging anti-shake method, an imaging anti-shake apparatus, an image pickup device, and a readable storage medium that can suppress shake of the image pickup device during imaging as much as possible.
To solve the above technical problem, a first aspect of the present application provides an imaging anti-shake method, including: obtaining shake state data of the image pickup device at the current moment based on first shake data of the image pickup device at the current moment, wherein the shake state data includes the first shake data and at least one piece of second shake data acquired by the image pickup device before the current moment; performing compensation prediction based on the shake state data of the image pickup device at the current moment to obtain shake compensation data at the current moment; and controlling a shake compensator to perform shake compensation on an imaging element of the image pickup device based on the shake compensation data at the current moment.
To solve the above technical problem, a second aspect of the present application provides an imaging anti-shake apparatus including a shake state acquisition module, a shake compensation prediction module, and a shake compensation control module. The shake state acquisition module is configured to obtain shake state data of an image pickup device at the current moment based on first shake data of the image pickup device at the current moment, wherein the shake state data includes the first shake data and at least one piece of second shake data acquired by the image pickup device before the current moment. The shake compensation prediction module is configured to perform compensation prediction based on the shake state data of the image pickup device at the current moment to obtain shake compensation data at the current moment. The shake compensation control module is configured to control a shake compensator to perform shake compensation on an imaging element of the image pickup device based on the shake compensation data at the current moment.
To solve the above technical problem, a third aspect of the present application provides an image pickup device including a shake compensator, an imaging element, a processor, and a memory, where the shake compensator, the imaging element, and the memory are each coupled to the processor, and the processor is configured to execute program instructions stored in the memory to implement the imaging anti-shake method of the first aspect.
In order to solve the above technical problem, a fourth aspect of the present application provides a computer-readable storage medium storing program instructions executable by a processor for implementing the imaging anti-shake method in the above first aspect.
In the above scheme, the first shake data of the image pickup device at the current moment is analyzed to obtain the shake state data at the current moment; compensation prediction is then performed on the shake state data to obtain the shake compensation data at the current moment; and finally the shake compensator is controlled to perform shake compensation on the imaging element of the image pickup device according to the shake compensation data. Because shake-trend prediction analysis of the shake state data takes both the future shake trend and the current shake state into account, shake can be suppressed in advance as accurately as possible.
Drawings
FIG. 1 is a flow chart of an embodiment of an anti-shake method of the present application;
FIG. 2 is a flow chart of an embodiment of a sample utility value acquisition method;
- FIG. 3 is a schematic view of an embodiment of a target carrier;
FIG. 4 is a flowchart illustrating an embodiment of step S202 in FIG. 2;
FIG. 5 is a flow chart of an embodiment of a jitter compensation model training method;
FIG. 6 is a flowchart illustrating an embodiment of step S501 in FIG. 5;
FIG. 7 is a schematic diagram of an embodiment of a jitter compensator performing jitter compensation;
FIG. 8 is a schematic diagram of a frame of an embodiment of an imaging anti-shake apparatus according to the application;
FIG. 9 is a schematic view of a frame of an embodiment of an image pickup device of the present application;
FIG. 10 is a schematic diagram of a frame of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. Further, "a plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of an imaging anti-shake method according to the present application.
Specifically, the imaging anti-shake method in this embodiment may include the steps of:
Step S11: and obtaining the jitter state data of the image pickup device at the current moment based on the first jitter data of the image pickup device at the current moment.
In the present embodiment, the image pickup device includes a spatial sensing element such as a gyroscope, an angular motion sensor, or the like for acquiring shake data.
In one implementation scenario, the shake data is acquired at a preset time interval, which may be set to a specific value such as 500 ms or 1 second, without specific limitation here.
In one implementation, the shake data includes the relative coordinate position of the image pickup device along the X-axis, Y-axis, and vertical directions of a three-dimensional coordinate system, together with time information. For example, the tuple (1, 2, 3, 15:30:10) indicates that at 15:30:10 the relative coordinate position of the image pickup device is 1 along the X-axis direction, 2 along the Y-axis direction, and 3 along the vertical direction.
In one implementation scenario, the shake state data of the image capturing device at the current time includes first shake data of the image capturing device at the current time and at least one second shake data acquired by the image capturing device before the current time.
In a specific implementation scenario, a data list may be maintained for the shake data and used as the shake state data, where the maximum storage capacity of the shake data list is no less than two sets of shake data and the entries are arranged in time order. When the capacity is full, the list evicts the oldest entry each time new shake data arrives. For example, suppose the maximum storage capacity is 8 sets of shake data and the list at the current moment contains (1, 2, 3, 15:30:10), (3, 2, 3, 15:30:30), (4, 15:31:15), …; when the latest shake data (5, 6, 7, 15:32:00) is acquired, (1, 2, 3, 15:30:10) is removed to free storage space and the latest shake data is stored.
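The fixed-capacity, time-ordered shake data list described above can be sketched with a bounded double-ended queue; the class and field names below are illustrative, not from the patent.

```python
from collections import deque

class ShakeStateBuffer:
    """Fixed-capacity shake-data list: once full, the oldest entry is
    evicted automatically each time new shake data arrives (capacity 8
    follows the example in the text)."""

    def __init__(self, capacity=8):
        self._buf = deque(maxlen=capacity)

    def push(self, sample):
        # sample: (x, y, z, timestamp) relative-position tuple
        self._buf.append(sample)

    def state(self):
        # Shake state data: all buffered samples in chronological order
        return list(self._buf)

buf = ShakeStateBuffer(capacity=8)
for i in range(10):          # push 10 samples into a buffer that holds 8
    buf.push((i, i, i, f"15:30:{i:02d}"))
print(len(buf.state()))      # 8: the two oldest samples were evicted
print(buf.state()[0][:3])    # (2, 2, 2): samples 0 and 1 were dropped
```

Using `deque(maxlen=...)` gives the eviction behavior for free: appending to a full deque silently discards the element at the opposite end.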
Step S12: and carrying out compensation prediction based on the jitter state data of the image pickup device at the current moment to obtain jitter compensation data at the current moment.
In one implementation scenario, to improve the efficiency and accuracy of compensation prediction, a shake compensation model may be trained in advance. The shake compensation model includes a compensation generation network, which performs compensation prediction according to the shake state data of the image pickup device at the current moment to obtain the shake compensation data at the current moment. It should be noted that the compensation generation network is only one part of the shake compensation model; the model further includes a compensation evaluation network, which evaluates the compensation effect so as to improve the network performance of the compensation generation network. Details can be found in the related description below and are not repeated here.
Further, the shake compensation model is obtained by training on sample data, where the sample data includes: a sample first shake state at a sample first moment, a sample first shake compensation at the sample first moment, a sample utility value after the sample first shake compensation is performed, and a sample second shake state at a sample second moment after the sample first shake compensation is performed. Here, the sample first shake state at the sample first moment represents the current shake state of the image pickup device; the sample first shake compensation at the sample first moment represents the shake compensation action generated by the shake compensation model; the sample utility value is a value that quantitatively evaluates the effect of the shake compensation once it has been performed; and the sample second shake state represents the shake state of the image pickup device after the shake compensation has been performed.
In this scheme, performing compensation prediction through the compensation generation network of the shake compensation model makes the prediction more intelligent. Moreover, the sample data used to train the model (the sample first shake state at the sample first moment, the sample first shake compensation at the sample first moment, the sample utility value after the sample first shake compensation is performed, and the sample second shake state at the sample second moment after the sample first shake compensation is performed) fully considers the shake states at both the current moment and the next moment, so the trained shake compensation model can be more refined and subsequent shake compensation more accurate.
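The four sample components listed above have the shape of a reinforcement-learning transition (state, action, reward, next state); a minimal sketch, with illustrative field names that are our assumptions:

```python
from typing import NamedTuple, Tuple

class Transition(NamedTuple):
    """One training sample for the shake compensation model: the shake
    state at the sample first moment, the compensation performed, the
    utility value observed, and the shake state at the sample second
    moment. Field names are illustrative, not from the patent."""
    state_t: Tuple    # sample first shake state
    action_t: Tuple   # sample first shake compensation
    utility: float    # sample utility value after the compensation
    state_t1: Tuple   # sample second shake state

sample = Transition(
    state_t=((1, 2, 3, "15:30:10"),),
    action_t=(1, 2, 0, 0.0, 0.0),
    utility=0.7,
    state_t1=((1, 1, 3, "15:30:11"),),
)
print(sample.utility)  # 0.7
print(len(sample))     # 4 components, as in the text
```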
In another implementation scenario, the jitter compensation model may be constructed with reference to a convolutional neural network model or other neural network model.
In one implementation, the shake compensation data includes a first compensation amount of the imaging element in the X-axis direction and a second compensation amount of the imaging element in the Y-axis direction.
In a specific implementation scenario, the shake compensation data may be expressed using a mathematical expression including two elements, for example, expression (1, 2) indicates that a first compensation amount in the X-axis direction of the imaging element is 1 unit distance, and a second compensation amount in the Y-axis direction of the imaging element is 2 unit distances.
In another implementation scenario, in addition to the first compensation amount of the imaging element in the X-axis direction and the second compensation amount in the Y-axis direction, the shake compensation data further includes a third compensation amount of the imaging element in the vertical direction, the inclination angle of the imaging element in the vertical direction, and the rotation angle of the plane in which the imaging element lies. In this scheme, the shake compensation data is extended from the horizontal direction to the vertical direction, so shake in both the horizontal and vertical directions is considered comprehensively; this refines the granularity of the anti-shake control, enables control that is as precise as possible, and improves the anti-shake effect.
In a specific implementation scenario, the shake compensation data may be expressed as a five-element tuple. For example, (1, 2, 3, 15°, 30°) indicates that the first compensation amount of the imaging element in the X-axis direction is 1 unit distance, the second compensation amount in the Y-axis direction is 2 unit distances, the third compensation amount in the vertical direction is 3 unit distances, the inclination angle of the imaging element in the vertical direction is 15°, and the rotation angle of the plane in which the imaging element lies is 30°.
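The five-element compensation record can be given names for readability; this dataclass is a sketch, and the field names are assumptions:

```python
from dataclasses import dataclass, astuple

@dataclass
class ShakeCompensation:
    dx: float        # first compensation amount, X-axis (unit distances)
    dy: float        # second compensation amount, Y-axis (unit distances)
    dz: float        # third compensation amount, vertical (unit distances)
    tilt_deg: float  # inclination angle of the imaging element (degrees)
    rot_deg: float   # rotation angle of the imaging plane (degrees)

# The (1, 2, 3, 15°, 30°) example from the text:
comp = ShakeCompensation(1, 2, 3, 15.0, 30.0)
print(astuple(comp))  # (1, 2, 3, 15.0, 30.0)
```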
Referring to fig. 2, fig. 2 is a flowchart illustrating an embodiment of a sample utility value obtaining method.
Specifically, the sample utility value obtaining method in this embodiment may include the following steps:
Step S201: after performing the first shake compensation of the sample, a first image of the sample taken by the image pickup device on the target carrier is acquired.
Referring to fig. 3, fig. 3 is a schematic diagram of an embodiment of a target carrier. Specifically, the target carrier is a sheet of white paper, and the target object may be a black circular pattern; it may equally be a triangle or another shape, or use another color such as red, without specific limitation here. What matters is that the target object differs markedly from the target carrier in both color and shape. The sample first image is obtained by photographing the target carrier containing the target object; because the target object contrasts clearly with the target carrier, the sample first image can directly reflect the degree of shake of the image pickup device after the sample first shake compensation is performed.
Step S202: and obtaining a first utility value based on the difference between the target object on the first image of the sample and the target object on the second image of the sample.
In one implementation scenario, the sample second image is obtained by photographing the target carrier while the image pickup device is in a static state, and thus reflects the shake degree of the image pickup device when static.
Further, by comparing the difference between the sample first image and the sample second image, a first utility value is obtained that characterizes the shake compensation effect of performing the sample first shake compensation.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of step S202 in fig. 2. Specifically, step S202 in the above embodiment may include the steps of:
Step S2021: a sample first contour of the target object on the sample first image is detected and a sample second contour of the target object on the sample second image is detected.
In one implementation, to minimize errors in subsequent calculations, multiple sample first images and multiple sample second images may be captured rather than one of each.
In one implementation, a contour detection algorithm may be employed to extract the contours of the target object on the image.
In a specific implementation scenario, a contour tracking algorithm, an image subset-based algorithm, a run-based algorithm, and other contour detection algorithms may be used to detect a first contour of a target object on a first image of a sample and a second contour of a target object on a second image of a sample, which are not particularly limited herein.
Step S2022: a first distance of the sample between the two points furthest apart on the first profile of the sample is obtained and a second distance of the sample between the two points furthest apart on the second profile of the sample is obtained.
In a specific implementation scenario, take 5 sample first images and 5 sample second images as an example. For the edge contour of each sample first image, a distance calculation method such as the rotating calipers algorithm may be used to compute the distance between the two farthest-apart points on the contour; averaging these maximum distances over the 5 sample first images yields the sample first distance, denoted D1. Each sample second image is processed in the same way as the sample first images (not repeated here) to obtain the distance between its two farthest-apart points, and the average gives the sample second distance, denoted D2.
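A brute-force O(n²) search is enough to illustrate the farthest-point distance and the averaging over several images; the rotating calipers algorithm mentioned above would replace the inner loops for large contours. Function names are illustrative.

```python
import math

def max_contour_distance(points):
    """Distance between the two farthest-apart points on one contour
    (brute-force sketch; rotating calipers is the efficient alternative)."""
    best = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            best = max(best, math.hypot(x2 - x1, y2 - y1))
    return best

def mean_max_distance(contours):
    # Average the per-image maximum distances (5 images in the example)
    return sum(max_contour_distance(c) for c in contours) / len(contours)

square = [(0, 0), (3, 0), (3, 4), (0, 4)]
print(max_contour_distance(square))                   # 5.0, the diagonal
print(mean_max_distance([square, [(0, 0), (0, 2)]]))  # (5.0 + 2.0) / 2 = 3.5
```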
Step S2023: and obtaining a first utility value based on an absolute difference between the first distance of the sample and the second distance of the sample.
In one implementation scenario, the first utility value may be obtained by taking the logarithm of the absolute difference between the sample first distance and the sample second distance, or the absolute difference itself may be used directly as the first utility value.
In a specific implementation scenario, with D1 denoting the sample first distance and D2 the sample second distance as above, the absolute difference between D1 and D2 may be logarithmized to obtain the first utility value, for example log10(|D1-D2|) or ln(|D1-D2|); alternatively, |D1-D2| may be used directly as the first utility value.
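Both variants of the first utility value can be written in a few lines; the `eps` guard against a zero difference is our addition, not part of the text.

```python
import math

def first_utility(d1, d2, use_log=True, eps=1e-6):
    """First utility value from the sample first distance D1 and the
    sample second distance D2: either ln(|D1 - D2|) or the raw absolute
    difference. eps avoids log(0) when the two distances coincide."""
    diff = abs(d1 - d2)
    return math.log(diff + eps) if use_log else diff

print(round(first_utility(12.0, 2.0), 4))       # ln(10) ≈ 2.3026
print(first_utility(12.0, 2.0, use_log=False))  # 10.0
```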
In the above scheme, by detecting the sample contour of the target object on the image and obtaining the sample distance between the two furthest points on the sample contour, the influence of jitter on the imaging of the target carrier is fully considered, and the jitter distance is calculated to the greatest extent based on the contour, so that the first utility value obtained according to the absolute difference of the sample distances of the first image and the second image can fully represent the jitter compensation effect.
Step S203: based at least on the first utility value, a sample utility value is obtained.
In one implementation scenario, before performing step S203, the sample utility value obtaining method further includes obtaining a second utility value according to the first jitter compensation of the sample. The second utility value characterizes an execution complexity of the jitter compensator to execute the first jitter compensation of the samples.
In one implementation scenario, the first jitter compensation is performed according to the first jitter compensation data of the sample, and further, the second utility value may be obtained according to a preset calculation method according to the first jitter compensation data of the sample.
In a specific implementation scenario, the sample first jitter compensation data takes the form of the jitter compensation data in the foregoing embodiment, which is not repeated here. For example, let the sample first jitter compensation data be the tuple (X, Y, V, H, R), where X represents the first compensation amount of the imaging element in the X-axis direction, Y the second compensation amount in the Y-axis direction, V the third compensation amount in the vertical direction, H the inclination angle of the imaging element in the vertical direction, and R the rotation angle of the plane in which the imaging element lies. One preset calculation method takes the second utility value to be the sum of the values in the sample first jitter compensation data, i.e., X+Y+V+H+R. Various other preset calculation methods exist, such as applying coefficients to the individual compensation amounts; whichever method is used, all sub-data of the jitter compensation data must be considered together so that the second utility value fully reflects the execution complexity of the jitter compensator performing the sample first jitter compensation.
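The preset calculation, both the plain sum X+Y+V+H+R and the coefficient variant, can be sketched as follows (the function name is ours):

```python
def second_utility(compensation, coeffs=None):
    """Second utility value: sum of the sample first jitter compensation
    components (X, Y, V, H, R), optionally weighted by per-component
    coefficients as the text permits. All sub-data are always included."""
    if coeffs is None:
        coeffs = [1.0] * len(compensation)
    return sum(c * v for c, v in zip(coeffs, compensation))

print(second_utility((1, 2, 3, 15, 30)))                    # 51.0
print(second_utility((1, 2, 3, 15, 30), coeffs=[0.5] * 5))  # 25.5
```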
In one implementation scenario, the first utility value and the second utility value may be fused to obtain a sample utility value.
In the scheme, the second utility value is analyzed through the first jitter compensation, the complexity of jitter compensation execution is reflected, the first utility value is fused, the sample utility value is obtained, the complexity of jitter compensation execution and the effect of jitter compensation are comprehensively considered, and the sample utility value is more comprehensive in evaluation of the jitter compensation.
In one specific implementation scenario, as described above, the sample utility value quantitatively evaluates the effect of jitter compensation after the compensation action is performed. The first utility value and the second utility value can therefore be added directly to obtain the sample utility value; referring to the previous embodiment, the sample utility value is X+Y+V+H+R+ln(|D1-D2|).
In another specific implementation scenario, coefficients may be assigned to the first utility value and the second utility value respectively, and their weighted sum taken as the sample utility value; referring to the previous examples, the sample utility value is then a×(X+Y+V+H+R)+b×ln(|D1-D2|). a and b may be set to specific values such as 0.4 and 0.5, without limitation here.
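Fusing the two utility values with the example coefficients a = 0.4 and b = 0.5 looks like this (the coefficient values come from the text; the function name is an assumption):

```python
import math

def sample_utility(first_value, second_value, a=0.4, b=0.5):
    """Sample utility value a*(X+Y+V+H+R) + b*ln(|D1-D2|), where
    second_value is the execution-complexity term and first_value is
    the compensation-effect term."""
    return a * second_value + b * first_value

# With X+Y+V+H+R = 51 and ln(|D1-D2|) = ln(10):
print(round(sample_utility(math.log(10), 51), 3))  # 21.551
```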
In another implementation scenario, the first utility value may be directly taken as the sample utility value.
In one specific implementation scenario, referring to the previous embodiment, ln (|D1-D2|) is directly taken as the sample utility value.
In the above scheme, the first utility value is obtained by comparing the difference between the specific target object on the sample image after executing the jitter compensation and the specific target object on the sample image in the static state of the image pickup device, and the first utility value fully characterizes the effect of the jitter compensation through the state difference of the image pickup device in the static state and after executing the jitter compensation, so that the sample utility value is obtained at least according to the first utility value, and the effect of the jitter compensation can be effectively characterized.
Referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of a jitter compensation model training method.
Specifically, the method for training the jitter compensation model in this embodiment may include the following steps:
Step S501: based on the sample data, a jitter compensation model is trained.
In one implementation scenario, as previously described, the jitter compensation model includes a compensation evaluation network and a compensation generation network. Referring to fig. 6, fig. 6 is a flowchart illustrating an embodiment of step S501 in fig. 5. Specifically, step S501 in the above embodiment may include the steps of:
Step S5011: and predicting the second jitter state of the sample based on the compensation generating network to obtain second jitter compensation of the sample.
In one implementation scenario, the compensation generation network in the jitter compensation model predicts the sample second jitter state at the sample second moment, i.e., after the image capturing device has performed the sample first jitter compensation, and obtains the corresponding sample second jitter compensation. For convenience in the following description, the sample second jitter state is denoted S(t+1) and the sample second jitter compensation A(t+1).
Step S5012: and evaluating the sample second jitter state and the sample second jitter compensation based on the compensation evaluation network to obtain future discount returns.
In one implementation scenario, the compensation evaluation network in the jitter compensation model evaluates the sample second jitter state and the sample second jitter compensation to obtain a sample utility value at the sample second moment, and the future discount return is then obtained from this sample utility value and a discount coefficient.
Further, since the sample second jitter compensation has not actually been performed, the predicted utility value at the sample second moment inevitably carries some deviation, which is why a discount coefficient is applied when computing the future discount return. Denoting the utility value at the sample second moment as Q(t+1) and the discount coefficient as k, the compensation evaluation network predicts the future discount return k×Q(t+1) from S(t+1) and A(t+1). k may be set to a reference value such as 0.8 or 0.9, without specific limitation here.
Step S5013: network parameters of the jitter compensation model are adjusted based on the sample utility value and the future discount returns.
In one implementation scenario, the sample utility value reflects the effect of executing jitter compensation at the current time, and the future discount return reflects the effect of executing jitter compensation predicted at the next time, so that the network parameters of the jitter compensation model can be adjusted according to the sample utility value and the future discount return, and the jitter compensation effect is further optimized.
In the scheme, the future discount returns are obtained according to the predicted sample second jitter compensation and sample second jitter state evaluation, the effect of the jitter compensation is executed according to the next moment, and the current sample utility value is combined, so that the optimization direction of the jitter compensation model can be effectively reflected, and the network parameters of the jitter compensation model can be effectively adjusted.
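The parameter adjustment in step S5013 amounts to regressing the compensation evaluation network toward a target that combines the current sample utility value with the future discount return; a minimal sketch of that target, assuming a standard temporal-difference formulation (the name `td_target` is ours):

```python
def td_target(sample_utility_value, future_q, k=0.9):
    """Training target = sample utility value + future discount return
    k*Q(t+1); a squared error between this target and the network's
    current estimate Q(t) could then drive the parameter update."""
    return sample_utility_value + k * future_q

print(round(td_target(21.55, 20.0), 2))         # 21.55 + 0.9 * 20.0 = 39.55
print(round(td_target(21.55, 20.0, k=0.8), 2))  # 21.55 + 16.0 = 37.55
```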
Step S502: predicting a new sample first jitter state of the image pickup device based on the trained compensation generation network to obtain a new sample first jitter compensation, acquiring a new sample utility value and a new sample second jitter state after the new sample first jitter compensation is executed, and obtaining new sample data based on the new sample first jitter state, the new sample first jitter compensation, the new sample utility value, and the new sample second jitter state.
In one implementation scenario, the trained compensation generation network predicts the new sample first jitter state of the image capturing device again to obtain the new sample first jitter compensation; at this point, with reference to the foregoing embodiments, the new sample utility value and the new sample second jitter state may be obtained again, so as to construct new sample data.
Step S503: the step of training the jitter compensation model and subsequent steps are re-performed based on the new sample data.
In one implementation scenario, whether the new sample utility value meets a preset training condition is detected; in response to the new sample utility value meeting the preset training condition, the step of training the jitter compensation model and the subsequent steps are re-executed based on the new sample data; otherwise, training of the jitter compensation model is stopped. Whether the jitter compensation model needs further training is thus decided by whether the sample utility value meets the preset training condition, and the jitter compensation effect is continuously improved by continuously training the jitter compensation model.
In a specific implementation scenario, the preset training condition is that the sample utility value changes, at which time the jitter compensation model may be trained until the new sample utility value no longer changes.
In another specific implementation scenario, the preset training condition may be that the rate of change of the sample utility value is greater than a first threshold or that the magnitude of change is greater than a second threshold, where the first threshold may be set to a specific value such as 0.1 or 0.2, and the second threshold to a specific value such as 1 or 2, neither being particularly limited herein. The jitter compensation model may then be trained until the rate of change of the new sample utility value is no greater than the first threshold or the magnitude of change is no greater than the second threshold.
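The stop condition described above can be sketched as a small predicate. The function name and the zero-division handling are illustrative; the threshold values follow the examples in the text:

```python
# Sketch of the preset training condition: keep training while the
# sample utility value still changes faster than a rate threshold or
# by more than a magnitude threshold (example values 0.1 and 1.0).

def should_keep_training(prev_q: float, new_q: float,
                         rate_threshold: float = 0.1,
                         magnitude_threshold: float = 1.0) -> bool:
    """Return True while the sample utility value keeps changing."""
    magnitude = abs(new_q - prev_q)
    # Relative rate of change; treat prev_q == 0 as an unbounded rate.
    rate = magnitude / abs(prev_q) if prev_q != 0 else float("inf")
    return rate > rate_threshold or magnitude > magnitude_threshold

print(should_keep_training(10.0, 15.0))   # large change -> keep training
print(should_keep_training(10.0, 10.05))  # negligible change -> stop
```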
In this scheme, the jitter compensation model is first trained on the sample data and then trained again on new sample data generated by the model itself. The jitter compensation model can thus be trained as thoroughly as possible, making the trained model more complete.
In another implementation scenario, the jitter compensation model includes a compensation generation network, a compensation target generation network, a compensation evaluation network, and a compensation target evaluation network. The compensation target generation network and the compensation target evaluation network are used for generating a training data set, and the compensation generation network and the compensation evaluation network are mainly used for training network parameters of the shake compensation model.
More specifically, the compensation generation network and the compensation target generation network each consist of a long short-term memory (LSTM) network layer, a fully connected network layer, and a softmax logistic-regression layer, the activation function of the fully connected network layer being the rectified linear unit (ReLU). The input to both networks is the sample jitter state, denoted S, which characterizes the jitter state of the device over the most recent period; their outputs are the original jitter compensation data A(0) and the final jitter compensation data A(t), where t denotes the sample current moment and t+1 the sample second moment. The compensation evaluation network and the compensation target evaluation network each consist of a long short-term memory network layer and two fully connected network layers. The input to the compensation evaluation network is the sample first jitter state S(t) at the sample current moment and an estimated value function E(t), and after a gradient update by an Adam neural-network optimizer it outputs the updated policy-gradient value. The input to the compensation target evaluation network is the sample second jitter state S(t+1) at the second moment and the final jitter compensation data A(t+1) output by the compensation target generation network; this network outputs the estimated future discount return k×Q(t+1), where k is the discount coefficient and Q(t+1) is the sample utility value at the sample second moment.
The specific jitter compensation model is trained as follows:
Step 7.1: the sample first jitter state S(t) is input to the compensation generation network, which outputs the original jitter compensation data A(0); random noise is then used to perturb the original jitter compensation data A(0), yielding the final jitter compensation data A(t).
Step 7.2: the jitter compensator executes the final jitter compensation data a (t) to obtain a sample second jitter state S (t+1), and calculates a utility value Q (t) after the final jitter compensation data is executed.
Step 7.3: the sample first jitter state S(t), the sample second jitter state S(t+1), the final jitter compensation data A(t), and the utility value Q(t) obtained after executing the final jitter compensation data are stored in an experience pool, and steps 7.1 to 7.3 are repeated until enough samples are collected; the number of required samples is not particularly limited, and no fewer than 1000 is recommended.
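The experience pool of steps 7.1-7.3 and the sampling of step 7.4 can be sketched as a plain replay buffer. The class and method names are illustrative, and the jitter states are stood in for by strings; only the tuple contents and the sample-count recommendations come from the text:

```python
import random
from collections import deque

# Minimal experience-pool sketch: each record stores
# (S(t), A(t), Q(t), S(t+1)); once enough records are collected,
# a small training set is drawn at random.

class ExperiencePool:
    def __init__(self, capacity: int = 10000):
        # Oldest records are dropped once capacity is exceeded.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, utility, next_state):
        self.buffer.append((state, action, utility, next_state))

    def sample_training_set(self, batch_size: int = 10):
        # The steps above recommend >= 1000 stored samples and a
        # training set of >= 10 samples.
        return random.sample(list(self.buffer), batch_size)

pool = ExperiencePool()
for t in range(1000):
    pool.add(f"S({t})", f"A({t})", 0.0, f"S({t + 1})")
batch = pool.sample_training_set(10)
print(len(pool.buffer), len(batch))
```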
Step 7.4: a certain number of samples are extracted from the experience pool to form a training set; the number of samples in the training set is not particularly limited, and no fewer than 10 is recommended.
Step 7.5: the sample second jitter state S(t+1) is input to the compensation target generation network, which outputs the action A(t+1) representing the final jitter compensation data at the second moment; A(t+1) is then input, together with S(t+1), to the compensation target evaluation network, which outputs the future discount return k×Q(t+1).
Step 7.6: the compensation evaluation network is trained with the future discount return k×Q(t+1), the sample utility value Q(t), the final jitter compensation data A(t), and the sample first jitter state S(t) by means of an Adam (Adaptive Moment Estimation) neural-network optimizer to obtain network parameter update data.
Step 7.7: the parameters of the compensation evaluation network and the compensation generation network are synchronously updated to the compensation target evaluation network and the compensation target generation network, and step 7.4 and the subsequent steps are repeated until the sample utility value Q(t) no longer changes during training.
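The parameter synchronization of step 7.7 can be sketched as a blend of online and target parameters. The patent only says the update is synchronous, so the soft-update form below (tau < 1, DDPG-style) and the hard copy (tau = 1) are both assumptions; parameters are modeled as a flat name-to-value dict:

```python
# Sketch of target-network synchronization. tau = 1.0 is a plain
# copy; tau < 1.0 is a soft (exponential moving average) update.

def sync_target(online_params: dict, target_params: dict,
                tau: float = 1.0) -> dict:
    """Blend online network parameters into the target network."""
    return {name: tau * online_params[name]
                  + (1.0 - tau) * target_params[name]
            for name in online_params}

online = {"w1": 0.5, "w2": -0.2}
target = {"w1": 0.0, "w2": 0.0}
print(sync_target(online, target))           # hard copy
print(sync_target(online, target, tau=0.1))  # soft update
```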
In the embodiment, the jitter compensation model is continuously trained in an iterative mode until the utility value of the sample is not changed by constructing the sample pool and acquiring the training set from the sample pool, so that the jitter compensation effect of the jitter compensation model is greatly improved.
Step S13: the shake compensator is controlled to perform shake compensation on the imaging element of the imaging device based on shake compensation data at the present time.
In one implementation, the shake compensator is controlled to move according to the shake compensation data at the current moment, thereby realizing shake compensation of the imaging element of the image pickup device.
In a specific implementation scenario, as described above, the shake compensation data is expressed as (X, Y, V, H, R), where X denotes a first compensation amount of the imaging element in the X-axis direction, Y a second compensation amount in the Y-axis direction, V a third compensation amount in the vertical direction, H an inclination angle of the imaging element in the vertical direction, and R a rotation angle of the plane in which the imaging element is located. Referring to fig. 7, fig. 7 is a schematic diagram of an embodiment of a shake compensator performing shake compensation. As shown in fig. 7, the plane in which the imaging element is located can be rotated and tilted along the X-axis, the Y-axis, the left-right tilt axis, and the up-down tilt axis, respectively. The shake compensator may then be controlled to shift the imaging element by the first compensation amount along the X-axis, shift it by the second compensation amount along the Y-axis, shift it by the third compensation amount in the vertical direction, tilt it by the inclination angle in the vertical direction, and rotate the plane in which it is located by the rotation angle.
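The five-component record (X, Y, V, H, R) can be sketched as a small data structure. The field names and the `as_commands` interface are hypothetical stand-ins for whatever interface a real shake compensator exposes; only the five quantities and their meanings come from the text:

```python
from dataclasses import dataclass

# Sketch of the shake compensation data (X, Y, V, H, R).

@dataclass
class ShakeCompensation:
    x_shift: float   # first compensation amount along the X axis
    y_shift: float   # second compensation amount along the Y axis
    v_shift: float   # third compensation amount, vertical direction
    tilt_deg: float  # inclination angle in the vertical direction (H)
    rot_deg: float   # rotation angle of the imaging plane (R)

    def as_commands(self):
        """Flatten the record into (actuator, amount) pairs."""
        return [("shift_x", self.x_shift), ("shift_y", self.y_shift),
                ("shift_v", self.v_shift), ("tilt", self.tilt_deg),
                ("rotate", self.rot_deg)]

comp = ShakeCompensation(0.12, -0.03, 0.05, 1.5, -0.8)
print(comp.as_commands())
```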
In another implementation scenario, the shake compensation of the imaging element of the image pickup device may be achieved by searching a preset shake compensation table in which shake compensation actions corresponding to the shake compensation data are stored, and executing the shake compensation actions according to the preset shake compensation table.
In the above scheme, the first shake data of the image pickup device at the current moment is analyzed to obtain the shake state data at the current moment; compensation prediction is then performed on the shake state data to obtain the shake compensation data at the current moment; finally the shake compensator is controlled to perform shake compensation on the imaging element of the image pickup device according to the shake compensation data. By performing shake-trend prediction analysis on the shake state data, the future shake trend and the current shake state are considered simultaneously, so that shake can be suppressed in advance and as accurately as possible.
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a frame of an imaging anti-shake apparatus 80 according to an embodiment of the application. Specifically, the imaging anti-shake apparatus 80 includes a shake state acquisition module 81, a shake compensation prediction module 82, and a shake compensation control module 83. The shake state acquisition module 81 is configured to obtain shake state data of the image capturing device at a current moment based on first shake data of the image capturing device at the current moment; wherein the shake state data includes first shake data and at least one second shake data of the image pickup device before the present moment; the shake compensation prediction module 82 is configured to perform compensation prediction based on shake state data of the image capturing device at a current time, so as to obtain shake compensation data at the current time; the shake compensation control module 83 is configured to control the shake compensator to perform shake compensation on the imaging element of the imaging device based on shake compensation data at the current time.
In the above scheme, the first shake data of the image pickup device at the current moment is analyzed to obtain the shake state data at the current moment; compensation prediction is then performed on the shake state data to obtain the shake compensation data at the current moment; finally the shake compensator is controlled to perform shake compensation on the imaging element of the image pickup device according to the shake compensation data. By performing shake-trend prediction analysis on the shake state data, the future shake trend and the current shake state are considered simultaneously, so that shake can be suppressed in advance and as accurately as possible.
In some disclosed embodiments, the compensation prediction is performed by a compensation generation network; a jitter compensation model includes a compensation evaluation network and the compensation generation network, and the jitter compensation model is trained based on sample data, the sample data including: a sample first jitter state at a sample first moment, a sample first jitter compensation at the sample first moment, a sample utility value obtained after the sample first jitter compensation is executed, and a sample second jitter state at a sample second moment after the sample first jitter compensation is executed.
Therefore, performing compensation prediction by the compensation generation network in the jitter compensation model makes the compensation prediction more intelligent. In addition, the jitter compensation model is obtained by training on sample data including the sample first jitter state at the sample first moment, the sample first jitter compensation at the sample first moment, the sample utility value after executing the sample first jitter compensation, and the sample second jitter state at the sample second moment after executing the sample first jitter compensation; since the samples fully account for the jitter states at both the current moment and the next moment, the trained jitter compensation model can be more refined and subsequent jitter compensation more accurate.
In some disclosed embodiments, the jitter compensation prediction module 82 includes a sample image acquisition unit, an image utility value calculation unit, and a sample utility value calculation unit. The sample image acquisition unit is configured to acquire a sample first image captured of a target carrier by the image pickup device after the sample first jitter compensation is executed, the target carrier bearing a target object; the image utility value calculation unit is configured to obtain a first utility value based on the difference between the target object on the sample first image and the target object on a sample second image, the sample second image being captured of the target carrier by the image pickup device in a static state, and the first utility value characterizing the jitter compensation effect of executing the sample first jitter compensation; the sample utility value calculation unit is configured to obtain the sample utility value based at least on the first utility value.
Therefore, the first utility value is obtained by comparing the target object on the sample image captured after executing shake compensation with the target object on the sample image captured with the image pickup device in a static state. Through this difference between the static state and the state after executing shake compensation, the first utility value fully characterizes the shake compensation effect, and the sample utility value obtained at least from the first utility value can effectively characterize that effect.
In some disclosed embodiments, the image utility value calculation unit includes a sample contour acquisition subunit and a sample distance calculation subunit. The sample contour acquisition subunit is configured to detect a sample first contour of the target object on the sample first image and a sample second contour of the target object on the sample second image; the sample distance calculation subunit is configured to obtain a sample first distance between the two farthest points on the sample first contour and a sample second distance between the two farthest points on the sample second contour; the image utility value calculation unit is configured to obtain the first utility value based on the absolute difference between the sample first distance and the sample second distance.
Therefore, by detecting the sample contour of the target object on each image and taking the sample distance between the two farthest points on that contour, the influence of jitter on the imaging of the target carrier is fully considered and the jitter displacement is measured to the greatest extent from the contour, so the first utility value obtained from the absolute difference between the sample distances of the first and second images can fully characterize the jitter compensation effect.
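The contour-distance computation above can be sketched directly: take the largest point-to-point distance on each contour, then the absolute difference between the two. Contours are modeled here as plain point lists, and the brute-force O(n²) diameter search is an illustrative choice, not a patent detail:

```python
import math
from itertools import combinations

# Sketch of the first-utility-value computation: the "sample distance"
# is the largest point-to-point distance on a contour, and the utility
# is the absolute difference between the compensated image's distance
# and the static reference image's distance.

def contour_diameter(contour):
    """Largest Euclidean distance between any two contour points."""
    return max(math.dist(p, q) for p, q in combinations(contour, 2))

def first_utility_value(contour_compensated, contour_static):
    d1 = contour_diameter(contour_compensated)  # sample first distance
    d2 = contour_diameter(contour_static)       # sample second distance
    return abs(d1 - d2)

square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # static reference contour
smeared = [(0, 0), (5, 0), (5, 4), (0, 4)]  # contour after residual shake
print(first_utility_value(smeared, square))
```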
In some disclosed embodiments, the jitter compensation prediction module 82 further includes a jitter compensation utility value calculation unit. The jitter compensation utility value calculation unit is used for obtaining a second utility value based on the first jitter compensation of the sample; wherein the second utility value characterizes an execution complexity of the jitter compensator to execute the first jitter compensation of the sample; the sample utility value calculation unit is further used for fusing the first utility value and the second utility value to obtain a sample utility value.
Therefore, the second utility value is analyzed through the first jitter compensation, the complexity of jitter compensation execution is reflected, the first utility value is fused, the sample utility value is obtained, the complexity of jitter compensation execution and the effect of jitter compensation are comprehensively considered, and the sample utility value is more comprehensive in evaluation of the jitter compensation.
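The patent says only that the first utility value (compensation effect) and the second utility value (execution complexity) are fused; a weighted sum with a sign flip, so that higher utility means better and cheaper compensation, is one plausible rule. The weights and the sign convention below are assumptions:

```python
# Assumed fusion rule: smaller residual shake (first utility value)
# and lower execution complexity (second utility value) both increase
# the fused sample utility value.

def sample_utility(first_utility: float, second_utility: float,
                   effect_weight: float = 1.0,
                   complexity_weight: float = 0.2) -> float:
    """Fuse the two utility values; higher is better."""
    return -(effect_weight * first_utility
             + complexity_weight * second_utility)

print(sample_utility(0.5, 2.0))  # accurate, simple compensation
print(sample_utility(3.0, 5.0))  # inaccurate, complex compensation
```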
In some disclosed embodiments, the jitter compensation prediction module 82 further includes a jitter compensation model training unit and a sample data acquisition unit. The jitter compensation model training unit is configured to train the jitter compensation model based on the sample data; the sample data acquisition unit is configured to predict a new sample first jitter state of the image pickup device based on the trained compensation generation network to obtain a new sample first jitter compensation, acquire a new sample utility value and a new sample second jitter state after the new sample first jitter compensation is executed, and obtain new sample data based on the new sample first jitter state, the new sample first jitter compensation, the new sample utility value, and the new sample second jitter state; the jitter compensation model training unit is further configured to re-execute the step of training the jitter compensation model and the subsequent steps based on the new sample data.
Therefore, the shake compensation model is trained based on the sample data, and further, training is performed again based on new sample data generated by the shake compensation model. Therefore, the jitter compensation model can be trained as much as possible, so that the trained jitter compensation model can be more perfect.
In some disclosed embodiments, the jitter compensation model training unit further includes a future discount return acquisition subunit and a network parameter adjustment subunit. The jitter compensation prediction module 82 is further configured to predict the sample second jitter state based on the compensation generation network to obtain a sample second jitter compensation; the future discount return acquisition subunit is configured to evaluate the sample second jitter state and the sample second jitter compensation based on the compensation evaluation network to obtain the future discount return; the network parameter adjustment subunit is configured to adjust the network parameters of the jitter compensation model based on the sample utility value and the future discount return.
Therefore, future discount returns are obtained according to the predicted sample second jitter compensation and sample second jitter state evaluation, the effect of the jitter compensation is executed according to the next moment, and the current sample utility value is combined, so that the optimization direction of the jitter compensation model can be effectively reflected, and the network parameters of the jitter compensation model can be effectively adjusted.
In some disclosed embodiments, the jitter compensation model training unit further comprises a training condition detection subunit. The training condition detection subunit is used for detecting whether the new sample utility value meets a preset training condition; the jitter compensation model training unit is further configured to re-perform the step of training the jitter compensation model and subsequent steps based on the new sample data in response to the new sample utility value satisfying the preset training condition.
Therefore, whether the jitter compensation model needs further training is decided by whether the sample utility value meets the preset training condition, and the jitter compensation effect is continuously improved by continuously training the jitter compensation model.
In some disclosed embodiments, the jitter compensation data includes: the first compensation amount of the imaging element in the X-axis direction, the second compensation amount of the imaging element in the Y-axis direction, the third compensation amount of the imaging element in the vertical direction, the inclination angle of the imaging element in the vertical direction, and the rotation angle of the plane in which the imaging element is located.
Therefore, the shake compensation data extends from the horizontal direction to the vertical direction, comprehensively accounting for shake in both the horizontal and vertical directions; this refines the granularity of the anti-shake control, enables control that is as precise as possible, and improves the anti-shake effect.
Referring to fig. 9, fig. 9 is a schematic diagram of a frame of an image capturing device 90 according to an embodiment of the present application. The imaging device 90 may be a thermal imager, a dome camera, or the like, which has imaging and image processing functions. Specifically, the image capturing device 90 includes a shake compensator 93, an imaging element 94, a processor 91, and a memory 92, where the shake compensator 93, the imaging element 94, and the memory 92 are respectively coupled to the processor 91, and the processor 91 is configured to execute program instructions stored in the memory 92 to implement steps in any embodiment of an imaging shake prevention method.
Specifically, the shake compensator 93 is configured to fine-tune the imaging element 94 in the opposite direction to cancel the shake, and the processor 91 may control itself and the memory 92 to perform the steps in any embodiment of the imaging anti-shake method. The processor 91 may also be referred to as a CPU (Central Processing Unit). The processor 91 may be an integrated circuit chip with signal processing capability, and may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 91 may be implemented jointly by a plurality of integrated-circuit chips. The imaging element 94 may be a lens or an image sensor.
In the above scheme, the first shake data of the image pickup device at the current moment is analyzed to obtain the shake state data at the current moment; compensation prediction is then performed on the shake state data to obtain the shake compensation data at the current moment; finally the shake compensator is controlled to perform shake compensation on the imaging element of the image pickup device according to the shake compensation data. By performing shake-trend prediction analysis on the shake state data, the future shake trend and the current shake state are considered simultaneously, so that shake can be suppressed in advance and as accurately as possible.
Referring to FIG. 10, FIG. 10 is a schematic diagram of a framework of an embodiment of a computer-readable storage medium 10 according to the present application. In this embodiment, the computer-readable storage medium 10 stores program instructions 1001 executable by a processor, the program instructions 1001 being used to execute the steps in the imaging anti-shake method embodiments described above.
The computer-readable storage medium 10 may be a medium capable of storing program instructions, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk; it may also be a server storing the program instructions, which may send the stored program instructions to another device for execution or execute the stored program instructions itself.
In the above scheme, the first shake data of the image pickup device at the current moment is analyzed to obtain the shake state data at the current moment; compensation prediction is then performed on the shake state data to obtain the shake compensation data at the current moment; finally the shake compensator is controlled to perform shake compensation on the imaging element of the image pickup device according to the shake compensation data. By performing shake-trend prediction analysis on the shake state data, the future shake trend and the current shake state are considered simultaneously, so that shake can be suppressed in advance and as accurately as possible.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
If the technical solution of the present application involves personal information, a product applying the technical solution clearly informs of the personal-information processing rules and obtains the individual's autonomous consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying it obtains the individual's separate consent before processing such information and simultaneously meets the requirement of "explicit consent". For example, a clear and prominent sign is set up at a personal-information collection device such as a camera to inform that the personal-information collection range has been entered and that personal information will be collected; if an individual voluntarily enters the collection range, this is regarded as consent to collection. Alternatively, on a device that processes personal information, with the personal-information processing rules communicated by conspicuous signs or notices, personal authorization is obtained through pop-up messages or by asking the individual to upload personal information. The personal-information processing rules may include the personal-information processor, the processing purpose, the processing method, the types of personal information processed, and the like.
Claims (10)
1. An imaging anti-shake method, comprising:
Obtaining jitter state data of the image pickup device at the current moment based on first jitter data of the image pickup device at the current moment; wherein the shake state data includes the first shake data and at least one second shake data of the image pickup device before the current time;
Based on the shake state data of the image pickup device at the current moment, compensation prediction is carried out to obtain shake compensation data at the current moment;
controlling a jitter compensator to perform jitter compensation on an imaging element of the imaging device based on the jitter compensation data of the current moment;
Wherein the compensation prediction is performed by a compensation generation network, a jitter compensation model includes a compensation evaluation network and the compensation generation network, the jitter compensation model is trained based on sample data, the sample data includes: a first jitter state of a sample at a first time, a first jitter compensation of the sample at the first time, a utility value of the sample after performing the first jitter compensation of the sample, and a second jitter state of the sample at a second time after performing the first jitter compensation of the sample; the step of obtaining the sample utility value comprises the following steps:
After the first jitter compensation of the sample is performed, acquiring a first image of the sample shot by the image pickup device on a target carrier; wherein, the target carrier is provided with a target object;
obtaining a first utility value based on the difference between the target object on the first image of the sample and the target object on the second image of the sample; the image pickup device shoots the second image of the sample on the target carrier in a static state, and the first utility value represents a jitter compensation effect of executing first jitter compensation of the sample;
and obtaining the sample utility value based at least on the first utility value.
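As an illustration only (not part of the claims), the claim-1 inference loop might be sketched as follows. The window length and the stub `actor` are assumptions; in the patent the actor is the trained compensation generation network.

```python
from collections import deque

WINDOW = 4  # number of past shake readings kept in the state (assumed)

def make_state(history, first_shake):
    """Shake state = current first shake data plus prior second shake data."""
    history.append(first_shake)
    return list(history)

def predict_compensation(actor, state):
    """Compensation prediction step: the generation network maps state -> compensation."""
    return actor(state)

# Demo with a stub actor that simply negates the most recent shake reading.
history = deque(maxlen=WINDOW)
actor = lambda state: -state[-1]
for shake in [0.3, -0.1, 0.25]:
    state = make_state(history, shake)
    comp = predict_compensation(actor, state)
# `comp` would then be handed to the shake compensator for the imaging element.
```

The bounded `deque` reflects the claim's "at least one second shake data before the current time": older readings fall out of the window as new ones arrive.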
2. The method of claim 1, wherein the obtaining a first utility value based on a difference between the target object on the sample first image and the target object on the sample second image comprises:
detecting a sample first contour of the target object on the sample first image, and detecting a sample second contour of the target object on the sample second image;
acquiring a sample first distance between the two farthest-apart points on the sample first contour, and acquiring a sample second distance between the two farthest-apart points on the sample second contour;
and obtaining the first utility value based on an absolute difference between the sample first distance and the sample second distance.
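A minimal numeric sketch of the claim-2 measurement, assuming contours are given as point lists (in practice an edge or contour detector, e.g. OpenCV's `findContours`, would supply them). Residual shake smears the object outline, so the shaken contour's diameter differs from the static one's; the absolute difference of the two diameters is the first utility value.

```python
import itertools
import math

def farthest_distance(contour):
    """Distance between the two farthest-apart points on a contour."""
    return max(math.dist(p, q) for p, q in itertools.combinations(contour, 2))

def first_utility(contour_shaken, contour_static):
    """Absolute difference of the two contour diameters (claim-2 style)."""
    return abs(farthest_distance(contour_shaken) - farthest_distance(contour_static))

# Toy square contours: the "shaken" outline is smeared 1 px along x.
static = [(0, 0), (4, 0), (4, 4), (0, 4)]
shaken = [(0, 0), (5, 0), (5, 4), (0, 4)]
score = first_utility(shaken, static)
```

Note the sign convention is an assumption: a smaller difference means better compensation, so a reinforcement-learning reward would typically negate this score.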
3. The method of claim 1, wherein before the obtaining the sample utility value based at least on the first utility value, the method further comprises:
obtaining a second utility value based on the sample first shake compensation; wherein the second utility value characterizes an execution complexity of the shake compensator performing the sample first shake compensation;
and the obtaining the sample utility value based at least on the first utility value comprises:
fusing the first utility value and the second utility value to obtain the sample utility value.
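The claims do not specify the fusion rule; one plausible choice, shown here purely as an assumed sketch, is a weighted sum that rewards image quality while penalizing costly compensator motion. The weights `ALPHA` and `BETA` are hypothetical.

```python
ALPHA, BETA = 1.0, 0.2  # assumed fusion weights

def sample_utility(first_utility, second_utility):
    # first_utility: contour-difference score (lower residual blur is better,
    # so it is negated); second_utility: execution complexity of the move.
    return -ALPHA * first_utility - BETA * second_utility

u = sample_utility(0.75, 1.5)
```

A compensation that fixes blur well but demands a large, slow actuator excursion would thus score worse than an almost-as-good but cheaper one.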
4. The method of claim 1, wherein the training of the shake compensation model comprises:
training the shake compensation model based on the sample data;
performing prediction on a new sample first shake state of the image pickup device based on the trained compensation generation network to obtain a new sample first shake compensation; after the new sample first shake compensation is performed, acquiring a new sample utility value and a new sample second shake state; and obtaining new sample data based on the new sample first shake state, the new sample first shake compensation, the new sample utility value, and the new sample second shake state;
and re-performing the step of training the shake compensation model and subsequent steps based on the new sample data.
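The claim-4 loop alternates between training and collecting fresh (state, compensation, utility, next-state) tuples with the newly trained network, in the style of a replay buffer. The sketch below is an assumed illustration; `actor`, `env_step`, and the omitted `train(...)` call are all stubs standing in for the generation network, the physical compensator plus utility measurement, and the model update.

```python
import random

def collect_sample(actor, env_step, state):
    """One experience tuple: predict, execute, observe utility and next state."""
    action = actor(state)                   # new sample first compensation
    utility, next_state = env_step(action)  # execute it, then measure the effect
    return (state, action, utility, next_state)

def training_round(buffer, actor, env_step, state, n_new=3):
    # A real implementation would call train(model, buffer) here (omitted stub),
    # then gather fresh samples with the updated actor.
    for _ in range(n_new):
        sample = collect_sample(actor, env_step, state)
        buffer.append(sample)
        state = sample[3]
    return state

buffer = []
actor = lambda s: -0.5 * s                          # stub policy
env_step = lambda a: (-abs(a), random.uniform(-1, 1))  # stub environment
final_state = training_round(buffer, actor, env_step, state=0.8)
```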
5. The method of claim 4, wherein the training the shake compensation model based on the sample data comprises:
performing prediction on the sample second shake state based on the compensation generation network to obtain a sample second shake compensation;
evaluating the sample second shake state and the sample second shake compensation based on the compensation evaluation network to obtain a future discounted return;
and adjusting network parameters of the shake compensation model based on the sample utility value and the future discounted return.
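Claim 5 reads like a standard actor-critic (e.g. DDPG-style) update: the generation network plays the actor, the evaluation network plays the critic, and the target mixes the measured utility with the discounted future return. The numeric sketch below replaces both networks with linear stubs; the discount factor and all weights are assumptions.

```python
GAMMA = 0.9  # discount factor (assumed)

def critic(w, state, action):
    """Stub compensation evaluation network: Q(state, action)."""
    return w[0] * state + w[1] * action

def actor(theta, state):
    """Stub compensation generation network: state -> compensation."""
    return theta * state

def td_target(utility, w, theta, next_state):
    next_action = actor(theta, next_state)       # sample second shake compensation
    future = critic(w, next_state, next_action)  # future discounted return
    return utility + GAMMA * future

def critic_loss(w, theta, sample):
    """Squared TD error used to adjust the model's network parameters."""
    state, action, utility, next_state = sample
    return (critic(w, state, action) - td_target(utility, w, theta, next_state)) ** 2

sample = (0.5, -0.2, -0.1, 0.3)  # (state, action, utility, next_state)
w, theta = [1.0, 0.5], -0.8
loss = critic_loss(w, theta, sample)
```

In a real implementation the loss would be minimized by gradient descent over the network weights rather than evaluated on scalars.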
6. The method of claim 4, wherein before the re-performing the step of training the shake compensation model and subsequent steps based on the new sample data, the method further comprises:
detecting whether the new sample utility value satisfies a preset training condition;
and the re-performing the step of training the shake compensation model and subsequent steps based on the new sample data comprises:
in response to the new sample utility value satisfying the preset training condition, re-performing the step of training the shake compensation model and subsequent steps based on the new sample data.
7. The method of claim 1, wherein the shake compensation data comprises: a first compensation amount in an X-axis direction of the imaging element, a second compensation amount in a Y-axis direction of the imaging element, a third compensation amount in a vertical direction of the imaging element, a tilt angle of the imaging element in the vertical direction, and a rotation angle of the plane in which the imaging element lies.
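A hypothetical container for the five claim-7 components, shown only to make the degrees of freedom concrete. The field names and units (millimetres for translations, degrees for angles) are assumptions, not part of the claim.

```python
from dataclasses import dataclass

@dataclass
class ShakeCompensation:
    dx_mm: float     # first compensation amount, imaging-element X axis
    dy_mm: float     # second compensation amount, imaging-element Y axis
    dz_mm: float     # third compensation amount, vertical direction
    tilt_deg: float  # tilt angle of the imaging element in the vertical direction
    roll_deg: float  # rotation angle within the imaging-element plane

    def as_vector(self):
        """Flatten to the 5-DOF action vector a network would emit."""
        return [self.dx_mm, self.dy_mm, self.dz_mm, self.tilt_deg, self.roll_deg]

comp = ShakeCompensation(0.12, -0.05, 0.0, 0.3, -0.1)
```

Three translations plus two rotations give the compensator five degrees of freedom, which is why the generation network's output in the sketches above would be a 5-vector rather than a scalar.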
8. An imaging anti-shake apparatus, comprising:
a shake state acquisition module configured to obtain shake state data of an image pickup device at a current time based on first shake data of the image pickup device at the current time; wherein the shake state data includes the first shake data and at least one piece of second shake data of the image pickup device from before the current time;
a shake compensation prediction module configured to perform compensation prediction based on the shake state data of the image pickup device at the current time to obtain shake compensation data for the current time;
a shake compensation control module configured to control a shake compensator to perform shake compensation on an imaging element of the image pickup device based on the shake compensation data for the current time;
wherein the compensation prediction is performed by a compensation generation network, a shake compensation model includes a compensation evaluation network and the compensation generation network, and the shake compensation model is trained based on sample data, the sample data including: a sample first shake state at a first time, a sample first shake compensation at the first time, a sample utility value obtained after the sample first shake compensation is performed, and a sample second shake state at a second time after the sample first shake compensation is performed; the shake compensation prediction module further comprises a sample image acquisition unit, an image utility value calculation unit, and a sample utility value calculation unit; the sample image acquisition unit is configured to acquire, after the sample first shake compensation is performed, a sample first image captured of a target carrier by the image pickup device, wherein a target object is disposed on the target carrier; the image utility value calculation unit is configured to obtain a first utility value based on a difference between the target object on the sample first image and the target object on a sample second image, wherein the sample second image is captured of the target carrier by the image pickup device in a static state, and the first utility value characterizes a shake compensation effect of performing the sample first shake compensation; and the sample utility value calculation unit is configured to obtain the sample utility value based at least on the first utility value.
9. An image pickup device comprising a shake compensator, an imaging element, a processor, and a memory, the shake compensator, the imaging element, and the memory being coupled to the processor, respectively; the processor is configured to execute program instructions stored in the memory to implement the imaging anti-shake method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that it stores program instructions executable by a processor to implement the imaging anti-shake method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210645359.9A CN115242967B (en) | 2022-06-07 | 2022-06-07 | Imaging anti-shake method, imaging anti-shake apparatus, image pickup device, and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115242967A CN115242967A (en) | 2022-10-25 |
CN115242967B true CN115242967B (en) | 2024-06-21 |
Family
ID=83669039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210645359.9A Active CN115242967B (en) | 2022-06-07 | 2022-06-07 | Imaging anti-shake method, imaging anti-shake apparatus, image pickup device, and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115242967B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104869310A (en) * | 2015-05-18 | 2015-08-26 | 成都平行视野科技有限公司 | Video shooting anti-shaking method based on mobile apparatus GPU and angular velocity sensor |
CN110166697A (en) * | 2019-06-28 | 2019-08-23 | Oppo广东移动通信有限公司 | Camera anti-fluttering method, device, electronic equipment and computer readable storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100330566B1 (en) * | 1999-12-31 | 2002-03-29 | 대표이사 서승모 | shaking compensation circuit using motion estimation and motion compensation in videophone |
JP4388293B2 (en) * | 2003-03-13 | 2009-12-24 | 京セラ株式会社 | Camera shake correction device |
CN100389601C (en) * | 2005-10-09 | 2008-05-21 | 北京中星微电子有限公司 | Video electronic flutter-proof device |
CN113542612B (en) * | 2021-09-17 | 2021-11-23 | 深圳思谋信息科技有限公司 | Lens anti-shake method and device, computer equipment and storage medium |
2022-06-07: CN application CN202210645359.9A filed; granted as patent CN115242967B (Active)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||