WO2022242568A1 - Anti-shake effect evaluation method, apparatus, computer device and storage medium - Google Patents

Anti-shake effect evaluation method, apparatus, computer device and storage medium

Info

Publication number
WO2022242568A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
video
similarity
frame
images
Prior art date
Application number
PCT/CN2022/092751
Other languages
English (en)
French (fr)
Inventor
门泽华
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司
Publication of WO2022242568A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Definitions

  • the present application relates to the technical field of image processing, and in particular to an anti-shake effect evaluation method, device, computer equipment and storage medium.
  • An overly slow shutter speed, an overly long focal length, or hand-held shake will introduce a certain amount of jitter into the captured videos or images, making them blurred.
  • some anti-shake algorithms can be used to perform anti-shake processing on the obtained video or image, or anti-shake processing can be performed by mechanical stabilization during the shooting process. After anti-shake processing, it is usually necessary to make a quantitative judgment on the anti-shake effect to determine which anti-shake processing method has a better effect.
  • The anti-shake effect is usually evaluated based on the human visual system, that is, the user judges whether the anti-shake effect is good or bad based on visual perception. Since this evaluation relies on visual intuition rather than an objective basis, the results are not accurate enough. In addition, it usually takes a long time of watching multiple captured pictures or an entire captured video before even a rough evaluation result can be produced, so the evaluation is time-consuming.
  • a method for evaluating an anti-shake effect comprising:
  • acquiring a video formed through anti-shake processing; and obtaining the anti-shake performance score of the video according to the image frame parameters corresponding to the video, where the anti-shake performance score is used to evaluate the anti-shake effect of the anti-shake processing.
  • the image frame parameters include image similarity; correspondingly, according to the image frame parameters corresponding to the video, the anti-shake performance score of the video is obtained, including:
  • the image similarity between the previous frame image and the next frame image in each group of two frame images at the adjacent preset interval is obtained, and used as the image similarity corresponding to each group of two frame images at the adjacent preset interval;
  • according to the image similarity corresponding to each group of two frame images at the adjacent preset interval in the video, the anti-shake performance score of the video is obtained.
  • the preset interval is 1, and for any two frame images at the adjacent preset interval in the video, the two frame images are respectively recorded as the t-th frame image and the (t-1)-th frame image; correspondingly, obtaining the image similarity between the previous frame image and the next frame image in each group of two frame images at the adjacent preset interval includes:
  • the first sub-region and the second sub-region are divided according to the same division method and are located at the same position in the respective images; or,
  • each sub-region group is made up of a third sub-region in the t-th frame image and a fourth sub-region in the (t-1)-th frame image; the third sub-region in the t-th frame image and the fourth sub-region in the (t-1)-th frame image are obtained in the same division manner, and the third sub-region and the fourth sub-region in each sub-region group are located at the same position in their respective images.
  • the anti-shake performance score of the video is obtained according to the image similarity corresponding to two frame images of each group of adjacent preset intervals in the video, including:
  • according to each image similarity corresponding to each group of two frame images at the adjacent preset interval in the video, and the weight corresponding to each image similarity, the similarity score corresponding to each group of two frame images at the adjacent preset interval in the video is obtained;
  • according to the similarity score corresponding to each group of two frame images at the adjacent preset interval in the video, the anti-shake performance score of the video is obtained.
  • obtaining the similarity score corresponding to each group of two frame images at the adjacent preset interval includes:
  • performing weighted summation on each image similarity and the weight corresponding to each image similarity, and using the weighted summation result as the similarity score corresponding to each group of two frame images at the adjacent preset interval in the video; or,
  • taking each image similarity corresponding to each group of two frame images at the adjacent preset interval in the video as the power base and the weight corresponding to each image similarity as the power exponent, and multiplying the resulting powers to obtain the similarity score corresponding to each group of two frame images at the adjacent preset interval in the video.
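The two combination methods above can be sketched as follows. This is a minimal illustration; the similarity values and weights are assumptions for the example, not values fixed by the patent:

```python
# Two ways to combine the component similarities of one frame pair into a
# similarity score, per the claims: a weighted sum, or a product of powers
# (each similarity as the power base, its weight as the power exponent).

def weighted_sum_score(similarities, weights):
    """Similarity score = sum_i w_i * s_i (weighted-summation variant)."""
    return sum(w * s for s, w in zip(similarities, weights))

def power_product_score(similarities, weights):
    """Similarity score = prod_i s_i ** w_i (power base / power exponent variant)."""
    score = 1.0
    for s, w in zip(similarities, weights):
        score *= s ** w
    return score

# e.g. brightness, contrast, structure similarities of one frame pair:
sims = [0.9, 0.8, 0.7]
weights = [0.5, 0.3, 0.2]
print(weighted_sum_score(sims, weights))   # 0.83
print(power_product_score(sims, weights))  # ≈ 0.826
```

The power-product variant mirrors the familiar SSIM-style combination, where weights act as exponents rather than linear coefficients.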
  • the image similarity includes at least one of the following three similarities: brightness similarity, contrast similarity, and structure similarity.
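The text names the three similarity terms without giving explicit formulas. A minimal sketch, assuming the standard SSIM-style definitions of the luminance, contrast, and structure comparisons (the constants C1/C2/C3 are the conventional SSIM stabilizers, an assumption, not values from the patent):

```python
# Compute brightness, contrast, and structure similarity between two grayscale
# frames represented as flat lists of pixel intensities (SSIM-style sketch).
from statistics import mean, pstdev

def component_similarities(x, y, c1=6.5025, c2=58.5225):
    c3 = c2 / 2
    mx, my = mean(x), mean(y)
    sx, sy = pstdev(x), pstdev(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    luminance = (2 * mx * my + c1) / (mx**2 + my**2 + c1)   # brightness similarity
    contrast = (2 * sx * sy + c2) / (sx**2 + sy**2 + c2)    # contrast similarity
    structure = (cov + c3) / (sx * sy + c3)                 # structure similarity
    return luminance, contrast, structure

frame_t = [10, 20, 30, 40]
frame_t1 = [12, 19, 33, 38]
l, c, s = component_similarities(frame_t, frame_t1)
print(round(l, 4), round(c, 4), round(s, 4))
```

Identical frames yield all three terms equal to 1.0; each term decreases as the corresponding aspect of the two frames diverges.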
  • the video is single-channel video or multi-channel video.
  • An anti-shake effect evaluation device comprising:
  • the first acquisition module is used to acquire the video formed by anti-shake processing;
  • the second acquisition module is configured to acquire the anti-shake performance score of the video according to the image frame parameters corresponding to the video, and the anti-shake performance score is used to evaluate the anti-shake effect of the anti-shake processing.
  • A computer device, including a memory and a processor, where the memory stores a computer program and the processor, when executing the computer program, implements the following steps: acquiring a video formed through anti-shake processing; and obtaining the anti-shake performance score of the video according to the image frame parameters corresponding to the video, where the anti-shake performance score is used to evaluate the anti-shake effect of the anti-shake processing.
  • A computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps: acquiring a video formed through anti-shake processing; and obtaining the anti-shake performance score of the video according to the image frame parameters corresponding to the video, where the anti-shake performance score is used to evaluate the anti-shake effect of the anti-shake processing.
  • The anti-shake effect evaluation method, device, computer equipment, and storage medium described above acquire the video formed by anti-shake processing and obtain the anti-shake performance score of the video according to the image frame parameters corresponding to the video. Since the anti-shake performance score is a relatively objective evaluation basis derived from the image frame parameters, it is more accurate as an evaluation result than the human visual system. In addition, because the score is obtained directly from the image frame parameters, there is no need to spend a long time evaluating the anti-shake effect through visual perception, so the evaluation takes less time and is more efficient.
  • Fig. 1 is an application environment diagram of the anti-shake effect evaluation method in an embodiment
  • FIG. 2 is a schematic flow chart of a method for evaluating the anti-shake effect in an embodiment
  • FIG. 3 is a schematic flow chart of a method for evaluating the anti-shake effect in another embodiment
  • Fig. 4 is a structural block diagram of an anti-shake effect evaluation device in an embodiment
  • Figure 5 is an internal block diagram of a computer device in one embodiment.
  • The terms "first" and "second" used in this application may be used to describe various technical terms herein, but unless otherwise specified, these technical terms are not limited by them; these terms are only used to distinguish one term from another.
  • the third preset threshold and the fourth preset threshold may be the same or different.
  • an embodiment of the present invention provides a method for evaluating an anti-shake effect, which can be applied to the application environment shown in FIG. 1 .
  • the terminal 101 communicates with the server 102 through a network.
  • the terminal 101 can send a processing instruction to the server 102, and the server obtains the video formed by the anti-shake processing according to the processing instruction; according to the image frame parameters corresponding to the video, obtains the anti-shake performance score of the video, and the anti-shake performance score is used to evaluate the anti-shake processing anti-shake effect.
  • the server 102 may return the anti-shake performance score corresponding to the anti-shake processing to the terminal 101 .
  • the terminal 101 can be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices
  • the server 102 can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides cloud computing services.
  • The terminal 101 and the server 102 may be connected directly or indirectly through wired or wireless communication, which is not limited in this application. It should be noted that quantities such as "multiple" mentioned in the various embodiments of the present application all refer to "at least two".
  • The anti-shake effect evaluation method in this application is mainly applied to the scenario of selecting an anti-shake processing method. That is, when the terminal 101 shakes during motion, for example during high-frequency movement such as shooting while running or mountain-bike riding, the server 102 may return the anti-shake performance score corresponding to each anti-shake processing method to the terminal 101, so that the terminal 101 selects the anti-shake processing method corresponding to the maximum anti-shake performance score and performs anti-shake processing on the captured video or image based on the selected method.
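The selection step above reduces to an argmax over per-method scores. A hypothetical sketch; the method names and score values are placeholders for illustration, not identifiers from the patent:

```python
# Given an anti-shake performance score per candidate anti-shake processing
# method, pick the method with the maximum score (the terminal's selection).

def pick_best_method(scores):
    """scores: dict mapping anti-shake method name -> anti-shake performance score."""
    return max(scores, key=scores.get)

scores = {"EIS": 0.86, "OIS": 0.91, "EIS+OIS": 0.95}
print(pick_best_method(scores))  # EIS+OIS
```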
  • a method for evaluating the anti-shake effect is provided.
  • The method is described by taking its application to the terminal 101 in FIG. 1, with the terminal 101 as the execution subject, as an example.
  • The anti-shake effect evaluation method can also be applied to the server 102, with the server 102 as the corresponding execution subject; or, according to actual needs and feasibility, the method can be applied to the terminal 101 and the server 102 at the same time, that is, a part of the steps in the method may be executed by the terminal 101 and another part by the server 102, which is not specifically limited in this embodiment of the present invention.
  • step 201 in the method flow corresponding to FIG. 2 may be executed by the terminal 101, and then the terminal 101 sends the video to the server 102, so that step 202 is executed by the server 102.
  • the following steps are included:
  • the video can be captured by the terminal 101 which is in motion and has a shooting function.
  • The video may be obtained by performing anti-shake processing during the shooting process, or by performing anti-shake processing on the video after shooting; alternatively, part of the video content may be obtained by anti-shake processing during shooting and part by anti-shake processing after shooting, which is not specifically limited in this embodiment of the present invention.
  • the anti-shake processing may also be performed by the terminal 101, and the anti-shake processing may specifically be electronic anti-shake processing and/or optical anti-shake processing.
  • the anti-shake processing mentioned in step 201 may be a single anti-shake processing method, or may be a collection of multiple anti-shake processing methods, which is not specifically limited in this embodiment of the present invention.
  • the image frame parameters may include the degree of difference and/or similarity between the image frames, and the image frame parameters may be calculated based on the image parameters between the image frames in the video.
  • the image parameters may include brightness and/or contrast, etc., which are not specifically limited in this embodiment of the present invention.
  • the image frame parameter may include similarity and/or difference in brightness between image frames.
  • the image frame parameter may include similarity and/or difference of contrast between image frames.
  • image frame parameters may include brightness similarity and/or difference, and contrast similarity and/or difference.
  • the degree of difference can be obtained by calculating a difference value, and the degree of similarity can be obtained by a similarity algorithm.
  • the brightness difference between two image frames can be obtained by calculating the brightness difference between the two image frames.
  • the similarity of brightness between two image frames can be calculated by a similarity algorithm.
  • the similarity between the two brightness feature vectors can be calculated and used as the brightness similarity between the two image frames.
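The feature-vector variant above can be sketched as follows. This is a hypothetical illustration: the binned-mean feature extraction and the use of cosine similarity are assumptions, since the text does not fix a specific vector or similarity algorithm:

```python
# Extract a brightness feature vector per frame (mean brightness of each of
# several chunks of the pixel list) and use cosine similarity between the two
# vectors as the brightness similarity between the two image frames.
from math import sqrt

def brightness_vector(pixels, bins=4):
    """Split the pixel list into `bins` chunks and take each chunk's mean brightness."""
    size = len(pixels) // bins
    return [sum(pixels[i * size:(i + 1) * size]) / size for i in range(bins)]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

frame_a = [10, 12, 30, 32, 50, 52, 70, 72]
frame_b = [11, 13, 29, 33, 49, 53, 71, 71]
print(cosine_similarity(brightness_vector(frame_a), brightness_vector(frame_b)))
```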
  • the image frame parameters can be mainly used to represent the degree of difference and/or similarity between image frames in the video.
  • Which image frames in the video the degree of difference and/or similarity is computed between can be set according to requirements, which is not specifically limited in this embodiment of the present invention.
  • The image frame parameters may consist only of the difference and/or similarity between the start frame and the middle frame in the video, or only of the difference and/or similarity between the middle frame and the end frame; alternatively, the difference and/or similarity between the start frame and the middle frame, together with the difference and/or similarity between the middle frame and the end frame, may constitute the image frame parameters.
  • the video is composed of frames of images.
  • some image parameters will be deformed due to shaking between image frames in the video.
  • The deformation of these image parameters will combine and be reflected in the visual effect, possibly presenting a bad shooting effect, for example causing shaking and blurring in the video; anti-shake processing can eliminate these parameter deformations as much as possible to improve the shooting effect.
  • the deformation of these image parameters will be reflected in the calculation results corresponding to the image parameters between image frames, that is, it can be reflected in the image frame parameters. Therefore, image frame parameters, as an external quantification of the visual effect presented by the video after anti-shake processing, can represent the anti-shake performance of the video after anti-shake processing, so that image frame parameters can be used to evaluate Video stabilization performance.
  • the embodiment of the present invention does not specifically limit the manner in which the terminal 101 obtains the anti-shake performance score of the video according to the image frame parameters corresponding to the video.
  • the ways to obtain the anti-shake performance score can be divided into the following ways:
  • Image frame parameters include the degree of difference between image frames.
  • The degree of difference between which image frames in the video is computed can be set according to requirements. Whichever frames are compared, each degree of difference is in fact computed for a group of two frame images in the video and is the degree of difference between the two frame images in that group. Therefore, the image frame parameters may actually include several degrees of difference, each determined by a certain group of two frame images in the video. Here, "several" may refer to one or more.
  • the difference degree can be directly used as the anti-shake performance score of the video.
  • the image frame parameters include multiple degrees of difference, the multiple degrees of difference may be averaged, and the average value may be used as the anti-shake performance score of the video.
  • Image frame parameters include the similarity between image frames.
  • the similarity can be directly used as the anti-shake performance score of the video.
  • the image frame parameters include multiple similarities, the multiple similarities may be averaged, and the average value may be used as the anti-shake performance score of the video.
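The averaging step described above is simple enough to state directly in code; a minimal sketch with illustrative similarity values:

```python
# When the image frame parameters contain several per-pair similarities, the
# anti-shake performance score can simply be their average.

def score_from_similarities(similarities):
    return sum(similarities) / len(similarities)

print(score_from_similarities([0.92, 0.88, 0.90]))  # ≈ 0.9
```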
  • Image frame parameters include similarity and difference between image frames.
  • the image frame parameters may actually include several degrees of similarity and degrees of difference, and each degree of similarity or degree of difference is determined by a certain group of two frames of images in the video. Wherein, “several” may refer to one or more.
  • When obtaining the anti-shake performance score of the video according to the image frame parameters corresponding to the video, the several degrees of difference in the image frame parameters may first be averaged to obtain the average difference, and the several similarities averaged to obtain the average similarity; a weighted summation is then performed on the average difference and the average similarity, and the weighted summation result is used as the anti-shake performance score of the video. If the above-mentioned "several" is in essence one, the averaging may be skipped and that single difference or similarity used directly in the weighted summation.
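The mixed case just described can be sketched as follows; the weights are illustrative assumptions, not values from the patent:

```python
# Average the differences, average the similarities, then take a weighted sum
# of the two averages as the video's anti-shake performance score.

def mixed_score(differences, similarities, w_diff=0.4, w_sim=0.6):
    avg_diff = sum(differences) / len(differences)
    avg_sim = sum(similarities) / len(similarities)
    return w_diff * avg_diff + w_sim * avg_sim

print(mixed_score([0.12, 0.08], [0.90, 0.94]))  # ≈ 0.592
```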
  • the difference degree may be directly used as the anti-shake performance score.
  • Taking the image frame parameters including the difference between the start frame and the middle frame in the video and the difference between the middle frame and the end frame as an example, the average of the two differences may be taken and used as the anti-shake performance score.
  • In the above method, the anti-shake performance score of the video is obtained according to the image frame parameters corresponding to the video after acquiring the video formed through anti-shake processing. Since the anti-shake performance score is a relatively objective evaluation basis derived from the image frame parameters, it is more accurate as an evaluation result than the human visual system. In addition, because the score is obtained directly from the image frame parameters, there is no need to spend a long time evaluating the anti-shake effect through visual perception, so the evaluation takes less time and is more efficient.
  • a method for evaluating the anti-shake effect including the following steps:
  • the preset interval may be represented by n, where n represents an interval of n frames.
  • n can be, for example, 1 or 2, but cannot exceed the total number of frames minus 1.
  • n should not be too large. If it is too large, the total amount of image similarity will be too small, which will lead to inaccurate subsequent anti-shake performance scores.
  • the embodiment of the present invention takes the preset interval as 1 as an example to explain the subsequent process.
  • When the preset interval is 1, each group of two frame images at the adjacent preset interval in the video refers to: the first frame and the second frame as one group of adjacent two frame images, the second frame and the third frame as one group, the third frame and the fourth frame as one group, and so on, until the (N-1)-th frame and the N-th frame as one group, where N is the total number of frames; the adjacent frame images can thus form N-1 groups in total.
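The grouping above can be sketched as a simple pairing: with preset interval n, each group pairs frame t with frame t-n, and with n = 1 an N-frame video yields N-1 adjacent pairs. The frame labels below are placeholders:

```python
# Group frames of a video into pairs at a preset interval n: each group is
# (frame t-n, frame t), so n = 1 gives all adjacent pairs.

def frame_pairs(frames, n=1):
    return [(frames[t - n], frames[t]) for t in range(n, len(frames))]

frames = ["f1", "f2", "f3", "f4"]
print(frame_pairs(frames))  # [('f1', 'f2'), ('f2', 'f3'), ('f3', 'f4')]
```

The per-pair image similarities computed over these groups are then combined (e.g. summed or averaged) into the video's anti-shake performance score.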
  • the anti-shake performance score of the video can be further obtained according to the image similarity corresponding to each group of two frame images at the adjacent preset interval.
  • the embodiment of the present invention does not specifically limit the method of obtaining the anti-shake performance score of the video according to the image similarity corresponding to two frames of images at adjacent preset intervals in each group in the video, including but not limited to: obtaining each group of images in the video A summation result of image similarities corresponding to two frames of images at adjacent preset intervals, and the summation result is used as the anti-shake performance score of the video.
  • alternatively, the summation result is averaged, and the average value is used as the anti-shake performance score of the video.
  • the anti-shake performance score of the video may be further obtained based on multiple image similarities.
  • the image similarity is calculated based on image parameters between two adjacent frames of images in the video, and the image parameters may include brightness and/or contrast.
  • The image similarity may include two items: one obtained based on brightness as the image parameter, recorded as the brightness similarity, and the other obtained based on contrast as the image parameter, recorded as the contrast similarity.
  • In this case, the anti-shake performance score of the video can be obtained by first obtaining, for each image similarity, the summation result over each group of two frame images at the adjacent preset interval in the video, then summing the summation results corresponding to the image similarities again, and using the final summation result as the anti-shake performance score of the video.
  • a method of weighted summation of multiple image similarities can also be adopted to obtain the anti-shake performance score of the video.
  • Taking as an example the image similarity including the brightness similarity obtained based on brightness as the image parameter and the contrast similarity obtained based on contrast as the image parameter, a weighted summation may be performed on each image similarity corresponding to each group of two frame images at the adjacent preset interval in the video and the weight corresponding to each image similarity, and the obtained weighted summation result used as the anti-shake performance score of the video.
  • The improvement effect of the anti-shake processing is reflected in the comparison between the two frame images in each group at the adjacent preset interval in the video, and the image similarity corresponding to each such group can reflect the actual improvement effect; therefore, the anti-shake performance score obtained based on these image similarities can serve as a relatively objective evaluation basis, and the evaluation result is more accurate.
  • Although the steps in the flow charts of FIG. 2 and FIG. 3 are shown sequentially according to the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless otherwise specified herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in FIG. 2 and FIG. 3 may include multiple sub-steps or stages, which are not necessarily executed at the same time and may be executed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
  • When the preset interval is 1, for any group of two frame images at the adjacent preset interval in the video, the two frame images are recorded as the t-th frame image and the (t-1)-th frame image respectively. Correspondingly, the embodiment of the present invention does not specifically limit the method of obtaining the image similarity between the previous frame image and the subsequent frame image in each group, including but not limited to the following two methods:
  • The first way: obtain the image similarity between a first sub-region in the t-th frame image and a second sub-region in the (t-1)-th frame image, and use it as the image similarity between the t-th frame image and the (t-1)-th frame image; the first sub-region and the second sub-region are obtained according to the same division method and are located at the same position in their respective images; or,
  • The second way: obtain the image similarity between the third sub-region and the fourth sub-region in each sub-region group, and obtain the image similarity between the t-th frame image and the (t-1)-th frame image according to the image similarities corresponding to the multiple sub-region groups. Each sub-region group is composed of a third sub-region in the t-th frame image and a fourth sub-region in the (t-1)-th frame image; the third and fourth sub-regions are obtained according to the same division method, and the third sub-region and the fourth sub-region in each sub-region group are located at the same position in their respective images.
  • Taking as an example that the t-th frame image and the (t-1)-th frame image are each divided into 4 parts in a 2*2 manner according to the same division method, with the first sub-region being the upper-left part of the t-th frame image and the second sub-region being the upper-left part of the (t-1)-th frame image, the image similarity between the first sub-region and the second sub-region can be obtained according to the method of calculating image similarity in the above example.
  • For example, the average brightness value of all pixels in the first sub-region and the average brightness value of all pixels in the second sub-region can be obtained, and the difference between the two average brightness values used as the image similarity between the first sub-region and the second sub-region.
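The sub-region comparison just described can be sketched directly; note that, as in the text, a brightness *difference* stands in for the "similarity" (smaller means more similar). The pixel values are illustrative:

```python
# Mean-brightness comparison of two sub-regions, each given as a flat list of
# pixel intensities: the absolute difference of the two means is used as the
# per-sub-region measure.

def mean_brightness(pixels):
    return sum(pixels) / len(pixels)

def subregion_similarity(region_a, region_b):
    return abs(mean_brightness(region_a) - mean_brightness(region_b))

print(subregion_similarity([100, 110, 120], [105, 115, 119]))  # 3.0
```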
  • Certainly, the part in the upper right corner of the t-th frame image can also be used as the first sub-region and the part in the upper right corner of the (t-1)-th frame image as the second sub-region; or the part in the lower left corner of the t-th frame image can be used as the first sub-region and the part in the lower left corner of the (t-1)-th frame image as the second sub-region, so as to obtain the image similarity between the first sub-region and the second sub-region, which is not specifically limited in this embodiment of the present invention.
  • both the t-th frame image and the t-1-th frame image are divided into 4 parts of 2*2 according to the same division method.
  • the t-th frame of image includes 4 third sub-regions
  • the t-1 th frame of image includes 4 fourth sub-regions, and thus 4 sub-region groups can be formed.
  • The third sub-region in the upper left corner of the t-th frame image and the fourth sub-region in the upper left corner of the (t-1)-th frame image can form the first sub-region group; the upper-right third and fourth sub-regions can form the second sub-region group; the lower-left third and fourth sub-regions can form the third sub-region group; and the lower-right third and fourth sub-regions can form the fourth sub-region group.
  • the image similarity corresponding to each sub-region group in the four sub-region groups can be obtained respectively.
  • the image similarity between the t-th frame image and the t-1-th frame image can be obtained.
  • The embodiment of the present invention does not specifically limit the method of obtaining the image similarity between the t-th frame image and the (t-1)-th frame image according to the image similarities corresponding to the multiple sub-region groups, including but not limited to: taking the summation result as the image similarity between the t-th frame image and the (t-1)-th frame image; or, based on the number of sub-region groups, obtaining the average value of the summation result and using the average value as the image similarity between the t-th frame image and the (t-1)-th frame image.
  • the summation result is obtained after adding the image similarities corresponding to each subregion group. It should be noted that, the implementation process when the preset interval is 1 given in the above example, when the preset interval is other than 1, you can also refer to the process in the above example, which will not be repeated here.
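The 2*2 sub-region scheme above can be sketched in Python. This is an illustration only, not the patent's implementation; `region_similarity` is a hypothetical placeholder metric standing in for whichever similarity calculation is used:

```python
import numpy as np

def split_2x2(img):
    """Split a (H, W) image into 4 equal sub-regions in a fixed order:
    upper-left, upper-right, lower-left, lower-right."""
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    return [img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:]]

def region_similarity(a, b):
    """Hypothetical placeholder metric: maps the mean absolute brightness
    difference between two co-located sub-regions into (0, 1]."""
    return 1.0 / (1.0 + np.mean(np.abs(a.astype(float) - b.astype(float))))

def frame_similarity(frame_t, frame_prev):
    """Average the similarities of the 4 co-located sub-region groups to get
    the image similarity between the t-th and (t-1)-th frame images."""
    sims = [region_similarity(a, b)
            for a, b in zip(split_2x2(frame_t), split_2x2(frame_prev))]
    return sum(sims) / len(sims)
```

Averaging over the groups corresponds to the second option above; returning the plain sum would correspond to the first.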
  • In the method provided by the embodiment of the present invention, the improvement brought by the anti-shake processing is reflected in the comparison between the two frame images of each group at the adjacent preset interval in the video, so the image similarity corresponding to such a pair of frames can reflect the actual improvement. Therefore, for a group of two frame images at an adjacent preset interval, after the two frames are divided in the same way, the image similarity corresponding to the two frames is obtained either from a single divided region at the same position in both frames or by considering all the divided regions globally. This similarity can serve as a relatively objective evaluation basis, and the evaluation results obtained on this basis are more accurate.
  • The embodiment of the present invention does not specifically limit the method of obtaining the anti-shake performance score of the video from the image similarities corresponding to the two frame images at adjacent preset intervals. The method includes but is not limited to: according to each item of image similarity corresponding to the two frame images of each group at the adjacent preset interval in the video, and the weight corresponding to each item of image similarity, obtaining the similarity score corresponding to the two frame images of each group; and then obtaining the anti-shake performance score of the video according to the similarity scores of all groups.
  • The method of obtaining the similarity score corresponding to the two frame images is not specifically limited in this embodiment of the present invention, and includes but is not limited to the following two ways.
  • The first way: based on each item of image similarity corresponding to the two frame images of each group at the adjacent preset interval in the video and the weight corresponding to each item of image similarity, obtain the weighted summation result, and use the weighted summation result as the similarity score corresponding to the two frame images of that group.
  • The second way: take each item of image similarity corresponding to the two frame images of each group as the power base and the weight corresponding to each item of image similarity as the power exponent, obtain the power result of each item of image similarity, and then obtain the similarity score corresponding to the two frame images of that group from these power results.
  • The embodiment of the present invention does not specifically limit the method of obtaining the similarity score of each group from the power results, which includes but is not limited to: summing the power results of each item of image similarity corresponding to the two frame images of each group and using the summation result as the similarity score of that group; or, multiplying the power results and using the product as the similarity score of that group.
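The two scoring ways can be sketched as follows. Assuming three similarity items L_t, C_t, S_t with weights a, b, c as in the text, the weighted-summation way gives P_t = a*L_t + b*C_t + c*S_t, while the power way with multiplied power results gives P_t = L_t**a * C_t**b * S_t**c (the patent's exact formulas are rendered as images and are assumed here):

```python
def score_weighted_sum(similarities, weights):
    """First way: weighted summation, e.g. P_t = a*L_t + b*C_t + c*S_t."""
    return sum(w * s for s, w in zip(similarities, weights))

def score_weighted_product(similarities, weights):
    """Second way: multiply the power results, e.g. P_t = L_t**a * C_t**b * S_t**c."""
    result = 1.0
    for s, w in zip(similarities, weights):
        result *= s ** w
    return result
```

Summing the power results instead of multiplying them would be the other variant mentioned above.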
  • For example, the first item of image similarity corresponding to the two frame images of the (t-1)-th group at the adjacent preset interval in the video is denoted as L_t, the second item is denoted as C_t, and the third item is denoted as S_t. The weight corresponding to the first item of image similarity is denoted as a, the weight corresponding to the second item is denoted as b, and the weight corresponding to the third item is denoted as c.
  • P_t represents the similarity score corresponding to the two frame images of the t-th group at the adjacent preset interval.
  • The weight corresponding to each item of image similarity can be set according to actual needs. For example, suppose there are two image similarities, one calculated based on brightness and the other based on contrast, and the ambient brightness in the video is dark. To minimize the error caused by the dark environment, the weight corresponding to the brightness-based image similarity can be appropriately reduced, and the weight corresponding to the contrast-based image similarity can be appropriately increased.
  • After the similarity scores are obtained, the anti-shake performance score of the video can be obtained according to the similarity scores corresponding to the two frame images of each group at the adjacent preset interval in the video.
  • The embodiment of the present invention does not specifically limit the method of obtaining the anti-shake performance score from these similarity scores, which includes but is not limited to: accumulating the similarity scores corresponding to the two frame images of each group at the adjacent preset interval in the video, and using the accumulation result as the anti-shake performance score.
  • The method provided by the embodiment of the present invention obtains the similarity score between two frame images at an adjacent preset interval from multiple items of image similarity, so compared with obtaining the similarity score from a single item of image similarity, the result is more accurate.
  • In addition, since the weight of each item of image similarity can be set according to actual needs, the calculation can focus on the items that matter and reduce the error contributed by the items of image similarity with low weights.
  • The anti-shake performance score is determined by the similarity scores and the weights, which in turn makes the subsequently obtained anti-shake performance score more accurate.
  • The image similarity includes at least one of the following three items of similarity: brightness similarity, contrast similarity and structural similarity.
  • The brightness similarity corresponding to the two frame images of the (t-1)-th group at the adjacent preset interval, that is, the brightness similarity between the t-th frame image and the (t-1)-th frame image, can be calculated with reference to the following formula (3):
  • μ_t represents the average brightness value of the t-th frame image, and μ_{t-1} represents the average brightness value of the (t-1)-th frame image.
  • μ_t can be calculated by the following formula (4):
  • N represents the total number of pixels in the t-th frame image, i denotes the i-th pixel in the t-th frame image, and t_i represents the brightness value of the i-th pixel.
  • σ_t represents the brightness standard deviation of the t-th frame image, that is, the contrast of the t-th frame image, and σ_{t-1} represents the contrast of the (t-1)-th frame image; σ_{t,t-1} represents the brightness covariance between the t-th frame image and the (t-1)-th frame image.
  • σ_{t,t-1} can be calculated by the following formula (8):
  • (t-1)_i represents the brightness value of the i-th pixel in the (t-1)-th frame image, and μ_{t-1} represents the average brightness value of the (t-1)-th frame image.
  • The method provided by the embodiment of the present invention can obtain the similarity score between two frame images at an adjacent preset interval from the brightness similarity, contrast similarity and structural similarity corresponding to the two frames. Compared with obtaining the similarity score from a single item of image similarity, the result is more accurate, and since the anti-shake performance score is determined by the similarity scores, the subsequently obtained anti-shake performance score is also more accurate.
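Formulas (3), (4) and (8) are rendered as images in the original and are not reproduced in the text. The quantities named above (brightness means, standard deviations and covariance) match the standard SSIM components, so the following is a hedged sketch under that assumption; the small stabilizing constants are assumed, not taken from the patent:

```python
import numpy as np

def ssim_components(x, y, c1=1e-4, c2=1e-4, c3=5e-5):
    """Brightness (luminance), contrast and structure similarity between two
    frames, following the standard SSIM definitions assumed for formulas
    (3)-(8).  c1, c2, c3 are small assumed stabilizing constants."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mu_x, mu_y = x.mean(), y.mean()           # brightness means (formula (4))
    sig_x, sig_y = x.std(), y.std()           # brightness standard deviations
    cov = ((x - mu_x) * (y - mu_y)).mean()    # brightness covariance (formula (8))
    lum = (2 * mu_x * mu_y + c1) / (mu_x**2 + mu_y**2 + c1)
    con = (2 * sig_x * sig_y + c2) / (sig_x**2 + sig_y**2 + c2)
    struct = (cov + c3) / (sig_x * sig_y + c3)
    return lum, con, struct
```

For identical frames all three components equal 1, which is the behavior the per-pair similarity score relies on.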
  • the video is a single-channel video or a multi-channel video.
  • the single-channel video is a grayscale video
  • the multi-channel video is a color video. It should be noted that, if the video is a grayscale video, the anti-shake performance score of the grayscale video may be obtained directly according to the manner provided in the foregoing embodiment.
  • If the video is a color video, then for a certain same type of image similarity, first obtain, by the method provided in the above embodiment, the image similarity of that type corresponding to the two frame images of each group at the adjacent preset interval under each channel; then add up the similarities of that type across the channels, and use the summation result as the image similarity of that type corresponding to the two frame images of that group.
  • The method provided by the embodiment of the present invention can be applied to both single-channel video and multi-channel video, and is therefore applicable to a wider range of scenarios.
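A minimal sketch of the multi-channel case, assuming a per-channel similarity function is already available; the summation across channels follows the text above:

```python
import numpy as np

def color_similarity(frame_t, frame_prev, per_channel_sim):
    """For a color (multi-channel) frame pair of shape (H, W, C), compute one
    type of image similarity per channel and sum across channels; a grayscale
    (single-channel) frame pair is scored with per_channel_sim directly."""
    return sum(per_channel_sim(frame_t[..., ch], frame_prev[..., ch])
               for ch in range(frame_t.shape[-1]))
```
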
  • an anti-shake effect evaluation device including: a first acquisition module 401 and a second acquisition module 402, wherein:
  • the first acquisition module 401 is configured to acquire the video formed by anti-shake processing
  • the second obtaining module 402 is configured to obtain the anti-shake performance score of the video according to the image frame parameters corresponding to the video, and the anti-shake performance score is used to evaluate the anti-shake effect of the anti-shake processing.
  • the image frame parameters in the second acquisition module 402 include image similarity; correspondingly, the second acquisition module 402 includes: a first acquisition unit and a second acquisition unit;
  • The first acquisition unit is configured to, for each group of two frame images at an adjacent preset interval in the video, acquire the image similarity between the previous frame image and the subsequent frame image in the group, and use it as the image similarity corresponding to the two frame images of that group;
  • the second acquisition unit is configured to acquire the anti-shake performance score of the video according to the image similarity corresponding to two frames of images at adjacent preset intervals in the video.
  • In one embodiment, the preset interval is 1, and for any group of two frame images at an adjacent preset interval in the video, the two frames are recorded as the t-th frame image and the (t-1)-th frame image respectively; correspondingly, the first acquisition unit includes: a first acquisition subunit or a second acquisition subunit.
  • The first acquisition subunit is configured to acquire the image similarity between the first sub-region in the t-th frame image and the second sub-region in the (t-1)-th frame image, and use it as the image similarity between the t-th frame image and the (t-1)-th frame image, where the first sub-region and the second sub-region are divided according to the same division method and are located at the same position in their respective images;
  • The second acquisition subunit is configured to acquire the image similarity between the third sub-region and the fourth sub-region in each sub-region group, and obtain the image similarity between the t-th frame image and the (t-1)-th frame image according to the image similarities corresponding to the multiple sub-region groups;
  • Each sub-region group is composed of a third sub-region in the t-th frame image and a fourth sub-region in the (t-1)-th frame image; the third sub-regions in the t-th frame image and the fourth sub-regions in the (t-1)-th frame image are obtained by the same division method, and the third sub-region and the fourth sub-region in each sub-region group are located at the same position in their respective images.
  • the second acquisition unit includes: a third acquisition subunit and a fourth acquisition subunit;
  • The third acquisition subunit is configured to obtain the similarity score corresponding to the two frame images of each group at the adjacent preset interval in the video, according to each item of image similarity corresponding to the two frame images of that group and the weight corresponding to each item of image similarity;
  • the fourth obtaining subunit is used to obtain the anti-shake performance score of the video according to the similarity scores corresponding to two frames of images at adjacent preset intervals in the video.
  • In one embodiment, the third acquisition subunit is configured to: based on each item of image similarity corresponding to the two frame images of each group at the adjacent preset interval in the video and the weight corresponding to each item of image similarity, obtain the weighted summation result, and use the weighted summation result as the similarity score corresponding to the two frame images of that group; or, take each item of image similarity as the power base and the corresponding weight as the power exponent to obtain the power result of each item of image similarity, and obtain the similarity score corresponding to the two frame images of that group from these power results.
  • The image similarity in each of the above units includes at least one of the following three items of similarity: brightness similarity, contrast similarity and structural similarity.
  • the videos in the above-mentioned various modules and units are single-channel videos or multi-channel videos.
  • Each module in the above anti-shake effect evaluation device can be fully or partially realized by software, hardware and a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • a computer device is provided, and the computer device may be a server, and its internal structure may be as shown in FIG. 5 .
  • the computer device includes a processor, memory and a network interface connected by a system bus. Wherein, the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer programs and databases.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the database of the computer device is used to store anti-shake performance scores.
  • the network interface of the computer device is used to communicate with an external terminal via a network connection. When the computer program is executed by the processor, an anti-shake effect evaluation method is realized.
  • Those skilled in the art can understand that FIG. 5 is only a block diagram of part of the structure related to the solution of this application, and does not constitute a limitation on the computer equipment to which the solution of this application is applied; a specific computer device may include more or fewer components than shown in the figure, or combine some components, or have a different arrangement of components.
  • a computer device including a memory and a processor, a computer program is stored in the memory, and the processor implements the following steps when executing the computer program: acquiring a video formed through anti-shake processing; corresponding to the video according to The image frame parameters of the video are obtained to obtain the anti-shake performance score of the video, and the anti-shake performance score is used to evaluate the anti-shake effect of the anti-shake processing.
  • In one embodiment, the image frame parameters include image similarity; correspondingly, when the processor executes the computer program, the following steps are also implemented: for each group of two frame images at an adjacent preset interval in the video, acquiring the image similarity between the previous frame image and the subsequent frame image in the group, and using it as the image similarity corresponding to the two frame images of that group; and obtaining the anti-shake performance score of the video according to the image similarities corresponding to the two frame images of each group at the adjacent preset interval in the video.
  • In one embodiment, the preset interval is 1, and for any group of two frame images at an adjacent preset interval in the video, the two frames are recorded as the t-th frame image and the (t-1)-th frame image respectively; correspondingly, when the processor executes the computer program, the following steps are also implemented: acquiring the image similarity between the first sub-region in the t-th frame image and the second sub-region in the (t-1)-th frame image, and using it as the image similarity between the t-th frame image and the (t-1)-th frame image, where the first sub-region and the second sub-region are divided according to the same division method and are located at the same position in their respective images; or, acquiring the image similarity between the third sub-region and the fourth sub-region in each sub-region group, and obtaining the image similarity between the t-th frame image and the (t-1)-th frame image according to the image similarities corresponding to the multiple sub-region groups; each sub-region group is composed of a third sub-region in the t-th frame image and a fourth sub-region in the (t-1)-th frame image, the third sub-regions and the fourth sub-regions are obtained by the same division method, and the third sub-region and the fourth sub-region in each sub-region group are located at the same position in their respective images.
  • In one embodiment, when the processor executes the computer program, the following steps are also implemented: according to each item of image similarity corresponding to the two frame images of each group at the adjacent preset interval in the video, and the weight corresponding to each item of image similarity, obtaining the similarity score corresponding to the two frame images of each group; and obtaining the anti-shake performance score of the video according to the similarity scores corresponding to the two frame images of each group.
  • In one embodiment, when the processor executes the computer program, the following steps are also implemented: based on each item of image similarity corresponding to the two frame images of each group at the adjacent preset interval in the video and the weight corresponding to each item of image similarity, obtaining the weighted summation result, and using the weighted summation result as the similarity score corresponding to the two frame images of that group; or, taking each item of image similarity as the power base and the corresponding weight as the power exponent to obtain the power result of each item of image similarity, and obtaining the similarity score corresponding to the two frame images of that group from these power results.
  • In one embodiment, when the processor executes the computer program, the image similarity includes at least one of the following three items of similarity: brightness similarity, contrast similarity and structural similarity.
  • In one embodiment, when the processor executes the computer program, the video is a single-channel video or a multi-channel video.
  • In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the following steps are implemented: acquiring a video formed through anti-shake processing; and obtaining the anti-shake performance score of the video according to the image frame parameters corresponding to the video, where the anti-shake performance score is used to evaluate the anti-shake effect of the anti-shake processing.
  • In one embodiment, the image frame parameters include image similarity; correspondingly, when the computer program is executed by the processor, the following steps are also implemented: for each group of two frame images at an adjacent preset interval in the video, acquiring the image similarity between the previous frame image and the subsequent frame image in the group, and using it as the image similarity corresponding to the two frame images of that group; and obtaining the anti-shake performance score of the video according to the image similarities corresponding to the two frame images of each group.
  • In one embodiment, the preset interval is 1, and for any group of two frame images at an adjacent preset interval in the video, the two frames are recorded as the t-th frame image and the (t-1)-th frame image respectively; correspondingly, when the computer program is executed by the processor, the following steps are also implemented: acquiring the image similarity between the first sub-region in the t-th frame image and the second sub-region in the (t-1)-th frame image, and using it as the image similarity between the t-th frame image and the (t-1)-th frame image, where the first sub-region and the second sub-region are divided according to the same division method and are located at the same position in their respective images; or, acquiring the image similarity between the third sub-region and the fourth sub-region in each sub-region group, and obtaining the image similarity between the t-th frame image and the (t-1)-th frame image according to the image similarities corresponding to the multiple sub-region groups; each sub-region group is composed of a third sub-region in the t-th frame image and a fourth sub-region in the (t-1)-th frame image, the third sub-regions and the fourth sub-regions are obtained by the same division method, and the third sub-region and the fourth sub-region in each sub-region group are located at the same position in their respective images.
  • When the computer program is executed by the processor, the following steps are also implemented: according to each item of image similarity corresponding to the two frame images of each group at the adjacent preset interval in the video, and the weight corresponding to each item of image similarity, obtaining the similarity score corresponding to the two frame images of each group; and obtaining the anti-shake performance score of the video according to the similarity scores corresponding to the two frame images of each group.
  • When the computer program is executed by the processor, the following steps are also implemented: based on each item of image similarity corresponding to the two frame images of each group at the adjacent preset interval in the video and the weight corresponding to each item of image similarity, obtaining the weighted summation result, and using the weighted summation result as the similarity score corresponding to the two frame images of that group; or, taking each item of image similarity as the power base and the corresponding weight as the power exponent to obtain the power result of each item of image similarity, and obtaining the similarity score corresponding to the two frame images of that group from these power results.
  • When the computer program is executed by the processor, the image similarity includes at least one of the following three items of similarity: brightness similarity, contrast similarity and structural similarity.
  • When the computer program is executed by the processor, the video is a single-channel video or a multi-channel video.
  • Non-volatile memory can include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory can include Random Access Memory (RAM) or external cache memory.
  • RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).


Abstract

An anti-shake effect evaluation method, device, computer equipment and storage medium. The method includes: acquiring a video formed through anti-shake processing (201); and obtaining an anti-shake performance score of the video according to image frame parameters corresponding to the video, the anti-shake performance score being used to evaluate the anti-shake effect of the anti-shake processing (202). The method acquires the video formed through anti-shake processing and obtains the anti-shake performance score of the video according to the image frame parameters corresponding to the video. Since the anti-shake performance score is a relatively objective evaluation basis obtained from the image frame parameters corresponding to the video, it is more accurate as an evaluation result than the human visual system. In addition, since the anti-shake performance score is obtained directly from the image frame parameters of the video to evaluate the anti-shake effect, there is no need to spend a long time evaluating the anti-shake effect through intuitive visual perception, so less time is consumed and the evaluation efficiency is higher.

Description

Anti-shake effect evaluation method, device, computer equipment and storage medium
Technical Field
The present application relates to the technical field of image processing, and in particular to an anti-shake effect evaluation method, device, computer equipment and storage medium.
Background Art
At present, during the shooting of a video or image, an excessively slow shutter speed, an excessively long focal length, and hand-held shaking all cause a certain amount of jitter in the resulting video or image, making it blurry. To remove the jitter and obtain a stable and clear video or image, the captured video or image can be subjected to anti-shake processing through anti-shake algorithms, or anti-shake processing can be performed during shooting through mechanical stabilization. After anti-shake processing, it is usually necessary to quantitatively evaluate the anti-shake effect to determine which anti-shake processing method works better.
Technical Problem
In the related art, the anti-shake effect is usually evaluated based on the human visual system, that is, the user judges the quality of the anti-shake effect based on intuitive visual perception. Since the evaluation is performed through the human visual system and intuitive visual perception lacks an objective evaluation basis, the evaluation result is not accurate enough. In addition, evaluating the anti-shake effect through intuitive visual perception usually requires a long time watching multiple captured pictures or a video before a rough evaluation result can be produced, so the evaluation is time-consuming.
Technical Solution
Based on this, it is necessary to address the above technical problems by providing an anti-shake effect evaluation method, device, computer equipment and storage medium capable of accurately evaluating the effect of anti-shake processing.
An anti-shake effect evaluation method, the method comprising:
acquiring a video formed through anti-shake processing; and
obtaining an anti-shake performance score of the video according to image frame parameters corresponding to the video, the anti-shake performance score being used to evaluate the anti-shake effect of the anti-shake processing.
In one embodiment, the image frame parameters include image similarity; correspondingly, obtaining the anti-shake performance score of the video according to the image frame parameters corresponding to the video includes:
for each group of two frame images at an adjacent preset interval in the video, acquiring the image similarity between the previous frame image and the subsequent frame image in the group, and using it as the image similarity corresponding to the two frame images of that group; and
obtaining the anti-shake performance score of the video according to the image similarities corresponding to the two frame images of each group at the adjacent preset interval in the video.
In one embodiment, the preset interval is 1, and for any group of two frame images at an adjacent preset interval in the video, the two frames are recorded as the t-th frame image and the (t-1)-th frame image respectively; correspondingly, acquiring the image similarity between the previous frame image and the subsequent frame image in each group includes:
acquiring the image similarity between the first sub-region in the t-th frame image and the second sub-region in the (t-1)-th frame image, and using it as the image similarity between the t-th frame image and the (t-1)-th frame image, where the first sub-region and the second sub-region are divided according to the same division method and are located at the same position in their respective images; or,
acquiring the image similarity between the third sub-region and the fourth sub-region in each sub-region group, and obtaining the image similarity between the t-th frame image and the (t-1)-th frame image according to the image similarities corresponding to the multiple sub-region groups; each sub-region group is composed of a third sub-region in the t-th frame image and a fourth sub-region in the (t-1)-th frame image, the third sub-regions in the t-th frame image and the fourth sub-regions in the (t-1)-th frame image are obtained by the same division method, and the third sub-region and the fourth sub-region in each sub-region group are located at the same position in their respective images.
In one embodiment, obtaining the anti-shake performance score of the video according to the image similarities corresponding to the two frame images of each group at the adjacent preset interval in the video includes:
according to each item of image similarity corresponding to the two frame images of each group at the adjacent preset interval in the video, and the weight corresponding to each item of image similarity, obtaining the similarity score corresponding to the two frame images of each group; and
obtaining the anti-shake performance score of the video according to the similarity scores corresponding to the two frame images of each group.
In one embodiment, obtaining the similarity score corresponding to the two frame images of each group includes:
based on each item of image similarity corresponding to the two frame images of each group at the adjacent preset interval in the video and the weight corresponding to each item of image similarity, obtaining a weighted summation result, and using the weighted summation result as the similarity score corresponding to the two frame images of that group; or,
taking each item of image similarity corresponding to the two frame images of each group as the power base and the weight corresponding to each item of image similarity as the power exponent, obtaining the power result of each item of image similarity, and obtaining the similarity score corresponding to the two frame images of that group according to the power results.
In one embodiment, the image similarity includes at least one of the following three items of similarity: brightness similarity, contrast similarity and structural similarity.
In one embodiment, the video is a single-channel video or a multi-channel video.
An anti-shake effect evaluation device, the device comprising:
a first acquisition module, configured to acquire a video formed through anti-shake processing; and
a second acquisition module, configured to obtain an anti-shake performance score of the video according to image frame parameters corresponding to the video, the anti-shake performance score being used to evaluate the anti-shake effect of the anti-shake processing.
A computer device, including a memory and a processor, the memory storing a computer program, where the processor implements the following steps when executing the computer program: acquiring a video formed through anti-shake processing; and obtaining an anti-shake performance score of the video according to image frame parameters corresponding to the video, the anti-shake performance score being used to evaluate the anti-shake effect of the anti-shake processing.
A computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps: acquiring a video formed through anti-shake processing; and obtaining an anti-shake performance score of the video according to image frame parameters corresponding to the video, the anti-shake performance score being used to evaluate the anti-shake effect of the anti-shake processing.
Technical Effect
The above anti-shake effect evaluation method, device, computer equipment and storage medium acquire the video formed through anti-shake processing and obtain the anti-shake performance score of the video according to the image frame parameters corresponding to the video. Since the anti-shake performance score is a relatively objective evaluation basis obtained from the image frame parameters corresponding to the video, it is more accurate as an evaluation result than the human visual system. In addition, since the anti-shake performance score is obtained directly from the image frame parameters of the video to evaluate the anti-shake effect, there is no need to spend a long time evaluating the anti-shake effect through intuitive visual perception, so less time is consumed and the evaluation efficiency is higher.
Brief Description of the Drawings
FIG. 1 is a diagram of the application environment of the anti-shake effect evaluation method in one embodiment;
FIG. 2 is a schematic flowchart of the anti-shake effect evaluation method in one embodiment;
FIG. 3 is a schematic flowchart of the anti-shake effect evaluation method in another embodiment;
FIG. 4 is a structural block diagram of the anti-shake effect evaluation device in one embodiment;
FIG. 5 is an internal structure diagram of the computer device in one embodiment.
Embodiments of the Invention
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
It can be understood that the terms "first", "second", etc. used in the present application may be used herein to describe various technical terms, but unless otherwise specified, these technical terms are not limited by these terms; the terms are only used to distinguish one technical term from another. For example, without departing from the scope of the present application, the third preset threshold and the fourth preset threshold may be the same or different.
At present, during the shooting of a video or image, an excessively slow shutter speed, an excessively long focal length, and hand-held shaking all cause a certain amount of jitter in the resulting video or image, making it blurry. To remove the jitter and obtain a stable and clear video or image, the captured video or image can be subjected to anti-shake processing through anti-shake algorithms, or anti-shake processing can be performed during shooting through mechanical stabilization. After anti-shake processing, it is usually necessary to quantitatively evaluate the anti-shake effect to determine which anti-shake processing method works better.
In the related art, the anti-shake effect is usually evaluated based on the human visual system, that is, the user judges the quality of the anti-shake effect based on intuitive visual perception. Since the evaluation is performed through the human visual system and intuitive visual perception lacks an objective evaluation basis, the evaluation result is not accurate enough. In addition, evaluating the anti-shake effect through intuitive visual perception usually requires a long time watching multiple captured pictures or a video before a rough evaluation result can be produced, so the evaluation is time-consuming.
In view of the problems in the above related art, an embodiment of the present invention provides an anti-shake effect evaluation method, which can be applied to the application environment shown in FIG. 1. The terminal 101 communicates with the server 102 through a network. The terminal 101 can send a processing instruction to the server 102; according to the processing instruction, the server acquires the video formed through anti-shake processing, and obtains the anti-shake performance score of the video according to the image frame parameters corresponding to the video, the anti-shake performance score being used to evaluate the anti-shake effect of the anti-shake processing. The server 102 can return the anti-shake performance score corresponding to the anti-shake processing to the terminal 101.
The terminal 101 may be, but is not limited to, various personal computers, laptops, smartphones, tablets and portable wearable devices. The server 102 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing cloud computing services. The terminal 101 and the server 102 may be connected directly or indirectly through wired or wireless communication, which is not limited in the present application. It should be noted that quantities such as "multiple" mentioned in the embodiments of the present application all refer to "at least two"; for example, "multiple" means "at least two".
Before describing the specific implementation of the present application, its main application scenario is explained. The anti-shake effect evaluation method in the present application is mainly applied to the scenario of selecting an anti-shake processing method. That is, the terminal 101 produces jitter during motion, for example high-frequency motion such as a user shooting hand-held while running or shooting while mountain biking. When the terminal 101 has multiple anti-shake processing methods available and needs to select one of them, the server 102 can return to the terminal 101 the anti-shake performance score corresponding to each anti-shake processing method, so that the terminal 101 selects the anti-shake processing method with the highest anti-shake performance score and, based on the selected method, performs anti-shake processing on the captured video or image.
In combination with the content of the above embodiments, in one embodiment, referring to FIG. 2, an anti-shake effect evaluation method is provided. The method is described by taking its application to the terminal 101 in FIG. 1, with the terminal 101 as the execution subject, as an example. It can be understood that the method can also be applied to the server 102 with the server 102 as the corresponding execution subject; or, according to actual needs and feasibility, the method can be applied to both the terminal 101 and the server 102, that is, the execution subject of some steps of the method may be the terminal 101 while that of the other steps may be the server 102, which is not specifically limited in the embodiment of the present invention. For example, step 201 in the method flow corresponding to FIG. 2 may be executed by the terminal 101, which then sends the video to the server 102, so that step 202 is executed by the server 102. In this embodiment, the method includes the following steps:
201. Acquire a video formed through anti-shake processing;
202. Obtain an anti-shake performance score of the video according to image frame parameters corresponding to the video, the anti-shake performance score being used to evaluate the anti-shake effect of the anti-shake processing.
In the above step 201, the video may be captured by a terminal 101 that is in motion and has a shooting function. Specifically, the video may be obtained by performing anti-shake processing during shooting, or by performing anti-shake processing on the video after it is shot, or part of the video content may be obtained by anti-shake processing during shooting while the other part is obtained by anti-shake processing after shooting, which is not specifically limited in the embodiment of the present invention. The anti-shake processing may also be executed by the terminal 101, and may specifically be electronic anti-shake processing and/or optical anti-shake processing, etc. It should be noted that the anti-shake processing mentioned in the above step 201 may be a single anti-shake processing method or a collection of multiple anti-shake processing methods, which is not specifically limited in the embodiment of the present invention.
In the above step 202, the image frame parameters may include the degree of difference and/or similarity between image frames, and may be calculated based on the image parameters between image frames in the video. The image parameters may include brightness and/or contrast, etc., which is not specifically limited in the embodiment of the present invention. Taking brightness as the image parameter as an example, the image frame parameters may include the similarity and/or difference of brightness between image frames. Taking contrast as the image parameter as an example, the image frame parameters may include the similarity and/or difference of contrast between image frames. Taking image parameters including both brightness and contrast as an example, the image frame parameters may include the similarity and/or difference of brightness as well as the similarity and/or difference of contrast. The difference can be obtained by calculating a difference value, and the similarity can be calculated by a similarity algorithm. For example, the brightness difference between two image frames can be obtained by calculating the difference between their brightness values; the brightness similarity between two image frames can be calculated by a similarity algorithm, for example by computing the similarity between the brightness feature vectors corresponding to the two image frames and using it as the brightness similarity between the two frames.
It can be seen from the above process that the image frame parameters can mainly be used to represent the degree of difference and/or similarity between image frames in the video. Which image frames in the video the difference and/or similarity is taken between can be set as required, and is not specifically limited in the embodiment of the present invention. For example, the image frame parameters may consist only of the difference and/or similarity between the start frame and the middle frame of the video, or only of the difference and/or similarity between the middle frame and the end frame, or of the difference and/or similarity between the start frame and the middle frame together with that between the middle frame and the end frame.
It should be noted that a video is composed of frames of images. When the video is shot by a capture device in motion, jitter causes slight deformations of the image parameters between image frames. These deformations combine and are reflected in the visual effect, possibly producing poor shooting results such as shake-induced blur, while anti-shake processing can eliminate these parameter deformations as much as possible to improve the shooting effect. From the perspective of data processing, these deformations are reflected in the calculation results of the image parameters between image frames, that is, in the image frame parameters. Therefore, the image frame parameters, as an external quantification of the visual effect presented by the video after anti-shake processing, can represent how good the anti-shake performance of the processed video is, and can thus be used to evaluate the video's anti-shake performance.
另外,结合上述示例中的内容,关于终端101根据视频对应的图像帧参数,获取视频的防抖性能得分的方式,本发明实施例对此不作具体限定。基于图像帧参数中包含的内容,获取防抖性能得分的方式可以分为如下几种方式:
(1)图像帧参数包括图像帧之间的差异度。
由上述示例中的内容可知,在根据所述视频对应的图像帧参数,获取所述视频的防抖性能得分时,至于是视频中哪些图像帧之间的差异度,可以根据需求设置。无论是哪些图像帧之间的差异度,其实际均是视频中某两帧图像构成一组,并为该组内两帧图像之间的差异度。 因此,图像帧参数实际上可以包括若干个差异度,每一差异度均是由视频中某组两帧图像所确定的。其中,“若干个”可以指的是一个或多个。相应地,在根据视频对应的图像帧参数,获取视频的防抖性能得分时,若图像帧参数中包含一个差异度,则可以直接将该差异度作为视频的防抖性能得分。若图像帧参数中包含多个差异度,则可以对该多个差异度取平均值,将平均值作为视频的防抖性能得分。
(2)图像帧参数包括图像帧之间的相似度。
与上述第(1)种情形类似,由上述示例中的内容可知,在根据所述视频对应的图像帧参数,获取所述视频的防抖性能得分时,至于是视频中哪些图像帧之间的相似度,可以根据需求设置。无论是哪些图像帧之间的相似度,其实际均是视频中某两帧图像构成一组,并为该组内两帧图像之间的相似度。因此,图像帧参数实际上可以包括若干个相似度,每一相似度均是由视频中某组两帧图像所确定的。其中,“若干个”可以指的是一个或多个。相应地,在根据视频对应的图像帧参数,获取视频的防抖性能得分时,若图像帧参数中包含一个相似度,则可以直接将该相似度作为视频的防抖性能得分。若图像帧参数中包含多个相似度,则可以对该多个相似度取平均值,将平均值作为视频的防抖性能得分。
(3)图像帧参数包括图像帧之间的相似度及差异度。
与上述第(1)种及第(2)种情形类似,无论是哪些图像帧之间的相似度或差异度,其实际均是视频中某两帧图像构成一组,并为该组内两帧图像之间的相似度或差异度。因此,图像帧参数实际上可以包括若干个相似度及若干个差异度,每一相似度或差异度均是由视频中某组两帧图像所确定的。其中,“若干个”可以指的是一个或多个。相应地,在根据视频对应的图像帧参数,获取视频的防抖性能得分时,可以先对图像帧参数中若干个差异度取平均值,得到差异度平均值,并对图像帧参数中若干个相似度取平均值,得到相似度平均值。通过对差异度平均值与相似度平均值进行加权求和,将加权求和结果作为视频的防抖性能得分。其中,如果上述“若干个”实质为一个,则可以不取平均值,直接使用该一个相似度或差异度进行加权求和。
例如,结合上述示例内容,以图像帧参数包括视频中起始帧与结束帧之间的差异度为例,可以将该差异度直接作为防抖性能得分。以图像帧参数包括视频中起始帧与中间帧之间的差异度,以及中间帧与结束帧之间的差异度为例,可以将两个差异度取平均值,并将平均值作为防抖性能得分。以图像帧参数包括视频中起始帧与中间帧之间的差异度,以及视频中起始帧与中间帧之间的相似度为例,可先按照差异度与相似度在使视频呈现更好拍摄效果方面所占的重要程度,设置差异度与相似度各自的权重,再对差异度与相似度进行加权求和,并将加权求和结果作为防抖性能得分。
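上述“对差异度平均值与相似度平均值进行加权求和”的得分计算过程,可用如下示意性代码片段表示(其中权重与各数值均为假设):

```python
# 示意:先对差异度与相似度分别取平均,再加权求和作为防抖性能得分。

def stability_score(differences, similarities, w_diff, w_sim):
    """differences/similarities: 若干个差异度/相似度;w_diff/w_sim: 各自权重。"""
    avg_diff = sum(differences) / len(differences)
    avg_sim = sum(similarities) / len(similarities)
    return w_diff * avg_diff + w_sim * avg_sim

score = stability_score([0.2, 0.4], [0.9, 0.7], w_diff=0.3, w_sim=0.7)
print(score)  # ≈ 0.3*0.3 + 0.7*0.8 = 0.65
```

若差异度或相似度只有一个,则其平均值即为其本身,计算方式不变。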
本发明实施例提供的方法,通过获取经由防抖处理所形成的视频,根据视频对应的图像帧参数,获取视频的防抖性能得分。由于防抖性能得分是基于视频对应的图像帧参数所获取的相对客观的评估依据,从而相较于人类视觉系统,防抖性能得分作为评估结果更加精准。另外,由于是根据视频对应的图像帧参数,直接获取防抖性能得分以评估防抖效果,而不需要花费较长时间通过视觉直观感受来评估防抖效果,从而耗费时间较短,评估效率更高。
结合上述实施例的内容,在一个实施例中,参见图3,提供了一种防抖效果评估方法,包括以下步骤:
301、获取经由防抖处理所形成的视频;
302、对于视频中每一组相邻预设间隔的两帧图像,获取每一组相邻预设间隔的两帧图像中前一帧图像与后一帧图像之间的图像相似度,并作为每一组相邻预设间隔的两帧图像对应的图像相似度;
303、根据视频中每一组相邻预设间隔的两帧图像对应的图像相似度,获取视频的防抖性能得分。
在上述步骤302中,预设间隔可以用n表示,n表示间隔n帧。具体地,n可以为1,也可以为2,但不能大于总帧数减1所得到的数值。其中,n也不宜过大,过大则图像相似度的总量太少,会导致后续防抖性能得分不够准确。基于上述理由以及为了便于说明,本发明实施例以预设间隔为1为例,对后续过程进行解释说明。
以视频中一共包含m帧图像,分别为第1帧、第2帧、…、第m帧为例(此处用m表示总帧数,以与上述表示预设间隔的n相区分)。上述过程中所提及的视频中每一组相邻预设间隔的两帧图像,在预设间隔为1时,指的是第1帧与第2帧作为一组相邻的两帧图像、第2帧与第3帧作为一组相邻的两帧图像、第3帧与第4帧作为一组相邻的两帧图像、……、直至第m-1帧与第m帧作为一组相邻的两帧图像,这样一共可以形成m-1组。其中,每一组相邻预设间隔的两帧图像对应的图像相似度的计算方式,可以参考上述示例中图像相似度的相关定义。
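上述按预设间隔两两成组的过程,可用如下示意性代码片段表示(帧仅以序号表示,函数名为本示例的假设):

```python
# 示意:按预设间隔 interval 将视频帧两两成组。

def frame_pairs(num_frames, interval=1):
    """返回 (前一帧序号, 后一帧序号) 的分组列表,序号从 1 开始。"""
    return [(t, t + interval) for t in range(1, num_frames - interval + 1)]

print(frame_pairs(5, interval=1))  # [(1, 2), (2, 3), (3, 4), (4, 5)]
print(frame_pairs(5, interval=2))  # [(1, 3), (2, 4), (3, 5)]
```

可见预设间隔为1时,m帧视频一共形成m-1组;间隔越大,组数越少,这也对应了上文“间隔不宜过大”的说明。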
在获取到视频中每一组相邻预设间隔的两帧图像对应的图像相似度后,可以根据每一组相邻预设间隔的两帧图像对应的图像相似度,进一步获取视频的防抖性能得分。本发明实施 例不对根据视频中每一组相邻预设间隔的两帧图像对应的图像相似度,获取视频的防抖性能得分的方式作具体限定,包括但不限于:获取视频中每一组相邻预设间隔的两帧图像对应的图像相似度的求和结果,并将求和结果作为视频的防抖性能得分。或者,进一步地,基于视频中每一组相邻预设间隔的两帧图像所形成的总组数,对求和结果取平均值,将平均值作为视频的防抖性能得分。
再或者,若上述求得的图像相似度不止一种,则可进一步基于多种图像相似度来获取视频的防抖性能得分。比如结合上述示例中的说明,图像相似度是基于视频中相邻两帧图像之间的图像参数计算得到的,图像参数可以包括亮度和/或对比度。以图像参数包括亮度和对比度为例,相应地,图像相似度可以包括两项,一项是基于图像参数为亮度所求得的,记为亮度相似度,另一项是基于图像参数为对比度所求得的,记为对比度相似度。
基于上述说明,根据视频中每一组相邻预设间隔的两帧图像对应的图像相似度,获取视频的防抖性能得分,可以进一步为:获取视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的求和结果,对每项图像相似度对应的求和结果再进行求和,将最终的求和结果作为视频的防抖性能得分。当然,除了该方式之外,对于多项图像相似度的情形,还可以采取对多项图像相似度进行加权求和的方式,来获取视频的防抖性能得分。例如,以图像相似度包括基于图像参数为亮度所求得的亮度相似度结果,以及基于图像参数为对比度所求得的对比度相似度结果为例,可以基于视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度,及每项图像相似度对应的权重进行加权求和,将得到的加权求和结果作为视频的防抖性能得分。
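上述对多项图像相似度加权求和并在各组间累加以得到防抖性能得分的过程,可用如下示意性代码片段表示(各组相似度与权重数值均为假设):

```python
# 示意:每组两帧图像对应多项图像相似度(如亮度相似度、对比度相似度),
# 先对每组各项相似度加权求和,再对所有组求和作为防抖性能得分。

def video_score(per_group_similarities, weights):
    """per_group_similarities: 每组对应的各项图像相似度列表;
    weights: 每项图像相似度对应的权重。"""
    total = 0.0
    for sims in per_group_similarities:
        total += sum(w * s for w, s in zip(weights, sims))
    return total

groups = [
    [0.9, 0.8],  # 第 1 组:亮度相似度 0.9、对比度相似度 0.8
    [0.7, 0.6],  # 第 2 组
]
print(video_score(groups, weights=[0.5, 0.5]))  # ≈ 0.85 + 0.65 = 1.5
```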
本发明实施例提供的方法,由于拍摄抖动是连续的,在经过防抖处理的前提下,防抖处理后的提升效果会在视频中每一组相邻预设间隔的两帧图像之间的对比中有所体现,而每一组相邻预设间隔的两帧图像对应的图像相似度能够反映实际提升效果,从而基于每一组相邻预设间隔的两帧图像对应的图像相似度所获取的防抖性能得分,能够作为相对客观的评估依据,以此作为评估结果更加精准。
应该理解的是,虽然图2及图3的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图2及图3中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行 完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
结合上述实施例的内容,在一个实施例中,预设间隔为1,对于视频中任意一组相邻预设间隔的两帧图像,将该两帧图像分别记为第t帧图像及第t-1帧图像;相应地,本发明实施例不对获取每一组相邻预设间隔的两帧图像中前一帧图像与后一帧图像之间的图像相似度的方式作具体限定,包括但不限于如下两种方式:
第一种获取图像相似度的方式:获取第t帧图像中的第一子区域与第t-1帧图像中的第二子区域之间的图像相似度,并作为第t帧图像与第t-1帧图像之间的图像相似度,第一子区域与第二子区域是按照相同划分方式划分的且在各自图像中位于相同位置;或者,
第二种获取图像相似度的方式:获取每一子区域组中第三子区域与第四子区域之间的图像相似度,并根据多个子区域组对应的图像相似度,获取第t帧图像与第t-1帧图像之间的图像相似度;其中,每一子区域组是由第t帧图像中的第三子区域及第t-1帧图像中的第四子区域所组成的,第t帧图像中的第三子区域与第t-1帧图像中的第四子区域是按照相同的划分方式所得到的,每一子区域组中第三子区域与第四子区域在各自图像中位于相同位置。
在上述第一种方式中,以第t帧图像及第t-1帧图像均按照相同划分方式划分为2*2的4个部分,第一子区域为第t帧图像所划分的4个部分中左上角的那部分,第二子区域为第t-1帧图像所划分的4个部分中左上角的那部分为例,可以按照上述示例中计算图像相似度的方式来分别获取第一子区域与第二子区域之间的图像相似度。例如,可以先获取第一子区域中所有像素的平均亮度值,再获取第二子区域中所有像素的平均亮度值,将第一子区域对应的平均亮度值与第二子区域对应的平均亮度值之间的差值,作为第一子区域与第二子区域之间的图像相似度。
当然,在按照上述划分方式所形成的4个部分中,也可以将第t-1帧图像中右上角的那部分作为第一子区域,将第t帧图像中右上角的那部分作为第二子区域,同样地,还可以将第t-1帧图像中左下角的那部分作为第一子区域,将第t帧图像中左下角的那部分作为第二子区域,以此来获取第一子区域与第二子区域之间的图像相似度,本发明实施例对此不作具体限定。
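上述第一种方式可用如下示意性代码片段表示:将两帧按相同方式划分为2*2的4个部分,并以各自左上角子区域的平均亮度之差作为图像相似度(帧以行优先的二维亮度列表表示,假设其宽高均为偶数,数值为假设):

```python
# 示意:取两帧 2×2 划分中相同位置(左上角)子区域的平均亮度之差。

def top_left_block(frame):
    """取 2×2 划分中左上角的子区域。"""
    h, w = len(frame), len(frame[0])
    return [row[: w // 2] for row in frame[: h // 2]]

def block_mean(block):
    """子区域内所有像素的平均亮度。"""
    pixels = [p for row in block for p in row]
    return sum(pixels) / len(pixels)

def subregion_similarity(frame_t, frame_prev):
    """第一子区域与第二子区域之间的平均亮度差值。"""
    return abs(block_mean(top_left_block(frame_t)) -
               block_mean(top_left_block(frame_prev)))

frame_prev = [[10, 20, 30, 40],
              [10, 20, 30, 40]]
frame_curr = [[14, 24, 30, 40],
              [14, 24, 30, 40]]
print(subregion_similarity(frame_curr, frame_prev))  # 4.0
```

将切片位置换为右上角、左下角等,即对应上文所述的其它子区域选取方式。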
在上述第二种方式中,以第t帧图像及第t-1帧图像均按照相同划分方式划分为2*2的4个部分为例。相应地,第t帧图像中包括4个第三子区域,第t-1帧图像包括4个第四子区域, 并由此可形成4个子区域组。
具体地,第t帧图像位于左上角的第三子区域与第t-1帧图像位于左上角的第四子区域可形成第一个子区域组,第t帧图像位于右上角的第三子区域与第t-1帧图像位于右上角的第四子区域可形成第二个子区域组,第t帧图像位于左下角的第三子区域与第t-1帧图像位于左下角的第四子区域可形成第三个子区域组,第t帧图像位于右下角的第三子区域与第t-1帧图像位于右下角的第四子区域可形成第四个子区域组。
结合上述示例的内容,基于相同的图像相似度计算方式,可以分别获取这四个子区域组中每一子区域组对应的图像相似度。由此,根据多个子区域组对应的图像相似度,可获取第t帧图像与第t-1帧图像之间的图像相似度。本发明实施例不对根据多个子区域组对应的图像相似度,获取第t帧图像与第t-1帧图像之间的图像相似度的方式作具体限定,包括但不限于:将求和结果作为第t帧图像与第t-1帧图像之间的图像相似度;或者,基于子区域组的数量,获取求和结果的平均值,将平均值作为第t帧图像与第t-1帧图像之间的图像相似度。其中,求和结果是对每一子区域组对应的图像相似度进行相加后得到的。需要说明的是,上述示例给出的是预设间隔为1时的实现过程;预设间隔为除1之外的其它值时,也可以参考上述示例中的过程,此处不再赘述。
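上述第二种方式中“根据多个子区域组对应的图像相似度,获取两帧之间的图像相似度”的求和或取平均过程,可用如下示意性代码片段表示(各子区域组的相似度数值为假设):

```python
# 示意:对各子区域组的相似度求和,或再除以组数取平均,
# 作为两帧之间的图像相似度。

def frame_similarity_from_groups(group_similarities, use_average=True):
    """group_similarities: 每一子区域组对应的图像相似度。"""
    total = sum(group_similarities)
    return total / len(group_similarities) if use_average else total

sims = [0.5, 0.75, 0.25, 0.5]  # 左上、右上、左下、右下四个子区域组(假设值)
print(frame_similarity_from_groups(sims))                     # 0.5
print(frame_similarity_from_groups(sims, use_average=False))  # 2.0
```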
本发明实施例提供的方法,由于拍摄抖动是连续的,在经过防抖处理的前提下,防抖处理后的提升效果会在视频中每一组相邻预设间隔的两帧图像之间的对比中有所体现,而每一组相邻预设间隔的两帧图像对应的图像相似度能够反映实际提升效果,从而对于一组相邻预设间隔的两帧图像,在将该两帧图像采用相同的划分方式进行划分后,基于该两帧图像位于相同位置所划分得到的某一块区域或者将所划分得到的所有区域作为全局考虑,以此来获取该两帧图像对应的图像相似度,能够作为相对客观的评估依据,基于此所获取的评估结果更加精准。
结合上述实施例的内容,在一个实施例中,本发明实施例不对根据视频中每一组相邻预设间隔的两帧图像对应的图像相似度,获取视频的防抖性能得分的方式作具体限定,包括但不限于:根据视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度,及每项图像相似度对应的权重,获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分;根据视频中每一组相邻预设间隔的两帧图像对应的相似度得分,获取视频的防抖性能得分。
其中,关于根据视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度,及每项 图像相似度对应的权重,获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分的方式,本发明实施例对此也不作具体限定,包括但不限于如下两种方式:
第一种获取相似度得分的方式:基于视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度及每项图像相似度对应的权重,获取加权求和结果,并将加权求和结果作为视频中每一组相邻预设间隔的两帧图像对应的相似度得分。
第二种获取相似度得分的方式:将视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度作为幂底数,将每项图像相似度对应的权重作为幂指数,获取视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,根据视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分。
其中,本发明实施例不对根据视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分的方式作具体限定,包括但不限于:对视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果进行求和,将求和结果作为每一组相邻预设间隔的两帧图像对应的相似度得分;或者,对视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果进行相乘,将乘积结果作为每一组相邻预设间隔的两帧图像对应的相似度得分。
例如,以图像相似度为3项为例,视频中第t-1组相邻预设间隔的两帧图像对应的第一项图像相似度记为L_t,第二项图像相似度记为C_t,第三项图像相似度记为S_t。而第一项图像相似度对应的权重记为a,第二项图像相似度对应的权重记为b,第三项图像相似度对应的权重记为c。
对于上述第一种获取相似度得分的方式,可以参考如下公式(1)来计算:
P_t = a*L_t + b*C_t + c*S_t    (1)
对于上述第二种获取相似度得分的方式,若获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分,是采用将乘方结果进行相乘的方式,则第二种获取相似度得分的方式,可以参考如下公式(2)来计算:
P_t = (L_t)^a * (C_t)^b * (S_t)^c    (2)

在上述公式(1)及公式(2)中,P_t表示第t组相邻预设间隔的两帧图像对应的相似度得分。在上述公式(2)中,(L_t)^a表示第t-1组相邻预设间隔的两帧图像对应的第一项图像相似度的乘方结果,(C_t)^b表示第t-1组相邻预设间隔的两帧图像对应的第二项图像相似度的乘方结果,(S_t)^c表示第t-1组相邻预设间隔的两帧图像对应的第三项图像相似度的乘方结果。
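公式(1)与公式(2)所对应的两种相似度得分计算方式,可用如下示意性代码片段表示(L、C、S及权重a、b、c均为假设数值):

```python
# 示意:两种相似度得分计算方式。

def score_weighted_sum(l, c, s, a, b, c_w):
    """公式(1):P_t = a*L_t + b*C_t + c*S_t。"""
    return a * l + b * c + c_w * s

def score_weighted_product(l, c, s, a, b, c_w):
    """公式(2):P_t = (L_t)^a * (C_t)^b * (S_t)^c,即各项乘方结果相乘。"""
    return (l ** a) * (c ** b) * (s ** c_w)

l, c, s = 0.9, 0.8, 0.7
print(score_weighted_sum(l, c, s, 1/3, 1/3, 1/3))  # ≈ 0.8
print(score_weighted_product(l, c, s, 1, 1, 1))    # ≈ 0.504
```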
需要说明的是,在上述两种获取相似度得分的方式中,每项图像相似度对应的权重可以根据实际需求进行设置。例如,若存在两项图像相似度,其中一项是基于亮度所计算得到的图像相似度,另一项是基于对比度计算得到的图像相似度,而视频中环境亮度较暗,则对于这两项图像相似度,应当尽量减少环境亮度较暗所带来的误差。由此,可适当减小基于亮度所计算得到的图像相似度对应的权重,而适当提升基于对比度所计算得到的图像相似度对应的权重。
在获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分之后,可以根据视频中每一组相邻预设间隔的两帧图像对应的相似度得分,获取视频的防抖性能得分。本发明实施例不对根据视频中每一组相邻预设间隔的两帧图像对应的相似度得分,获取视频的防抖性能得分的方式作具体限定,包括但不限于:获取相似度得分的累加结果,并将该累加结果作为视频的防抖性能得分,其中,累加结果是对视频中每一组相邻预设间隔的两帧图像对应的相似度得分进行累加后所得到的。
本发明实施例提供的方法,由于可以基于相邻预设间隔的两帧图像对应的每项图像相似度,来获取相邻预设间隔的两帧图像之间的相似度得分,从而相较于基于单一一项图像相似度来获取相似度得分,获取到的结果更加精准。另外,由于可以按照实际需求设置每项图像相似度的权重,从而可以使得获取相似度得分时能够有所侧重,减少权重低对应的图像相似度所带来的误差,而防抖性能得分是由相似度得分及权重所确定的,进而使得后续获取到的防抖性能得分更加精准。
结合上述实施例的内容,在一个实施例中,图像相似度包括以下三项相似度中的至少一项,以下三项相似度分别为亮度相似度、对比度相似度及结构相似度。
结合上述实施例、具体示例中的内容以及相似度的定义,以预设间隔为1为例,现对上述三项相似度的计算过程进行说明:以视频中第t-1组相邻预设间隔的两帧图像对应的亮度相似度记为L_t,对比度相似度记为C_t,结构相似度记为S_t。
其中,计算第t-1组相邻预设间隔的两帧图像对应的亮度相似度,也即第t-1组相邻预设间隔的两帧图像中第t帧图像与第t-1帧图像之间的亮度相似度,可参考如下公式(3):
L_t = 2 * μ_t * μ_{t-1} / (μ_t^2 + μ_{t-1}^2)    (3)
在上述公式(3)中,μ_t表示第t帧图像的亮度均值,μ_{t-1}表示第t-1帧图像的亮度均值。其中,μ_t可采用如下公式(4)计算:

μ_t = (1/N) * Σ_{i=1..N} t_i    (4)

在上述公式(4)中,N表示第t帧图像中的像素总数,i表示第t帧图像中的第i个像素,t_i表示第i个像素的亮度值。
计算第t-1组相邻预设间隔的两帧图像对应的对比度相似度,也即第t-1组相邻预设间隔的两帧图像中第t帧图像与第t-1帧图像之间的对比度相似度,可参考如下公式(5):
C_t = 2 * δ_t * δ_{t-1} / (δ_t^2 + δ_{t-1}^2)    (5)

在上述公式(5)中,δ_t表示第t帧图像的亮度标准偏差,也即第t帧图像的对比度,δ_{t-1}表示第t-1帧图像的对比度。其中,δ_t可采用如下公式(6)计算:

δ_t = sqrt( (1/(N-1)) * Σ_{i=1..N} (t_i - μ_t)^2 )    (6)

在上述公式(6)中,各个参数的定义可参考上述公式中的相关说明。
计算第t-1组相邻预设间隔的两帧图像对应的结构相似度,也即第t-1组相邻预设间隔的两帧图像中第t帧图像与第t-1帧图像之间的结构相似度,可参考如下公式(7):
S_t = δ_{t,t-1} / (δ_t * δ_{t-1})    (7)

在上述公式(7)中,δ_{t,t-1}表示第t帧图像与第t-1帧图像之间的亮度协方差。其中,δ_{t,t-1}可采用如下公式(8)计算:

δ_{t,t-1} = (1/(N-1)) * Σ_{i=1..N} (t_i - μ_t) * ((t-1)_i - μ_{t-1})    (8)

在上述公式(8)中,(t-1)_i表示第t-1帧图像中的第i个像素的亮度值,μ_{t-1}表示第t-1帧图像的亮度均值。
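公式(3)至公式(8)的计算过程可用如下示意性代码片段表示(帧以灰度亮度值列表表示;其中标准偏差与协方差按除以N-1的无偏形式重建,该形式属于本示例的假设):

```python
import math

# 示意:由两帧灰度图像计算亮度相似度 L_t、对比度相似度 C_t 与结构相似度 S_t。

def mean(xs):
    """公式(4):亮度均值。"""
    return sum(xs) / len(xs)

def std(xs):
    """公式(6):亮度标准偏差(除以 N-1,假设形式)。"""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def cov(xs, ys):
    """公式(8):两帧之间的亮度协方差。"""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def lcs_similarities(frame_t, frame_prev):
    mu_t, mu_p = mean(frame_t), mean(frame_prev)
    d_t, d_p = std(frame_t), std(frame_prev)
    l = 2 * mu_t * mu_p / (mu_t ** 2 + mu_p ** 2)  # 公式(3)
    c = 2 * d_t * d_p / (d_t ** 2 + d_p ** 2)      # 公式(5)
    s = cov(frame_t, frame_prev) / (d_t * d_p)     # 公式(7)
    return l, c, s

a = [10, 20, 30, 40]   # 第 t-1 帧(假设数据)
b = [12, 22, 32, 42]   # 第 t 帧:整体亮度平移,结构不变
l, c, s = lcs_similarities(b, a)
print(round(l, 4), round(c, 4), round(s, 4))
```

在该假设数据下,两帧仅存在整体亮度平移,因此对比度相似度与结构相似度均为1,而亮度相似度略小于1。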
本发明实施例提供的方法,由于可以基于相邻预设间隔的两帧图像对应的亮度相似度、对比度相似度及结构相似度,来获取相邻预设间隔的两帧图像之间的相似度得分,从而相较 于基于单一一项图像相似度来获取相似度得分,获取到的结果更加精准,而防抖性能得分是由相似度得分所确定的,进而使得后续获取到的防抖性能得分更加精准。
结合上述实施例的内容,在一个实施例中,视频为单通道视频或多通道视频。其中,单通道视频为灰度视频,多通道视频为彩色视频。需要说明的是,若该视频为灰度视频,则可以直接按照上述实施例提供的方式,获取该灰度视频的防抖性能得分。若该视频为彩色视频,则可以按照上述实施例提供的方式,先获取每一通道下视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度,对于某一同类型的图像相似度,再将每一通道下视频中每一组相邻预设间隔的两帧图像对应的该同类型图像相似度进行加和,将加和结果作为视频中每一组相邻预设间隔的两帧图像对应的该同类型图像相似度。通过上述过程,即可得到视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度,再采用上述实施例提供的方式,即可获取该视频的防抖性能得分。
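上述多通道视频中“将每一通道下的同类型图像相似度进行加和”的过程,可用如下示意性代码片段表示(各通道的相似度数值为假设):

```python
# 示意:对同一组两帧图像,先在每个通道上分别算得某一类型的图像相似度,
# 再将各通道结果相加,作为该组两帧图像对应的该类型图像相似度。

def merge_channel_similarities(per_channel):
    """per_channel: 每个通道下算得的同类型图像相似度,如 [R, G, B]。"""
    return sum(per_channel)

print(merge_channel_similarities([0.25, 0.5, 0.25]))  # 1.0
```

对灰度视频,通道数为1,该步骤退化为直接使用该通道的相似度。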
本发明实施例提供的方法,由于可以同时适用于单通道视频或多通道视频,从而适用场景更加广泛。
需要说明的是,上述阐述的技术方案在实际实施过程中可以作为独立实施例来实施,也可以彼此之间进行组合并作为组合实施例实施。另外,在对上述本发明实施例内容进行阐述时,仅基于方便阐述的思路,按照相应顺序对不同实施例进行阐述,如按照数据流流向的顺序,而并非是对不同实施例之间的执行顺序进行限定。相应地,在实际实施过程中,若需要实施本发明提供的多个实施例,则不一定需要按照本发明阐述实施例时所提供的执行顺序,而是可以根据需求安排不同实施例之间的执行顺序。
结合上述实施例的内容,在一个实施例中,如图4所示,提供了一种防抖效果评估装置,包括:第一获取模块401和第二获取模块402,其中:
第一获取模块401,用于获取经由防抖处理所形成的视频;
第二获取模块402,用于根据视频对应的图像帧参数,获取视频的防抖性能得分,防抖性能得分用于评估防抖处理的防抖效果。
在一个实施例中,第二获取模块402中的图像帧参数包括图像相似度;相应地,第二获取模块402,包括:第一获取单元及第二获取单元;
第一获取单元,用于对于视频中每一组相邻预设间隔的两帧图像,获取每一组相邻预设间隔的两帧图像中前一帧图像与后一帧图像之间的图像相似度,并作为每一组相邻预设间隔 的两帧图像对应的图像相似度;
第二获取单元,用于根据视频中每一组相邻预设间隔的两帧图像对应的图像相似度,获取视频的防抖性能得分。
在一个实施例中,在第一获取单元中,预设间隔为1,对于视频中任意一组相邻预设间隔的两帧图像,将两帧图像分别记为第t帧图像及第t-1帧图像;相应地,第一获取单元,包括:第一获取子单元或者第二获取子单元。
第一获取子单元,用于获取第t帧图像中的第一子区域与第t-1帧图像中的第二子区域之间的图像相似度,并作为第t帧图像与第t-1帧图像之间的图像相似度,第一子区域与第二子区域是按照相同划分方式划分的且在各自图像中位于相同位置;
第二获取子单元,用于获取每一子区域组中第三子区域与第四子区域之间的图像相似度,并根据多个子区域组对应的图像相似度,获取第t帧图像与第t-1帧图像之间的图像相似度;其中,每一子区域组是由第t帧图像中的第三子区域及第t-1帧图像中的第四子区域所组成的,第t帧图像中的第三子区域与第t-1帧图像中的第四子区域是按照相同的划分方式所得到的,每一子区域组中第三子区域与第四子区域在各自图像中位于相同位置。
在一个实施例中,第二获取单元包括:第三获取子单元及第四获取子单元;
第三获取子单元,用于根据视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度,及每项图像相似度对应的权重,获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分;
第四获取子单元,用于根据视频中每一组相邻预设间隔的两帧图像对应的相似度得分,获取视频的防抖性能得分。
在一个实施例中,第三获取子单元,用于基于视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度及每项图像相似度对应的权重,获取加权求和结果,并将加权求和结果作为视频中每一组相邻预设间隔的两帧图像对应的相似度得分;或者,
将视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度作为幂底数,将每项图像相似度对应的权重作为幂指数,获取视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,根据视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分。
在一个实施例中,上述提及到的各项单元中的图像相似度包括以下三项相似度中的至少 一项,以下三项相似度分别为亮度相似度、对比度相似度及结构相似度。
在一个实施例中,上述提及到的各项模块及单元中的视频为单通道视频或多通道视频。
关于防抖效果评估装置的具体限定可以参见上文中对于防抖效果评估方法的限定,在此不再赘述。上述防抖效果评估装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图5所示。该计算机设备包括通过系统总线连接的处理器、存储器和网络接口。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统、计算机程序和数据库。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的数据库用于存储防抖性能得分。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种防抖效果评估方法。
本领域技术人员可以理解,图5中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现以下步骤:获取经由防抖处理所形成的视频;根据视频对应的图像帧参数,获取视频的防抖性能得分,防抖性能得分用于评估防抖处理的防抖效果。
在一个实施例中,图像帧参数包括图像相似度;相应地,处理器执行计算机程序时还实现以下步骤:对于视频中每一组相邻预设间隔的两帧图像,获取每一组相邻预设间隔的两帧图像中前一帧图像与后一帧图像之间的图像相似度,并作为每一组相邻预设间隔的两帧图像对应的图像相似度;根据视频中每一组相邻预设间隔的两帧图像对应的图像相似度,获取视频的防抖性能得分。
在一个实施例中,预设间隔为1,对于视频中任意一组相邻预设间隔的两帧图像,将两帧图像分别记为第t帧图像及第t-1帧图像;相应地,处理器执行计算机程序时还实现以下步骤:获取第t帧图像中的第一子区域与第t-1帧图像中的第二子区域之间的图像相似度,并作 为第t帧图像与第t-1帧图像之间的图像相似度,第一子区域与第二子区域是按照相同划分方式划分的且在各自图像中位于相同位置;或者,
获取每一子区域组中第三子区域与第四子区域之间的图像相似度,并根据多个子区域组对应的图像相似度,获取第t帧图像与第t-1帧图像之间的图像相似度;其中,每一子区域组是由第t帧图像中的第三子区域及第t-1帧图像中的第四子区域所组成的,第t帧图像中的第三子区域与第t-1帧图像中的第四子区域是按照相同的划分方式所得到的,每一子区域组中第三子区域与第四子区域在各自图像中位于相同位置。
在一个实施例中,处理器执行计算机程序时还实现以下步骤:根据视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度,及每项图像相似度对应的权重,获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分;根据视频中每一组相邻预设间隔的两帧图像对应的相似度得分,获取视频的防抖性能得分。
在一个实施例中,处理器执行计算机程序时还实现以下步骤:基于视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度及每项图像相似度对应的权重,获取加权求和结果,并将加权求和结果作为视频中每一组相邻预设间隔的两帧图像对应的相似度得分;或者,
将视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度作为幂底数,将每项图像相似度对应的权重作为幂指数,获取视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,根据视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分。
在一个实施例中,处理器在执行计算机程序时,图像相似度包括以下三项相似度中的至少一项,以下三项相似度分别为亮度相似度、对比度相似度及结构相似度。
在一个实施例中,处理器在执行计算机程序时,视频为单通道视频或多通道视频。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现以下步骤:获取经由防抖处理所形成的视频;根据视频对应的图像帧参数,获取视频的防抖性能得分,防抖性能得分用于评估防抖处理的防抖效果。
在一个实施例中,图像帧参数包括图像相似度;相应地,计算机程序被处理器执行时还实现以下步骤:对于视频中每一组相邻预设间隔的两帧图像,获取每一组相邻预设间隔的两帧图像中前一帧图像与后一帧图像之间的图像相似度,并作为每一组相邻预设间隔的两帧图像对应的图像相似度;根据视频中每一组相邻预设间隔的两帧图像对应的图像相似度,获取 视频的防抖性能得分。
在一个实施例中,预设间隔为1,对于视频中任意一组相邻预设间隔的两帧图像,将两帧图像分别记为第t帧图像及第t-1帧图像;相应地,计算机程序被处理器执行时还实现以下步骤:获取第t帧图像中的第一子区域与第t-1帧图像中的第二子区域之间的图像相似度,并作为第t帧图像与第t-1帧图像之间的图像相似度,第一子区域与第二子区域是按照相同划分方式划分的且在各自图像中位于相同位置;或者,
获取每一子区域组中第三子区域与第四子区域之间的图像相似度,并根据多个子区域组对应的图像相似度,获取第t帧图像与第t-1帧图像之间的图像相似度;其中,每一子区域组是由第t帧图像中的第三子区域及第t-1帧图像中的第四子区域所组成的,第t帧图像中的第三子区域与第t-1帧图像中的第四子区域是按照相同的划分方式所得到的,每一子区域组中第三子区域与第四子区域在各自图像中位于相同位置。
在一个实施例中,计算机程序被处理器执行时还实现以下步骤:根据视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度,及每项图像相似度对应的权重,获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分;根据视频中每一组相邻预设间隔的两帧图像对应的相似度得分,获取视频的防抖性能得分。
在一个实施例中,计算机程序被处理器执行时还实现以下步骤:基于视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度及每项图像相似度对应的权重,获取加权求和结果,并将加权求和结果作为视频中每一组相邻预设间隔的两帧图像对应的相似度得分;或者,
将视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度作为幂底数,将每项图像相似度对应的权重作为幂指数,获取视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,根据视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,获取视频中每一组相邻预设间隔的两帧图像对应的相似度得分。
在一个实施例中,计算机程序被处理器执行时,图像相似度包括以下三项相似度中的至少一项,以下三项相似度分别为亮度相似度、对比度相似度及结构相似度。
在一个实施例中,计算机程序被处理器执行时,视频为单通道视频或多通道视频。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所 提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (10)

  1. 一种防抖效果评估方法,其特征在于,所述方法包括:
    获取经由防抖处理所形成的视频;
    根据所述视频对应的图像帧参数,获取所述视频的防抖性能得分,所述防抖性能得分用于评估所述防抖处理的防抖效果。
  2. 根据权利要求1所述的方法,其特征在于,所述图像帧参数包括图像相似度;相应地,所述根据所述视频对应的图像帧参数,获取所述视频的防抖性能得分,包括:
    对于所述视频中每一组相邻预设间隔的两帧图像,获取每一组相邻预设间隔的两帧图像中前一帧图像与后一帧图像之间的图像相似度,并作为每一组相邻预设间隔的两帧图像对应的图像相似度;
    根据所述视频中每一组相邻预设间隔的两帧图像对应的图像相似度,获取所述视频的防抖性能得分。
  3. 根据权利要求2所述的方法,其特征在于,所述预设间隔为1,对于所述视频中任意一组相邻预设间隔的两帧图像,将所述两帧图像分别记为第t帧图像及第t-1帧图像;相应地,所述获取每一组相邻预设间隔的两帧图像中前一帧图像与后一帧图像之间的图像相似度,包括:
    获取第t帧图像中的第一子区域与第t-1帧图像中的第二子区域之间的图像相似度,并作为第t帧图像与第t-1帧图像之间的图像相似度,所述第一子区域与所述第二子区域是按照相同划分方式划分的且在各自图像中位于相同位置;或者,获取每一子区域组中第三子区域与第四子区域之间的图像相似度,并根据多个子区域组对应的图像相似度,获取第t帧图像与第t-1帧图像之间的图像相似度;其中,每一子区域组是由第t帧图像中的第三子区域及第t-1帧图像中的第四子区域所组成的,第t帧图像中的第三子区域与第t-1帧图像中的第四子区域是按照相同的划分方式所得到的,每一子区域组中第三子区域与第四子区域在各自图像中位于相同位置。
  4. 根据权利要求2所述的方法,其特征在于,所述根据所述视频中每一组相邻预设间隔的两帧图像对应的图像相似度,获取所述视频的防抖性能得分,包括:
    根据所述视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度,及每项图像相似度对应的权重,获取所述视频中每一组相邻预设间隔的两帧图像对应的相似度得分;
    根据所述视频中每一组相邻预设间隔的两帧图像对应的相似度得分,获取所述视频的防抖性能得分。
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度,及每项图像相似度对应的权重,获取所述视频中每一组相邻预设间隔的两帧图像对应的相似度得分,包括:
    基于所述视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度及每项图像相似度对应的权重,获取加权求和结果,并将所述加权求和结果作为所述视频中每一组相邻预设间隔的两帧图像对应的相似度得分;或者,
    将所述视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度作为幂底数,将每项图像相似度对应的权重作为幂指数,获取所述视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,根据所述视频中每一组相邻预设间隔的两帧图像对应的每项图像相似度的乘方结果,获取所述视频中每一组相邻预设间隔的两帧图像对应的相似度得分。
  6. 根据权利要求2至5中任一项所述的方法,其特征在于,所述图像相似度包括以下三项相似度中的至少一项,所述以下三项相似度分别为亮度相似度、对比度相似度及结构相似度。
  7. 根据权利要求1至5中任一项所述的方法,其特征在于,所述视频为单通道视频或多通道视频。
  8. 一种防抖效果评估装置,其特征在于,所述装置包括:
    第一获取模块,用于获取经由防抖处理所形成的视频;
    第二获取模块,用于根据所述视频对应的图像帧参数,获取所述视频的防抖性能得分,所述防抖性能得分用于评估所述防抖处理的防抖效果。
  9. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,其特征在于,所述处理器执行所述计算机程序时实现权利要求1至7中任一项所述的方法的步骤。
  10. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至7中任一项所述的方法的步骤。
PCT/CN2022/092751 2021-05-18 2022-05-13 防抖效果评估方法、装置、计算机设备和存储介质 WO2022242568A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110541029.0 2021-05-18
CN202110541029.0A CN113436085A (zh) 2021-05-18 2021-05-18 防抖效果评估方法、装置、计算机设备和存储介质

Publications (1)

Publication Number Publication Date
WO2022242568A1 true WO2022242568A1 (zh) 2022-11-24

Family

ID=77802658

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/092751 WO2022242568A1 (zh) 2021-05-18 2022-05-13 防抖效果评估方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN113436085A (zh)
WO (1) WO2022242568A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436085A (zh) * 2021-05-18 2021-09-24 影石创新科技股份有限公司 防抖效果评估方法、装置、计算机设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10336509A (ja) * 1997-05-30 1998-12-18 Canon Inc 撮像装置、撮像システム及び記録媒体
CN108010059A (zh) * 2017-12-05 2018-05-08 北京小米移动软件有限公司 电子防抖算法的性能分析方法及装置
CN108322666A (zh) * 2018-02-12 2018-07-24 广州视源电子科技股份有限公司 摄像头快门的调控方法、装置、计算机设备及存储介质
CN111193923A (zh) * 2019-09-24 2020-05-22 腾讯科技(深圳)有限公司 视频质量评估方法、装置、电子设备及计算机存储介质
CN113436085A (zh) * 2021-05-18 2021-09-24 影石创新科技股份有限公司 防抖效果评估方法、装置、计算机设备和存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194878B (zh) * 2018-11-08 2021-02-19 深圳市闻耀电子科技有限公司 视频图像防抖方法、装置、设备和存储介质


Also Published As

Publication number Publication date
CN113436085A (zh) 2021-09-24

Similar Documents

Publication Publication Date Title
RU2564832C2 (ru) Способ стабилизации видеоизображения для многофункциональных платформ
US8229172B2 (en) Algorithms for estimating precise and relative object distances in a scene
US9396523B2 (en) Image restoration cascade
CN108492287B (zh) 一种视频抖动检测方法、终端设备及存储介质
US20210312641A1 (en) Determining multiple camera positions from multiple videos
CN113286194A (zh) 视频处理方法、装置、电子设备及可读存储介质
WO2021115136A1 (zh) 视频图像的防抖方法、装置、电子设备和存储介质
US9749534B2 (en) Devices, systems, and methods for estimation of motion blur from a single image
US11532089B2 (en) Optical flow computing method and computing device
CN113179421B (zh) 视频封面选择方法、装置、计算机设备和存储介质
WO2022242568A1 (zh) 防抖效果评估方法、装置、计算机设备和存储介质
WO2022242569A1 (zh) 延迟校准方法、装置、计算机设备和存储介质
CN111667504B (zh) 一种人脸追踪方法、装置及设备
CN114390201A (zh) 对焦方法及其装置
JP2019096222A (ja) 画像処理装置、画像処理方法、コンピュータプログラム
CN111445487A (zh) 图像分割方法、装置、计算机设备和存储介质
US9699371B1 (en) Image processing system with saliency integration and method of operation thereof
US8559518B2 (en) System and method for motion estimation of digital video using multiple recursion rules
CN115439386A (zh) 图像融合方法、装置、电子设备和存储介质
CN115550558A (zh) 拍摄设备的自动曝光方法、装置、电子设备和存储介质
CN115294493A (zh) 视角路径获取方法、装置、电子设备及介质
US11195247B1 (en) Camera motion aware local tone mapping
CN111754411B (zh) 图像降噪方法、图像降噪装置及终端设备
CN112637496A (zh) 图像矫正方法及装置
CN112565595B (zh) 图像抖动消除方法、装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22803890

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22803890

Country of ref document: EP

Kind code of ref document: A1