WO2021078276A1 - Method for obtaining continuously captured photos, intelligent terminal and storage medium - Google Patents

Method for obtaining continuously captured photos, intelligent terminal and storage medium

Info

Publication number
WO2021078276A1
Authority
WO
WIPO (PCT)
Prior art keywords
photos
photo
difference value
sharpness
target area
Prior art date
Application number
PCT/CN2020/123355
Other languages
English (en)
Chinese (zh)
Inventor
蒋佳
阮志峰
Original Assignee
Tcl科技集团股份有限公司
Priority date
Filing date
Publication date
Application filed by Tcl科技集团股份有限公司
Publication of WO2021078276A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/80 Camera processing pipelines; Components thereof

Definitions

  • This application belongs to the field of computer technology, and in particular relates to a method for obtaining continuously captured photos, an intelligent terminal, and a storage medium.
  • One objective of the embodiments of the present application is to provide a method for acquiring continuously captured photos, an intelligent terminal, and a storage medium, so as to solve the problem of the low efficiency of selecting among continuously captured photos.
  • The first aspect of the embodiments of the present application provides a method for acquiring continuously captured photos, including: obtaining first photos taken continuously; performing blurring processing on each of the first photos to obtain a blurred photo corresponding to each first photo; calculating the difference value between each first photo and its corresponding blurred photo; and determining the sharpness of each first photo based on the difference value, and obtaining a second photo from the first photos based on the sharpness.
  • the respectively performing blurring processing on each of the first photos to obtain a respective blurred photo corresponding to each of the first photos includes:
  • The target area of each of the first photos is detected separately, where the target area includes a face image and/or an object image; blur filtering processing is performed on the respective target areas corresponding to each first photo to obtain the blurred photos corresponding to each of the first photos.
  • the separately detecting the target area of each of the first photos includes:
  • a trained image detection model is used to detect the target area of each photo, and the trained image detection model is a neural network model.
  • the determining the sharpness of each of the first photos based on the difference value, and obtaining a second photo from the first photos based on the sharpness includes:
  • the sharpness of each of the first photos is determined based on the difference value, and a second photo with the highest sharpness is obtained from all the first photos.
  • the determining the sharpness of each of the first photos based on the difference value, and obtaining the second photo with the highest sharpness from all the first photos includes:
  • Calculate the peak signal-to-noise ratio of the difference value; determine the sharpness of each of the first photos according to the peak signal-to-noise ratio; obtain the second photo with the highest sharpness from all the first photos.
  • the calculating the peak signal-to-noise ratio of the image difference value includes:
  • the peak signal-to-noise ratio of the differential value is calculated based on a preset peak signal-to-noise ratio calculation formula, and the preset peak signal-to-noise ratio calculation formula is:
  • a_ij represents the difference value between the first pixel at coordinate (i, j) of the grayscale image corresponding to the first photo and the second pixel at coordinate (i, j) of the corresponding blurred photo;
  • i represents the abscissa of the pixel;
  • j represents the ordinate of the pixel;
  • n represents the width of the current photo;
  • m represents the length of the current photo.
  • the respectively performing blurring processing on each of the first photos to obtain a respective blurred photo corresponding to each of the first photos includes:
  • Gaussian filtering processing is performed on each of the first photos respectively to obtain respective blurred photos corresponding to each of the first photos.
  • The calculating the difference value between each of the first photos and the respective corresponding blurred photos includes: performing grayscale processing on each of the first photos to obtain a corresponding grayscale image; and calculating the difference value between the grayscale image and the blurred photo corresponding to each first photo.
  • a second aspect of the embodiments of the present application provides a device for acquiring continuous photographs, including:
  • the first obtaining module 501 is configured to obtain the first photo continuously shot
  • the processing module 502 is configured to perform blurring processing on each of the first photos to obtain respective blurred photos corresponding to each of the first photos;
  • the calculation module 503 is configured to calculate the difference value between each of the first photos and the corresponding blurred photos
  • the second acquisition module 504 is configured to determine the sharpness of each of the first photos based on the difference value, and obtain a second photo from the first photos based on the sharpness.
  • the processing module 502 includes:
  • a detection unit configured to separately detect a target area of each of the first photos, where the target area includes a face image and/or an object image;
  • the first processing unit is configured to perform blur filtering processing on the respective target regions corresponding to each of the first photos to obtain respective blurred photos corresponding to each of the first photos.
  • the detection unit is specifically used for:
  • a trained image detection model is used to detect the target area of each photo, and the trained image detection model is a neural network model.
  • the second acquiring module 504 is specifically configured to:
  • the sharpness of each of the first photos is determined based on the difference value, and a second photo with the highest sharpness is obtained from all the first photos.
  • the second acquisition module 504 includes:
  • the first calculation unit is configured to calculate the peak signal-to-noise ratio of the difference value
  • a determining unit configured to determine the sharpness of each of the first photos according to the peak signal-to-noise ratio
  • the obtaining unit is configured to obtain the second photo with the highest sharpness from all the first photos.
  • the calculation unit is specifically used for:
  • the peak signal-to-noise ratio of the differential value is calculated based on a preset peak signal-to-noise ratio calculation formula, and the preset peak signal-to-noise ratio calculation formula is:
  • a_ij represents the difference value between the first pixel at coordinate (i, j) of the grayscale image corresponding to the first photo and the second pixel at coordinate (i, j) of the corresponding blurred photo;
  • i represents the abscissa of the pixel;
  • j represents the ordinate of the pixel;
  • n represents the width of the current photo;
  • m represents the length of the current photo.
  • the processing module 502 is specifically configured to: respectively perform Gaussian filtering processing on each of the first photos to obtain respective fuzzy photos corresponding to each of the first photos.
  • the calculation module 503 includes:
  • the second processing unit is configured to perform grayscale processing on each of the first photos to obtain a grayscale image corresponding to each of the first photos;
  • the second calculation unit is configured to calculate the difference value between the grayscale image corresponding to each first photo and the blurred photo corresponding to each first photo.
  • The third aspect of the embodiments of the present application provides an intelligent terminal, including a memory, a processor, and a computer program stored in the memory and runnable on the processor.
  • When the processor executes the computer program, the steps of the method for obtaining continuously captured photos described in the first aspect are implemented.
  • A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the method for obtaining continuously captured photos described in the first aspect are implemented.
  • The method for acquiring continuously captured photos has the following beneficial effects: the first photos taken continuously are obtained; each first photo is blurred to obtain a corresponding blurred photo; the difference value between each first photo and its corresponding blurred photo is calculated; the sharpness of each first photo is determined based on the difference value; and the second photo is obtained from the first photos based on the sharpness.
  • Because, after each continuously captured first photo is blurred, the difference value between each first photo and its corresponding blurred photo is calculated, the sharpness of each first photo is determined from that difference value, and the second photo is selected by sharpness, the second photo is obtained from the continuously captured first photos quickly and accurately, improving the efficiency of selecting a clear photo from multiple continuous photos.
  • The device provided in the second aspect, the terminal provided in the third aspect, and the computer-readable storage medium provided in the fourth aspect have the same beneficial effects as the method for acquiring continuously captured photos provided in the first aspect, and these will not be repeated here.
  • FIG. 1 is a flowchart of the implementation of the method for acquiring continuously captured photos provided by an embodiment of the present application;
  • FIG. 2 is a flowchart of the specific implementation of S102 in FIG. 1;
  • FIG. 3 is a flowchart of the specific implementation of S103 in FIG. 1;
  • FIG. 4 is a flowchart of the specific implementation of S104 in FIG. 1;
  • FIG. 5 is a schematic diagram of the device for acquiring continuously captured photos provided by the present application;
  • FIG. 6 is a schematic diagram of a terminal provided by the present application.
  • If the terminal can automatically and quickly determine the best photo from multiple continuous photos and store it, users can save considerable time otherwise spent selecting photos.
  • Some methods for automatically assessing image sharpness have been proposed.
  • A common traditional algorithm is edge detection, based on the assumption that if an image is blurred its edges are also blurred, so less edge information is detected.
  • A variance calculation is then applied to the edge image: a picture with a relatively small variance is regarded as blurred, and a picture with a relatively large variance is regarded as clear.
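As a concrete illustration of this traditional baseline (our own minimal NumPy sketch, not code from the application; the kernel choice and function name are assumptions), the variance of a Laplacian edge response can serve as the edge-information score:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Edge-based sharpness score: variance of a 3x3 Laplacian response.

    A blurred image has weaker edges, so its Laplacian response varies
    less; a sharp image yields a larger variance.
    """
    g = gray.astype(np.float64)
    # Apply the Laplacian kernel via explicit shifts (valid region only).
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

# A high-contrast checkerboard (sharp) scores far above a flat image.
sharp = (np.indices((32, 32)).sum(axis=0) % 2) * 255.0
flat = np.full((32, 32), 128.0)
assert laplacian_variance(sharp) > laplacian_variance(flat)
```

A perfectly flat image has zero Laplacian variance, which is the limiting case of the "blurred means small variance" assumption above.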
  • The present application instead provides a new method that uses the blurring of photos to assess their sharpness.
  • As shown in FIG. 1, a flowchart of the method for acquiring continuously captured photos provided by the first embodiment of the present application, the execution subject of this embodiment is a terminal. The details are as follows:
  • S101: When the user sees a favorite scene, or wants to capture a specific action of family or friends, the user quickly presses the shooting button to start the shooting function; the shooting operation is then detected.
  • When shooting is detected, the continuously captured first photos are obtained.
  • S102 Perform blurring processing on each of the first photos, respectively, to obtain a blurry photo corresponding to each of the first photos.
  • The sharpness of each first photo can be determined by detecting the target area included in each first photo and evaluating the sharpness of that target area.
  • The first photo includes a human face image and/or an object image, and the target area is that human face image and/or object image.
  • FIG. 2 is a flowchart of specific implementation of S102 in FIG. 1.
  • S102 includes:
  • S1021 Detect a target area of each of the first photos, where the target area includes a face image and/or an object image.
  • Optionally, a trained image detection model detects the target area of each photo; the trained image detection model is a neural network model.
  • In this case, detecting the target area of each of the first photos separately includes: inputting each first photo into the neural network model for target area detection, and obtaining the target area of each first photo output by the neural network model.
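The application specifies only that a trained neural network performs the detection; to make the surrounding steps runnable, the sketch below substitutes a deliberately trivial placeholder detector. The function name and the (top, left, height, width) box convention are our own assumptions, not part of the application:

```python
import numpy as np

def detect_target_area(photo: np.ndarray) -> tuple:
    """Placeholder for the trained image detection model.

    Returns (top, left, height, width) of the bounding box of all
    above-mean pixels. A real implementation would instead run the
    trained neural network on the photo.
    """
    mask = photo > photo.mean()
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    top, bottom = int(rows[0]), int(rows[-1])
    left, right = int(cols[0]), int(cols[-1])
    return top, left, bottom - top + 1, right - left + 1

photo = np.zeros((64, 64))
photo[10:30, 20:40] = 255.0          # a bright "face" region
assert detect_target_area(photo) == (10, 20, 20, 20)
```

The returned box is what the subsequent blur-filtering step (S1022) would operate on.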
  • The training process of the trained image detection model includes: obtaining a first preset number of target photos, where each target photo is a first photo annotated with its target area; the first photo includes a human face image and/or an object image, and the target area is that human face image and/or object image.
  • In model training, a pre-established model structure is usually selected, for example the neural network model structure of this embodiment, and the training samples are then input into the pre-established model structure for training.
  • The training samples are the first preset number of target photos, that is, the first photos annotated with the target area.
  • Specifically, the training process of the neural network model includes: obtaining the first preset number of target photos; inputting them into a pre-established neural network model for training to obtain a candidate trained model; obtaining a second preset number of target photos and inputting them in turn into the candidate model for analysis, obtaining the target photos annotated by it; if, when the model-annotated target photos are compared with the preset annotated target photos, the probability that the target areas overlap is greater than a preset probability threshold, the candidate is determined to be the trained neural network model; if that probability is less than or equal to the preset probability threshold, the first preset number of target photos is increased and the training step is repeated until the threshold is exceeded.
  • A first photo whose target area is not yet annotated can then be input into the trained neural network model for target area annotation, and the model outputs the first photo with its target area annotated.
  • S1022: Perform blur filtering processing on the target area corresponding to each of the first photos to obtain the blurred photo corresponding to each first photo.
  • Blur filtering causes a loss of image detail: the sharper the image, the more detail is lost when it is blurred.
  • Accordingly, the respective target areas corresponding to each first photo are blur-filtered to obtain the blurred photo corresponding to each first photo.
  • Optionally, Gaussian filtering is performed on the target area corresponding to each first photo to obtain the blurred photo corresponding to each first photo.
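The Gaussian filtering of the target area can be sketched as a separable NumPy convolution; the sigma value and kernel radius below are illustrative choices of ours, not values from the application:

```python
import numpy as np

def gaussian_blur(gray: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Separable Gaussian blur: 1-D convolution over rows, then columns."""
    radius = max(1, int(3.0 * sigma))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    kernel = np.exp(-x * x / (2.0 * sigma * sigma))
    kernel /= kernel.sum()
    out = np.apply_along_axis(np.convolve, 1, gray.astype(np.float64), kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, kernel, mode="same")

def blur_target_area(photo: np.ndarray, box: tuple) -> np.ndarray:
    """Blur only the detected target area and paste it back (as in S1022)."""
    top, left, h, w = box
    result = photo.astype(np.float64).copy()
    result[top:top + h, left:left + w] = gaussian_blur(photo[top:top + h, left:left + w])
    return result

# Blurring spreads an impulse: the peak drops while total mass is kept.
impulse = np.zeros((21, 21))
impulse[10, 10] = 1.0
blurred = gaussian_blur(impulse)
assert blurred.max() < 0.5 and np.isclose(blurred.sum(), 1.0)
```

Restricting the blur to the detected box means the sharpness measure reacts to the face/object region rather than the background.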
  • S103: Calculate the difference value between each first photo and its corresponding blurred photo. Because images of different sharpness lose different amounts of detail under blur filtering, the sharpness of the original image can be determined by calculating this difference value.
  • The difference value of an image is the pixel-wise difference obtained by subtracting the corresponding pixel values of the two images.
  • S103 includes:
  • S1031 Perform grayscale processing on each of the first photos to obtain a grayscale image corresponding to each of the first photos.
  • Since the first photos are three-primary-color images (RGB, where R is the red channel, G the green channel, and B the blue channel) while the blurred photos corresponding to them are grayscale, and the difference calculation requires input images of the same type and size, grayscale processing must first be performed on each first photo.
  • The specific grayscale method used for each first photo is not limited here.
  • S1032 Calculate the difference value between the grayscale image corresponding to each first photo and the blurred photo corresponding to each first photo.
  • The grayscale image corresponding to each first photo can be directly subtracted from the blurred photo corresponding to that first photo to obtain a difference image; the pixel value of each pixel of the difference image is the difference value.
  • Alternatively, the grayscale image and the blurred photo corresponding to each first photo may each be normalized first, and the difference value then calculated between the normalized grayscale image and the normalized blurred photo; this is not specifically limited.
  • The purpose of normalization is to bring the value of every pixel of the grayscale image and of the blurred photo into the interval [0, 1].
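The grayscale conversion, normalization, and subtraction described above amount to a few array operations; the luma weights below are one common grayscale choice of ours, since the application leaves the exact method open:

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """RGB -> grayscale with standard luma weights (one possible choice)."""
    return rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])

def difference_image(gray: np.ndarray, blurred: np.ndarray) -> np.ndarray:
    """Normalize both images to [0, 1], then subtract pixel-wise.

    Each pixel of the result is the difference value a_ij for that
    coordinate (i, j).
    """
    return gray / 255.0 - blurred / 255.0

rgb = np.zeros((4, 4, 3))
rgb[..., 0] = 255.0                       # a pure-red photo
gray = to_gray(rgb)
assert np.allclose(gray, 0.299 * 255.0)

diff = difference_image(gray, np.zeros_like(gray))
assert np.allclose(diff, 0.299)
```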
  • S104 Determine the sharpness of each of the first photos based on the difference value, and obtain a second photo from the first photos based on the sharpness.
  • The difference value is the pixel value of each pixel of the difference image, and image sharpness usually cannot be judged by directly comparing the pixel values of individual pixels. Therefore, in this embodiment, the peak signal-to-noise ratio corresponding to the difference value is further calculated to determine the sharpness of each first photo.
  • Optionally, S104 specifically includes: determining the sharpness of each first photo based on the difference value, and obtaining the second photo with the highest sharpness from all the first photos.
  • S104 includes:
  • S1041: Calculate the peak signal-to-noise ratio of the difference value. The peak signal-to-noise ratio is often used as a measure of pixel reconstruction quality in fields such as image compression.
  • In this embodiment, the peak signal-to-noise ratio of the difference value is calculated to determine the sharpness of each first photo.
  • The peak signal-to-noise ratio of the difference value is calculated based on a preset calculation formula. Further, the preset peak signal-to-noise ratio calculation formula is:
  • a_ij represents the difference value between the first pixel at coordinate (i, j) of the grayscale image corresponding to the first photo and the second pixel at coordinate (i, j) of the corresponding blurred photo;
  • i represents the abscissa of the pixel;
  • j represents the ordinate of the pixel;
  • n represents the width of the current photo;
  • m represents the length of the current photo.
  • The common peak signal-to-noise ratio can be a negative number; the preset peak signal-to-noise ratio formula here is evolved from the traditional peak signal-to-noise ratio formula, and its value is positive.
  • S1042 Determine the sharpness of each first photo according to the peak signal-to-noise ratio.
  • A larger peak signal-to-noise ratio means the corresponding first photo is more blurred; a smaller peak signal-to-noise ratio means the corresponding first photo is clearer.
  • After the second photo with the highest sharpness is obtained and stored, the remaining first photos are deleted to release cache space.
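The application's exact PSNR variant is given by a formula not reproduced in this text; purely as an illustration of the behavior described above, the conventional PSNR with the difference image treated as the noise term works the same way: a sharp photo changes more under blurring, so its difference is larger and its PSNR lower.

```python
import numpy as np

def psnr_of_difference(diff: np.ndarray, peak: float = 1.0) -> float:
    """Conventional PSNR with the difference image treated as noise."""
    mse = float(np.mean(np.asarray(diff, dtype=np.float64) ** 2))
    return 10.0 * float(np.log10(peak * peak / mse))

def pick_sharpest(diffs: list) -> int:
    """Index of the photo whose difference has the LOWEST PSNR,
    i.e. the one that changed most under blurring: the sharpest."""
    return int(np.argmin([psnr_of_difference(d) for d in diffs]))

# A larger difference image -> lower PSNR -> chosen as the sharpest.
small = np.full((8, 8), 0.01)             # barely changed by blurring
large = np.full((8, 8), 0.20)             # changed a lot by blurring
assert psnr_of_difference(large) < psnr_of_difference(small)
assert pick_sharpest([small, large]) == 1
```

With normalized pixel values in [0, 1] the mean squared error stays below 1, so this PSNR is also positive, consistent with the property claimed for the preset formula.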
  • The method for acquiring continuously captured photos obtains the continuously captured first photos, blurs each first photo to obtain its corresponding blurred photo, calculates the difference value between each first photo and its corresponding blurred photo, determines the sharpness of each first photo based on the difference value, and obtains the second photo from the first photos based on the sharpness. The second photo is thereby obtained from the continuously captured first photos quickly and accurately, improving the efficiency of selecting clear photos from multiple continuous photos.
  • FIG. 5 is a schematic diagram of the device for acquiring continuously captured photos provided by the present application.
  • The continuous-photo acquisition device 5 of this embodiment includes: a first acquisition module 501, a processing module 502, a calculation module 503, and a second acquisition module 504. Specifically:
  • the first obtaining module 501 is configured to obtain the first photo continuously shot
  • the processing module 502 is configured to perform blurring processing on each of the first photos to obtain respective blurred photos corresponding to each of the first photos;
  • the calculation module 503 is configured to calculate the difference value between each of the first photos and the corresponding blurred photos
  • the second acquisition module 504 is configured to determine the sharpness of each of the first photos based on the difference value, and obtain a second photo from the first photos based on the sharpness.
  • the processing module 502 includes:
  • a detection unit configured to separately detect a target area of each of the first photos, where the target area includes a face image and/or an object image;
  • the first processing unit is configured to perform blur filtering processing on the respective target regions corresponding to each of the first photos to obtain respective blurred photos corresponding to each of the first photos.
  • the detection unit is specifically configured to: use a trained image detection model to detect the target area of each photo, and the trained image detection model is a neural network model.
  • the second obtaining module 504 is specifically configured to: determine the sharpness of each of the first photos based on the difference value, and obtain the second photo with the highest sharpness from all the first photos.
  • the second acquisition module 504 includes:
  • the first calculation unit is configured to calculate the peak signal-to-noise ratio of the difference value
  • a determining unit configured to determine the sharpness of each of the first photos according to the peak signal-to-noise ratio
  • the obtaining unit is configured to obtain the second photo with the highest sharpness from all the first photos.
  • the calculation unit is specifically configured to: calculate the peak signal-to-noise ratio of the differential value based on a preset peak signal-to-noise ratio calculation formula, and the preset peak signal-to-noise ratio calculation formula is:
  • a ij represents the distance between the first pixel with the pixel coordinate (i, j) corresponding to the grayscale image of the first photo and the second pixel with the pixel coordinate (i, j) of the corresponding blurred photo
  • the difference value of i represents the abscissa of the pixel
  • j represents the ordinate of the pixel
  • n represents the width of the current photo
  • m represents the length of the current photo.
  • the processing module 502 is specifically configured to: respectively perform Gaussian filtering processing on each of the first photos to obtain respective fuzzy photos corresponding to each of the first photos.
  • the calculation module 503 includes:
  • the second processing unit is configured to perform grayscale processing on each of the first photos to obtain a grayscale image corresponding to each of the first photos;
  • the second calculation unit is configured to calculate the difference value between the grayscale image corresponding to each first photo and the blurred photo corresponding to each first photo.
  • Fig. 6 is a schematic diagram of a terminal provided by the present application.
  • the terminal 6 of this embodiment includes a processor 60, a memory 61, and a computer program 62 stored in the memory 61 and running on the processor 60, such as a continuous photo acquisition program.
  • When the processor 60 executes the computer program 62, the steps in the foregoing embodiments of the method for acquiring continuously captured photos are implemented, for example, steps S101 to S104 shown in FIG. 1.
  • Alternatively, when the processor 60 executes the computer program 62, the functions of the modules/units in the embodiment of the device for acquiring continuously captured photos are realized, for example, the functions of modules 501 to 504 shown in FIG. 5.
  • the computer program 62 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 61 and executed by the processor 60 to complete This application.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 62 in the terminal 6.
  • the computer program 62 may be divided into a first acquisition module, a processing module, a calculation module, and a second acquisition module (a module in a virtual device), and the specific functions of each module are as follows:
  • the first acquisition module is used to acquire the first photo continuously shot
  • a processing module which is configured to perform blur processing on each of the first photos to obtain a blurry photo corresponding to each of the first photos;
  • a calculation module configured to calculate the difference between each of the first photos and the corresponding blurred photos
  • the second acquisition module is configured to determine the sharpness of each of the first photos based on the pixel difference value, and acquire and store the second photo with the highest sharpness from all the first photos.
  • the terminal further includes a photographing module, and the photographing module is used to photograph the first photo.
  • In the embodiments provided in this application, it should be understood that the disclosed device/terminal device and method may be implemented in other ways.
  • The device/terminal device embodiments described above are merely illustrative.
  • The division into modules or units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware or of a software functional unit. If the integrated module/unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the above method embodiments of this application may also be completed by instructing the relevant hardware through a computer program.
  • The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the steps of the foregoing method embodiments may be implemented.
  • the computer program includes computer program code
  • the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on.
  • It should be noted that the content contained in the computer-readable medium may be appropriately added or deleted according to the requirements of legislation and patent practice in each jurisdiction. For example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present invention can be applied to the technical field of computers and relates to a method for obtaining continuously captured photos. The method comprises: obtaining first photos taken continuously; blurring each first photo separately to obtain a blurred photo corresponding to each first photo; calculating a difference value between each first photo and its corresponding blurred photo; and determining the sharpness of each first photo on the basis of the difference value, and obtaining a second photo from the first photos on the basis of the sharpness. Since, after each continuously captured first photo is blurred, the difference value between each first photo and its corresponding blurred photo is calculated, the sharpness of each first photo is determined on the basis of the difference value and the second photo is obtained from the first photos on the basis of the sharpness. The second photo is obtained quickly and accurately from the continuously captured first photos on the basis of the sharpness, so as to improve the efficiency of selecting a clear photo from multiple continuously captured photos.
PCT/CN2020/123355 2019-10-25 2020-10-23 Method for obtaining burst photos, intelligent terminal and storage medium WO2021078276A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911025549.5A CN112714246A (zh) 2019-10-25 2019-10-25 Method for obtaining burst photos, intelligent terminal and storage medium
CN201911025549.5 2019-10-25

Publications (1)

Publication Number Publication Date
WO2021078276A1 true WO2021078276A1 (fr) 2021-04-29

Family

ID=75540889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123355 WO2021078276A1 (fr) 2019-10-25 2020-10-23 Method for obtaining burst photos, intelligent terminal and storage medium

Country Status (2)

Country Link
CN (1) CN112714246A (fr)
WO (1) WO2021078276A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013206175A (ja) * 2012-03-28 2013-10-07 Fujitsu Ltd Image determination device, image determination method, and computer program for image determination
CN103581662A (zh) * 2012-07-26 2014-02-12 Tencent Technology (Shenzhen) Co., Ltd. Video sharpness measurement method and system
CN105531988A (zh) * 2013-09-09 2016-04-27 Apple Inc. Automatic selection of keeper images from a burst photo capture set
CN105654463A (zh) * 2015-11-06 2016-06-08 LeEco Mobile Intelligent Information Technology (Beijing) Co., Ltd. Image processing method and device applied during burst shooting
CN106570028A (zh) * 2015-10-10 2017-04-19 BYD Co., Ltd. Mobile terminal and method and device for deleting blurred images

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160277743A1 (en) * 2014-10-06 2016-09-22 Telefonaktiebolaget L M Ericsson (Publ) Evaluation Measure for HDR Video Frames
CN104394377B (zh) * 2014-12-08 2018-03-02 Zhejiang Public Information Industry Co., Ltd. Method and device for identifying blur anomalies in surveillance images
CN106934806B (zh) * 2017-03-09 2019-09-10 Southeast University No-reference defocus blur region segmentation method based on structural sharpness
CN109215010B (zh) * 2017-06-29 2021-08-31 Shenyang SIASUN Robot & Automation Co., Ltd. Image quality judgment method and robot face recognition system
CN108346139A (zh) * 2018-01-09 2018-07-31 Alibaba Group Holding Ltd. Image screening method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658135A (zh) * 2021-08-17 2021-11-16 China University of Mining and Technology Method and system for detecting foreign objects on a belt based on fuzzy-PID adaptive light control
CN113658135B (zh) * 2021-08-17 2024-02-02 China University of Mining and Technology Method and system for detecting foreign objects on a belt based on fuzzy-PID adaptive light control

Also Published As

Publication number Publication date
CN112714246A (zh) 2021-04-27

Similar Documents

Publication Publication Date Title
US11336840B2 (en) Matching foreground and virtual background during a video communication session
JP6395810B2 (ja) Reference image selection for motion ghost filtering
JP6905602B2 (ja) Image illumination method, apparatus, electronic device, and storage medium
JP5389903B2 (ja) Optimal image selection
CN111327824B (zh) Shooting parameter selection method and apparatus, storage medium, and electronic device
WO2022183902A1 (fr) Image sharpness determination method and apparatus, device, and storage medium
CN103716547A (zh) Smart-mode photographing method
JP2007087253A (ja) Image correction method and apparatus
US20210390341A1 (en) Image denoising model training method, imaging denoising method, devices and storage medium
US11917158B2 (en) Static video recognition
CN107911625A (zh) Light metering method and apparatus, readable storage medium, and computer device
CN110807759A (zh) Photo quality evaluation method and apparatus, electronic device, and readable storage medium
CN110728705B (zh) Image processing method and apparatus, storage medium, and electronic device
WO2019029573A1 (fr) Image blurring method, computer-readable storage medium, and computer device
WO2022151813A1 (fr) Electronic device, main image signal processor, and image processing method
WO2021128593A1 (fr) Facial image processing method, apparatus, and system
WO2017177559A1 (fr) Image management method and apparatus
CN106773453B (zh) Camera exposure method and apparatus, and mobile terminal
WO2021078276A1 (fr) Method for obtaining burst photos, intelligent terminal and storage medium
WO2015196681A1 (fr) Image processing method and electronic device
JP2018084861A (ja) Information processing device, information processing method, and information processing program
JP6977483B2 (ja) Image processing device, image processing method, image processing system, and program
Zhu et al. No-reference image quality assessment for photographic images of consumer device
WO2023273111A1 (fr) Image processing method and apparatus, computer device, and storage medium
US20080199073A1 (en) Red eye detection in digital images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20879997

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20879997

Country of ref document: EP

Kind code of ref document: A1