WO2020206912A1 - Image definition recognition method, image definition recognition apparatus, and terminal device - Google Patents

Image definition recognition method, image definition recognition apparatus, and terminal device

Info

Publication number
WO2020206912A1
WO2020206912A1 · PCT/CN2019/103283 · CN2019103283W
Authority
WO
WIPO (PCT)
Prior art keywords
image
recognized
sub
target object
area
Prior art date
Application number
PCT/CN2019/103283
Other languages
French (fr)
Chinese (zh)
Inventor
惠慧
严明洋
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020206912A1 publication Critical patent/WO2020206912A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection

Definitions

  • This application belongs to the field of image processing technology, and in particular relates to an image definition recognition method, image definition recognition device, terminal equipment, and computer non-volatile readable storage medium.
  • the current image definition recognition method usually determines the image definition based on all the pixels in the entire image.
  • the human eye's perception of image sharpness is often affected by some areas in the image, and therefore, the sharpness recognition result obtained by the current image sharpness recognition method may be different from the sharpness perceived by the human eye.
  • this application provides an image definition recognition method, an image definition recognition device, terminal equipment, and a computer non-volatile readable storage medium, which can, to a certain extent, make the recognized image definition closer to the definition perceived by the human eye.
  • the first aspect of this application provides an image definition recognition method, including:
  • acquiring an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized;
  • intercepting, according to the position of each target object in the image to be recognized, one or more sub-images containing the target object from the image to be recognized;
  • recognizing the image definition of each sub-image, and determining the image definition of the image to be recognized according to the recognized image definition of each sub-image.
  • the second aspect of the present application provides an image definition recognition device, including:
  • the target acquisition module is used to acquire the image to be recognized containing one or more target objects and the position of each target object in the image to be recognized;
  • the target interception module is configured to intercept one or more sub-images containing the target object in the image to be recognized according to the position of each target object in the image to be recognized;
  • the definition recognition module is used to recognize the image definition of each sub-image, and determine the image definition of the image to be recognized according to the recognized image definition of each sub-image.
  • the third aspect of the present application provides a terminal device, including a memory, a processor, and computer-readable instructions stored in the memory and capable of running on the processor.
  • when the processor executes the computer-readable instructions, the steps of the method in the first aspect are implemented.
  • a fourth aspect of the present application provides a computer non-volatile readable storage medium.
  • the computer non-volatile readable storage medium stores computer readable instructions.
  • when the computer-readable instructions are executed by a processor, the steps of the method in the first aspect are implemented.
  • the fifth aspect of the present application provides a computer-readable instruction product.
  • the computer-readable instruction product includes computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the steps of the method in the first aspect are implemented.
  • this application provides a method for recognizing image clarity.
  • obtain an image to be recognized that contains one or more target objects and obtain the position of each target object in the image to be recognized.
  • assuming the above target object is a dog, an image X to be recognized containing a dog and the position of the dog in image X can be obtained;
  • then, one or more sub-images containing the target object are intercepted in the image to be recognized; that is, a sub-image Y containing the dog can be intercepted from image X;
  • finally, the image definition of each sub-image is recognized, and the image definition of the image to be recognized is determined according to the recognized image definition of each sub-image; that is, the image definition of sub-image Y is recognized, and the image definition of image X is determined according to the image definition of sub-image Y.
  • the image clarity of the image to be recognized is based on the image clarity of the image area where the target object is located.
  • when the human eye observes an image, it is often attracted by a specific object; therefore, the human eye's perception of image clarity is largely determined by the image area where the target object is located. Consequently, the image-clarity recognition result of this application will be closer to the clarity perceived by the human eye.
  • FIG. 1 is a schematic diagram of the implementation process of an image definition recognition method provided by Embodiment 1 of the present application;
  • FIG. 2 is a schematic diagram of a method for capturing sub-images according to Embodiment 1 of the present application;
  • FIG. 3 is a schematic diagram of the implementation process of another image definition recognition method provided by Embodiment 2 of the present application.
  • FIG. 4 is a schematic structural diagram of an image definition recognition device provided by Embodiment 3 of the present application.
  • Fig. 5 is a schematic structural diagram of a terminal device provided in Embodiment 4 of the present application.
  • the image definition recognition method provided by the embodiments of the present application is applicable to terminal devices.
  • the terminal devices include, but are not limited to, smart phones, tablet computers, notebooks, smart wearable devices, desktop computers, and cloud servers.
  • the following description assumes the image definition recognition method is applied to a terminal device (such as a smart phone). Referring to FIG. 1, the image definition recognition method of the first embodiment of this application includes:
  • step S101 an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized are obtained;
  • the aforementioned target objects are objects that are easily captured by human eyes, such as portraits, dogs, cats, flowers, and so on.
  • step S101 may include the following steps:
  • Step A Obtain the image to be processed
  • Step B Perform target detection on the image to be processed to obtain a detection result, which indicates whether a target object is detected in the image to be processed; if a target object is detected, the detection result also indicates the position of each target object in the image to be processed;
  • Step C If the detection result indicates that a target object is detected in the image to be processed, the image to be processed is determined as the image to be recognized, and the position of each target object in the image to be recognized is determined based on the detection result.
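  • As an illustrative sketch of steps A–C, the acquisition flow can be written as follows; `detect_objects` is a hypothetical callable standing in for any trained detection model, returning a list of (xa, ya, xb, yb) boxes:

```python
def acquire_image_to_recognize(image, detect_objects):
    # Step B: perform target detection on the image to be processed
    boxes = detect_objects(image)
    # Step C: if any target object was detected, the image to be processed
    # becomes the image to be recognized, and the detection result gives
    # the position of each target object in it
    if boxes:
        return image, boxes
    return None, []

# Usage with a stub detector that always "finds" one target object
img = "image-to-process"  # stand-in for actual pixel data
recognized, positions = acquire_image_to_recognize(img, lambda _: [(10, 10, 50, 50)])
```

When the detector returns no boxes, the image simply does not participate in clarity recognition, mirroring the branch in Step C.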
  • the method for acquiring the image to be processed in the foregoing step A may be: when it is detected that the user takes an image through the camera, the image captured by the camera is determined as the image to be processed.
  • the foregoing is only one specific implementation of step S101; step S101 may also be implemented in other ways. For example, the terminal device may output the prompt message "Dear user, please enter an image containing a target object (person, dog, or cat)"; the user may then select an image containing a person, dog, or cat from locally stored images according to the prompt, and the terminal device obtains the image selected by the user and determines it as the above image to be recognized.
  • the user can also inform the terminal device of the position of the target object in the image to be recognized, for example, by box-selecting the target object in the image.
  • the method of performing target detection on the image to be processed to obtain the detection result may be: performing target detection on the image to be processed using a trained target detection model, and obtaining the detection result output by the target detection model (the method of using a target detection model to perform target detection on an image is an existing technology and will not be repeated here).
  • this application does not limit the target detection method specifically used in step B.
  • the "image to be recognized" in step S101 may be an image taken by the user through the camera APP of the terminal device; or a frame of the preview image collected by the camera APP; or an image saved locally on the terminal device; or a frame of a video watched online or saved locally.
  • This application does not limit the source of the aforementioned image to be identified.
  • step S102 according to the position of each target object in the image to be recognized, one or more sub-images containing the target object are intercepted in the image to be recognized;
  • the number of sub-images obtained through this step S102 may be one or multiple, and the number of target objects contained in each sub-image may be one or multiple. This application does not limit the number of captured sub-images and the number of target objects contained in each sub-image.
  • FIG. 2 is used below to describe in detail how the sub-images are captured in step S102.
  • assume the image to be recognized obtained in step S101 is an image 201 containing a target object 202 (i.e., a portrait), and the position of the target object 202 in the image 201 obtained in step S101 is {A(xa, ya), B(xb, yb)} (that is, the position of the target object 202 is the rectangular area defined by point A and point B, as shown by the box in Figure 2(a)).
  • the sub-image containing the target object 202 can be intercepted according to the coordinates of point A and point B.
  • the rectangular area composed of point A and point B can be directly used as the sub-image, or
  • the rectangular area formed by point A and point B may be subjected to a dilation (expansion) operation, and the dilated image area used as the sub-image containing the target object 202 (note that this application does not limit the method of obtaining the sub-image).
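  • A minimal sketch of the dilation-based interception just described, assuming positions are given as (xa, ya, xb, yb) rectangles; the 10% margin is an illustrative choice, not a value mandated by the application:

```python
import numpy as np

def crop_with_dilation(image, box, margin_ratio=0.1):
    """Enlarge ("dilate") the rectangle defined by points A and B by a
    margin on each side, clamp it to the image bounds, and crop."""
    h, w = image.shape[:2]
    xa, ya, xb, yb = box
    mx = int((xb - xa) * margin_ratio)
    my = int((yb - ya) * margin_ratio)
    xa, ya = max(0, xa - mx), max(0, ya - my)
    xb, yb = min(w, xb + mx), min(h, yb + my)
    return image[ya:yb, xa:xb]

img = np.zeros((100, 200), dtype=np.uint8)        # 100 rows x 200 columns
sub = crop_with_dilation(img, (50, 20, 150, 80))  # 100 x 60 box, dilated
```

Clamping to the image bounds keeps the dilated box valid even when the target object touches the image border.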
  • as shown in Figure 2(b), assume the image to be recognized obtained in step S101 is an image 203 containing two target objects, target object 204 and target object 205. The position of the target object 204 in the image 203 obtained in step S101 is {A(xa, ya), B(xb, yb)}, and the position of the target object 205 in the image 203 is {C(xc, yc), D(xd, yd)}.
  • one sub-image containing both the target object 204 and the target object 205 may be intercepted, or two sub-images may be intercepted: a sub-image containing only the target object 204 and a sub-image containing only the target object 205. That is, when the image to be recognized contains multiple target objects, the number of sub-images obtained in step S102 may be one or more.
  • step S103 the image clarity of each sub-image is identified, and the image clarity of the image to be identified is determined according to the identified image clarity of each sub-image;
  • in this step, the image definition of each sub-image acquired in step S102 is recognized (the image definition of each sub-image may be recognized through a trained neural network model, or through the Tenengrad gradient method, the Laplacian gradient method, the variance method, etc.; this application does not limit the method of recognizing the image definition of each sub-image), and then the acquired image definitions of the sub-images may be averaged, or weighted and averaged, to obtain the image definition of the image to be recognized.
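  • As a concrete example of one of the methods named above, the Laplacian gradient method is commonly realized as the variance of a Laplacian response (higher value = sharper). The following is a minimal numpy sketch under that assumption, not the application's mandated implementation:

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score of a 2-D grayscale array: variance of the
    4-neighbour Laplacian, computed without external image libraries."""
    g = gray.astype(np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]    # up + down neighbours
           + g[1:-1, :-2] + g[1:-1, 2:])   # left + right neighbours
    return lap.var()

sharp = np.tile(np.array([0.0, 255.0] * 8), (16, 1))  # alternating columns
flat = np.full((16, 16), 128.0)                        # featureless patch
```

A high-frequency sub-image yields a larger score than a flat one, so per-sub-image scores of this kind can feed directly into the (weighted) averaging step described above.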
  • specifically, the weight value corresponding to the image definition of each sub-image may be determined according to the position of each sub-image in the image to be recognized and/or the proportion of the area of the image to be recognized occupied by each sub-image. For example, when a sub-image is located in the middle area of the image to be recognized, or occupies a large proportion of its area, the image definition of that sub-image may correspond to a larger weight value; then, according to the weight value of the image definition of each sub-image, a weighted average is taken over the image definitions of all sub-images to obtain the image definition of the image to be recognized.
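  • The weighted averaging just described can be sketched as follows; using each sub-image's share of the total image area as its weight is one illustrative scheme (position- or category-based weights would slot in the same way):

```python
def fuse_sharpness(scores, boxes, image_size):
    """Weighted average of per-sub-image sharpness scores, weighting each
    sub-image by its share of the total image area."""
    w, h = image_size
    weights = [((xb - xa) * (yb - ya)) / (w * h) for xa, ya, xb, yb in boxes]
    total = sum(weights)
    return sum(s * wt for s, wt in zip(scores, weights)) / total

# The larger sub-image pulls the overall score toward its own sharpness
score = fuse_sharpness([10.0, 40.0], [(0, 0, 50, 50), (0, 0, 100, 50)], (100, 100))
```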
  • in addition, the image clarity of the image areas other than the sub-images in the image to be recognized may also be relied on to determine the image clarity of the image to be recognized.
  • as shown in Figure 2(b), when determining the image clarity of the image 203, in addition to relying on the image clarity of the sub-image 204 and the sub-image 205, the image clarity of the image area of the image 203 outside the sub-image 204 and the sub-image 205 may also be relied on. In this case, the image clarity of the sub-image 204 and the sub-image 205 can be assigned larger weight values, and the image clarity of the image area outside the sub-image 204 and the sub-image 205 a smaller weight value, so as to obtain the final image clarity of the image 203.
  • the method further includes the following steps: judging whether the image clarity of the image to be recognized is less than a preset threshold; if it is less than the preset threshold, super-resolution reconstruction is performed on the image to be recognized.
  • in the solution of this application, the image clarity of the image to be recognized is based on the image clarity of the image area where the target object is located. Normally, when the human eye observes an image, it is often attracted by a specific object in the image; therefore, the human eye's perception of image clarity is largely determined by the image area where the target object is located. Consequently, the image-clarity recognition result of the embodiments of this application will be closer to the clarity perceived by the human eye.
  • the image definition recognition method of the second embodiment of the present application includes:
  • step S301 an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized are obtained;
  • the implementation of step S301 is completely the same as that of step S101 in the first embodiment; for details, refer to the description of step S101 in the first embodiment.
  • step S302 according to the position of each target object in the image to be recognized, the union of the image regions indicated by each position is determined;
  • the union of the image area occupied by each target object in the image to be recognized needs to be obtained.
  • step S303 calculate the first area ratio, that is, the proportion of the area of the image to be recognized occupied by the image area indicated by the union;
  • for the example of Figure 2(a), the first area ratio is: the area of the rectangular area defined by point A and point B / the area of image 201;
  • for the example of Figure 2(b), the first area ratio is: (the area of the rectangular area defined by points A and B + the area of the rectangular area defined by points C and D) / the area of image 203.
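  • A minimal sketch of steps S302–S303: computing the union on a grid of covered cells ensures overlapping target rectangles are not double-counted (the Figure 2(b) formula above assumes disjoint rectangles). The cell-by-cell loop is illustrative, not an efficient implementation:

```python
def first_area_ratio(boxes, image_size):
    """Proportion of the image area covered by the union of the
    target-object rectangles (xa, ya, xb, yb)."""
    w, h = image_size
    covered = set()
    for xa, ya, xb, yb in boxes:
        for y in range(ya, yb):
            for x in range(xa, xb):
                covered.add((x, y))  # a cell in the union counts once
    return len(covered) / (w * h)

# Two overlapping 50x100 boxes cover columns 0..74 of a 100x100 image
ratio = first_area_ratio([(0, 0, 50, 100), (25, 0, 75, 100)], (100, 100))
```

In step S304, `ratio` would then be compared against the first preset ratio to decide whether interception is worthwhile.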
  • step S304 it is determined whether the first area ratio is less than the first preset ratio, and if it is less than the first preset ratio, one or more sub-images containing the target object are intercepted in the image to be recognized;
  • if the first area ratio is large (for example, greater than or equal to the first preset ratio), most of the image area of the image to be recognized is occupied by target objects. In this case, there is no need to cut out the image areas where the target objects are located, and a traditional image definition recognition method can be used directly to recognize the image definition of the image to be recognized.
  • if the above first area ratio is less than the above first preset ratio, then besides the target objects, the image to be recognized also contains content that does not easily capture the human eye. In this case, the image areas where the target objects are located can be cut out, and the cut-out sub-images used to determine the image definition of the image to be recognized.
  • for the specific implementation of the interception in step S304, refer to the first embodiment.
  • a specific implementation manner of "intercepting one or more sub-images containing the target object in the image to be recognized” is provided:
  • Step D According to the position of each target object in the image to be recognized, calculate the second area ratio of each image area indicated by each position, that is, the proportion of the area of the image to be recognized occupied by that image area;
  • Step E Determine, based on the second area ratios, whether there is an image area whose second area ratio is greater than a second preset ratio, where the second preset ratio is less than the first preset ratio;
  • Step F If it exists, perform an expansion operation on each image area larger than the second preset ratio to obtain each corrected image area;
  • Step G Determine each corrected image area as each sub-image
  • through steps D to F, the influence of target objects that occupy only a small proportion of the area of the image to be recognized on the definition of the image to be recognized can be ignored.
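  • Steps D–F above can be sketched as a simple filter on the per-box area ratios; the 0.05 value for the second preset ratio is an illustrative assumption, and the expansion (dilation) of each kept box is omitted here for brevity:

```python
def select_sub_image_boxes(boxes, image_size, second_preset_ratio=0.05):
    """Keep only the boxes whose own area ratio exceeds the second preset
    ratio, so that tiny target objects are ignored (steps D and E)."""
    w, h = image_size
    kept = []
    for xa, ya, xb, yb in boxes:
        # Step D: this box's share of the image area
        if (xb - xa) * (yb - ya) / (w * h) > second_preset_ratio:
            kept.append((xa, ya, xb, yb))  # Step E/F: large enough to keep
    return kept

boxes = [(0, 0, 40, 40), (0, 0, 5, 5)]       # one large, one tiny target
kept = select_sub_image_boxes(boxes, (100, 100))
```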
  • step S305 the image clarity of each sub-image is identified, and the image clarity of the image to be identified is determined according to the identified image clarity of each sub-image;
  • the implementation of step S305 is completely the same as that of step S103 in Embodiment 1; for details, refer to the description of step S103 in Embodiment 1.
  • the technical solution defined in the second embodiment of the present application performs the interception operation only when the proportion of the area occupied by the target objects in the image to be recognized is less than a certain value; therefore, compared with the first embodiment, the technical solution of the second embodiment can reduce the processing burden of the terminal device to a certain extent.
  • the technical solution provided in the above step D-step F in the second embodiment of the present application can reduce the number of sub-images to a certain extent. Therefore, the processing burden on the terminal device can be further reduced.
  • like the first embodiment, the second embodiment of the present application can also make the image definition recognition result of the image to be recognized closer to the image definition perceived by the human eye.
  • referring to FIG. 4, the image clarity recognition device 400 includes:
  • the target acquisition module 401 is configured to acquire an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized;
  • the target capturing module 402 is configured to capture one or more sub-images containing the target object in the image to be recognized according to the position of each target object in the image to be recognized;
  • the definition recognition module 403 is configured to recognize the image definition of each sub-image, and determine the image definition of the image to be recognized according to the recognized image definition of each sub-image.
  • the foregoing target interception module 402 includes:
  • a union determining unit configured to determine the union of the image regions indicated by each position according to the position of each target object in the image to be recognized
  • the first ratio unit is used to calculate the ratio of the first area of the image to be recognized by the image area indicated by the union;
  • the target capture unit is configured to capture one or more sub-images containing the target object in the image to be recognized if the ratio is smaller than the first preset ratio.
  • the foregoing target interception unit includes:
  • a second ratio subunit, used to calculate, if the first area ratio is less than the first preset ratio, the second area ratio of the image area indicated by each position according to the position of each target object in the image to be recognized;
  • a judging subunit for judging whether there is an image area with a second area ratio greater than a second preset ratio based on the second area ratio, where the second preset ratio is less than the first preset ratio;
  • an expansion operation subunit, used to perform, if such an image area exists, an expansion operation on each image area whose second area ratio is greater than the second preset ratio to obtain each corrected image area;
  • the aforementioned clarity recognition module 403 includes:
  • the weight determination unit is used to determine the weight value corresponding to the image clarity of each sub-image according to the category of the target object contained in each sub-image, the position of each sub-image in the image to be recognized, and/or the proportion of the area of the image to be recognized occupied by each sub-image;
  • the weighted average unit is used for weighting and averaging all the image definitions according to the weight value of each image definition to obtain the image definition of the image to be recognized.
  • the aforementioned image definition recognition device 400 further includes:
  • a judging module for judging whether the image clarity of the image to be recognized is less than a preset threshold
  • the reconstruction module is configured to perform super-resolution reconstruction on the image to be recognized if it is less than the preset threshold.
  • the foregoing target acquisition module 401 includes:
  • An image acquisition unit for acquiring an image to be processed
  • the target detection unit is used to perform target detection on the image to be processed and obtain a detection result, which indicates whether a target object is detected in the image to be processed; if a target object is detected in the image to be processed, the detection result also indicates the position of each target object in the image to be processed;
  • the target acquisition unit is configured to, if the detection result indicates that the target object is detected in the image to be processed, determine the image to be processed as the image to be recognized, and determine that each target object is in the image to be recognized based on the detection result. The position in the image.
  • the aforementioned image acquisition unit is specifically configured to: when it is detected that the user takes an image through a camera, the image taken by the camera is determined as the aforementioned image to be processed.
  • FIG. 5 is a schematic diagram of a terminal device provided in Embodiment 4 of the present application.
  • the terminal device 500 of this embodiment includes: a processor 501, a memory 502, and computer-readable instructions 503 stored in the foregoing memory 502 and running on the foregoing processor 501.
  • when the processor 501 executes the computer-readable instructions 503, the steps in the foregoing method embodiments are implemented, for example, steps S101 to S103 shown in FIG. 1.
  • the foregoing computer-readable instructions 503 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 502 and executed by the processor 501 to complete this application.
  • the one or more modules/units may be a series of computer-readable instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 503 in the terminal device 500.
  • the above-mentioned computer-readable instruction 503 can be divided into a target acquisition module, a target interception module, and a definition recognition module, and the specific functions of each module are as follows:
  • the foregoing terminal device may include, but is not limited to, a processor 501 and a memory 502.
  • FIG. 5 is only an example of the terminal device 500 and does not constitute a limitation on the terminal device 500, which may include more or fewer components than shown, a combination of certain components, or different components.
  • the aforementioned terminal device may also include input and output devices, network access devices, buses, and so on.
  • the so-called processor 501 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the foregoing memory 502 may be an internal storage unit of the foregoing terminal device 500, such as a hard disk or memory of the terminal device 500.
  • the memory 502 may also be an external storage device of the terminal device 500, for example, a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 500, etc.
  • the aforementioned memory 502 may also include both an internal storage unit of the aforementioned terminal device 500 and an external storage device.
  • the aforementioned memory 502 is used to store the aforementioned computer-readable instructions and other programs and data required by the aforementioned terminal device.
  • the aforementioned memory 502 may also be used to temporarily store data that has been output or will be output.
  • the disclosed apparatus/terminal device and method may be implemented in other ways.
  • the device/terminal device embodiments described above are merely illustrative.
  • the division of the above modules or units is only a logical function division; in actual implementation, there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • when the above integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, this application implements all or part of the processes in the above embodiment methods, which can also be completed by instructing relevant hardware through computer-readable instructions. The computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when the computer-readable instructions are executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Abstract

An image definition recognition method, an image definition recognition apparatus, and a terminal device. The method comprises: obtaining an image to be recognized that comprises one or more target objects, and the position of each target object in the image to be recognized (S101); according to the position of each target object in the image to be recognized, clipping one or more sub images comprising the target object from the image to be recognized (S102); and recognizing the image definition of each sub image, and according to the recognized image definition of each sub image, determining the image definition of the image to be recognized (S103). The method can make the image definition recognition result of the image to be recognized be closer to the image definition sensed by human eyes.

Description

图像清晰度识别方法、图像清晰度识别装置及终端设备Image definition recognition method, image definition recognition device and terminal equipment
本申请要求于2019年04月11日递交的申请号为CN 201910288549.8、发明名称为“图像清晰度识别方法、图像清晰度识别装置及终端设备”的中国专利申请的优先权,该中国专利申请的整体内容以参考的方式结合在本申请中。This application requires that the application number submitted on April 11, 2019 is CN 201910288549.8, the priority of the Chinese patent application with the title of "Image Clarity Recognition Method, Image Clarity Recognition Device and Terminal Equipment", the entire content of the Chinese patent application is incorporated into this application by reference.
技术领域Technical field
本申请属于图像处理技术领域,尤其涉及一种图像清晰度识别方法、图像清晰度识别装置、终端设备及计算机非易失性可读存储介质。This application belongs to the field of image processing technology, and in particular relates to an image definition recognition method, image definition recognition device, terminal equipment, and computer non-volatile readable storage medium.
Background
At present, there are many methods for recognizing image definition, such as the Tenengrad gradient method, the Laplacian gradient method, and the variance method. Current image definition recognition methods usually determine the image definition based on all the pixels in the entire image.
However, the human eye's perception of image definition is often affected by certain areas in the image. As a result, the definition recognition result obtained by current image definition recognition methods may differ from the definition perceived by the human eye.
Technical Problem
In view of this, this application provides an image definition recognition method, an image definition recognition apparatus, a terminal device, and a computer non-volatile readable storage medium, which can, to a certain extent, make the recognized image definition closer to the image definition perceived by the human eye.
Technical Solution
A first aspect of this application provides an image definition recognition method, including:
acquiring an image to be recognized containing one or more target objects, as well as the position of each target object in the image to be recognized;
cropping, according to the position of each target object in the image to be recognized, one or more sub-images containing the target objects from the image to be recognized; and
recognizing the image definition of each of the sub-images, and determining the image definition of the image to be recognized according to the recognized image definition of each of the sub-images.
A second aspect of this application provides an image definition recognition apparatus, including:
a target acquisition module, configured to acquire an image to be recognized containing one or more target objects, as well as the position of each target object in the image to be recognized;
a target cropping module, configured to crop, according to the position of each target object in the image to be recognized, one or more sub-images containing the target objects from the image to be recognized; and
a definition recognition module, configured to recognize the image definition of each of the sub-images, and determine the image definition of the image to be recognized according to the recognized image definition of each of the sub-images.
A third aspect of this application provides a terminal device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the steps of the method of the first aspect when executing the computer-readable instructions.
A fourth aspect of this application provides a computer non-volatile readable storage medium storing computer-readable instructions, wherein the computer-readable instructions, when executed by a processor, implement the steps of the method of the first aspect.
A fifth aspect of this application provides a computer-readable instruction product including computer-readable instructions, wherein the computer-readable instructions, when executed by one or more processors, implement the steps of the method of the first aspect.
Beneficial Effects
It can be seen from the above that this application provides an image definition recognition method. First, an image to be recognized containing one or more target objects is acquired, and the position of each target object in the image to be recognized is obtained; for example, if the target object is a dog, an image X to be recognized containing a dog may be acquired, together with the position of the dog in the image X. Second, according to the position of each target object in the image to be recognized, one or more sub-images containing the target objects are cropped from the image to be recognized; that is, after the position of the dog in the image X is obtained, a sub-image Y containing the dog may be cropped from the image X. Finally, the image definition of each sub-image is recognized, and the image definition of the image to be recognized is determined according to the recognized image definition of each sub-image; that is, the image definition of the sub-image Y is recognized, and the image definition of the image X is determined according to the image definition of the sub-image Y (for example, the image definition of the sub-image Y may be directly taken as the image definition of the image X). It can thus be seen that, in the technical solution provided by this application, the image definition of the image to be recognized is based on the image definition of the image area where the target object is located. Normally, when observing an image, the human eye tends to be attracted by specific objects in the image; therefore, the human eye's perception of image definition is largely determined by the image area where the target object is located. Accordingly, the image definition recognition result of this application is closer to the image definition perceived by the human eye.
Brief Description of the Drawings
In order to illustrate the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image definition recognition method provided in Embodiment 1 of this application;
Fig. 2 is a schematic diagram of a sub-image cropping method provided in Embodiment 1 of this application;
Fig. 3 is a schematic flowchart of another image definition recognition method provided in Embodiment 2 of this application;
Fig. 4 is a schematic structural diagram of an image definition recognition apparatus provided in Embodiment 3 of this application;
Fig. 5 is a schematic structural diagram of a terminal device provided in Embodiment 4 of this application.
Embodiments of the Invention
In the following description, specific details are set forth for the purpose of explanation rather than limitation. However, it should be clear to those skilled in the art that this application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary details do not obstruct the description of this application.
The image definition recognition method provided in the embodiments of this application is applicable to terminal devices. Illustratively, such terminal devices include, but are not limited to, smart phones, tablet computers, notebooks, smart wearable devices, desktop computers, and cloud servers.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the existence of the described features, wholes, steps, operations, elements, and/or components, but does not exclude the existence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the terms used in the specification of this application are only for the purpose of describing specific embodiments and are not intended to limit this application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the specification and appended claims of this application refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
In addition, in the description of this application, the terms "first", "second", and so on are only used to distinguish the description, and cannot be understood as indicating or implying relative importance.
In order to illustrate the technical solutions described in this application, specific embodiments are described below.
The following describes the image definition recognition method provided in Embodiment 1 of this application. The method is applied to a terminal device (such as a smart phone). Referring to Fig. 1, the image definition recognition method of Embodiment 1 of this application includes:
In step S101, an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized are acquired.
In the embodiments of this application, the target objects are objects that the human eye easily captures, such as portraits, dogs, cats, and flowers.
In the embodiments of this application, the specific implementation of step S101 may include the following steps:
Step A: acquiring an image to be processed;
Step B: performing target detection on the image to be processed to obtain a detection result, where the detection result indicates whether a target object is detected in the image to be processed and, if a target object is detected in the image to be processed, indicates the position of each target object in the image to be processed;
Step C: if the detection result indicates that a target object is detected in the image to be processed, determining the image to be processed as the image to be recognized, and determining, according to the detection result, the position of each target object in the image to be recognized.
The image to be processed in step A may be acquired as follows: when it is detected that the user takes an image through a camera, the image captured by the camera is determined as the image to be processed.
Those skilled in the art should note that steps A to C are only one specific implementation of step S101; step S101 may also have other specific implementations. For example, the terminal device may output the prompt message "Dear user, please input an image containing a target object (a portrait, dog, or cat)", and the user may then, according to the prompt, select an image containing a portrait, dog, or cat from locally stored images. The terminal device acquires the image selected by the user and determines it as the image to be recognized. In addition, the position of the target object in the image to be recognized may also be communicated to the terminal device by the user; for example, the user may inform the terminal device of the position of the target object in the image to be recognized by drawing a selection box around the target object.
In step B, the method of performing target detection on the image to be processed to obtain the detection result may be: performing target detection on the image to be processed using a trained target detection model, and obtaining the detection result output by the target detection model (the method of performing target detection on an image using a target detection model is existing technology and will not be repeated here). In addition, those skilled in the art should note that this application does not limit the target detection method specifically adopted in step B.
Furthermore, in the embodiments of this application, the "image to be recognized" in step S101 may be an image taken by the user through the camera APP of the terminal device; or a frame of the preview picture collected by the camera APP or video camera APP of the terminal device; or an image saved locally on the terminal device; or a frame of a video watched online or saved locally. This application does not limit the source of the image to be recognized.
In step S102, according to the position of each target object in the image to be recognized, one or more sub-images containing the target objects are cropped from the image to be recognized.
In the embodiments of this application, the number of sub-images acquired through step S102 may be one or more, and the number of target objects contained in each sub-image may be one or more. This application does not limit the number of cropped sub-images or the number of target objects contained in each sub-image.
In order to describe the specific implementation of step S102 more clearly, how to crop sub-images is described in detail below with reference to Fig. 2.
As shown in Fig. 2(a), assume that the image to be recognized acquired in step S101 is image 201, which contains one target object 202 (a portrait), and that the position of the target object 202 in image 201 acquired in step S101 is {A(x_a, y_a), B(x_b, y_b)} (that is, the position of the target object 202 is the rectangular area defined by point A and point B, as shown by the dashed box in Fig. 2(a)). Then, in step S102, the sub-image containing the target object 202 may be cropped according to the coordinates of point A and point B. In this application, the rectangular area defined by point A and point B may be used directly as the sub-image, or a dilation operation may be performed on the rectangular area defined by point A and point B, with the dilated image area used as the sub-image containing the target object 202 (those skilled in the art should note that this application does not limit the way the sub-image is obtained).
As shown in Fig. 2(b), assume that the image to be recognized acquired in step S101 is image 203, which contains two target objects, namely target object 204 and target object 205. The position of the target object 204 in image 203 acquired in step S101 is {A(x_a, y_a), B(x_b, y_b)}, and the position of the target object 205 in image 203 is {C(x_c, y_c), D(x_d, y_d)}. Then, in step S102, a single sub-image containing both target object 204 and target object 205 may be cropped according to the coordinates of points A, B, C, and D; alternatively, two sub-images may be cropped, one containing only target object 204 and the other containing only target object 205. That is, when the image to be recognized contains multiple target objects, the number of sub-images acquired in step S102 may be one or more.
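The cropping just described can be sketched in a few lines of Python. This is an illustrative sketch, not the patent's implementation: the function name `crop_sub_image` is invented here, the image is modeled as a list of pixel rows, and the `pad` parameter stands in for the optional dilation step by simply expanding the bounding box on every side (clamped to the image bounds):

```python
def crop_sub_image(image, box, pad=0):
    """Crop the rectangle box = (xa, ya, xb, yb) from `image`
    (a list of pixel rows), optionally expanding it by `pad`
    pixels on every side as a simple stand-in for dilation."""
    xa, ya, xb, yb = box
    h, w = len(image), len(image[0])
    # Clamp the (possibly expanded) box to the image bounds.
    x0, y0 = max(0, xa - pad), max(0, ya - pad)
    x1, y1 = min(w, xb + pad), min(h, yb + pad)
    return [row[x0:x1] for row in image[y0:y1]]

# A 4x6 toy "image" whose pixels encode their own coordinates.
img = [[(x, y) for x in range(6)] for y in range(4)]
sub = crop_sub_image(img, (1, 1, 3, 3), pad=1)  # expands to (0,0)-(4,4)
print(len(sub), len(sub[0]))  # 4 4
```

With `pad=0` the function returns exactly the rectangle defined by points A and B; a positive `pad` yields the enlarged region.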
In step S103, the image definition of each sub-image is recognized, and the image definition of the image to be recognized is determined according to the recognized image definition of each sub-image.
In the embodiments of this application, the image definition of each sub-image acquired in step S102 is recognized (the image definition of each sub-image may be recognized by a trained neural network model, or by the Tenengrad gradient method, the Laplacian gradient method, the variance method, or the like; this application does not limit the method of recognizing the image definition of each sub-image), and the acquired image definitions of the sub-images may then be averaged, or weighted-averaged, to obtain the image definition of the image to be recognized.
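As one concrete instance of the variance method named above, a minimal pure-Python sketch follows; the function name and the toy 2x2 patches are illustrative:

```python
def variance_sharpness(gray):
    """Variance-method definition score for a grayscale image
    (a list of rows of intensities): the variance of all pixel
    values. Blurrier patches have less contrast, hence lower variance."""
    pixels = [p for row in gray for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

sharp = [[0, 255], [255, 0]]       # high-contrast patch
blurry = [[120, 130], [130, 120]]  # low-contrast patch
print(variance_sharpness(sharp) > variance_sharpness(blurry))  # True
```

The Tenengrad and Laplacian methods differ only in scoring local gradients rather than the global variance; any of the three yields a per-sub-image score that can feed the averaging step.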
A method of performing a weighted average on the image definitions of the sub-images to obtain the definition of the image to be recognized is discussed in detail below. The weight value corresponding to the image definition of each sub-image is determined according to the category of the target object contained in the sub-image, the position of the sub-image in the image to be recognized, and/or the proportion of the area of the image to be recognized that the sub-image occupies. For example, when a sub-image is located in the central area of the image to be recognized, occupies a large proportion of its area, and the category of the contained target object is a portrait (usually, the human eye is more interested in portraits), the image definition of that sub-image may correspond to a larger weight value. Then, according to the weight value of the image definition of each sub-image, a weighted average is performed on the image definitions of all the sub-images to obtain the image definition of the image to be recognized.
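The weighted average itself can be sketched as follows, assuming the weights have already been derived from category, position, and/or area ratio as described; the function name and the example scores are illustrative:

```python
def weighted_definition(sub_scores, weights):
    """Weighted average of per-sub-image definition scores.
    `weights` may come from the target category, the sub-image's
    position, and/or its area ratio, as described above."""
    total = sum(weights)
    return sum(s * w for s, w in zip(sub_scores, weights)) / total

# Hypothetical example: a central portrait sub-image weighted 0.7,
# a small background-object sub-image weighted 0.3.
print(weighted_definition([0.9, 0.4], [0.7, 0.3]))  # ≈ 0.75
```

Dividing by the weight sum keeps the result on the same scale as the individual scores even when the weights do not sum to 1.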
In addition, in Embodiment 1 of this application, besides the image definition of each sub-image, the image definition of the area of the image to be recognized outside the sub-images may also be relied upon to determine the image definition of the image to be recognized. As shown in Fig. 2(b), when determining the image definition of image 203, in addition to the image definitions of sub-image 204 and sub-image 205, the image definition of the area of image 203 outside sub-image 204 and sub-image 205 may also be relied upon. In this case, when determining the final image definition of image 203, the image definitions of sub-image 204 and sub-image 205 may be assigned larger weight values, and the image definition of the area outside sub-image 204 and sub-image 205 may be assigned a smaller weight value, so as to obtain the final image definition of image 203.
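The scheme of giving the target regions a larger weight and the remaining area a smaller weight can be sketched as follows; the 0.8/0.2 split and the function name are illustrative assumptions, not values from the patent:

```python
def overall_definition(sub_scores, background_score,
                       sub_weight=0.8, background_weight=0.2):
    """Combine the sub-image definition scores with the score of the
    remaining (non-target) area, giving the target regions the larger
    weight.  The 0.8/0.2 split is illustrative only."""
    sub_part = sum(sub_scores) / len(sub_scores)  # average over sub-images
    total = sub_weight + background_weight
    return (sub_part * sub_weight + background_score * background_weight) / total

print(overall_definition([0.9, 0.7], background_score=0.3))  # ≈ 0.7
```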
In addition, after step S103, the following steps are further included: judging whether the image definition of the image to be recognized is less than a preset threshold; and, if it is less than the preset threshold, performing super-resolution reconstruction on the image to be recognized.
In Embodiment 1 of this application, the image definition of the image to be recognized is based on the image definition of the image area where the target object is located. Normally, when observing an image, the human eye tends to be attracted by specific objects in the image; therefore, the human eye's perception of image definition is largely determined by the image area where the target object is located. Accordingly, the image definition recognition result of Embodiment 1 of this application is closer to the image definition perceived by the human eye.
The following describes another image definition recognition method, provided in Embodiment 2 of this application. Referring to Fig. 3, the image definition recognition method of Embodiment 2 of this application includes:
In step S301, an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized are acquired.
The specific implementation of step S301 is exactly the same as that of step S101 in Embodiment 1; for details, refer to the description of Embodiment 1, which will not be repeated here.
In step S302, according to the position of each target object in the image to be recognized, the union of the image areas indicated by the positions is determined.
In Embodiment 2 of this application, the union of the image areas occupied by the target objects in the image to be recognized needs to be obtained.
In the example shown in Fig. 2(a), it can be determined that the union of the image areas indicated by the positions is the rectangular area defined by point A and point B.
In the example shown in Fig. 2(b), it can be determined that the union of the image areas indicated by the positions is the rectangular area defined by point A and point B plus the rectangular area defined by point C and point D.
In step S303, the first area ratio, i.e., the proportion of the image to be recognized occupied by the image area indicated by the union, is calculated.
In the example shown in Fig. 2(a), the first area ratio is: the area of the rectangular region defined by point A and point B / the area of image 201.
In the example shown in Fig. 2(b), the first area ratio is: (the area of the rectangular region defined by point A and point B + the area of the rectangular region defined by point C and point D) / the area of image 203.
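The union and the first area ratio of steps S302 and S303 can be sketched as follows. This illustrative version marks covered cells on a grid so that overlapping target boxes are counted only once; the names are invented here, and an interval sweep would scale better than the grid for large images:

```python
def first_area_ratio(boxes, width, height):
    """Fraction of the image covered by the union of the target
    boxes (xa, ya, xb, yb).  Overlaps are counted once, computed
    here by marking covered cells on a boolean grid."""
    covered = [[False] * width for _ in range(height)]
    for xa, ya, xb, yb in boxes:
        for y in range(max(0, ya), min(height, yb)):
            for x in range(max(0, xa), min(width, xb)):
                covered[y][x] = True
    union = sum(row.count(True) for row in covered)
    return union / (width * height)

# Two 4x4 boxes overlapping in a 2x4 strip inside a 10x10 image:
# union = 16 + 16 - 8 = 24 cells.
print(first_area_ratio([(0, 0, 4, 4), (2, 0, 6, 4)], 10, 10))  # 0.24
```

When the boxes do not overlap, as in the Fig. 2(b) example, the union is simply the sum of the two rectangle areas.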
In step S304, it is judged whether the first area ratio is less than a first preset ratio; if it is less than the first preset ratio, one or more sub-images containing the target objects are cropped from the image to be recognized.
Those skilled in the art will readily understand that if the first area ratio is large (for example, greater than or equal to the first preset ratio), most of the image area in the image to be recognized is occupied by target objects. In this case, there is no need at all to crop out the image areas where the target objects are located, and a traditional image definition recognition method can be used directly to recognize the image definition of the image to be recognized.
If the first area ratio is less than the first preset ratio, it means that, besides the target objects, the image to be recognized also contains some content that the human eye does not easily capture. In this case, the image areas where the target objects are located can be cropped out, and the image definition of the image to be recognized can be determined from the cropped sub-images.
In addition, for the specific way of "cropping one or more sub-images containing the target objects from the image to be recognized" described in step S304, refer to Embodiment 1. Furthermore, Embodiment 2 of this application provides the following specific implementation of "cropping one or more sub-images containing the target objects from the image to be recognized":
Step D: according to the position of each target object in the image to be recognized, calculating the second area ratio, i.e., the proportion of the image to be recognized occupied by the image area indicated by each position;
Step E: judging, according to the second area ratios, whether there is an image area whose second area ratio is greater than a second preset ratio, where the second preset ratio is less than the first preset ratio;
Step F: if there is, performing a dilation operation on each image area whose second area ratio is greater than the second preset ratio, to obtain corrected image areas;
Step G: determining each corrected image area as a sub-image.
In order to enable those skilled in the art to understand the technical solution of steps D to G more clearly, the technical solution is described in detail below with reference to Fig. 2(b).
In the example shown in Fig. 2(b), it is first necessary to calculate the second area ratio of the rectangular area defined by point A and point B to image 203 (for convenience of subsequent description, this second area ratio is called area ratio 1), and the second area ratio of the rectangular area defined by point C and point D to image 203 (called area ratio 2). Next, it is judged whether area ratio 1 is greater than the second preset ratio, and whether area ratio 2 is greater than the second preset ratio. If area ratio 1 is greater than the second preset ratio and area ratio 2 is less than the second preset ratio, the judgment result of step E is that there is an image area whose proportion of the image to be recognized is greater than the second preset ratio (namely, the image area defined by point A and point B). Then, steps F and G are performed: a dilation operation is performed on the image area defined by point A and point B, and the corrected image area obtained by the dilation operation is determined as a sub-image.
The technical solution defined by steps D to F can ignore the influence, on the definition of the image to be recognized, of target objects that occupy a small proportion of the area of the image to be recognized.
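Steps D to G can be sketched as follows; the threshold and padding values are illustrative assumptions rather than values from the patent, and the dilation is again approximated by expanding each kept bounding box:

```python
def select_sub_image_boxes(boxes, width, height,
                           second_preset_ratio=0.05, pad=10):
    """Steps D-G as a sketch: keep only boxes whose own area ratio
    exceeds the second preset ratio, then 'dilate' each kept box by
    expanding it `pad` pixels per side (clamped to the image).  The
    threshold and pad values here are illustrative only."""
    image_area = width * height
    kept = []
    for xa, ya, xb, yb in boxes:
        ratio = (xb - xa) * (yb - ya) / image_area  # second area ratio
        if ratio > second_preset_ratio:
            kept.append((max(0, xa - pad), max(0, ya - pad),
                         min(width, xb + pad), min(height, yb + pad)))
    return kept

# A large portrait box is kept (and expanded); a tiny box is ignored.
boxes = [(100, 100, 300, 400), (10, 10, 20, 20)]
print(select_sub_image_boxes(boxes, 640, 480))  # [(90, 90, 310, 410)]
```

Dropping the tiny box up front is exactly what reduces the number of cropped sub-images, and with it the terminal device's processing burden.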
在步骤S305中,识别每个所述子图像的图像清晰度,并根据识别出的每个所述子图像的图像清晰度,确定上述待识别图像的图像清晰度;In step S305, the image clarity of each sub-image is identified, and the image clarity of the image to be identified is determined according to the identified image clarity of each sub-image;
该步骤S305的具体执行方式与实施例一中的步骤S103完全相同,具体可参见实施例一的描述,此处不再赘述。The specific implementation manner of step S305 is completely the same as step S103 in Embodiment 1. For details, please refer to the description of Embodiment 1, which will not be repeated here.
本申请实施例二所限定的技术方案相比于实施例一,只有在目标对象占据待识别图像的面积比例小于一定数值时,才会执行截取操作,因此,本申请实施例二所限定的技术方案相比于实施例一可以在一定程度上减轻终端设备的处理负担,此外,本申请实施例二中上述步骤D-步骤F所提供的技术方案,可以在一定程度上减少截取的子图像数量,因此,也可以进一步减轻终端设备的处理负担。此外,本申请实施例二与实施例一相同,也可以使得对待识别图像的图像清晰度识别结果更加逼近人眼感受到的图像清晰度。Compared with the first embodiment, the technical solution defined in the second embodiment of the present application performs the interception operation only when the proportion of the area occupied by the target object in the image to be recognized is less than a certain value. Therefore, the technology defined in the second embodiment of the present application Compared with the first embodiment, the solution can reduce the processing burden of the terminal device to a certain extent. In addition, the technical solution provided in the above step D-step F in the second embodiment of the present application can reduce the number of sub-images to a certain extent. Therefore, the processing burden on the terminal device can be further reduced. In addition, the second embodiment of the present application is the same as the first embodiment, which can also make the image definition recognition result of the image to be recognized more close to the image definition perceived by the human eye.
It should be understood that the sequence numbers of the steps in the above method embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
Embodiment 3 of this application provides an image definition recognition apparatus. For ease of description, only the parts related to this application are shown. As shown in FIG. 4, the image definition recognition apparatus 400 includes:
a target acquisition module 401, configured to acquire an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized;
a target cropping module 402, configured to crop, from the image to be recognized, one or more sub-images containing target objects according to the position of each target object in the image to be recognized; and
a definition recognition module 403, configured to recognize the image definition of each sub-image and determine the image definition of the image to be recognized according to the recognized image definition of each sub-image.
Optionally, the target cropping module 402 includes:
a union determining unit, configured to determine, according to the position of each target object in the image to be recognized, the union of the image regions indicated by the positions;
a first ratio unit, configured to calculate a first area ratio, that is, the proportion of the image to be recognized occupied by the image region indicated by the union;
a judging unit, configured to judge whether the first area ratio is smaller than a first preset ratio; and
a target cropping unit, configured to crop, from the image to be recognized, one or more sub-images containing target objects if the first area ratio is smaller than the first preset ratio.
Optionally, the target cropping unit includes:
a second ratio subunit, configured to, if the first area ratio is smaller than the first preset ratio, calculate, according to the position of each target object in the image to be recognized, a second area ratio for each position, that is, the proportion of the image to be recognized occupied by the image region indicated by that position;
a judging subunit, configured to judge, according to the second area ratios, whether there is an image region whose second area ratio is greater than a second preset ratio, where the second preset ratio is smaller than the first preset ratio;
a dilation subunit, configured to, if such a region exists, perform a dilation operation on each image region whose second area ratio is greater than the second preset ratio to obtain corrected image regions; and
a sub-image determining subunit, configured to determine each corrected image region as a sub-image.
Optionally, the definition recognition module 403 includes:
a weight determining unit, configured to determine the weight value corresponding to the image definition of each sub-image according to the category of the target object contained in the sub-image, the position of the sub-image in the image to be recognized, and/or the proportion of the area of the image to be recognized occupied by the sub-image; and
a weighted averaging unit, configured to compute, according to the weight value of each image definition, a weighted average of all the image definitions to obtain the image definition of the image to be recognized.
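A minimal sketch of the weight determining and weighted averaging units described above. The specification does not fix a sharpness metric, so the variance of a 4-neighbour Laplacian, a common no-reference measure, stands in here as an assumption; the per-sub-image weights are assumed to be supplied by whichever cue (target category, position, and/or area ratio) the weight determining unit uses.

```python
def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over the interior pixels of a
    2-D grayscale image (list of rows); larger means sharper."""
    h, w = len(gray), len(gray[0])
    vals = [gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1] + gray[y][x + 1]
            - 4 * gray[y][x]
            for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)


def image_definition(sub_scores, weights):
    """Weighted averaging unit: combine per-sub-image definitions into the
    image definition of the image to be recognized."""
    return sum(s * w for s, w in zip(sub_scores, weights)) / sum(weights)
```

For example, two sub-images scored 10 and 20 with weights 1 and 3 yield an overall definition of 17.5, so the heavily weighted sub-image dominates the result, as intended.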
Optionally, the image definition recognition apparatus 400 further includes:
a judging module, configured to judge whether the image definition of the image to be recognized is smaller than a preset threshold; and
a reconstruction module, configured to perform super-resolution reconstruction on the image to be recognized if its image definition is smaller than the preset threshold.
Optionally, the target acquisition module 401 includes:
an image acquisition unit, configured to acquire an image to be processed;
a target detection unit, configured to perform target detection on the image to be processed and obtain a detection result, where the detection result is used to indicate whether a target object is detected in the image to be processed and, if a target object is detected, to indicate the position of each target object in the image to be processed; and
a target acquisition unit, configured to, if the detection result indicates that a target object is detected in the image to be processed, determine the image to be processed as the image to be recognized and determine the position of each target object in the image to be recognized according to the detection result.
Optionally, the image acquisition unit is specifically configured to: when it is detected that a user captures an image through a camera, determine the image captured by the camera as the image to be processed.
It should be noted that, since the information exchange and execution processes between the above apparatus/units are based on the same concept as the method embodiments of this application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
FIG. 5 is a schematic diagram of the terminal device provided by Embodiment 4 of this application. As shown in FIG. 5, the terminal device 500 of this embodiment includes a processor 501, a memory 502, and computer-readable instructions 503 stored in the memory 502 and executable on the processor 501. When executing the computer-readable instructions 503, the processor 501 implements the steps of the above method embodiments, for example steps S101 to S103 shown in FIG. 1; alternatively, it implements the functions of the modules/units in the above apparatus embodiments, for example the functions of modules 401 to 403 shown in FIG. 4.
Exemplarily, the computer-readable instructions 503 may be divided into one or more modules/units, which are stored in the memory 502 and executed by the processor 501 to complete this application. The one or more modules/units may be a series of computer-readable instruction segments capable of completing specific functions, the segments being used to describe the execution process of the computer-readable instructions 503 in the terminal device 500. For example, the computer-readable instructions 503 may be divided into a target acquisition module, a target cropping module, and a definition recognition module, whose specific functions are as follows:
acquiring an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized;
cropping, from the image to be recognized, one or more sub-images containing target objects according to the position of each target object in the image to be recognized; and
recognizing the image definition of each sub-image, and determining the image definition of the image to be recognized according to the recognized image definition of each sub-image.
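The three instruction segments above can be strung together as a minimal pipeline sketch. `detect_targets`, `score`, and `weight` are hypothetical callables standing in for the object detector, the per-sub-image definition metric, and the weighting rule; none of these names come from the specification.

```python
def crop(image, box):
    """Crop an (x, y, w, h) box from a 2-D grayscale image (list of rows)."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]


def recognize_definition(image, detect_targets, score, weight):
    """Target acquisition -> target cropping -> definition recognition."""
    boxes = detect_targets(image)                 # positions of the target objects
    subs = [crop(image, b) for b in boxes]        # sub-images containing the targets
    scores = [score(s) for s in subs]             # per-sub-image image definition
    weights = [weight(b, image) for b in boxes]   # e.g. by category/position/area
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)
```

In practice `score` could be any no-reference sharpness measure and `weight` could return, for instance, each box's area ratio within the image.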
The terminal device may include, but is not limited to, the processor 501 and the memory 502. Those skilled in the art will understand that FIG. 5 is merely an example of the terminal device 500 and does not constitute a limitation on it: the terminal device may include more or fewer components than shown, combine certain components, or use different components; for example, it may further include input/output devices, network access devices, buses, and so on.
The processor 501 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or any conventional processor.
The memory 502 may be an internal storage unit of the terminal device 500, such as a hard disk or memory of the terminal device 500. The memory 502 may also be an external storage device of the terminal device 500, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device 500. Further, the memory 502 may include both an internal storage unit and an external storage device of the terminal device 500. The memory 502 is used to store the computer-readable instructions and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is given only as an example. In practical applications, the above functions may be allocated to different functional units and modules as required; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only intended to distinguish them from one another and are not used to limit the protection scope of this application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis. For parts that are not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered to go beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division into the described modules or units is only a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application may also be completed by computer-readable instructions instructing the relevant hardware. The computer-readable instructions may be stored in a computer non-volatile readable storage medium, and when executed by a processor, they can implement the steps of the above method embodiments.
The computer-readable instructions include computer-readable instruction code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. Any reference to memory, storage, a database, or another medium used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM). The above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all fall within the protection scope of this application.

Claims (20)

  1. An image definition recognition method, characterized in that it comprises:
    acquiring an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized;
    cropping, from the image to be recognized, one or more sub-images containing target objects according to the position of each target object in the image to be recognized; and
    recognizing the image definition of each sub-image, and determining the image definition of the image to be recognized according to the recognized image definition of each sub-image.
  2. The image definition recognition method according to claim 1, characterized in that the cropping, from the image to be recognized, one or more sub-images containing target objects according to the position of each target object in the image to be recognized comprises:
    determining, according to the position of each target object in the image to be recognized, the union of the image regions indicated by the positions;
    calculating a first area ratio, that is, the proportion of the image to be recognized occupied by the image region indicated by the union;
    judging whether the first area ratio is smaller than a first preset ratio; and
    if it is smaller than the first preset ratio, cropping, from the image to be recognized, one or more sub-images containing target objects.
  3. The image definition recognition method according to claim 2, characterized in that the cropping, from the image to be recognized, one or more sub-images containing target objects if the first area ratio is smaller than the first preset ratio comprises:
    if it is smaller than the first preset ratio:
    calculating, according to the position of each target object in the image to be recognized, a second area ratio for each position, that is, the proportion of the image to be recognized occupied by the image region indicated by that position;
    judging, according to the second area ratios, whether there is an image region whose second area ratio is greater than a second preset ratio, wherein the second preset ratio is smaller than the first preset ratio;
    if such a region exists, performing a dilation operation on each image region whose second area ratio is greater than the second preset ratio to obtain corrected image regions; and
    determining each corrected image region as a sub-image.
  4. The image definition recognition method according to any one of claims 1 to 3, characterized in that the determining the image definition of the image to be recognized according to the recognized image definition of each sub-image comprises:
    determining the weight value corresponding to the image definition of each sub-image according to the category of the target object contained in the sub-image, the position of the sub-image in the image to be recognized, and/or the proportion of the area of the image to be recognized occupied by the sub-image; and
    computing, according to the weight value of the image definition of each sub-image, a weighted average of the image definitions of all the sub-images to obtain the image definition of the image to be recognized.
  5. The image definition recognition method according to any one of claims 1 to 3, characterized in that, after the step of determining the image definition of the image to be recognized, the method further comprises:
    judging whether the image definition of the image to be recognized is smaller than a preset threshold; and
    if it is smaller than the preset threshold, performing super-resolution reconstruction on the image to be recognized.
  6. The image definition recognition method according to any one of claims 1 to 3, characterized in that the acquiring an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized comprises:
    acquiring an image to be processed;
    performing target detection on the image to be processed and obtaining a detection result, the detection result being used to indicate whether a target object is detected in the image to be processed and, if a target object is detected, to indicate the position of each target object in the image to be processed; and
    if the detection result indicates that a target object is detected in the image to be processed:
    determining the image to be processed as the image to be recognized, and determining the position of each target object in the image to be recognized according to the detection result.
  7. The image definition recognition method according to claim 6, characterized in that the acquiring an image to be processed comprises:
    when it is detected that a user captures an image through a camera, determining the image captured by the camera as the image to be processed.
  8. An image definition recognition apparatus, characterized in that it comprises:
    a target acquisition module, configured to acquire an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized;
    a target cropping module, configured to crop, from the image to be recognized, one or more sub-images containing target objects according to the position of each target object in the image to be recognized; and
    a definition recognition module, configured to recognize the image definition of each sub-image and determine the image definition of the image to be recognized according to the recognized image definition of each sub-image.
  9. The image definition recognition apparatus according to claim 8, characterized in that the target cropping module comprises:
    a union determining unit, configured to determine, according to the position of each target object in the image to be recognized, the union of the image regions indicated by the positions;
    a first ratio unit, configured to calculate a first area ratio, that is, the proportion of the image to be recognized occupied by the image region indicated by the union;
    a judging unit, configured to judge whether the first area ratio is smaller than a first preset ratio; and
    a target cropping unit, configured to crop, from the image to be recognized, one or more sub-images containing target objects if the first area ratio is smaller than the first preset ratio.
  10. The image definition recognition apparatus according to claim 9, characterized in that the target cropping unit comprises:
    a second ratio subunit, configured to, if the first area ratio is smaller than the first preset ratio, calculate, according to the position of each target object in the image to be recognized, a second area ratio for each position, that is, the proportion of the image to be recognized occupied by the image region indicated by that position;
    a judging subunit, configured to judge, according to the second area ratios, whether there is an image region whose second area ratio is greater than a second preset ratio, wherein the second preset ratio is smaller than the first preset ratio;
    a dilation subunit, configured to, if such a region exists, perform a dilation operation on each image region whose second area ratio is greater than the second preset ratio to obtain corrected image regions; and
    a sub-image determining subunit, configured to determine each corrected image region as a sub-image.
  11. A terminal device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, characterized in that the processor, when executing the computer-readable instructions, implements the following steps:
    acquiring an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized;
    cropping, from the image to be recognized, one or more sub-images containing target objects according to the position of each target object in the image to be recognized; and
    recognizing the image definition of each sub-image, and determining the image definition of the image to be recognized according to the recognized image definition of each sub-image.
  12. The terminal device according to claim 11, characterized in that the cropping, from the image to be recognized, one or more sub-images containing target objects according to the position of each target object in the image to be recognized comprises:
    determining, according to the position of each target object in the image to be recognized, the union of the image regions indicated by the positions;
    calculating a first area ratio, that is, the proportion of the image to be recognized occupied by the image region indicated by the union;
    judging whether the first area ratio is smaller than a first preset ratio; and
    if it is smaller than the first preset ratio, cropping, from the image to be recognized, one or more sub-images containing target objects.
  13. The terminal device according to claim 12, characterized in that the cropping, from the image to be recognized, one or more sub-images containing target objects if the first area ratio is smaller than the first preset ratio comprises:
    if it is smaller than the first preset ratio:
    calculating, according to the position of each target object in the image to be recognized, a second area ratio for each position, that is, the proportion of the image to be recognized occupied by the image region indicated by that position;
    judging, according to the second area ratios, whether there is an image region whose second area ratio is greater than a second preset ratio, wherein the second preset ratio is smaller than the first preset ratio;
    if such a region exists, performing a dilation operation on each image region whose second area ratio is greater than the second preset ratio to obtain corrected image regions; and
    determining each corrected image region as a sub-image.
  14. The terminal device according to any one of claims 11 to 13, wherein determining the image definition of the image to be recognized according to the recognized image definition of each of the sub-images comprises:
    determining a weight value corresponding to the image definition of each sub-image according to the category of the target object contained in the sub-image, the position of the sub-image in the image to be recognized, and/or the proportion of the area of the image to be recognized that the sub-image occupies; and
    taking a weighted average of the image definitions of all the sub-images according to the weight values to obtain the image definition of the image to be recognized.
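Claim 14 names the inputs to the weighting (category, position, area ratio) but leaves the exact formula unspecified. One plausible reading is that salient categories, central positions, and larger areas weigh more; the sketch below uses made-up category weights and a linear adjustment purely for illustration.

```python
CATEGORY_WEIGHTS = {"face": 2.0, "text": 1.5}  # illustrative values only

def definition_weight(category, center_dist, area_ratio):
    """One possible weight from the three inputs of claim 14:
    object category, position (center_dist in [0, 1], 0 = image centre),
    and the sub-image's area ratio (in (0, 1])."""
    w = CATEGORY_WEIGHTS.get(category, 1.0)
    w *= 1.0 - 0.5 * center_dist  # central sub-images weigh more
    w *= 1.0 + area_ratio         # larger sub-images weigh more
    return w

def overall_definition(sub_scores, sub_weights):
    """Weighted average of per-sub-image definition scores."""
    return sum(s * w for s, w in zip(sub_scores, sub_weights)) / sum(sub_weights)
```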
  15. The terminal device according to any one of claims 11 to 13, wherein the processor, when executing the computer-readable instructions, further implements the following steps:
    determining whether the image definition of the image to be recognized is less than a preset threshold; and
    if it is less than the preset threshold, performing super-resolution reconstruction on the image to be recognized.
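The control flow of claim 15 gates reconstruction on the overall definition score: only images judged insufficiently clear are reconstructed. A minimal sketch with an assumed threshold and a pluggable reconstruction routine (the patent does not prescribe a specific super-resolution model):

```python
PRESET_THRESHOLD = 0.5  # assumed value; the patent does not fix one

def maybe_reconstruct(image, definition, reconstruct):
    """Pass the image to the super-resolution routine only when its
    definition is below the threshold; returns (image, was_reconstructed)."""
    if definition < PRESET_THRESHOLD:
        return reconstruct(image), True
    return image, False
```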
  16. The terminal device according to any one of claims 11 to 13, wherein acquiring an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized comprises:
    acquiring an image to be processed;
    performing target detection on the image to be processed to obtain a detection result, wherein the detection result indicates whether a target object is detected in the image to be processed and, if so, the position of each target object in the image to be processed; and
    if the detection result indicates that a target object is detected in the image to be processed:
    determining the image to be processed as the image to be recognized, and determining the position of each target object in the image to be recognized according to the detection result.
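The acquisition flow of claim 16 can be sketched as a thin wrapper around any object detector; the detector interface and names below are assumptions for illustration, not part of the patent.

```python
def acquire_image_and_positions(image, detect):
    """Run target detection on the image to be processed. If any target
    object is found, the processed image becomes the image to be
    recognized and the detections give each object's position.
    `detect` is any detector returning a list of bounding boxes."""
    boxes = detect(image)
    if not boxes:
        return None  # no target object detected: nothing to recognize
    return image, boxes
```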
  17. The terminal device according to claim 16, wherein acquiring the image to be processed comprises:
    when it is detected that a user captures an image through a camera, determining the image captured by the camera as the image to be processed.
  18. A non-volatile computer-readable storage medium storing computer-readable instructions, wherein the computer-readable instructions, when executed by a processor, implement the following steps:
    acquiring an image to be recognized containing one or more target objects and the position of each target object in the image to be recognized;
    cropping, according to the position of each target object in the image to be recognized, one or more sub-images containing a target object from the image to be recognized; and
    recognizing the image definition of each of the sub-images, and determining the image definition of the image to be recognized according to the recognized image definition of each of the sub-images.
  19. The non-volatile computer-readable storage medium according to claim 18, wherein cropping, according to the position of each target object in the image to be recognized, one or more sub-images containing a target object from the image to be recognized comprises:
    determining, according to the position of each target object in the image to be recognized, the union of the image regions indicated by the positions;
    calculating a first area ratio of the image region indicated by the union to the image to be recognized;
    determining whether the first area ratio is less than a first preset ratio; and
    if it is less than the first preset ratio, cropping one or more sub-images containing a target object from the image to be recognized.
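The union-and-ratio test of claim 19 must count overlapping object regions only once, which is why a union is formed before the first area ratio is computed. A dependency-free sketch using a pixel set (integer box coordinates and the 0.6 preset ratio are assumptions for illustration):

```python
def union_area_ratio(img_w, img_h, boxes):
    """First area ratio: area of the union of all object regions
    (overlaps counted once) over the whole image area."""
    covered = set()
    for x1, y1, x2, y2 in boxes:
        for x in range(x1, x2):
            for y in range(y1, y2):
                covered.add((x, y))  # each pixel counted once
    return len(covered) / (img_w * img_h)

def should_crop(first_area_ratio, first_preset_ratio=0.6):
    """Per claim 19, sub-images are cropped only when the union covers
    less than the first preset ratio of the image to be recognized."""
    return first_area_ratio < first_preset_ratio
```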
  20. The non-volatile computer-readable storage medium according to claim 19, wherein cropping one or more sub-images containing a target object from the image to be recognized if the ratio is less than the first preset ratio comprises:
    if the ratio is less than the first preset ratio:
    calculating, according to the position of each target object in the image to be recognized, a second area ratio of the image region indicated by each position to the image to be recognized;
    determining, according to the second area ratios, whether there is an image region whose second area ratio is greater than a second preset ratio, wherein the second preset ratio is less than the first preset ratio;
    if such a region exists, performing a dilation operation on each image region whose second area ratio is greater than the second preset ratio to obtain corrected image regions; and
    determining each corrected image region as a sub-image.
PCT/CN2019/103283 2019-04-11 2019-08-29 Image definition recognition method, image definition recognition apparatus, and terminal device WO2020206912A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910288549.8 2019-04-11
CN201910288549.8A CN110175980A (en) 2019-04-11 2019-04-11 Image definition recognition methods, image definition identification device and terminal device

Publications (1)

Publication Number Publication Date
WO2020206912A1 (en)

Family

ID=67689552

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103283 WO2020206912A1 (en) 2019-04-11 2019-08-29 Image definition recognition method, image definition recognition apparatus, and terminal device

Country Status (2)

Country Link
CN (1) CN110175980A (en)
WO (1) WO2020206912A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110175980A (en) * 2019-04-11 2019-08-27 平安科技(深圳)有限公司 Image definition recognition methods, image definition identification device and terminal device
CN110705511A (en) * 2019-10-16 2020-01-17 北京字节跳动网络技术有限公司 Blurred image recognition method, device, equipment and storage medium
CN111178347B (en) * 2019-11-22 2023-12-08 京东科技控股股份有限公司 Ambiguity detection method, ambiguity detection device, ambiguity detection equipment and ambiguity detection storage medium for certificate image
CN110969602B (en) * 2019-11-26 2023-09-05 北京奇艺世纪科技有限公司 Image definition detection method and device
CN111461070B (en) * 2020-04-29 2023-12-08 Oppo广东移动通信有限公司 Text recognition method, device, electronic equipment and storage medium
CN111861991A (en) * 2020-06-11 2020-10-30 北京百度网讯科技有限公司 Method and device for calculating image definition
CN111754491A (en) * 2020-06-28 2020-10-09 国网电子商务有限公司 Picture definition judging method and device
CN112052350B (en) * 2020-08-25 2024-03-01 腾讯科技(深圳)有限公司 Picture retrieval method, device, equipment and computer readable storage medium
CN112053343A (en) * 2020-09-02 2020-12-08 平安科技(深圳)有限公司 User picture data processing method and device, computer equipment and storage medium
CN112329522A (en) * 2020-09-24 2021-02-05 上海品览数据科技有限公司 Goods shelf goods fuzzy detection method based on deep learning and image processing
CN112949423A (en) * 2021-02-07 2021-06-11 深圳市优必选科技股份有限公司 Object recognition method, object recognition device, and robot
CN113256583A (en) * 2021-05-24 2021-08-13 北京百度网讯科技有限公司 Image quality detection method and apparatus, computer device, and medium
CN113392241B (en) * 2021-06-29 2023-02-03 中海油田服务股份有限公司 Method, device, medium and electronic equipment for identifying definition of well logging image

Citations (5)

Publication number Priority date Publication date Assignee Title
US20040223745A1 (en) * 2003-01-07 2004-11-11 Pioneer Corporation Information recording medium, information reproducing apparatus and method, and computer program product
CN102955947A (en) * 2011-08-19 2013-03-06 北京百度网讯科技有限公司 Equipment and method for determining image definition
CN104637046A (en) * 2013-11-13 2015-05-20 索尼公司 Image detection method and device
CN107644425A (en) * 2017-09-30 2018-01-30 湖南友哲科技有限公司 Target image choosing method, device, computer equipment and storage medium
CN110175980A (en) * 2019-04-11 2019-08-27 平安科技(深圳)有限公司 Image definition recognition methods, image definition identification device and terminal device

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN108229367A (en) * 2017-12-28 2018-06-29 何世容 A kind of face identification method and device
CN108513068B (en) * 2018-03-30 2021-03-02 Oppo广东移动通信有限公司 Image selection method and device, storage medium and electronic equipment
CN108776819A (en) * 2018-06-05 2018-11-09 Oppo广东移动通信有限公司 A kind of target identification method, mobile terminal and computer readable storage medium


Also Published As

Publication number Publication date
CN110175980A (en) 2019-08-27

Similar Documents

Publication Publication Date Title
WO2020206912A1 (en) Image definition recognition method, image definition recognition apparatus, and terminal device
WO2019153739A1 (en) Identity authentication method, device, and apparatus based on face recognition, and storage medium
JP5671533B2 (en) Perspective and parallax adjustment in stereoscopic image pairs
TW202006602A (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
WO2020199477A1 (en) Image labeling method and apparatus based on multi-model fusion, and computer device and storage medium
WO2017124940A1 (en) Method and device for recognizing whether image comprises watermark
US9053389B2 (en) Hough transform for circles
JP6688277B2 (en) Program, learning processing method, learning model, data structure, learning device, and object recognition device
WO2020082731A1 (en) Electronic device, credential recognition method and storage medium
CN111626163B (en) Human face living body detection method and device and computer equipment
CN110059666B (en) Attention detection method and device
WO2021051547A1 (en) Violent behavior detection method and system
WO2021184847A1 (en) Method and device for shielded license plate character recognition, storage medium, and smart device
CN109840883B (en) Method and device for training object recognition neural network and computing equipment
WO2019184140A1 (en) Vr-based application program opening method, electronic apparatus, device and storage medium
US11086977B2 (en) Certificate verification
WO2022017006A1 (en) Video processing method and apparatus, and terminal device and computer-readable storage medium
CN111145086A (en) Image processing method and device and electronic equipment
CN112651953A (en) Image similarity calculation method and device, computer equipment and storage medium
CN111667504A (en) Face tracking method, device and equipment
CN112434689A (en) Method, device and equipment for identifying information in picture and storage medium
WO2020098325A1 (en) Image synthesis method, electronic device and storage medium
US20150320311A1 (en) Method and apparatus for iris recognition using natural light
CN113158773B (en) Training method and training device for living body detection model
WO2020244076A1 (en) Face recognition method and apparatus, and electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19923896; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19923896; Country of ref document: EP; Kind code of ref document: A1)