CN113902740A - Construction method of image blurring degree evaluation model - Google Patents

Construction method of image blurring degree evaluation model

Info

Publication number
CN113902740A
CN113902740A
Authority
CN
China
Prior art keywords
image
target
evaluation model
degree evaluation
degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111477459.7A
Other languages
Chinese (zh)
Inventor
季思文
刘国清
杨广
王启程
郑伟
朱晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Minieye Innovation Technology Co Ltd
Original Assignee
Shenzhen Minieye Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Minieye Innovation Technology Co Ltd filed Critical Shenzhen Minieye Innovation Technology Co Ltd
Priority to CN202111477459.7A
Publication of CN113902740A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a construction method of an image blurring degree evaluation model, which comprises the following steps: adding a detection frame for the original image by using a detection network; cutting out a target image from the original image according to the detection frame; processing the target image into a standard image with a preset size; adding a fuzzy degree label to the standard image according to the Laplace operator to obtain a sample image; feeding the sample image into an initial learning network to obtain a target learning network; and combining the target learning network and the Laplace algorithm module to obtain an image fuzzy degree evaluation model. The invention also provides an image blurring degree evaluation model, an image blurring degree evaluation method, a computer readable storage medium and intelligent driving equipment. The image blurring degree evaluation model is obtained through the method, and the model is used for judging the blurring degree of the image obtained by the vehicle-mounted auxiliary driving system to obtain the accurate blurring degree of the target in the image.

Description

Construction method of image blurring degree evaluation model
Technical Field
The invention relates to the field of automatic driving, in particular to a construction method of an image blurring degree evaluation model, the image blurring degree evaluation model, an image blurring degree evaluation method, a computer-readable storage medium and intelligent driving equipment.
Background
In a vehicle-mounted driving assistance system, it is necessary to accurately determine the positions and motion states of various targets such as vehicles and pedestrians on the road ahead. In a conventional process, a target on the road is first detected through a detection network, and refined attribute analysis is then performed on the detected target, specifically covering the position information of the target, the state information of the target, the specific attributes of the target, and the like. Although a neural network can judge information such as the type and position of a target with high accuracy, factors such as rain and snow, strong backlight or motion blur can degrade the imaging of the target, so that the imaging quality of the target in the image is not high. In this case, the contour and texture of the target itself will be blurred, which will affect the refined attribute analysis of the target by the neural network. When the target in the image is blurred by the above factors, a relatively accurate judgment of the degree of blurring of the image is required. A confidence level can then be provided for the subsequent analysis of the target attributes based on the degree of blur. When the image is sharp, the state of the current target is considered credible; when the image is blurred, the state of the current target is considered unreliable.
Image blurring may be caused by many factors in the processes of image acquisition, transmission and processing, for example, when an image is acquired, out-of-focus blurring may be generated due to incorrect focusing, motion blurring may be caused by relative motion of an object and a camera, blurring may be caused by high frequency loss after image compression, blurring may be caused by dirt on a camera lens or light intensity, and the like. The image blurring reduces the definition of an image, seriously affects the image quality, and causes difficulty and even failure of image analysis and processing, so that an effective blurring evaluation method must be used to control the use of a blurred image, thereby improving the overall performance of the system. How to accurately and conveniently acquire the fuzzy degree of the target in the image becomes an indispensable ring in a vehicle-mounted auxiliary driving system. By judging the target state in the image, the reliability of the target attribute analysis can be obtained, so that the decision error of the whole auxiliary driving system is reduced, the driving experience of a user is further optimized, and the driving safety is improved.
Therefore, how to construct a model that accurately and conveniently acquires the blurring degree of a target in an image is a problem to be urgently solved.
Disclosure of Invention
The invention provides a construction method of an image blur degree evaluation model, the image blur degree evaluation model, an image blur degree evaluation method, a computer readable storage medium and intelligent driving equipment.
In a first aspect, an embodiment of the present invention provides a method for constructing an image blur degree evaluation model, where the method for constructing the image blur degree evaluation model includes:
adding a detection frame for the original image by using a detection network;
cutting out a target image from the original image according to the detection frame;
processing the target image into a standard image with a preset size;
adding a fuzzy degree label to the standard image according to the Laplace operator to obtain a sample image;
feeding the sample image into an initial learning network to obtain a target learning network;
and combining the target learning network and a Laplace algorithm module to obtain an image fuzzy degree evaluation model, wherein the Laplace algorithm module comprises a 3 × 3 convolution kernel defined by a Laplace operator.
In a second aspect, an embodiment of the present invention provides an image blur degree evaluation model, including:
a target image acquisition module: adding a detection frame for the original image by using a detection network; cutting out a target image from the original image according to the detection frame; processing the target image into a standard image with a preset size;
an ambiguity degree label generation module: adding a fuzzy degree label to the standard image according to the Laplace operator to obtain a sample image;
deep learning neural network training module: feeding the sample image into an initial learning network to obtain a target learning network; and combining the target learning network and a Laplace algorithm module to obtain an image fuzzy degree evaluation model, wherein the Laplace algorithm module comprises a 3 × 3 convolution kernel defined by a Laplace operator.
In a third aspect, an embodiment of the present invention provides an image blur degree evaluation method, including:
adding a detection frame for the image to be detected by using a detection network;
cutting out a target image from the image to be detected according to the detection frame;
processing the target image into a standard image with a preset size;
and inputting the standard image into a target model obtained according to the construction method of the image blurring degree evaluation model to obtain the blurring degree of the image to be detected.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium having stored thereon program instructions of a method for constructing an image blur degree evaluation model, which can be loaded and executed by a processor.
In a fifth aspect, an embodiment of the present invention provides an intelligent driving device, which includes a vehicle body and a computer device disposed on the vehicle body, where the computer device includes:
a memory for storing program instructions;
and a processor for executing program instructions to cause a computer device to implement the method of constructing the image blur degree evaluation model.
According to the construction method of the image blur degree evaluation model, the deep learning network with the image blur degree identification function and the Laplace operator are combined to obtain the model capable of accurately identifying the image blur degree, the model is used for judging the blur degree of the image acquired by the vehicle-mounted auxiliary driving system to acquire the accurate blur degree of the target in the image, the automatic driving vehicle is ensured to acquire clear and accurate image for predicting the surrounding environment, and the safety of the vehicle provided with the vehicle-mounted auxiliary driving system in the driving process is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of the invention and that other drawings may be derived from the structure shown in the drawings by those skilled in the art without the exercise of inventive faculty.
Fig. 1 is a flowchart of a method for constructing an image blur degree evaluation model according to a first embodiment of the present invention.
Fig. 2 is a sub-flowchart of a method for constructing an image blur degree evaluation model according to a first embodiment of the present invention.
Fig. 3 is a sub-flowchart of a method for constructing an image blur degree evaluation model according to a second embodiment of the present invention.
Fig. 4 is a sub-flowchart of a method for constructing an image blur degree evaluation model according to a third embodiment of the present invention.
Fig. 5 is a block diagram of an image blur degree evaluation model according to the first embodiment of the present invention.
Fig. 6 is a flowchart of an evaluation method for the degree of image blur according to the first embodiment of the present invention.
Fig. 7a is a schematic diagram of a detection frame added to an original image by a detection network according to a first embodiment of the present invention.
Fig. 7b is a schematic diagram of a sample picture according to the first embodiment of the present invention.
Fig. 8 is a schematic structural diagram of an image blur degree evaluation model according to a first embodiment of the present invention.
Fig. 9a is a first diagram illustrating the statistical result of the histogram with respect to the variance distribution range according to the first embodiment of the present invention.
FIG. 9b is a second diagram of the statistical result of the histogram with respect to the variance distribution range according to the first embodiment of the present invention.
Fig. 9c is a third schematic diagram of the statistical result of the histogram for the variance distribution range according to the first embodiment of the present invention.
Fig. 10 is a schematic diagram of an internal structure of a computer according to a first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the description relating to "first", "second", etc. in the present invention is for descriptive purposes only and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
Please refer to fig. 1, which is a flowchart illustrating a method for constructing an image blur level evaluation model according to a first embodiment of the present invention. The method for constructing the image blur degree evaluation model provided by the embodiment of the invention specifically comprises the following steps.
And step S101, adding a detection frame for the original image by using a detection network. In this embodiment, the detection network is YOLOv5. The detected objects include vehicles, pedestrians, bicycles, traffic lights, signboards, and the like. A detection frame of the target to be detected in the driving assistance scene is acquired through the detection network, thereby acquiring the position of the detected target. Specifically, please refer to fig. 7a in combination, a detection frame 10 is added to the vehicle in the original image by using a trained YOLOv5 with an image recognition function.
And step S102, cutting out the target image from the original image according to the detection frame. Specifically, the region corresponding to the detection frame is clipped from the original image. Where the center coordinates and the width and height of the detection frame are (cx, cy, w, h), the coordinates of the upper left corner and the lower right corner of the detection frame in the original image are:

(x1, y1) = (cx - 1.1 · w/2, cy - 1.1 · h/2)

and

(x2, y2) = (cx + 1.1 · w/2, cy + 1.1 · h/2),

and this region is clipped, where 1.1 is the magnification factor. The values in this embodiment are only examples and are not limiting.
Step S103, processing the target image into a standard image of a preset size. Specifically, the clipped target image is warped to 128 × 128 pixels. Please refer to fig. 7b in combination, which is an example of the standard image 12. The values in this embodiment are only examples and are not limiting.
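Steps S102 and S103 can be sketched as follows. This is a minimal NumPy illustration rather than the patent's implementation: the corner formulas, the clipping to image bounds, and the nearest-neighbour warp are assumptions (in practice a routine such as cv2.resize would normally perform the warp):

```python
import numpy as np

def crop_target(image: np.ndarray, cx: float, cy: float, w: float, h: float,
                scale: float = 1.1) -> np.ndarray:
    """Clip the detection-frame region, enlarged by the magnification
    factor (1.1 in the embodiment), out of the original image. The
    clamping of the corners to the image bounds is an assumption."""
    H, W = image.shape[:2]
    x1 = max(int(round(cx - scale * w / 2)), 0)
    y1 = max(int(round(cy - scale * h / 2)), 0)
    x2 = min(int(round(cx + scale * w / 2)), W)
    y2 = min(int(round(cy + scale * h / 2)), H)
    return image[y1:y2, x1:x2]

def to_standard(image: np.ndarray, size: int = 128) -> np.ndarray:
    """Warp a crop to the preset standard size (128 x 128 pixels) by
    nearest-neighbour index sampling, to keep the sketch dependency-free."""
    H, W = image.shape[:2]
    ys = np.arange(size) * H // size   # source row for each output row
    xs = np.arange(size) * W // size   # source column for each output column
    return image[ys][:, xs]
```

For a 20 × 20 detection frame centred at (50, 50), the enlarged crop covers 22 × 22 pixels before being warped to the 128 × 128 standard size.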
And step S104, adding a fuzzy degree label to the standard image according to the Laplace operator to obtain a sample image. The fuzzy degree label is a numerical value in the range of 0-1. Please refer to step S1041 to step S1044.
And step S105, feeding the sample image into an initial learning network to obtain a target learning network.
Step S106, combining the target learning network 61 and the laplacian algorithm module 62 to obtain the image blur degree evaluation model 60, where the laplacian algorithm module 62 includes a 3 × 3 convolution kernel defined by a laplacian. Please refer to fig. 8, which is a schematic structural diagram of an image blur degree evaluation model according to the first embodiment of the present invention. In the present embodiment, the image blur degree evaluation model 60 includes two branches: a target learning network 61 and a laplacian algorithm module 62. The target learning network 61 is a conventional neural network structure, for example a Convolutional Neural Network (CNN), used to train on and learn from the processed sample images; the laplacian algorithm module 62 defines the laplacian (Laplace Operator) as a 3 × 3 convolution kernel for calculating the blur degree of the image. The convolution kernel, in the standard four-neighbour form, is defined as follows:

 0   1   0
 1  -4   1
 0   1   0
After the image blur degree evaluation model is obtained by the above construction method, the image to be recognized 11 is input into the model, and the outputs of the target learning network 61 and the laplacian algorithm module 62 are integrated to obtain the image blur degree 63. When the error between the two outputs is not more than 15%, the judgment of the image blur degree is deemed credible. When the error between the two outputs exceeds 15%, the judgment of the image blur degree is deemed unreliable. When the judgment is credible, the image blur degree result is given based on the output of the target learning network 61. The above embodiment can further reduce the determination error of the degree of image blur.
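The integration of the two branch outputs can be sketched as follows. The patent does not define the baseline for the 15% error, so measuring it relative to the Laplacian module's score is an assumption:

```python
def fuse_blur_outputs(net_score: float, laplace_score: float,
                      tolerance: float = 0.15):
    """Integrate the learning network's blur score with the Laplacian
    module's blur score. If they agree to within the tolerance the
    judgement is credible and the network's score is used; otherwise the
    result is flagged unreliable (returned score is None)."""
    baseline = max(abs(laplace_score), 1e-9)  # avoid division by zero
    credible = abs(net_score - laplace_score) / baseline <= tolerance
    return (net_score, True) if credible else (None, False)
```

For example, scores of 0.52 and 0.50 agree within 15% and yield a credible result, while 0.90 against 0.30 does not.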
In some possible embodiments, the image blur degree evaluation model 60 directly outputs whether the image is available for the vehicle-mounted driving assistance system according to the image blur degree 63, so that a clear image is used for determining the position and motion state of various objects such as vehicles, pedestrians, and the like on the road ahead.
Please refer to fig. 2, which is a flowchart illustrating the sub-steps of step S104 according to an embodiment of the present invention. And step S104, adding a fuzzy degree label to the standard image according to the Laplace operator to obtain a sample image. The method specifically comprises the following steps.
Step S1041, calculating a second derivative of the standard image by using a laplacian edge detection algorithm. Specifically, the image is processed by adopting a Laplacian edge detection algorithm to obtain a second derivative of the standard image, and the second derivative reflects a region with rapidly changing density in the standard image. The formula of the laplacian Operator (Laplace Operator) is as follows:
∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²

wherein ∇²f(x, y) is the second derivative of the target in the standard image, x is the abscissa of the target in the standard image, and y is the ordinate of the target in the standard image.
Step S1042, obtaining the variance according to the second derivative. In a sharp picture the boundaries are clear, so the variance is large; a blurred picture contains very little boundary information, so the variance is small. In the present embodiment, the variance obtained from the second derivative serves as the initial value of the degree of image blur.
And S1043, converting the variance into a fuzzy degree label between 0 and 1 according to a fitting rule.
And step S1044, adding the fuzzy degree label to the standard image to obtain a sample image.
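Steps S1041 and S1042 can be sketched with a small NumPy implementation. The four-neighbour form of the 3 × 3 kernel is an assumption, since the kernel figure is not reproduced in this text:

```python
import numpy as np

# 3 x 3 Laplacian kernel (standard four-neighbour form; assumed, as the
# patent's kernel figure is rendered as an image and not reproduced here)
LAPLACE_KERNEL = np.array([[0.0,  1.0, 0.0],
                           [1.0, -4.0, 1.0],
                           [0.0,  1.0, 0.0]])

def blur_variance(gray: np.ndarray) -> float:
    """Convolve a grayscale image with the Laplacian kernel over the valid
    interior and return the variance of the response: sharp boundaries give
    a large variance, blurred images a small one (steps S1041/S1042)."""
    H, W = gray.shape
    out = np.zeros((H - 2, W - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACE_KERNEL[dy, dx] * gray[dy:dy + H - 2, dx:dx + W - 2]
    return float(out.var())
```

A perfectly flat image has zero Laplacian response everywhere and hence zero variance, while an image with a hard edge produces a strictly positive variance.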
Unlike common object classification, the degree of image blur is relatively hard to define. For example, the category information of chicken, duck, cat, dog, cow, sheep and pig can be clearly defined, and sample labels can be produced by manual annotation. However, manually judging the degree of blur of a target is often influenced by human subjectivity, so a label that a neural network can learn from cannot be obtained in this way. The above embodiment therefore enables a more accurate determination of the degree of blur of an image.
Please refer to fig. 3, which is a method for constructing an image blur level evaluation model according to a second embodiment of the present invention. The difference between the construction method of the image blur degree evaluation model provided by the second embodiment and the construction method of the image blur degree evaluation model provided by the first embodiment is that before the variance is converted into a blur degree label between 0 and 1 according to the fitting rule, the method further comprises obtaining the fitting rule, and the obtaining of the fitting rule specifically comprises the following steps.
In step S301, the distribution range of the variance is acquired. Specifically, the value threshold of the image blurring degree, namely the variance distribution range, is between 0 and 4000. The statistical results of the histogram for the variance distribution range are shown in fig. 9 a-9 c, where the abscissa is the distribution interval of the blur degree value and the ordinate is the number of interval samples.
And step S302, obtaining a constraint range according to the distribution interval of the distribution range. In the embodiment, most of the distribution results are concentrated in the interval of 8 to 2048, so the constraint range is:
VN = min(max(V, 8), 2048)

where V is the variance of the standard image and VN is the variance of the standard image within the constraint range.
And S303, obtaining a conversion formula according to the corresponding relation between the constraint range and the numerical value between 0 and 1. In this embodiment, the conversion formula is as follows:
VNN = (log2(VN) - 3) / 8

wherein VNN is the fuzzy degree label, and the value range of VNN is between 0 and 1.
In the above embodiment, the VNN range between 0 and 1 is suitable for neural network learning.
And step S304, combining the constraint range and the conversion formula into a fitting rule.
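The fitting rule of steps S301 to S304 can be sketched as follows. The logarithmic conversion is an assumption suggested by the power-of-two bounds of the constraint range (8 = 2³, 2048 = 2¹¹); the original conversion formula is only given as a figure:

```python
import math

def to_blur_label(variance: float, lo: float = 8.0, hi: float = 2048.0) -> float:
    """Apply the fitting rule: clamp the raw Laplacian variance into the
    constraint range [8, 2048], then map it onto a 0-1 fuzzy degree label.
    The log2 mapping is an assumed fit, chosen so that the lower bound maps
    to 0 and the upper bound maps to 1."""
    vn = min(max(variance, lo), hi)                       # constraint range
    return (math.log2(vn) - math.log2(lo)) / (math.log2(hi) - math.log2(lo))
```

Under this mapping a variance at or below 8 yields the label 0 and a variance at or above 2048 yields the label 1, matching the stated 0-1 label range.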
Please refer to fig. 4, which is a method for constructing an image blur level evaluation model according to a third embodiment of the present invention. The difference between the method for constructing the image blur degree evaluation model provided in the third embodiment and the method for constructing the image blur degree evaluation model provided in the first embodiment is that before the sample image is fed into the initial learning network to be trained to obtain the target learning network, the method for constructing the image blur degree evaluation model provided in the third embodiment further includes the following steps.
Step S501, sample images are sorted according to the fuzzy degree label. In the present embodiment, all sample images are sorted from small to large according to the initial value of the degree of blur.
And step S502, a staff member evaluates the relative blurring degree of each pair of adjacent sample images.
And step S503, if the evaluation result does not match the sorted order of the two adjacent sample images, the mismatched sample image is rejected.
In this embodiment, all samples to be trained are sorted from small to large according to the initial value of the degree of blur, and adjacent sample images are then compared so that sample images with significant errors can be screened out and eliminated. Through this processing, a relatively accurate training set of sample images with image blur degree labels is obtained; by reducing the error in the initial blur values of the samples, the target model trained on these sample images achieves more accurate identification performance.
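Steps S501 to S503 can be sketched as follows; the callable standing in for the human evaluator and the exact rejection rule are assumptions:

```python
def screen_samples(samples, sharper_or_equal):
    """Sort samples by their fuzzy degree label (small to large) and walk
    adjacent pairs; a sample whose manual evaluation contradicts the sorted
    order is rejected (steps S501-S503). `sharper_or_equal(a, b)` stands in
    for the staff member's judgement that sample a is no blurrier than b."""
    ordered = sorted(samples, key=lambda s: s["label"])
    kept = ordered[:1]
    for sample in ordered[1:]:
        if sharper_or_equal(kept[-1], sample):
            kept.append(sample)   # manual evaluation matches the label order
        # otherwise: significant error, the sample is eliminated
    return kept
```

For instance, if the hidden true blur of a sample contradicts its position in the sorted label order, the comparison against its neighbour fails and the offending sample is dropped from the training set.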
Please refer to fig. 5, which is a block diagram illustrating an image blur level evaluation model 500 according to a first embodiment of the present invention. The image blur degree evaluation model 500 includes: a target image acquisition module 510, a blur degree label generation module 520, and a deep learning neural network training module 530.
Target image acquisition module 510: and adding a detection frame for the original image by using a detection network. And cutting out the target image from the original image according to the detection frame. And processing the target image into a standard image with a preset size.
The ambiguity degree label generating module 520: and adding a fuzzy degree label to the standard image according to the Laplace operator to obtain a sample image.
Deep learning neural network training module 530: and feeding the sample image into an initial learning network to obtain a target learning network. And combining the target learning network and a Laplace algorithm module to obtain an image fuzzy degree evaluation model, wherein the Laplace algorithm module comprises a 3 × 3 convolution kernel defined by a Laplace operator.
Please refer to fig. 6, which is a flowchart illustrating a method for evaluating blur level of an image according to a first embodiment of the present invention. The method for evaluating the degree of image blur according to the first embodiment of the present invention specifically includes the following steps.
Step S601, adding a detection frame for the image to be detected by using a detection network. In this embodiment, the detection network is YOLOv5. The detected objects include vehicles, pedestrians, bicycles, traffic lights, signboards, and the like. A detection frame of the target to be detected in the driving assistance scene is acquired through the detection network, thereby acquiring the position of the detected target. Specifically, please refer to fig. 7a in combination, a detection frame 10 is added to the vehicle in the original image by using a trained YOLOv5 with an image recognition function.
Step S602, cutting out a target image from the image to be detected according to the detection frame. Specifically, the region corresponding to the detection frame is clipped from the image to be detected. Where the center coordinates and the width and height of the detection frame are (cx, cy, w, h), the coordinates of the upper left corner and the lower right corner of the detection frame in the image are:

(x1, y1) = (cx - 1.1 · w/2, cy - 1.1 · h/2)

and

(x2, y2) = (cx + 1.1 · w/2, cy + 1.1 · h/2),

and this region is clipped, where 1.1 is the magnification factor. The values in this embodiment are only examples and are not limiting.
In step S603, the target image is processed into a standard image of a preset size. Specifically, the clipped target image is warped to 128 × 128 pixels. Please refer to fig. 7b in combination, which is an example of the standard image 12. The values in this embodiment are only examples and are not limiting.
And step S604, inputting the standard image into the target model obtained according to the above construction method of the image blur degree evaluation model to obtain the blur degree of the image to be detected. Referring to fig. 8 in combination, after the image blur degree evaluation model is obtained, the image to be recognized 11 is input into the model, and the outputs of the target learning network 61 and the laplacian algorithm module 62 are integrated to obtain the image blur degree 63. When the error between the two outputs is not more than 15%, the judgment of the image blur degree is deemed credible. When the error exceeds 15%, the judgment is deemed unreliable. When the judgment is credible, the image blur degree result is given based on the output of the target learning network 61. The above embodiment can further reduce the determination error of the degree of image blur.
In some possible embodiments, the image blur degree evaluation model 60 directly outputs whether the image is usable in the vehicle-mounted driving assistance system according to the image blur degree 63, so that a clear image is used to determine the positions and motion states of various objects such as vehicles, pedestrians, and the like on the road ahead.
In this embodiment, the method for constructing the image blur degree evaluation model combines a deep learning network with the Laplacian operator to obtain a model that can accurately identify the degree of image blur. Using this model to judge the blur of images captured by the vehicle-mounted driver-assistance system yields an accurate blur degree for targets in the image, ensuring that the autonomous vehicle obtains clear, accurate images for predicting its surroundings and improving safety during the driving process.
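The Laplacian side of the model (the "variance of the second derivative" used to label samples) is a well-known sharpness measure: the Laplacian responds strongly at edges, so a sharp image has a high-variance response and a blurry one a low-variance response. A numpy-only sketch with the standard 3 × 3 kernel:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 3x3 Laplacian response of a grayscale image.

    Low variance suggests a blurry image. The shifted sums below are
    equivalent to a 'valid' convolution with the kernel
    [[0, 1, 0], [1, -4, 1], [0, 1, 0]].
    """
    g = gray.astype(float)
    resp = (-4 * g[1:-1, 1:-1]
            + g[:-2, 1:-1] + g[2:, 1:-1]   # vertical neighbours
            + g[1:-1, :-2] + g[1:-1, 2:])  # horizontal neighbours
    return resp.var()
```

With a real image library one would typically compute the same quantity as `cv2.Laplacian(gray, cv2.CV_64F).var()`; the numpy form is shown here to keep the sketch self-contained.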
The invention also provides a computer readable storage medium. The computer readable storage medium has stored thereon program instructions of the above-described construction method of the image blur degree evaluation model, which can be loaded and executed by a processor. Since the computer-readable storage medium adopts all the technical solutions of all the above embodiments, at least all the advantages brought by the technical solutions of the above embodiments are achieved, and no further description is given here.
The present invention further provides an intelligent driving device 100 (not shown). The intelligent driving device 100 includes a vehicle body 110 (not shown) and a computer device 900 disposed on the vehicle body; the computer device 900 at least includes a memory 901 and a processor 902. The memory 901 is used to store program instructions of the method of constructing an image blur degree evaluation model. The processor 902 is used to execute the program instructions to make the computer device implement the above-described construction method of the image blur degree evaluation model. Please refer to fig. 10, which is a schematic diagram illustrating the internal structure of the computer device 900 according to the first embodiment of the present invention.
The memory 901 includes at least one type of computer-readable storage medium, which includes flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 901 may in some embodiments be an internal storage unit of the computer device 900, such as a hard disk of the computer device 900. The memory 901 may also be an external storage device of the computer device 900 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital Card (SD), a Flash memory Card (Flash Card), etc., provided on the computer device 900. Further, the memory 901 may also include both internal storage units and external storage devices of the computer device 900. The memory 901 can be used not only for storing application software installed in the computer apparatus 900 and various types of data such as program instructions of a construction method of an image blur degree evaluation model and the like, but also for temporarily storing data that has been output or is to be output such as data generated by execution of the construction method of the image blur degree evaluation model and the like.
Processor 902 may be, in some embodiments, a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip that executes program instructions or processes data stored in memory 901. Specifically, the processor 902 executes program instructions of a construction method of the image blur degree evaluation model to control the computer apparatus 900 to implement the construction method of the image blur degree evaluation model.
Further, the computer device 900 may further include a bus 903 which may be a Peripheral Component Interconnect (PCI) standard bus or an Extended Industry Standard Architecture (EISA) bus, etc. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 10, but this is not intended to represent only one bus or type of bus.
Further, computer device 900 may also include a display component 904. The display component 904 may be an LED (Light Emitting Diode) display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light Emitting Diode) touch panel, or the like. The display component 904 may also be referred to as a display device or display unit, as appropriate, for displaying information processed in the computer device 900 and for displaying a visual user interface, among other things.
Further, the computer device 900 may also include a communication component 905, and the communication component 905 may optionally include a wired communication component and/or a wireless communication component (e.g., a WI-FI communication component, a bluetooth communication component, etc.), typically used for establishing a communication connection between the computer device 900 and other computer devices.
While fig. 10 shows only a computer device 900 having components 901 to 905 and program instructions for implementing a method for constructing an image blur degree evaluation model, those skilled in the art will appreciate that the structure shown in fig. 10 does not constitute a limitation of the computer device 900, which may include fewer or more components than those shown, combine some components, or arrange the components differently. Since the computer device 900 adopts all the technical solutions of all the embodiments described above, it achieves at least all the advantages brought by those technical solutions, which are not described herein again.
The construction method of the image blur degree evaluation model comprises one or more program instructions. The procedures or functions according to embodiments of the invention are generated in whole or in part when the program instructions are loaded and executed on a device. The device may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The program instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center integrated with one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above described systems, apparatuses and units may refer to the corresponding processes in the above described method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described embodiment of the method for constructing the image blur degree evaluation model is only illustrative, for example, the division of the unit is only a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a computer-readable storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned computer-readable storage media comprise: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program instructions.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, insofar as these modifications and variations of the invention fall within the scope of the claims of the invention and their equivalents, the invention is intended to include these modifications and variations.
The above-mentioned embodiments are only examples of the present invention and should not be construed as limiting its scope; the scope of the present invention is defined by the claims.

Claims (10)

1. A method for constructing an image blurring degree evaluation model is characterized by comprising the following steps:
adding a detection frame for the original image by using a detection network;
cutting out a target image from the original image according to the detection frame;
processing the target image into a standard image with a preset size;
adding a fuzzy degree label to the standard image according to a Laplace operator to obtain a sample image;
feeding the sample image into an initial learning network to obtain a target learning network; and
combining the target learning network and a Laplace algorithm module to obtain an image blur degree evaluation model, wherein the Laplace algorithm module comprises a 3 × 3 convolution kernel defined by the Laplace operator.
2. The method for constructing the image blur degree evaluation model according to claim 1, wherein a sample image is obtained by adding a blur degree label to the standard image according to a laplacian operator, and specifically comprises:
calculating a second derivative of the standard image by using a Laplacian edge detection algorithm;
obtaining a variance according to the second derivative;
converting the variance into the fuzzy degree label between 0 and 1 according to a fitting rule; and
adding the fuzzy degree label to the standard image to obtain the sample image.
3. The method for constructing an image blur degree evaluation model according to claim 2, further comprising, before converting the variance into the blur degree label between 0 and 1 according to a fitting rule:
obtaining the distribution range of the variance;
obtaining a constraint range according to the interval in which the variance values are distributed;
obtaining a conversion formula according to the corresponding relation between the constraint range and the numerical value between 0 and 1; and
combining the constraint range and the conversion formula into the fitting rule.
4. The method for constructing an image blur degree evaluation model according to claim 3, wherein the constraint range and the conversion formula are combined as the fitting rule, and wherein the constraint range is specifically:

(constraint-range formula, rendered as an equation image in the original)

wherein V is the variance of the standard image, and VN is the variance of the standard image within the constraint range;

and the conversion formula is specifically:

(conversion formula, rendered as an equation image in the original)

wherein VNN is the fuzzy degree label, and the value range of VNN is between 0 and 1.
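The constraint and conversion formulas in claim 4 are rendered as images in the original, so their exact form is not recoverable here. One plausible reading, clamping the variance V to a fixed interval to obtain VN and then normalising linearly to obtain VNN in [0, 1], can be sketched as follows; the interval bounds are invented purely for illustration.

```python
def variance_to_blur_label(v, v_min=0.0, v_max=1000.0):
    """Map a Laplacian variance to a label in [0, 1].

    v_min/v_max are illustrative stand-ins for the patent's constraint range.
    Note a high Laplacian variance indicates a sharp image, so an actual
    implementation might invert this mapping (1 - ...) depending on whether
    the label encodes blurriness or sharpness.
    """
    vn = min(max(v, v_min), v_max)          # VN: variance within the constraint range
    return (vn - v_min) / (v_max - v_min)   # VNN: label in [0, 1]
```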
5. The method for constructing an image blur degree evaluation model according to claim 1, wherein before feeding the sample image into an initial learning network to obtain a target learning network, the method further comprises:
sorting the sample images according to the fuzzy degree label;
obtaining, from a human annotator, an evaluation result comparing the blurring degree of two adjacent sample images; and
if the evaluation result does not match the order of the two adjacent sample images, rejecting the mismatched sample images.
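The cleaning step in claim 5 can be sketched as follows. This is a minimal illustration in which `human_says_first_sharper` stands in for the manual pairwise review, a higher label is assumed to mean a blurrier image, and dropping both images of a mismatched pair is an assumption (the claim only says the unmatched samples are rejected).

```python
def filter_inconsistent_samples(samples, human_says_first_sharper):
    """samples: list of (image_id, blur_label) pairs.

    Sort by label, ask the human comparator whether each adjacent pair is
    ordered consistently, and drop both members of any pair the human
    disagrees with. All names here are illustrative.
    """
    ordered = sorted(samples, key=lambda s: s[1])  # ascending blur label
    keep = set(range(len(ordered)))
    for i in range(len(ordered) - 1):
        a, b = ordered[i], ordered[i + 1]
        if not human_says_first_sharper(a[0], b[0]):
            keep.discard(i)
            keep.discard(i + 1)
    return [ordered[i] for i in sorted(keep)]
```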
6. The method of constructing an image blur degree evaluation model according to claim 1, wherein the detection network is YOLOv5.
7. An image blur degree evaluation model, characterized by comprising:
a target image acquisition module: adding a detection frame for the original image by using a detection network; cutting out a target image from the original image according to the detection frame; processing the target image into a standard image with a preset size;
an ambiguity degree label generation module: adding a fuzzy degree label to the standard image according to a Laplace operator to obtain a sample image;
a deep learning neural network training module: feeding the sample image into an initial learning network to obtain a target learning network; and combining the target learning network and a Laplace algorithm module to obtain an image blur degree evaluation model, wherein the Laplace algorithm module comprises a 3 × 3 convolution kernel defined by the Laplace operator.
8. An evaluation method of an image blur degree, characterized by comprising:
adding a detection frame for the image to be detected by using a detection network;
cutting out a target image from the image to be detected according to the detection frame;
processing the target image into a standard image with a preset size;
inputting the standard image into a target model obtained according to the construction method of the image blurring degree evaluation model claimed in any one of claims 1 to 6 to obtain the blurring degree of the image to be detected.
9. A computer-readable storage medium, wherein program instructions of the method for constructing the image blur degree evaluation model according to any one of claims 1 to 6 are stored on the computer-readable storage medium and can be loaded and executed by a processor.
10. An intelligent driving device, comprising a vehicle body and a computer device arranged on the vehicle body, characterized in that the computer device comprises:
a memory for storing program instructions; and
a processor for executing the program instructions to cause the computer device to implement the method of constructing an image blur degree evaluation model according to any one of claims 1 to 6.
CN202111477459.7A 2021-12-06 2021-12-06 Construction method of image blurring degree evaluation model Pending CN113902740A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111477459.7A CN113902740A (en) 2021-12-06 2021-12-06 Construction method of image blurring degree evaluation model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111477459.7A CN113902740A (en) 2021-12-06 2021-12-06 Construction method of image blurring degree evaluation model

Publications (1)

Publication Number Publication Date
CN113902740A true CN113902740A (en) 2022-01-07

Family

ID=79195366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111477459.7A Pending CN113902740A (en) 2021-12-06 2021-12-06 Construction method of image blurring degree evaluation model

Country Status (1)

Country Link
CN (1) CN113902740A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114594770A (en) * 2022-03-04 2022-06-07 深圳市千乘机器人有限公司 Inspection method for inspection robot without stopping
CN115713501A (en) * 2022-11-10 2023-02-24 深圳市探鸽智能科技有限公司 Detection processing method and system suitable for camera blurred picture

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553431A (en) * 2020-04-30 2020-08-18 上海眼控科技股份有限公司 Picture definition detection method and device, computer equipment and storage medium
CN111553880A (en) * 2020-03-26 2020-08-18 北京中科虹霸科技有限公司 Model generation method, label labeling method, iris image quality evaluation method and device
CN111932510A (en) * 2020-08-03 2020-11-13 深圳回收宝科技有限公司 Method and device for determining image definition
CN112561879A (en) * 2020-12-15 2021-03-26 北京百度网讯科技有限公司 Ambiguity evaluation model training method, image ambiguity evaluation method and device
WO2021179471A1 (en) * 2020-03-09 2021-09-16 苏宁易购集团股份有限公司 Face blur detection method and apparatus, computer device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021179471A1 (en) * 2020-03-09 2021-09-16 苏宁易购集团股份有限公司 Face blur detection method and apparatus, computer device and storage medium
CN111553880A (en) * 2020-03-26 2020-08-18 北京中科虹霸科技有限公司 Model generation method, label labeling method, iris image quality evaluation method and device
CN111553431A (en) * 2020-04-30 2020-08-18 上海眼控科技股份有限公司 Picture definition detection method and device, computer equipment and storage medium
CN111932510A (en) * 2020-08-03 2020-11-13 深圳回收宝科技有限公司 Method and device for determining image definition
CN112561879A (en) * 2020-12-15 2021-03-26 北京百度网讯科技有限公司 Ambiguity evaluation model training method, image ambiguity evaluation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Chuanzheng et al.: "Regional Early Warning Methods and Applications for Geological Hazards in China" (《中国地质灾害区域预警方法与应用》), 31 December 2009 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114594770A (en) * 2022-03-04 2022-06-07 深圳市千乘机器人有限公司 Inspection method for inspection robot without stopping
CN114594770B (en) * 2022-03-04 2024-04-26 深圳市千乘机器人有限公司 Inspection method for inspection robot without stopping
CN115713501A (en) * 2022-11-10 2023-02-24 深圳市探鸽智能科技有限公司 Detection processing method and system suitable for camera blurred picture

Similar Documents

Publication Publication Date Title
CN110705405B (en) Target labeling method and device
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN108009543B (en) License plate recognition method and device
CN109284674B (en) Method and device for determining lane line
US9740967B2 (en) Method and apparatus of determining air quality
CN111444921A (en) Scratch defect detection method and device, computing equipment and storage medium
EP3617938B1 (en) Lane line processing method and device
CN110443212B (en) Positive sample acquisition method, device, equipment and storage medium for target detection
CN110569856B (en) Sample labeling method and device, and damage category identification method and device
WO2020186790A1 (en) Vehicle model identification method, device, computer apparatus, and storage medium
CN113902740A (en) Construction method of image blurring degree evaluation model
CN110751012B (en) Target detection evaluation method and device, electronic equipment and storage medium
CN111598913B (en) Image segmentation method and system based on robot vision
CN107748882B (en) Lane line detection method and device
CN111553302B (en) Key frame selection method, device, equipment and computer readable storage medium
CN113449632B (en) Vision and radar perception algorithm optimization method and system based on fusion perception and automobile
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
WO2020259416A1 (en) Image collection control method and apparatus, electronic device, and storage medium
CN115880260A (en) Method, device and equipment for detecting base station construction and computer readable storage medium
CN114267032A (en) Container positioning identification method, device, equipment and storage medium
CN117372415A (en) Laryngoscope image recognition method, device, computer equipment and storage medium
CN112967224A (en) Electronic circuit board detection system, method and medium based on artificial intelligence
CN114821513B (en) Image processing method and device based on multilayer network and electronic equipment
CN116259021A (en) Lane line detection method, storage medium and electronic equipment
CN115273025A (en) Traffic asset checking method, device, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518049 Floor 25, Block A, Zhongzhou Binhai Commercial Center Phase II, No. 9285, Binhe Boulevard, Shangsha Community, Shatou Street, Futian District, Shenzhen, Guangdong

Applicant after: Shenzhen Youjia Innovation Technology Co.,Ltd.

Address before: 518049 401, building 1, Shenzhen new generation industrial park, No. 136, Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong Province

Applicant before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.
