CN113177917A - Snapshot image optimization method, system, device and medium - Google Patents

Snapshot image optimization method, system, device and medium

Info

Publication number
CN113177917A
Authority
CN
China
Prior art keywords
contour
point
points
correction
determining
Prior art date
Legal status
Granted
Application number
CN202110450382.8A
Other languages
Chinese (zh)
Other versions
CN113177917B (en)
Inventor
黄超
陈婉婉
夏伟
董康
周国亚
Current Assignee
Chongqing Unisinsight Technology Co Ltd
Original Assignee
Chongqing Unisinsight Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Unisinsight Technology Co Ltd filed Critical Chongqing Unisinsight Technology Co Ltd
Priority to CN202110450382.8A priority Critical patent/CN113177917B/en
Publication of CN113177917A publication Critical patent/CN113177917A/en
Application granted granted Critical
Publication of CN113177917B publication Critical patent/CN113177917B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a snapshot image optimization method, system, device and medium. The method comprises: obtaining a plurality of images to be selected shot in a preset shooting area; identifying target objects in the images to be selected and identification key points of the target objects; determining quality scoring parameters; determining the quality score of each image to be selected according to the quality scoring parameters; and determining a preferred snapshot image from the images to be selected. The method and the device can determine one or more preferred snapshot images among a plurality of images to be selected, improve the image quality of the snapshot images, solve the problem that an intelligent algorithm cannot produce an ideal output because of poor snapshot image quality, improve the reliability of the snapshot images, lay a good foundation for subsequent intelligent algorithms to produce ideal results using the snapshot images, and effectively improve the output of subsequent intelligent algorithms that depend on the snapshot images.

Description

Snapshot image optimization method, system, device and medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, system, device, and medium for selecting a snapshot image.
Background
With the advance of smart cities, more and more intelligent algorithms are being deployed, popularized and extended; typical applications include control alarms, searching for vehicles by image, and the like. These intelligent algorithms depend on data the way fish depend on water.
Different algorithms often require different types of data; for example, control alarms require face snapshots, and vehicle search by image requires vehicle snapshots. For the various algorithms, the snapshot therefore often plays a key role in accuracy. However, in the related art, snapshots are usually shot and selected in a random snapshot mode, and the low quality of the snapshot images may lead to a poor output effect of the subsequent intelligent algorithm and an unsatisfactory output result.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present invention provides a snapshot image optimization method, system, device and medium to solve the above-mentioned technical problems.
The invention provides a snapshot image optimization method, which comprises the following steps:
acquiring a plurality of images to be selected shot in a preset shooting area;
identifying a target object in the image to be selected and an identification key point of the target object;
determining a quality scoring parameter, wherein the quality scoring parameter comprises at least one of a correction characteristic parameter and a contour parameter, the correction characteristic parameter is determined according to the position information of the identification key points and/or the number of the identification key points, and the contour parameter is determined according to the position information of the identification key points and the influence factors of the identification key points;
and determining the quality score of the images to be selected according to the quality score parameters, and determining a preferred snapshot image from each image to be selected.
Optionally, the identification key points include correction feature points of the target object, the identification key point number includes a correction feature point number of the correction feature points, and the determination manner of the correction feature parameters includes:
acquiring a preset correction characteristic point threshold value and the number of correction characteristic points of the correction characteristic points in the image to be selected;
and determining the correction characteristic parameters according to the preset correction characteristic point threshold value and the correction characteristic point quantity.
Optionally, the identification keypoints include correction feature points and contour points of the target object, the correction feature points are located inside a contour formed by the contour points, the identification keypoint position information includes correction feature point position information of the correction feature points and contour position information of the contour points, and the determination manner of the correction feature parameters includes:
and determining the correction characteristic parameters according to the position information of the correction characteristic points and the position information of the outline.
Optionally, the determining the correction feature parameter according to the correction feature point position information of the correction feature point and the contour position information of the contour point includes:
dividing the contour into a first region and a second region by the corrected feature points;
forming a contour according to the contour position information, and determining the contour area of the contour according to the contour position information;
respectively determining a first area of a first region and a second area of a second region according to the corrected feature point position information and the contour position information;
and determining the correction characteristic parameter according to the first area, the second area and the outline area.
Optionally, the identification keypoints include at least two types of correction feature points of the target object, a correction feature sub-parameter is determined according to each type of correction feature point, and the correction feature parameter is determined according to each correction feature sub-parameter.
Optionally, the identification keypoints include contour points of the target object, the identification keypoint position information includes contour point position information of the contour points, the identification keypoint influence factors include contour point influence factors of the contour points, and the determination manner of the contour parameters includes:
dividing the preset shooting area into at least two sub-shooting areas, and determining sub-shooting area influence factors of the sub-shooting areas;
determining distribution information of the contour points in the sub-shooting area according to the position information of the contour points;
acquiring a preset contour point threshold value of the target object;
and determining the contour parameters according to the distribution information, the sub-shooting area influence factors, the contour point influence factors and the preset contour point threshold.
Optionally, if the quality scoring parameter includes the correction feature parameter and the contour parameter, the determining the quality score of the image to be selected according to the quality scoring parameter includes:
P = A / N + B × E
wherein, P is the quality score, A is the contour parameter, B is the correction characteristic parameter, E is the preset correction constant, and N is the preset contour point threshold.
Optionally, the target object includes a human face, the recognition key points include a human face contour point and at least two facial feature points, and the quality score is determined in a manner that:
dividing the face contour formed by the face contour points into a first face area and a second face area through the face feature points;
respectively determining a first face area of the first face area, a second face area of the second face area and a face contour area of the face contour;
dividing the preset shooting area into at least two sub-shooting areas, and determining sub-shooting area influence factors of the sub-shooting areas;
determining the distribution information of the face contour points in the sub-shooting area according to the position information of the face contour points;
acquiring a preset correction constant, a preset human face contour point threshold and a human face contour point influence factor of the human face contour point;
P = (Σ_i Σ_j W_Di × W_sj) / Sum(D_j) + E × (1 − |S_1l − S_1r| / S_1)
wherein P is the quality score, W_Di is the influence factor of the ith sub-shooting area, W_sj is the face contour point influence factor of the jth face contour point in the ith sub-shooting area, E is the preset correction constant, Sum(D_j) is the preset face contour point threshold, S_1l is the area of the first face region, S_1r is the area of the second face region, and S_1 is the face contour area.
Optionally, the target object includes a vehicle, the recognition key points include vehicle contour points and identification information in a license plate, and the quality score determination method includes:
dividing the preset shooting area into at least two sub-shooting areas, and determining sub-shooting area influence factors of the sub-shooting areas;
determining distribution information of the vehicle contour points in the sub-shooting area according to the vehicle contour point position information of the vehicle contour points;
acquiring the identification information quantity of the identification information in the image to be selected;
acquiring a preset correction constant, a preset vehicle contour point threshold and a vehicle contour point influence factor of the vehicle contour point;
P = (Σ_i Σ_j W_Di × W_sj) / Sum(D_j) + E × X / N
wherein P is the quality score, W_Di is the influence factor of the ith sub-shooting area, W_sj is the vehicle contour point influence factor of the jth vehicle contour point in the ith sub-shooting area, E is the preset correction constant, Sum(D_j) is the preset vehicle contour point threshold, N is the preset identification information threshold, and X is the identification information quantity.
The invention also provides a snapshot image optimization system, which comprises:
the acquisition module is used for acquiring a plurality of images to be selected shot in a preset shooting area;
the identification module is used for identifying a target object in the image to be selected and an identification key point of the target object;
the quality scoring parameter determining module is used for determining quality scoring parameters, the quality scoring parameters comprise at least one of correction characteristic parameters and contour parameters, the correction characteristic parameters are determined according to the position information of the identification key points and/or the number of the identification key points, and the contour parameters are determined according to the position information of the identification key points and the influence factors of the identification key points;
and the preferred snapshot image determining module is used for determining the quality score of the image to be selected according to the quality score parameter and determining a preferred snapshot image from each image to be selected.
Optionally, the quality score parameter determining module includes at least one of a correction feature parameter determining module and a contour parameter determining module, and the correction feature parameter determining module includes a first correction feature parameter determining submodule and/or a second correction feature parameter determining submodule;
if the identification key point comprises a contour point of the target object, the identification key point position information comprises contour point position information of the contour point, the identification key point influence factor comprises a contour point influence factor of the contour point, the contour parameter determination module is used for dividing the preset shooting area into at least two sub-shooting areas, determining a sub-shooting area influence factor of the sub-shooting areas, determining distribution information of the contour point in the sub-shooting areas according to the contour point position information, acquiring a preset contour point threshold of the target object, and determining the contour parameter according to the distribution information, the sub-shooting area influence factor and the contour point influence factor;
if the identification key point comprises a correction feature point of the target object, the identification key point number comprises a correction feature point number of the correction feature point, the first correction feature parameter determination submodule is used for obtaining a preset correction feature point threshold value and a correction feature point number of the correction feature point in the image to be selected, and determining the correction feature parameter according to the preset correction feature point threshold value and the correction feature point number;
if the identification key point comprises a correction feature point and a contour point of the target object, the correction feature point is located inside a contour formed by the contour point, the identification key point position information comprises correction feature point position information of the correction feature point and contour position information of the contour point, and the second correction feature parameter determination submodule is used for determining the correction feature parameter according to the correction feature point position information and the contour position information.
The invention also provides an electronic device, which comprises a processor, a memory and a communication bus;
the communication bus is used for connecting the processor and the memory;
the processor is configured to execute the computer program stored in the memory to implement the method according to any of the embodiments described above.
The invention also provides a computer-readable storage medium having stored thereon a computer program for causing a computer to perform the method according to any one of the embodiments described above.
The invention has the beneficial effects that:
the invention provides a snap-shot image optimization method, a snap-shot image optimization system, snap-shot image optimization equipment and a snap-shot image optimization medium, wherein the snap-shot image optimization method comprises the steps of obtaining a plurality of images to be selected shot in a preset shooting area, identifying target objects in the images to be selected and identification key points of the target objects, determining quality scoring parameters, determining the quality score of the images to be selected according to the quality score parameters, determining a preferred snap shot image from each image to be selected, the method and the device can determine one or more preferred snap-shot images in a plurality of images to be selected, improve the image quality of the snap-shot images, solve the problem that an intelligent algorithm cannot generate an ideal output effect due to poor quality of the snap-shot images, improve the reliability of the snap-shot images, make a good cushion for the follow-up intelligent algorithm to generate the ideal effect by using the snap-shot images, and effectively improve the follow-up output result of the intelligent algorithm depending on the snap-shot images.
Drawings
Fig. 1 is a schematic flowchart of a snapshot image optimization method according to an embodiment.
Fig. 2 is a schematic diagram of a selected preset shot region.
Fig. 3 is a schematic diagram of the preset shooting area in fig. 2 being divided into a plurality of sub-shooting areas.
Fig. 4 is a schematic diagram of a target object.
Fig. 5 is a schematic diagram of a snap shot scene.
Fig. 6 is a schematic diagram of the snapshot scene in fig. 5 being divided into several sub-shooting areas.
Fig. 7 is a schematic diagram of a face contour point.
Fig. 8 is another schematic diagram of a face contour point.
Fig. 9 is another schematic view of a snap shot scene.
Fig. 10 is a schematic diagram of the snapshot scene in fig. 9 divided into several sub-shooting areas.
FIG. 11 is a schematic view of a vehicle contour point.
FIG. 12 is another schematic view of vehicle contour points.
Fig. 13 is a specific flowchart of a snapshot image optimization method according to an embodiment.
Fig. 14 is a flowchart illustrating a method for determining a quality score according to an embodiment.
Fig. 15 is a schematic structural diagram of a snapshot image optimization system according to the second embodiment.
Fig. 16 is a schematic hardware structure diagram of an apparatus according to an embodiment.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of embodiments of the present invention, however, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details, and in other embodiments, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present invention.
Example one
As shown in fig. 1, the snapshot image optimization method in this embodiment includes:
s101: and acquiring a plurality of images to be selected shot in a preset shooting area.
Optionally, the preset shooting area may be any preset area, and the image to be selected may be an image shot by the same shooting device, or an image shot by multiple shooting devices based on similar viewing angles.
Optionally, the images to be selected may be a plurality of images captured by a certain snapshot machine. Since the snapshot machine captures images randomly or according to a preset instruction, the quality of the obtained images is uneven; by taking these images as the images to be selected, one or more images with better image quality can be selected from them for subsequent analysis.
Alternatively, the images to be selected may be video frames obtained from a video.
In some embodiments, before the images to be selected are obtained from the original image set, the original image set may be further filtered, and images with a resolution lower than a preset resolution threshold may be filtered out.
S102: and identifying the target object in the image to be selected and the identification key point of the target object.
Optionally, the identification of the target object may be implemented by using a related technical means in the field, which is not limited herein.
Alternatively, the identification key points of the target object may be a preset series of key points for that target object, and target objects of different categories may have different key points. Target objects of the same category use a series of key points with the same distribution logic as their identification key points. Taking a target object comprising a human face as an example, when all the categories belong to human faces, the identification key points can be a plurality of face contour points of the face image; no matter how many faces are included in the plurality of images to be selected, the identification key points are these face contour points, regardless of whether the faces are square, round, oval and so on. Of course, the above takes the human face as an exemplary category for identification key points; the categories may also be divided in other ways required by those skilled in the art, for example into square faces, round faces, and the like.
Alternatively, the identification key points may be correction feature points of the target object other than contour points, and these feature points may be determined based on the contour of the target object. For example, when the target object is a human face, the correction feature points may be facial feature points, such as a feature point on the nose, a feature point in the middle of the lips, or a feature point midway between the two eyebrows. The correction feature points may also be determined based on a pattern within the contour of the target object; for example, the target object is a basketball and the correction feature points are the trademark pattern, the air hole and the like on the basketball. The correction feature points may also be identified according to an image within the contour of the target object; for example, the target object is a vehicle and the correction feature points are the identification information in the license plate, that is, the license plate number, which includes at least one of characters, letters and numbers, where each character of the license plate number constitutes one correction feature point. The correction feature points may also be other feature points of the image to be selected that can affect subsequent processing, which is not limited herein.
Optionally, in this embodiment, the target objects identified in the images to be selected belong to the same category; that is, the identification key points of different images to be selected, or of each target object in the same image to be selected, are determined based on the same dimension.
S103: a quality score parameter is determined.
Optionally, the quality score parameter includes at least one of a correction feature parameter and a contour parameter.
Wherein, the correction characteristic parameter is determined according to the position information of the identification key points and/or the number of the identification key points. The contour parameters are determined according to the position information of the identification key points and the influence factors of the identification key points.
In some embodiments, the identification key points include correction feature points of the target object, the identification key point number includes a correction feature point number of the correction feature points, and the determination of the correction feature parameters includes:
acquiring a preset correction characteristic point threshold value and the number of correction characteristic points in an image to be selected;
and determining correction characteristic parameters according to a preset correction characteristic point threshold value and the number of correction characteristic points.
Alternatively, the preset correction feature point threshold may be preset by those skilled in the art as needed.
Alternatively, different types of target objects may have different correction feature point thresholds, for example, when the correction feature point is a license plate number, the preset correction feature point threshold may be 7. Typically, the preset correction feature point threshold is greater than or equal to the maximum number of correction feature points that can be determined from the image to be selected.
Optionally, the license plate number recognition may be implemented by performing semantic recognition on the image of the target object, and the like, which is not limited herein.
Optionally, one way of determining the correction feature parameter may be:
correction feature parameter = number of correction feature points / preset correction feature point threshold.
In some embodiments, the identification key points include correction feature points and contour points of the target object, the correction feature points are located inside a contour formed by the contour points, the identification key point position information includes correction feature point position information of the correction feature points and contour position information of the contour points, and the determination manner of the correction feature parameters includes:
and determining correction characteristic parameters according to the position information of the correction characteristic points and the position information of the outline.
Optionally, sometimes there is a certain requirement for the angle of the target object in the image to be selected, at this time, the deflection condition of the target object may be determined as the correction feature parameter by collecting position information of one or more correction feature points located inside the contour, and then the image to be selected, in which the target object at the required deflection angle is located, is selected as the preferred snap-shot image. Optionally, the specific manner of determining the deflection condition according to the corrected feature point position information and the contour position information may be implemented by using a related art manner in the field.
In some embodiments, determining the correction feature parameter from the correction feature point position information of the correction feature point and the contour position information of the contour point includes:
dividing the contour into a first region and a second region by correcting the feature points;
forming a contour according to the contour position information, and determining the contour area of the contour according to the contour position information;
respectively determining a first area of the first region and a second area of the second region according to the corrected feature point position information and the contour position information;
and determining a correction characteristic parameter according to the first area, the second area and the outline area.
Optionally, the deflection of the current target object may be determined according to the proportions of the contour area occupied by the first area and the second area respectively. For example, if the correction feature point is a nose feature point, and the nose lies on the midline of the face in the normal case (ignoring the natural asymmetry of the face), the face contour is divided into a first region and a second region by the nose feature point. If the face has no deflection, the ratios of the first area and the second area to the contour area should be the same; if the face is deflected by a certain angle, these ratios will differ. On this basis, the correction feature parameter can be determined. Further analysis can also give the current deflection of the target object, and a person skilled in the art can then determine a preferred snapshot image according to actual needs.
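As an illustration of this area-based check, the following hedged Python sketch splits the contour with a vertical line through the correction feature point and compares the two partial areas as 1 − |S_l − S_r| / S; the clipping approach, names and signatures are assumptions, not the patent's reference implementation.

```python
# A hedged sketch, assuming the contour is split by the vertical line x = feature_x.
from typing import List, Tuple

Point = Tuple[float, float]

def shoelace_area(poly: List[Point]) -> float:
    """Absolute polygon area via the shoelace formula (0 for degenerate polygons)."""
    s = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def clip_half_plane(poly: List[Point], x0: float, keep_left: bool) -> List[Point]:
    """Sutherland-Hodgman clip of a polygon against the vertical line x = x0."""
    def inside(p: Point) -> bool:
        return p[0] <= x0 if keep_left else p[0] >= x0
    def cross(p: Point, q: Point) -> Point:
        t = (x0 - p[0]) / (q[0] - p[0])
        return (x0, p[1] + t * (q[1] - p[1]))
    out: List[Point] = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        if inside(q):
            if not inside(p):
                out.append(cross(p, q))
            out.append(q)
        elif inside(p):
            out.append(cross(p, q))
    return out

def deflection_parameter(contour: List[Point], feature_x: float) -> float:
    """1 - |S_left - S_right| / S: close to 1 when the two halves are balanced."""
    s = shoelace_area(contour)
    if s == 0.0:
        return 0.0
    s_left = shoelace_area(clip_half_plane(contour, feature_x, keep_left=True))
    s_right = shoelace_area(clip_half_plane(contour, feature_x, keep_left=False))
    return 1.0 - abs(s_left - s_right) / s
```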
In some embodiments, identifying keypoints comprises at least two classes of correction feature points for the target object, determining a correction feature sub-parameter from each class of correction feature points, and determining a correction feature parameter from each correction feature sub-parameter.
Optionally, for a certain target object, there may be multiple dimensions to determine the correction feature points, and at this time, a corresponding correction feature sub-parameter may be determined for each type of correction feature point, and then the correction feature parameter is determined according to each correction feature sub-parameter. For example, if the target object is a vehicle, two dimensions of a license plate number and a vehicle logo of the vehicle can be extracted as correction feature points, and if the vehicle in a certain image to be selected includes both the license plate and the vehicle logo, correction feature sub-parameters are respectively obtained for the license plate and the vehicle logo, and then the average of the correction feature sub-parameters or the weighted average determined according to the influence factors of the correction feature sub-parameters, the sum of the correction feature sub-parameters, and the like are taken as the correction feature parameters.
Optionally, the correction feature sub-parameters are denoted C_i, and there are n correction feature sub-parameters in total. One way of determining the correction feature parameter B is as follows:
B = (Σ_{i=1}^{n} C_i) / n
optionally, the correction feature sub-parameter is denoted as CiN correction feature sub-parameters are total, and because the importance degrees of different types of correction feature points may be different, a correction feature influencing factor M can be correspondingly set for each type of correction feature pointiThe correction parameters are adjusted, that is, the correction feature parameters may be determined according to the correction feature sub-parameters and the correction feature influencing factors corresponding to the correction feature sub-parameters, and the other determination method of the correction feature parameters B is as follows:
Figure BDA0003038419250000092
alternatively, the correction feature parameter may not be divided by the correction feature point, i.e., the correction feature point
Figure BDA0003038419250000093
Optionally, preset correction constants may be preset corresponding to the correction feature sub-parameters, and the preset correction constants may be the same or different, and are not limited herein, at this time, the correction feature parameter may be determined according to the correction feature sub-parameters, the correction feature influence factor corresponding to the correction feature sub-parameter, and the preset correction constant, and this part may be referred to as a determination process about the correction feature parameter in fig. 14.
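For illustration, a compact hedged sketch of these aggregation options (average, weighted average via influence factors M_i, or plain weighted sum); names and defaults are assumptions.

```python
from typing import List, Optional

def combine_sub_parameters(c: List[float], m: Optional[List[float]] = None,
                           average: bool = True) -> float:
    """Combine correction feature sub-parameters C_i, optionally weighted by M_i."""
    m = m if m is not None else [1.0] * len(c)     # influence factors default to 1
    weighted = sum(ci * mi for ci, mi in zip(c, m))
    return weighted / len(c) if average else weighted  # divide by n, or keep the sum
```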
In some embodiments, the identifying key points includes contour points of the target object, the identifying key point position information includes contour point position information of the contour points, the identifying key point influence factors include contour point influence factors of the contour points, and the determining manner of the contour parameters includes:
dividing a preset shooting area into at least two sub-shooting areas, and determining sub-shooting area influence factors of the sub-shooting areas;
determining distribution information of the contour points in the sub-shooting area according to the position information of the contour points;
acquiring a preset contour point threshold value of a target object;
and determining contour parameters according to the distribution information, the influence factors of the sub-shooting areas, the influence factors of the contour points and the preset contour point threshold.
Alternatively, the contour point influence factor may be preset by a person skilled in the art, for example, it is expected that the contour point influence factor corresponding to a contour point at a certain position of a certain class of target objects is set to M, and then the contour point influence factor of the contour point at the certain position of each target object belonging to the class is M. Each contour point influence factor can be the same or different, and an appropriate contour point influence factor is determined according to the influence importance degree of the contour point influence factor on the subsequent analysis.
Similarly, the determination manner of the sub-shooting region influence factor is similar to that of the contour point influence factor, and is not described herein again.
Optionally, the contour point influence factor includes a contour point weight, and the sub-shooting region influence factor includes a sub-shooting region weight.
Alternatively, the sub photographing region influence factor may be determined according to at least one of a moving direction of the target object, a distance between the sub photographing region and the photographing apparatus, a brightness of the sub photographing region, a number of non-target objects of the sub photographing region, and the like.
Optionally, the dividing manner of the sub-shooting area may be set by a person skilled in the art as needed, or implemented by a related technical manner in the art, which is not limited herein.
Alternatively, the distribution information may be determined according to the contour point position information and the sub-shooting area position information, and it may be implemented to determine which sub-shooting area a certain contour point is located in.
Alternatively, the preset contour point threshold may be set by one skilled in the art as desired.
Optionally, the preset contour point threshold is not less than the maximum number of contour points that can be identified by the target object in any image to be selected.
Optionally, the contour parameter is determined by a quotient of a sum of products of each contour point influence factor and the corresponding sub-shot region influence factor and a preset contour point threshold. In this way, the contour parameters may represent the degree to which the contour points of the target object are distributed in the "important" sub-shooting area, and generally speaking, the more the contour points are distributed in the "important" sub-shooting area, the more likely the image to be selected is to be the preferred snap-shot image.
Optionally, one way of determining the contour parameter is as follows:
A = (Σ_i Σ_j W_Fi × W_Gj) / Sum(F_j)
wherein A is the contour parameter, W_Fi is the influence factor of the ith sub-shooting area, W_Gj is the contour point influence factor of the jth contour point in the ith sub-shooting area, and Sum(F_j) is the preset contour point threshold.
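For illustration, a hedged Python sketch of this contour parameter; representing sub-shooting areas as axis-aligned boxes and the lookup helper are assumptions, not part of the patent.

```python
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float]
Box = Tuple[float, float, float, float]  # x_min, y_min, x_max, y_max

def region_of(point: Point, regions: Dict[int, Box]) -> Optional[int]:
    """Return the id of the sub-shooting area containing the point, if any."""
    x, y = point
    for rid, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return rid
    return None

def contour_parameter(points: List[Point], point_factors: List[float],
                      regions: Dict[int, Box], region_factors: Dict[int, float],
                      preset_threshold: int) -> float:
    """Sum of (contour point factor x sub-area factor), divided by the preset threshold."""
    total = 0.0
    for p, w_point in zip(points, point_factors):
        rid = region_of(p, regions)
        if rid is not None:
            total += w_point * region_factors[rid]
    return total / preset_threshold
```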
In some embodiments, after determining the correction characteristic parameter, the correction characteristic parameter may be further adjusted by a preset correction constant, and the adjusted correction characteristic parameter is used to update the original correction characteristic parameter, so as to determine the quality score parameter.
Assuming that the adjusted correction feature parameter is denoted C, the original correction feature parameter is denoted B, and the preset correction constant is denoted E, where E is generally between 0 and 1 and depends on the influence factor of the correction feature point (the larger its influence on snapshot analysis, the larger E), then: C = B × E.
In some embodiments, if the quality scoring parameter includes a correction feature parameter and a contour parameter, determining the quality score of the image to be selected according to the quality scoring parameter includes:
P = A / N + B × E
wherein, P is the quality score, A is the contour parameter, B is the correction characteristic parameter, E is the preset correction constant, and N is the preset contour point threshold.
S104: and determining the quality score of the images to be selected according to the quality score parameters, and determining the preferred snapshot images from the images to be selected.
Optionally, the contour parameter may be used directly as the quality score, or the correction feature parameter may be used directly as the quality score, or the quality scores of the plurality of images to be selected may be determined by combining the contour parameter and the correction feature parameter. One or more images with the highest quality scores are then taken as the preferred snapshot images, or at least one image to be selected whose quality score is greater than a preset quality score threshold is taken as a preferred snapshot image.
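A short hedged sketch of this selection step (top-k by score, or all candidates above a preset quality score threshold); names are assumptions.

```python
from typing import List, Optional

def select_preferred(scores: List[float], threshold: Optional[float] = None,
                     top_k: int = 1) -> List[int]:
    """Indices of preferred candidate images among the images to be selected."""
    if threshold is not None:
        return [i for i, s in enumerate(scores) if s > threshold]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:top_k]
```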
The existing snapshot strategy cannot guarantee that the snapshot image is optimal; some snapshot images may not even reach the input standard of the algorithm, so the intelligent algorithm cannot produce an ideal calculation result. The snapshot image optimization strategy provided by this embodiment solves the problem that an intelligent algorithm cannot produce an ideal calculation result because of poor data. The snapshot image optimization method provided by this embodiment is exemplarily described below with a specific example, taking human body capture as an example. The method includes:
s201: and selecting a scene of the snapshot machine.
The capture scene, i.e. the capture area, is a selected one of the predetermined capture areas, as shown in fig. 2.
S202: and dividing the weight of the sub-shooting area.
For the target object, different positions of the preset shooting area are divided into a plurality of sub-shooting areas, as shown in fig. 3. It should be noted that only a simple division is used here; the specific division can be done in other ways, such as equidistant division, according to requirements. Obviously, the sub-shooting area in which the target object appears has a certain influence on the snapshot, so corresponding weights are set for the different sub-shooting areas in which the target object may appear, denoted W_Si (the weight of sub-shooting area i). The specific sub-shooting area weights and the division of sub-shooting areas can be set according to the scene of the snapshot machine; for this scene, the following can be preset:
W_S4 = W_S6 > W_S1 > W_S3 > W_S5 > W_S2
s203: target object model pointing.
Optionally, candidate capture photos are obtained from a plurality of capture photos shot in this snapshot-machine scene. The contour points of the target object in a capture photo are used as one of the key factors for judging whether the photo is preferred; a contour point is recorded as D_j, and the weight of each contour point as W_Dj. A target object may be as shown in fig. 4.
S204: a correction characteristic parameter is determined.
Considering the influence of other more critical factors on the snapshot, such as the deflection angle of a human face or whether the license plate of a vehicle is clear, correction feature parameters are defined to capture the influence of these factors on the snapshot and are denoted C_i. The correction feature parameters may be determined in different ways for different target objects.
S205: the quality score P for each grab picture was determined.
P = (Σ_i Σ_j W_Si × W_Dj) / Sum(D_j) + Σ_i E_i × C_i
S206: the preferred catch picture is determined.
Optionally, the quality scores P_1, P_2, P_3, …, P_n of the target object in each capture photo are determined, and the optimal capture photo is obtained after comparison.
The current snapshot machine generates one snapshot every N frames, and the final candidate snapshot (image to be selected) chosen in this way is not necessarily a superior snapshot. The snapshot image optimization method provided by this embodiment can optimize among candidate snapshots, solves the problem that the target object is in a good position but a poor snapshot is produced under shooting equipment such as video surveillance, and lays a good foundation for the application of subsequent data and the results produced by the algorithm.
In some embodiments, the target object comprises a human face, the recognition key points comprise human face contour points and at least two facial feature points, and the quality score is determined by:
dividing a face contour formed by the face contour points into a first face region and a second face region through the face feature points;
respectively determining a first face area of a first face area, a second face area of a second face area and a face contour area of a face contour;
dividing a preset shooting area into at least two sub-shooting areas, and determining sub-shooting area influence factors of the sub-shooting areas;
determining the distribution information of the face contour points in the sub-shooting area according to the position information of the face contour points;
acquiring a preset correction constant, a preset human face contour point threshold and a human face contour point influence factor of a human face contour point;
P = (Σ_i Σ_j W_Di × W_sj) / Sum(D_j) + E × (1 − |S_1l − S_1r| / S_1)
wherein P is the quality score, W_Di is the influence factor of the ith sub-shooting area, W_sj is the face contour point influence factor of the jth face contour point in the ith sub-shooting area, E is the preset correction constant, Sum(D_j) is the preset face contour point threshold, S_1l is the area of the first face region, S_1r is the area of the second face region, and S_1 is the face contour area.
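For illustration, a hedged Python sketch of the face quality score as reconstructed above; the per-point (W_Di, W_sj) pair list, the function name and the signature are assumptions.

```python
from typing import List, Tuple

def face_quality_score(point_terms: List[Tuple[float, float]],  # (W_Di, W_sj) per contour point
                       preset_point_threshold: int, e: float,
                       s_left: float, s_right: float, s_total: float) -> float:
    """Weighted contour sum / preset threshold, plus E * (1 - |S_1l - S_1r| / S_1)."""
    contour_term = sum(wd * ws for wd, ws in point_terms) / preset_point_threshold
    correction_term = e * (1.0 - abs(s_left - s_right) / s_total)
    return contour_term + correction_term
```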
In the following, the snapshot image optimization method provided by this embodiment is exemplarily described by taking a target object that includes a human face as an example. The method includes:
s301: and selecting a snapshot scene and dividing the sub-shooting areas.
Alternatively, a snapshot scene is shown in fig. 5, and the divided sub-shooting regions are shown in fig. 6.
It can be seen from the pictures shown in fig. 5 and 6 that the position of the face camera is reasonable, no obviously unsuitable region exists in the snapshot range, and a face appearing at any point can be clearly seen, so the weight differences between sub-shooting areas should be kept as small as possible when dividing them. Considering the distance between each area and the snapshot camera, the sub-shooting area weights are set, for example, as W_S1 = 1, W_S2 = 0.98, W_S3 = 0.96.
S302: and determining the face contour points and the weight of the face contour points.
For the face contour, we select N face contour points to ensure that the shape of the face can be roughly outlined, and as shown in fig. 7 and 8 below, the weight of each face contour point is set to 1.
S303: a correction characteristic parameter is determined.
Because the angle of the target face has a large influence on snapshot optimization, the correction variable should give particular weight to the influence of the angle, and the correction feature parameter is determined from features of the target that can clearly distinguish the deflection angle, such as the nose, and is denoted B.
As shown in fig. 8, the extended line segment through the nose feature point cuts the contour; the area on the left side is recorded as S_l, the area on the right side as S_r, and the area enclosed by all the contour points as S. The correction feature parameter is then
B = 1 − |S_l − S_r| / S
A correction constant E is preset, where E is generally between 0 and 1 and is determined according to the influence factor (the larger the influence of this factor on the snapshot, the larger E), and the correction variable is then obtained as
C = B × E = E × (1 − |S_l − S_r| / S)
The correction variable C is used to update the correction feature parameter, which serves as one of the quality scoring parameters.
S304: a quality score is determined.
Based on the above conditions, assume that in the first image to be selected M_1 contour points appear in area 1 and N − M_1 contour points appear in area 2, while in the second image to be selected M_2 contour points appear in area 2 and N − M_2 contour points appear in area 3. The area enclosed by the contour points of the first image to be selected is S_1, with the left side recorded as S_1l and the right side as S_1r; the area enclosed by the contour points of the second image to be selected is S_2, with the left side recorded as S_2l and the right side as S_2r. The quality score P_1 of the first image to be selected and the quality score P_2 of the second image to be selected are then determined respectively as:
P_1 = (M_1 × W_S1 + (N − M_1) × W_S2) / N + E × (1 − |S_1l − S_1r| / S_1)
P_2 = (M_2 × W_S2 + (N − M_2) × W_S3) / N + E × (1 − |S_2l − S_2r| / S_2)
s305: a preferred snap shot image is determined.
Comparing the magnitudes of P_1 and P_2, the better of the first image to be selected and the second image to be selected is taken as the preferred snapshot image.
It can be known that, the above two images to be selected are taken as an illustrative example, and those skilled in the art can determine one or more preferred snap-shot images from the multiple images to be selected according to the above idea.
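Plugging illustrative numbers into the face_quality_score sketch above makes the comparison concrete; the counts, E and the areas below are assumptions, not values from the patent.

```python
w = {1: 1.0, 2: 0.98, 3: 0.96}   # sub-shooting area weights from S301
# First image: 20 of N = 25 contour points in area 1, 5 in area 2; assumed E = 0.8.
p1 = face_quality_score([(w[1], 1.0)] * 20 + [(w[2], 1.0)] * 5, 25, 0.8, 0.48, 0.52, 1.0)
# Second image: 20 contour points in area 2, 5 in area 3, with a more deflected face.
p2 = face_quality_score([(w[2], 1.0)] * 20 + [(w[3], 1.0)] * 5, 25, 0.8, 0.40, 0.60, 1.0)
print(p1 > p2)  # True: the first image to be selected is preferred here
```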
In some embodiments, the target object comprises a vehicle, the identification key points comprise vehicle contour points and identification information in a license plate, and the quality score is determined by:
dividing a preset shooting area into at least two sub-shooting areas, and determining sub-shooting area influence factors of the sub-shooting areas;
determining distribution information of the vehicle contour points in the sub-shooting area according to the vehicle contour point position information of the vehicle contour points;
acquiring the number of identification information of the identification information in the image to be selected;
acquiring a preset correction constant, a preset vehicle contour point threshold and a vehicle contour point influence factor of a vehicle contour point;
P = (Σ_i Σ_j W_Di × W_sj) / Sum(D_j) + E × X / N
wherein P is the quality score, W_Di is the influence factor of the ith sub-shooting area, W_sj is the vehicle contour point influence factor of the jth vehicle contour point in the ith sub-shooting area, E is the preset correction constant, Sum(D_j) is the preset vehicle contour point threshold, N is the preset identification information threshold, and X is the identification information quantity.
Optionally, the identification information in the license plate includes the license plate number. Sometimes, because of occlusion by foreign objects, not all license plate characters can be extracted from every snapshot image; for example, birds, butterflies, roadside branches, flower beds, blades of grass and the like may block part of the license plate number. Of course, the identification information may also be other information; the license plate number above is only an example.
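For illustration, a hedged Python sketch of the vehicle quality score as reconstructed above; names and the call signature are assumptions.

```python
from typing import List, Tuple

def vehicle_quality_score(point_terms: List[Tuple[float, float]],  # (W_Di, W_sj) per contour point
                          preset_point_threshold: int, e: float,
                          recognized_chars: int, preset_char_threshold: int) -> float:
    """Weighted contour sum / preset threshold, plus E * X / N for the license plate."""
    contour_term = sum(wd * ws for wd, ws in point_terms) / preset_point_threshold
    plate_term = e * recognized_chars / preset_char_threshold
    return contour_term + plate_term
```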
The following describes, by way of a specific example, the snapshot image optimization method provided by this embodiment, taking vehicle capture as an example. The method includes:
s401: and selecting a snapshot scene and dividing the sub-shooting areas.
Optionally, the capturing scene is also a preset capturing area, as shown in fig. 9, and the area shown in fig. 9 is the preset capturing area. Referring to fig. 10, fig. 10 illustrates a method for dividing a sub-shooting area of a preset shooting area, and it can be seen that the preset shooting area is divided into four areas 1, 2, 3, and 4.
Considering that the license plate of the motor vehicle has a great influence on motor vehicle snapshots, the areas are divided so that areas close to the snapshot point can be given higher weights, for example W_S4 = 1, W_S3 = 0.99, W_S2 = 0.96, W_S1 = 0.94.
S402: and determining the vehicle contour points and the vehicle contour point weights.
For motor vehicles, there are many vehicle types, such as cars, buses, trucks, and the like. Optionally, 8 contour points are used, as shown in fig. 11 and 12. The weight of each of the 8 preset vehicle contour points is 1, and theoretically at most 7 vehicle contour points can be determined in the same image to be selected.
S403: a correction characteristic parameter is determined.
Since the license plate number of a motor vehicle is one of the most critical factors for capturing a picture, the correction feature parameters should be defined and determined around the license plate number.
Assuming that the license plate number of the motor vehicle has N characters and that X characters of the target object's license plate number are recognized, the correction feature parameter is given as
B = X / N
A correction constant E is preset, where the preset correction constant E is between 0 and 1 and should be as close to 1 as possible, so that the correction variable can be determined as
C = B × E = E × X / N
The correction variable C is used to update the correction feature parameter, which serves as one of the quality scoring parameters.
S404: a quality score is determined.
Based on the above conditions, the same target object is captured at different moments to obtain a first vehicle snapshot image and a second vehicle snapshot image: 5 contour points of the first vehicle snapshot image appear in area 3 and 2 contour points appear in area 1; 2 contour points of the second vehicle snapshot image appear in area 1 and 5 contour points appear in area 2. In the first vehicle snapshot image, 6 license plate characters are clearly visible; in the second vehicle snapshot image, because of occlusion by other vehicles, pedestrians passing at the moment of capture, roadside branches, a bird flying past, a balloon or the like, only 5 license plate characters are clearly visible. From this:
Quality score of the first vehicle snapshot image:
P_3 = (5 × W_S3 + 2 × W_S1) / Sum(D_j) + E × 6 / N
Quality score of the second vehicle snapshot image:
P_4 = (2 × W_S1 + 5 × W_S2) / Sum(D_j) + E × 5 / N
s405: a preferred snap shot image is determined.
Comparing the magnitudes of P_3 and P_4, the vehicle snapshot image with the larger quality score is taken as the preferred snapshot image; with the weights above, the first vehicle snapshot image scores higher than the second.
It can be known that, the two vehicle snap-shot images are taken as an illustrative example, and those skilled in the art can determine one or more preferred snap-shot images from the multiple images to be selected according to the above idea.
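Re-running the worked comparison above with the vehicle_quality_score sketch; the contour point threshold of 8, E = 0.9 and the 7-character plate are assumptions.

```python
w = {1: 0.94, 2: 0.96, 3: 0.99, 4: 1.0}   # sub-shooting area weights from S401
p3 = vehicle_quality_score([(w[3], 1.0)] * 5 + [(w[1], 1.0)] * 2, 8, 0.9, 6, 7)
p4 = vehicle_quality_score([(w[1], 1.0)] * 2 + [(w[2], 1.0)] * 5, 8, 0.9, 5, 7)
print(p3 > p4)  # True: the first vehicle snapshot image is preferred
```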
Optionally, the target object may also be a non-motor vehicle, and the manner of determining a preferred snapshot image from a plurality of images to be selected that include a non-motor vehicle is similar to the manner described above for a motor vehicle target object, which is not described herein again.
Optionally, the target object may also be a human body, and a manner of determining a preferred snap-shot image from a plurality of images to be selected including the human body is similar to the manner in which the target object is a human face, and is not described herein again.
In some embodiments, referring to fig. 13, fig. 13 illustrates a snapshot image optimization method, comprising:
s1301: and determining a preset shooting area.
Alternatively, the preset shooting area may be determined by determining a snapshot scene.
S1302: and dividing a preset shooting area and setting the weight of the sub-shooting area.
S1303: and fixing the contour of the target object and setting the weight of the contour point.
S1304: and acquiring a plurality of images to be selected shot in a preset shooting area.
S1305: and respectively determining the quality scores of the images to be selected.
S1306: a preferred snap shot image is determined.
In step S1305, a determination manner for determining the quality score of each image to be selected may be as shown in fig. 14, and the contour parameters may be determined by respectively obtaining the sub-shooting region weight, the contour point position information, and the contour point weight; respectively obtaining correction characteristic sub-parameters corresponding to the correction characteristic points of the multiple classes, preset correction constants corresponding to the correction characteristic sub-parameters of the classes and influence factors corresponding to the correction characteristic parameters of the classes to determine correction characteristic parameters, and further determining quality scores according to the contour parameters and the correction characteristic parameters.
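For illustration, a compact hedged skeleton of the flow in fig. 13 and fig. 14; detection, key point extraction and the per-image parameter computation are left abstract because the patent does not prescribe a specific detector, and all names are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ScoredCandidate:
    contour_parameter: float   # weighted contour sum / preset contour point threshold
    correction_term: float     # e.g. E * (1 - |S_l - S_r| / S) or E * X / N

    @property
    def quality_score(self) -> float:
        return self.contour_parameter + self.correction_term

def pick_preferred(candidates: List[ScoredCandidate]) -> int:
    """Index of the candidate image with the highest quality score."""
    return max(range(len(candidates)), key=lambda i: candidates[i].quality_score)
```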
According to the snapshot image optimization method provided by this embodiment, a plurality of images to be selected shot in a preset shooting area are acquired, the target object in each image to be selected and the identification key points of the target object are identified, a quality scoring parameter is determined, the quality score of each image to be selected is determined according to the quality scoring parameter, and a preferred snapshot image is determined from the images to be selected. One or more preferred snapshot images can thus be determined from the plurality of images to be selected, which improves the image quality of the snapshot image, alleviates the problem that an intelligent algorithm cannot produce an ideal output because of poor snapshot quality, improves the reliability of the snapshot image, lays a good foundation for subsequent intelligent algorithms that use the snapshot image, and effectively improves the output results of those algorithms.
Example two
Referring to fig. 15, an embodiment of the present invention further provides a snapshot image optimization system, including:
the acquisition module 1501 is configured to acquire a plurality of images to be selected, which are shot in a preset shooting area;
the identification module 1502 is used for identifying a target object in the image to be selected and an identification key point of the target object;
a quality scoring parameter determining module 1503, configured to determine a quality scoring parameter, where the quality scoring parameter includes at least one of a correction feature parameter and a contour parameter, the correction feature parameter is determined according to the identification key point position information of the identification key point and/or the number of identification key points of the identification key point, and the contour parameter is determined according to the identification key point position information of the identification key point and the identification key point influence factor of the identification key point;
and the preferred snap-shot image determining module 1504 is used for determining the quality scores of the images to be selected according to the quality score parameters and determining preferred snap-shot images from the images to be selected.
In some embodiments, the quality scoring parameter determination module comprises at least one of a correction feature parameter determination module and a contour parameter determination module, the correction feature parameter determination module comprising a first correction feature parameter determination submodule and/or a second correction feature parameter determination submodule;
if the identification key point comprises a contour point of the target object, the identification key point position information comprises contour point position information of the contour point, the identification key point influence factor comprises a contour point influence factor of the contour point, the contour parameter determination module is used for dividing the preset shooting area into at least two sub-shooting areas, determining the sub-shooting area influence factor of the sub-shooting areas, determining the distribution information of the contour point in the sub-shooting areas according to the contour point position information, acquiring a preset contour point threshold value of the target object, and determining the contour parameter according to the distribution information, the sub-shooting area influence factor and the contour point influence factor;
if the identification key points comprise correction feature points of the target object, the identification key point number comprises the correction feature point number of the correction feature points, the first correction feature parameter determining submodule is used for acquiring a preset correction feature point threshold value and the correction feature point number of the correction feature points in the image to be selected, and determining correction feature parameters according to the preset correction feature point threshold value and the correction feature point number;
and if the identification key points comprise the correction feature points and the contour points of the target object, the correction feature points are positioned in the contour formed by the contour points, the identification key point position information comprises the correction feature point position information of the correction feature points and the contour position information of the contour points, and the second correction feature parameter determining submodule is used for determining the correction feature parameters according to the correction feature point position information and the contour position information.
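A minimal structural sketch of the system of fig. 15 is given below, assuming one plain Python class per module; the method signatures and the placeholder bodies are assumptions for illustration, and the actual modules of this embodiment are not limited to such a decomposition.

```python
# Hedged structural sketch of the snapshot image optimization system (fig. 15).
# Class and method names mirror the module names in the text; bodies are placeholders.

class AcquisitionModule:
    def acquire(self, preset_area):
        """Return the images to be selected that were shot in the preset shooting area."""
        raise NotImplementedError

class IdentificationModule:
    def identify(self, image):
        """Return the target object and its identification key points."""
        raise NotImplementedError

class QualityScoringParameterModule:
    def determine(self, key_points):
        """Return the contour parameter and/or the correction feature parameter."""
        raise NotImplementedError

class PreferredSnapshotModule:
    def select(self, images, parameters):
        """Score each image to be selected and return the preferred snapshot image(s)."""
        raise NotImplementedError
```

In a concrete deployment these classes would typically be composed so that the preferred snapshot module consumes the outputs of the other three, mirroring the data flow described above.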
In this embodiment, the snapshot image optimization system executes the snapshot image optimization method described in any one of the above embodiments; for specific functions and technical effects, reference may be made to the above embodiments, and details are not repeated here.
Referring to fig. 16, an electronic device 1600 is also provided in the embodiments of the present application, where the electronic device 1600 includes a processor 1601, a memory 1602 and a communication bus 1603;
the communication bus 1603 is used to connect the processor 1601 and the memory 1602;
the processor 1601 is configured to execute the computer program stored in the memory 1602 to implement the snapshot image optimization method described in any one of the above embodiments.
Embodiments of the present application also provide a non-transitory readable storage medium, where one or more modules (programs) are stored in the storage medium, and when the one or more modules are applied to a device, the device may execute the instructions included in the embodiments of the present application.
The embodiments of the present application also provide a computer-readable storage medium, where a computer program is stored on the storage medium, and the computer program is used for causing the computer to execute the snapshot image optimization method according to the above embodiments.
The foregoing embodiments are merely illustrative of the principles and effects of the present invention, and are not to be construed as limiting the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical idea disclosed by the present invention shall be covered by the claims of the present invention.
In the figures accompanying the above embodiments, connecting lines may represent connection relationships between components, indicating one or more constituent signal paths, and/or one or more ends of some lines may carry arrows indicating the main direction of information flow. Such connecting lines are used as a form of identification rather than as a limitation on the scheme itself; they are intended to make it easier to connect circuits or logic units in conjunction with one or more example embodiments. Any represented signal, as determined by design requirements or preferences, may in practice comprise one or more signals that can be transmitted in either direction and may be implemented with any suitable type of signal scheme.
In the above embodiments, unless otherwise specified, the use of ordinal numbers such as "first" and "second" to describe common objects only indicates that different instances of the same object are being referred to, and does not imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
In the above-described embodiments, reference in the specification to "the embodiment," "an embodiment," "another embodiment," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of the phrase "the present embodiment," "one embodiment," or "another embodiment" are not necessarily all referring to the same embodiment. If the specification states a component, feature, structure, or characteristic "may", "might", or "could" be included, that particular component, feature, structure, or characteristic is not necessarily included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the element. If the specification or claim refers to "a further" element, that does not preclude there being more than one of the further element.
In the embodiments described above, although the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory structures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments. The embodiments of the invention are intended to embrace all such alternatives, modifications and variations that fall within the broad scope of the appended claims.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The invention is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.

Claims (13)

1. A snapshot image optimization method, comprising:
acquiring a plurality of images to be selected shot in a preset shooting area;
identifying a target object in the image to be selected and an identification key point of the target object;
determining a quality scoring parameter, wherein the quality scoring parameter comprises at least one of a correction characteristic parameter and a contour parameter, the correction characteristic parameter is determined according to the position information of the identification key points and/or the number of the identification key points, and the contour parameter is determined according to the position information of the identification key points and the influence factors of the identification key points;
and determining the quality score of the images to be selected according to the quality score parameters, and determining a preferred snapshot image from each image to be selected.
2. The snap-shot image optimization method according to claim 1, wherein the identification key points include correction feature points of the target object, the identification key point number includes a correction feature point number of the correction feature points, and the correction feature parameters are determined in a manner including:
acquiring a preset correction characteristic point threshold value and the number of correction characteristic points of the correction characteristic points in the image to be selected;
and determining the correction characteristic parameters according to the preset correction characteristic point threshold value and the correction characteristic point quantity.
3. The snap-shot image optimization method according to claim 1, wherein the identification key points include correction feature points and contour points of the target object, the correction feature points are located inside a contour formed by the contour points, the identification key point position information includes correction feature point position information of the correction feature points and contour position information of the contour points, and the determination manner of the correction feature parameters includes:
and determining the correction characteristic parameters according to the position information of the correction characteristic points and the position information of the outline.
4. The snap-shot image optimization method according to claim 3, wherein the determining the correction feature parameter from the correction feature point position information of the correction feature point and the contour position information of the contour point comprises:
dividing the contour into a first region and a second region by the corrected feature points;
forming a contour according to the contour position information, and determining the contour area of the contour according to the contour position information;
respectively determining a first area of a first region and a second area of a second region according to the corrected feature point position information and the contour position information;
and determining the correction characteristic parameter according to the first area, the second area and the outline area.
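By way of illustration only (not part of the claims), the following sketch computes the contour area and the two region areas of claim 4 with the shoelace formula, and derives one possible correction feature parameter from their balance; the way the contour is split by the correction feature points and the final combination of the three areas are assumptions of the sketch, not the claimed rule.

```python
# Hedged sketch for claim 4: areas of the two regions and of the whole contour.
# The symmetry-based combination at the end is an assumed example, not the claimed rule.

def polygon_area(points):
    """Shoelace formula; points is a list of (x, y) vertices given in order."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def area_based_correction_parameter(first_region, second_region, contour):
    """first_region and second_region are the two parts of the contour split by the
    correction feature points; contour is the full contour polygon."""
    s_first = polygon_area(first_region)
    s_second = polygon_area(second_region)
    s_contour = polygon_area(contour)
    # Assumed rule: the closer the two sub-areas are to each other relative to
    # the whole contour area, the closer the parameter is to 1 (e.g. a frontal pose).
    return 1.0 - abs(s_first - s_second) / s_contour
```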
5. The snapshot image optimization method according to claim 1, wherein the identification key points comprise at least two types of correction feature points of the target object, one correction feature sub-parameter is determined from each type of correction feature points, and the correction feature parameter is determined from the correction feature sub-parameters.
6. The snapshot image optimization method of claim 1, wherein the identification key points comprise contour points of the target object, the identification key point position information comprises contour point position information of the contour points, the identification key point influence factors comprise contour point influence factors of the contour points, and the contour parameters are determined in a manner that:
dividing the preset shooting area into at least two sub-shooting areas, and determining sub-shooting area influence factors of the sub-shooting areas;
determining distribution information of the contour points in the sub-shooting area according to the position information of the contour points;
acquiring a preset contour point threshold value of the target object;
and determining the contour parameters according to the distribution information, the sub-shooting area influence factors, the contour point influence factors and the preset contour point threshold.
7. The snapshot image optimization method according to any one of claims 1 to 6, wherein if the quality scoring parameter includes the correction feature parameter and the contour parameter, the determining the quality score of the image to be selected according to the quality scoring parameter comprises:
[Formula image FDA0003038419240000021 of the original publication, expressing the quality score P in terms of A, B, E and N.]
wherein, P is the quality score, A is the contour parameter, B is the correction characteristic parameter, E is the preset correction constant, and N is the preset contour point threshold.
8. The snapshot image optimization method according to claim 7, wherein the target object comprises a human face, the identification key points comprise face contour points and at least two face feature points, and the quality score is determined by:
dividing the face contour formed by the face contour points into a first face area and a second face area through the face feature points;
respectively determining a first face area of the first face area, a second face area of the second face area and a face contour area of the face contour;
dividing the preset shooting area into at least two sub-shooting areas, and determining sub-shooting area influence factors of the sub-shooting areas;
determining the distribution information of the face contour points in the sub-shooting area according to the position information of the face contour points;
acquiring a preset correction constant, a preset human face contour point threshold and a human face contour point influence factor of the human face contour point;
[Formula image FDA0003038419240000031 of the original publication, expressing the quality score P in terms of the quantities defined below.]
wherein P is the quality score, W_Di is the influence factor of the i-th sub-shooting area, W_sj is the face contour point influence factor of the j-th face contour point in the i-th sub-shooting area, E is the preset correction constant, Sum(D_j) is the preset face contour point threshold, S_1l is the area of the first face region, S_1r is the area of the second face region, and S_1 is the face contour area.
9. The snapshot image optimization method according to claim 7, wherein the target object comprises a vehicle, the identification key points comprise vehicle contour points and identification information in a license plate, and the quality score is determined by:
dividing the preset shooting area into at least two sub-shooting areas, and determining sub-shooting area influence factors of the sub-shooting areas;
determining distribution information of the vehicle contour points in the sub-shooting area according to the vehicle contour point position information of the vehicle contour points;
acquiring the identification information quantity of the identification information in the image to be selected;
acquiring a preset correction constant, a preset vehicle contour point threshold and a vehicle contour point influence factor of the vehicle contour point;
[Formula image FDA0003038419240000032 of the original publication, expressing the quality score P in terms of the quantities defined below.]
wherein P is the quality score, W_Di is the influence factor of the i-th sub-shooting area, W_sj is the vehicle contour point influence factor of the j-th vehicle contour point in the i-th sub-shooting area, E is the preset correction constant, Sum(D_j) is the preset vehicle contour point threshold, N is the preset identification information threshold, and X is the identification information quantity.
10. A snapshot image optimization system, comprising:
the acquisition module is used for acquiring a plurality of images to be selected shot in a preset shooting area;
the identification module is used for identifying a target object in the image to be selected and an identification key point of the target object;
the quality scoring parameter determining module is used for determining quality scoring parameters, the quality scoring parameters comprise at least one of correction characteristic parameters and contour parameters, the correction characteristic parameters are determined according to the position information of the identification key points and/or the number of the identification key points, and the contour parameters are determined according to the position information of the identification key points and the influence factors of the identification key points;
and the preferred snapshot image determining module is used for determining the quality score of the image to be selected according to the quality score parameter and determining a preferred snapshot image from each image to be selected.
11. The snapshot image optimization system according to claim 10, wherein the quality scoring parameter determination module comprises at least one of a correction feature parameter determination module and a contour parameter determination module, the correction feature parameter determination module comprising a first correction feature parameter determination submodule and/or a second correction feature parameter determination submodule;
if the identification key point comprises a contour point of the target object, the identification key point position information comprises contour point position information of the contour point, the identification key point influence factor comprises a contour point influence factor of the contour point, the contour parameter determination module is used for dividing the preset shooting area into at least two sub-shooting areas, determining a sub-shooting area influence factor of the sub-shooting areas, determining distribution information of the contour point in the sub-shooting areas according to the contour point position information, acquiring a preset contour point threshold of the target object, and determining the contour parameter according to the distribution information, the sub-shooting area influence factor and the contour point influence factor;
if the identification key point comprises a correction feature point of the target object, the identification key point number comprises a correction feature point number of the correction feature point, the first correction feature parameter determination submodule is used for obtaining a preset correction feature point threshold value and a correction feature point number of the correction feature point in the image to be selected, and determining the correction feature parameter according to the preset correction feature point threshold value and the correction feature point number;
if the identification key point comprises a correction feature point and a contour point of the target object, the correction feature point is located inside a contour formed by the contour point, the identification key point position information comprises correction feature point position information of the correction feature point and contour position information of the contour point, and the second correction feature parameter determination submodule is used for determining the correction feature parameter according to the correction feature point position information and the contour position information.
12. An electronic device comprising a processor, a memory, and a communication bus;
the communication bus is used for connecting the processor and the memory;
the processor is configured to execute a computer program stored in the memory to implement the method of any one of claims 1-9.
13. A computer-readable storage medium, having stored thereon a computer program for causing a computer to perform the method of any one of claims 1-9.
CN202110450382.8A 2021-04-25 2021-04-25 Method, system, equipment and medium for optimizing snap shot image Active CN113177917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110450382.8A CN113177917B (en) 2021-04-25 2021-04-25 Method, system, equipment and medium for optimizing snap shot image

Publications (2)

Publication Number Publication Date
CN113177917A true CN113177917A (en) 2021-07-27
CN113177917B CN113177917B (en) 2023-10-13

Family

ID=76926258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110450382.8A Active CN113177917B (en) 2021-04-25 2021-04-25 Method, system, equipment and medium for optimizing snap shot image

Country Status (1)

Country Link
CN (1) CN113177917B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113873144A (en) * 2021-08-25 2021-12-31 浙江大华技术股份有限公司 Image capturing method, image capturing apparatus, and computer-readable storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130201359A1 (en) * 2012-02-06 2013-08-08 Qualcomm Incorporated Method and apparatus for unattended image capture
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN109413324A (en) * 2017-08-16 2019-03-01 中兴通讯股份有限公司 A kind of image pickup method and mobile terminal
CN108288027A (en) * 2017-12-28 2018-07-17 新智数字科技有限公司 A kind of detection method of picture quality, device and equipment
US20190295240A1 (en) * 2018-03-20 2019-09-26 Uber Technologies, Inc. Image quality scorer machine
CN109472262A (en) * 2018-09-25 2019-03-15 平安科技(深圳)有限公司 Licence plate recognition method, device, computer equipment and storage medium
CN109670473A (en) * 2018-12-28 2019-04-23 深圳英飞拓智能技术有限公司 Preferred method and device based on face grabgraf
CN112188075A (en) * 2019-07-05 2021-01-05 杭州海康威视数字技术股份有限公司 Snapshot, image processing device and image processing method
CN111553915A (en) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 Article identification detection method, device, equipment and readable storage medium
CN111914781A (en) * 2020-08-10 2020-11-10 杭州海康威视数字技术股份有限公司 Method and device for processing face image
CN112287802A (en) * 2020-10-26 2021-01-29 汇纳科技股份有限公司 Face image detection method, system, storage medium and equipment
CN112528939A (en) * 2020-12-22 2021-03-19 广州海格星航信息科技有限公司 Quality evaluation method and device for face image

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
T.P. Chen et al.: "Fingerprint image quality analysis", 2004 International Conference on Image Processing, vol. 2, pages 1253-1256, XP010785429 *
Mao Rundong et al.: "Simulation of optimized restoration of blurred contours in aerial aircraft flight image capture", Computer Simulation, vol. 35, no. 06, pages 390-393 *
Wang Gang et al.: "No-reference stereoscopic image quality assessment based on disparity map and complex contourlet transform", Computer Engineering & Science, vol. 39, no. 03, pages 512-518 *
Chen Zhenghao: "Research on checkpoint face quality assessment method based on multi-feature fusion", China Master's Theses Full-text Database (Information Science and Technology), no. 8, pages 138-688 *

Also Published As

Publication number Publication date
CN113177917B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN109325933B (en) Method and device for recognizing copied image
CN109284733B (en) Shopping guide negative behavior monitoring method based on yolo and multitask convolutional neural network
CN109918969B (en) Face detection method and device, computer device and computer readable storage medium
CN105144239B (en) Image processing apparatus, image processing method
DE102018008161A1 (en) Detecting objects using a weakly monitored model
US8792722B2 (en) Hand gesture detection
CN111027504A (en) Face key point detection method, device, equipment and storage medium
CN110210276A (en) A kind of motion track acquisition methods and its equipment, storage medium, terminal
US20120027263A1 (en) Hand gesture detection
US20120114198A1 (en) Facial image gender identification system and method thereof
DE112016005006T5 (en) AUTOMATIC VIDEO EXECUTIVE SUMMARY
US8213741B2 (en) Method to generate thumbnails for digital images
JP2004361987A (en) Image retrieval system, image classification system, image retrieval program, image classification program, image retrieval method, and image classification method
CN109033955B (en) Face tracking method and system
CN110059666B (en) Attention detection method and device
WO2009117607A1 (en) Methods, systems, and media for automatically classifying face images
CN110543848B (en) Driver action recognition method and device based on three-dimensional convolutional neural network
CN112307853A (en) Detection method of aerial image, storage medium and electronic device
CN109784171A (en) Car damage identification method for screening images, device, readable storage medium storing program for executing and server
DE112020005223T5 (en) Object tracking device and object tracking method
CN112069887A (en) Face recognition method, face recognition device, terminal equipment and storage medium
Katramados et al. Real-time visual saliency by division of gaussians
CN113177917B (en) Method, system, equipment and medium for optimizing snap shot image
CN110569921A (en) Vehicle logo identification method, system, device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant