CN113408496B - Image determining method and device, storage medium and electronic equipment - Google Patents

Image determining method and device, storage medium and electronic equipment

Info

Publication number
CN113408496B
CN113408496B CN202110876241.2A CN202110876241A
Authority
CN
China
Prior art keywords
image
key
candidate
quality
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110876241.2A
Other languages
Chinese (zh)
Other versions
CN113408496A (en)
Inventor
张佳骕
唐邦杰
潘华东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110876241.2A
Publication of CN113408496A
Application granted
Publication of CN113408496B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image determining method and device, a storage medium, and electronic equipment. The method comprises the following steps: acquiring an object image set, wherein the object image set comprises a plurality of candidate images in which a target object is detected; recognizing each candidate image in the plurality of candidate images with an image recognition model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter indicates the degree of integrity of the key parts of the target object in the corresponding candidate image, the image recognition model comprises a plurality of task sub-models, and each task sub-model recognizes the degree of integrity of its corresponding key part; and determining the candidate image whose quality parameter satisfies a quality condition as a target image, wherein the target image is used for recognizing the target object. The invention solves the technical problem of a low object-recognition success rate caused by inaccurate determination of the images used for object recognition.

Description

Image determining method and device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of images, and in particular, to an image determining method and apparatus, a storage medium, and an electronic device.
Background
With the current coverage of video surveillance, video analysis technology plays an increasingly critical role in public security, and target preference (best-shot selection) is an important component of video analysis technology. Vehicle target preference refers to scoring the image of a single vehicle target in each frame, from the target's appearance to its disappearance in the surveillance video, so as to obtain the preferred image in which the target has the highest quality. The preferred image is critical to the subsequent identification of the target vehicle's attributes (e.g., color, model). A high-quality preferred image can effectively improve the success rate and accuracy of attribute identification for the target vehicle.
In vehicle attribute identification, the integrity of the vehicle strongly affects the success rate and accuracy of attribute identification. However, current target preference methods, such as portrait target preference, usually focus only on image sharpness, so the integrity of the vehicle in the preferred image cannot be guaranteed, resulting in a lower recognition success rate and accuracy for the target vehicle.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the invention provides an image determining method and device, a storage medium and electronic equipment, which are used for at least solving the technical problem that the success rate of object identification is low due to inaccurate image determination for object identification.
According to an aspect of an embodiment of the present invention, there is provided an image determining method including: acquiring an object image set, wherein the object image set comprises a plurality of candidate images for detecting a target object; identifying each candidate image in the plurality of candidate images by using an image identification model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter is used for indicating the integrity degree of the key part of the target object in the corresponding candidate image, the image identification model comprises a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key part; and determining the candidate image corresponding to the quality parameter meeting the quality condition as a target image, wherein the target image is used for identifying the target object.
According to another aspect of the embodiment of the present invention, there is also provided an image determining apparatus including: an acquisition unit configured to acquire an object image set, where the object image set includes a plurality of candidate images in which a target object is detected; the image recognition unit is used for recognizing each candidate image in the plurality of candidate images by utilizing an image recognition model to obtain quality parameters corresponding to each candidate image, wherein the quality parameters are used for indicating the integrity degree of the key parts of the target objects in the corresponding candidate images, the image recognition model comprises a plurality of task sub-models, and the task sub-models are used for recognizing the integrity degree of the corresponding key parts; and a determining unit configured to determine the candidate image corresponding to the quality parameter satisfying the quality condition as a target image, where the target image is used for identifying the target object.
According to a further aspect of embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above-described image determination method when run.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device including a memory in which a computer program is stored, and a processor configured to execute the above-described image determining method by the computer program.
In the embodiments of the invention, each candidate image in an object image set containing a target object is subjected to integrity recognition by an image recognition model to obtain its quality parameter. The degree of integrity of each key part of the target object is recognized by a task sub-model within the image recognition model, yielding a quality parameter that indicates how complete the key parts of the target object are in the candidate image, and the candidate image whose quality parameter satisfies the quality condition is determined as the target image. Because each task sub-model recognizes one key part of the target object, the quality parameter reflects the overall integrity of the target object in the candidate image, and selecting the target image by this quality parameter yields, from the image set, the image in which the target object is most complete. Using such a target image for recognizing the target object achieves the technical effect of improving the recognition success rate, and solves the technical problem of a low object-recognition success rate caused by inaccurate determination of the images used for object recognition.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a schematic illustration of an application environment of an alternative image determination method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an alternative image determination method according to an embodiment of the invention;
FIG. 3 is a flow chart of an alternative image determination method according to an embodiment of the invention;
FIG. 4 is a flow chart of an alternative image determination method according to an embodiment of the invention;
FIG. 5 is a flow chart of an alternative image determination method according to an embodiment of the invention;
FIG. 6 is a flow chart of an alternative image determination method according to an embodiment of the invention;
FIG. 7 is a flow chart of an alternative image determination method according to an embodiment of the invention;
FIG. 8 is a flow chart of an alternative image determination method according to an embodiment of the invention;
FIG. 9 is a schematic structural view of an alternative image determining apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic structural view of an alternative electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present invention, an image determining method is provided. Optionally, the image determining method may be applied, but is not limited, to the environment shown in FIG. 1. The terminal device 102 exchanges data with the server 112 via the network 110. The terminal device 102 has a video capturing function and transmits captured video data to the server 112 via the network 110. The server 112 runs a database 114 for storing the received video data and a processing engine 116 that processes the video data to determine the image frames used for object recognition.
The processing engine 116 of the server 112 may, but is not limited to, sequentially perform S102 to S106 to determine a target image from the video data. S102: acquire an object image set. The object image set includes a plurality of candidate images, each a video frame in which the target object is detected. A target video containing the target object is extracted from the received video data and split by frame to obtain the object image set. S104: obtain the quality parameter of each candidate image. Each candidate image is recognized by an image recognition model to obtain its corresponding quality parameter, which indicates the degree of integrity of the key parts of the target object in the candidate image; the image recognition model is a neural network model comprising a plurality of task sub-models, each recognizing the degree of integrity of its corresponding key part. S106: determine the target image. The candidate image whose quality parameter satisfies the quality condition is determined as the target image, which is used for recognizing the target object.
Optionally, in this embodiment, the terminal device 102 may be a terminal device configured with a video capturing client, and may include, but is not limited to, at least one of the following: a mobile phone (e.g., an Android phone or an iOS phone), a notebook computer, a tablet computer, a palmtop computer, a MID (Mobile Internet Device), a PAD, a desktop computer, a smart television, a smart camera device, etc. The video capturing client may be a video client, an instant messaging client, a browser client, an educational client, or the like. The network 110 may include, but is not limited to, a wired network or a wireless network, where the wired network includes a local area network, a metropolitan area network, or a wide area network, and the wireless network includes Bluetooth, Wi-Fi, or another network enabling wireless communication. The server 112 may be a single server, a server cluster including a plurality of servers, or a cloud server. The above is merely an example, and this embodiment is not limited thereto.
As an alternative embodiment, as shown in fig. 2, the image determining method includes:
S202, acquiring an object image set, wherein the object image set comprises a plurality of candidate images in which a target object is detected;
S204, recognizing each candidate image in the plurality of candidate images with an image recognition model to obtain the quality parameter corresponding to each candidate image, wherein the quality parameter indicates the degree of integrity of the key parts of the target object in the corresponding candidate image, the image recognition model comprises a plurality of task sub-models, and each task sub-model recognizes the degree of integrity of its corresponding key part;
S206, determining the candidate image whose quality parameter satisfies the quality condition as the target image, wherein the target image is used for recognizing the target object.
Optionally, the object image set is not limited to an image set obtained by video processing of a target video containing the target object, and the video processing is not limited to splitting the target video by frame to obtain a plurality of video frames containing the target object. The target video is not limited to a video containing the target object extracted from the video data captured by the image capturing terminal.
Optionally, the image recognition model includes a region recognition model for recognizing key parts in the candidate image and task sub-models for recognizing the degree of integrity of those key parts. The region recognition model recognizes, from the complete candidate image, the key region in which each key part is located, and inputs each key region into the corresponding task sub-model, so that the task sub-model recognizes the degree of integrity of the key part within that key region.
Optionally, the key parts are parts into which the object's composition is divided. Taking a vehicle as the object, the key parts are not limited to: the license plate, the logo, the front/rear window, the side windows, the left/right lamps, the roof, and the wheels. The number of task sub-models included in the image recognition model is determined according to the key parts, and a correspondence between task sub-models and key parts is established. The task sub-model corresponding to a key part recognizes the degree of integrity of that key part within the key region where it is located in the candidate image.
Optionally, the degree of integrity of a key part is not limited to preset integrity levels, with a matching integrity parameter set for each integrity level. The task sub-model determines the integrity level of the corresponding key part, or the integrity parameter corresponding to that level. A statistics module of the image recognition model then obtains, from the integrity level or integrity parameter output by each task sub-model, the quality parameter indicating the degree of integrity of the target object in the candidate image.
Optionally, determining the target image from the object image set is not limited to sequentially acquiring the quality parameter of each candidate image in the object image set and determining the target image according to the quality parameters of all candidate images.
Taking object A as the target object, the process of determining the target image from the image set is not limited to that shown in FIG. 3. S302: acquire sample images containing integrity annotations of the key components of object A. Once the sample images are acquired, S304 is performed: train the key-component multitask model M, i.e., the image recognition model, with the sample images. Once training of the multitask model M is completed, S306 is performed: let K = 1, where K indicates the current polling round. S308: judge whether K is less than or equal to the total number of images in the image set. If so, S310 is executed: acquire the K-th image containing object A from the image set. Once the image is acquired, S312 is performed: recognize the integrity of each component in the image with M, obtaining the integrity parameter of each component of object A. S314: determine the quality parameter based on the integrity and importance of each component; after the integrity of the individual components is obtained, their importance is used to determine the quality parameter of the image. S316: K = K + 1.
When the current polling round completes, S308 to S316 are executed in a loop until K exceeds the total number of images in the image set. Once the quality parameter of every image in the image set has been acquired, S318 is performed: select the one or more images with the highest quality parameters as the target image(s) of object A.
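The polling procedure of FIG. 3 (S306 to S318) can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: `multitask_model` stands in for the trained key-component multitask model M, and the weighted-sum quality computation assumes the part-weight scheme described later in the text.

```python
def select_target_images(images, multitask_model, weights, top_n=1):
    """Poll every candidate image of the target object and return the best one(s).

    images          -- candidate frames containing the target object
    multitask_model -- callable(image) -> {component: integrity in [0, 1]}
    weights         -- {component: importance weight}
    """
    scored = []
    for image in images:                       # K = 1 .. total number of images
        integrity = multitask_model(image)     # S312: per-component integrity
        # S314: quality parameter from integrity and importance of each component
        quality = sum(weights[c] * integrity[c] for c in weights)
        scored.append((quality, image))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [image for _, image in scored[:top_n]]  # S318: highest-quality images
```

With `top_n=1` this returns the single best-shot frame; a larger `top_n` returns several, matching the "1 or more target images" of S318.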
In the embodiments of the application, each candidate image in an object image set containing a target object is subjected to integrity recognition by an image recognition model to obtain its quality parameter. The degree of integrity of each key part of the target object is recognized by a task sub-model within the image recognition model, yielding a quality parameter that indicates how complete the key parts of the target object are in the candidate image, and the candidate image whose quality parameter satisfies the quality condition is determined as the target image. Because each task sub-model recognizes one key part of the target object, the quality parameter reflects the overall integrity of the target object in the candidate image; selecting the target image by this quality parameter yields, from the image set, the image in which the target object is most complete. Using such a target image for recognizing the target object achieves the technical effect of improving the recognition success rate, and solves the technical problem of a low object-recognition success rate caused by inaccurate determination of the images used for object recognition.
As an optional implementation, as shown in fig. 4, recognizing each candidate image in the plurality of candidate images with the image recognition model to obtain the quality parameter corresponding to each candidate image includes:
the following processing is performed for each candidate image:
S402, determining the key region containing each key part in the candidate image;
S404, recognizing the key region of each key part with the task sub-model corresponding to that key part in the image recognition model, to obtain the part parameter corresponding to each key part, wherein the part parameter indicates the degree of integrity of the corresponding key part;
S406, obtaining the quality parameter according to the part parameters of the key parts.
Optionally, to facilitate each task sub-model's recognition of the degree of integrity of its corresponding key part, the image recognition model divides the candidate image and extracts the key region in which each key part is located. The region information of the extracted key region containing a key part is input into the task sub-model, so that the task sub-model recognizes the degree of integrity of the key part and outputs the part parameter corresponding to that key part.
Optionally, the key region corresponding to each key part is determined according to the position of the key part in the target object. When the region distance between adjacent key parts is smaller than a region threshold, a key region containing two or more key parts may serve as the key region of each of those key parts, and the task sub-models recognize the degree of integrity of each key part contained in that key region separately. Taking a vehicle as an example, the regions of the license plate and the vehicle logo are close together, so the image region containing both can serve as a single key region, which is input into the task sub-model corresponding to the license plate and the task sub-model corresponding to the logo to recognize the degree of integrity of the license plate and of the logo, respectively.
Optionally, the part parameter is not limited to a value indicating the degree of integrity of the key part. For example, the part parameter may be a value between 0 and 1, where 1 indicates that the key part is complete in the image and 0 indicates that the key part is entirely absent. The above is merely an example and does not limit the part parameter.
In the embodiments of the application, the degree of integrity of each key part is recognized by its task sub-model in the image recognition model to obtain a part parameter, and the part parameters of all key parts are used to obtain the quality parameter indicating the degree of integrity of the target object in the image. By splitting the target object into a plurality of key parts, recognizing the integrity of each key part with a task sub-model, and deriving the integrity of the target object from the integrity of its key parts, the accuracy of computing the target object's degree of integrity is improved.
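The per-part recognition flow of S402 to S406 can be sketched as a dispatch over per-part sub-models. The callables and dictionary shapes below are assumptions for illustration; the patent does not fix a concrete interface.

```python
def score_parts(candidate, detect_regions, submodels):
    """S402-S404: find each key region, run the matching task sub-model,
    and collect one part parameter (degree of integrity) per key part.

    detect_regions -- callable(image) -> {part_name: cropped key region}
    submodels      -- {part_name: callable(region) -> integrity in [0, 1]}
    """
    regions = detect_regions(candidate)          # S402: locate the key regions
    part_params = {}
    for part, submodel in submodels.items():     # S404: one sub-model per part
        part_params[part] = submodel(regions[part])
    return part_params                           # feeds the S406 weighting step
```

A shared region (e.g. license plate plus logo) is handled naturally here: `detect_regions` may return the same crop under both part names, and each sub-model scores only its own part.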
As an alternative embodiment, as shown in fig. 5, obtaining the part parameter of each key part includes:
S502, recognizing the integrity level of the key part contained in the region information of the key region;
S504, acquiring the grade parameter matching the integrity level;
S506, taking the grade parameter as the part parameter of the key part.
Optionally, the integrity level is not limited to preset integrity levels, with a matching grade parameter set for each integrity level. The grade parameter is not limited to a numeric type; converting the integrity level into a numeric part parameter quantifies the degree of integrity.
Optionally, each key part may be given different integrity levels according to its characteristics, and the integrity levels of different key parts may be the same or different. For example, when the target object is a vehicle, the integrity levels of the license plate may be set to three levels: complete, incomplete, and invisible. Complete indicates the license plate is visible and whole in the image; incomplete indicates it is visible but only partially shown; invisible indicates it is not shown at all. The integrity levels may also be set as complete (fully visible), mostly complete (more than 50% visible), partially visible (50% or less visible), and invisible (nothing visible). The integrity level may also be based on the area proportion of the visible portion; for example, ten levels may be set, with level one indicating a visible-area proportion of 0-10%, and so on. The above integrity levels are examples and are not intended to limit the integrity levels.
Optionally, the grade parameter matching each integrity level is not limited to a matching grade score set for each integrity level, so that the part parameter describing the key part is the grade score corresponding to its integrity level. A value range is set for the grade parameter, and a grade score within that range is set for each integrity level. Taking the value range 0-1 as an example, complete is not limited to corresponding to 1, incomplete to 0.5, and invisible to 0.
In the embodiments of the application, the integrity level of a key part is recognized by the task sub-model and converted into a grade parameter by matching, so that the degree of integrity of the key part is quantified, which facilitates obtaining the quality parameter of the target object from the degrees of integrity of its key parts.
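The level-to-score conversion of S502 to S506 amounts to a lookup table. The sketch below uses the example mapping given in the text (complete corresponds to 1, incomplete to 0.5, invisible to 0); the function and table names are illustrative.

```python
# Example grade scores from the text; any mapping into the 0-1 range would do.
GRADE_SCORES = {"complete": 1.0, "incomplete": 0.5, "invisible": 0.0}

def level_to_part_param(integrity_level):
    """S504-S506: convert a recognized integrity level into a numeric part parameter."""
    return GRADE_SCORES[integrity_level]
```

A finer scheme, such as the ten area-proportion levels mentioned above, would simply use a larger table or a formula over the level index.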
As an alternative embodiment, as shown in fig. 6, obtaining the quality parameter according to the part parameter of each key part includes:
S602, acquiring the part parameter of each key part output by each task sub-model;
S604, determining the part weight corresponding to each key part;
S606, performing a weighted operation on the part parameters according to the part weights to obtain the quality parameter.
Optionally, when the part parameters output by the task sub-models are acquired, the part weight corresponding to each key part is determined. The part weight is not limited to a weight set in advance for each key part, and the weight of each key part is not limited to being set according to the importance of that part's integrity to the target object as a whole.
Optionally, taking a vehicle as the target object, with key parts including the license plate, the logo, the front/rear window, the side windows, the left/right lamps, the roof, and the wheels, the computation of the quality parameter is not limited to the following formula (1):
obj_score = Σ_i (w_i × score_i)    (1)

wherein obj_score indicates the quality parameter, w_i indicates the part weight of the i-th key part, and score_i indicates the part parameter of the i-th key part.
In the embodiments of the application, setting a part weight for each key part adjusts how much each key part's integrity contributes to the target object's degree of integrity, so that the computed quality parameter better represents the overall integrity of the target object.
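A minimal sketch of the weighted operation of S606, assuming the quality parameter is the weight-normalized sum of part parameters. The normalization keeps the score in the 0-1 range but is an assumption: the text only states a weighted operation over part parameters and part weights.

```python
def quality_parameter(part_params, weights):
    """S606: weighted combination of part parameters into one quality parameter.

    Dividing by the total weight keeps the result in [0, 1] when every
    part parameter is in [0, 1]; this normalization is an assumption.
    """
    total = sum(weights.values())
    return sum(weights[p] * part_params[p] for p in weights) / total
```

For example, with a fully visible license plate (1.0) weighted 3 and a half-visible logo (0.5) weighted 1, the quality parameter is (3 + 0.5) / 4 = 0.875.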
Optionally, as shown in fig. 7, after obtaining the quality parameters corresponding to the candidate images, the method further includes:
s702, obtaining quality parameters of each candidate image in the object image set;
s704, comparing quality parameters;
s706, determining the target image meeting the quality condition according to the comparison result of the quality parameters.
Optionally, when the quality parameter of each candidate image in the object image set has been computed, the quality parameters of all candidate images are compared to determine the target image. Comparing the quality parameters is not limited to ranking the candidate images by quality parameter from largest to smallest to obtain a candidate image sequence, from which the target image in the object image set is determined.
As an optional embodiment, determining the candidate image whose quality parameter satisfies the quality condition as the target image includes: sorting the candidate images by the values of their quality parameters and determining the candidate images ranked at the designated order positions as target images; or determining the candidate images whose quality parameters exceed a quality parameter threshold as target images.
Optionally, after the quality parameter of each candidate image in the object image set and the candidate image sequence obtained by sorting the quality parameters are determined, a quality condition is determined. When the quality condition indicates a designated order bit, the candidate image located at that order bit in the candidate image sequence is determined as the target image. For example, if the quality condition indicates that the designated order bit is one, the first-ranked candidate image, i.e., the one with the largest quality parameter value, is taken as the target image. If the quality condition indicates that the designated order bits are one, two, and three, the first-, second-, and third-ranked candidate images, i.e., the three candidate images with the three largest quality parameter values, are taken as target images.
Alternatively, when the quality condition indicates a quality parameter threshold, every candidate image whose quality parameter exceeds the threshold is taken as a target image, and the number of target images is not limited. For example, if the quality parameter threshold is 0.9, all candidate images whose quality parameters exceed 0.9 are taken as target images.
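The two selection modes described above (designated order bits, and a quality parameter threshold) can be sketched as follows; the candidate image ids and values are illustrative:

```python
def select_by_rank(quality, k=1):
    """Return the k candidate ids with the largest quality parameters."""
    return sorted(quality, key=quality.get, reverse=True)[:k]

def select_by_threshold(quality, threshold=0.9):
    """Return every candidate id whose quality parameter exceeds the threshold."""
    return [cid for cid, q in quality.items() if q > threshold]

quality = {"img_a": 0.95, "img_b": 0.88, "img_c": 0.92}
select_by_rank(quality, k=1)         # ["img_a"]
select_by_threshold(quality, 0.9)    # ["img_a", "img_c"]
```

Rank-based selection always yields exactly k target images, while threshold-based selection may yield any number, including zero, which corresponds to the prompting case below.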
As an optional implementation, after obtaining the quality parameters corresponding to the candidate images, the method further includes: when the quality parameters of all candidate images fail to satisfy the quality condition, prompting that no target image has been determined in the object image set.
In the embodiment of the application, the images in the object image set are evaluated and screened for quality using the preset quality condition and the quality parameters, so as to obtain target images satisfying the quality condition. The target object is then identified using a target image whose integrity satisfies the quality condition, which improves the success rate and accuracy of identifying the target object.
As an alternative embodiment, as shown in fig. 8, before the capturing the object image set, the method further includes:
S802, acquiring a sample image set, wherein the sample image set comprises a plurality of sample images, the sample images comprise labeling labels for the key parts of a sample object, and the labeling labels comprise region labels of the key regions where the key parts are located and quality labels of the key parts;
S804, training an initial image recognition model by using a sample image set, wherein a region label is used for optimizing a key region of the initial image recognition model, and a quality label is used for optimizing a corresponding initial task sub-model in the initial image recognition model;
S806, when the determination accuracy of the key region is higher than a first threshold and the recognition accuracy of each initial task sub-model is higher than a second threshold, determining that the image recognition model comprising the plurality of task sub-models has been obtained.
Optionally, the initial image recognition model is trained with the sample images carrying labeling labels to obtain the image recognition model. The labeling labels in a sample image include a region label and a quality label. The region label indicates the division of the key region where a key part is located, and is used to judge whether that division is correct.
Optionally, the quality label indicates the integrity degree of a key part, and is used to judge whether the part parameter output for that key part is correct. Each initial task sub-model is trained using the region information of the identified key regions, so as to obtain the part parameters output by each initial task sub-model.
Optionally, whether the training of the initial image recognition model terminates is decided by the determination accuracy of the key regions and the recognition accuracy of the task sub-models. When the determination accuracy of the key regions is lower than the first threshold, or the recognition accuracy of any initial task sub-model is lower than the second threshold, the initial image recognition model continues to be optimized with the sample images, so as to improve the accuracy of the quality parameters it outputs.
In the embodiment of the application, in the training stage of the image recognition model, the division of key regions and the training of the task sub-models improve the accuracy of key-region division and key-part recognition, thereby ensuring the accuracy of the quality parameters output by the trained image recognition model.
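The termination check of step S806 can be sketched as below. The threshold values 0.95 and 0.9 are illustrative assumptions; the application only requires that both accuracies exceed their respective thresholds:

```python
def training_done(region_accuracy, submodel_accuracies,
                  first_threshold=0.95, second_threshold=0.9):
    """S806: stop training when the key-region determination accuracy and
    every initial task sub-model's recognition accuracy exceed their thresholds."""
    return (region_accuracy > first_threshold and
            all(acc > second_threshold for acc in submodel_accuracies))

training_done(0.97, [0.93, 0.95, 0.91])   # True: all accuracies above thresholds
training_done(0.97, [0.93, 0.85, 0.91])   # False: one sub-model below 0.9
```

Because `all(...)` is used, a single under-performing task sub-model keeps the whole model in training, matching the "any initial task sub-model" condition above.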
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
According to another aspect of the embodiment of the present invention, there is also provided an image determining apparatus for implementing the above image determining method. As shown in fig. 9, the apparatus includes:
an acquiring unit 902, configured to acquire an object image set, where the object image set includes a plurality of candidate images in which a target object is detected;
the identifying unit 904 is configured to identify each candidate image in the plurality of candidate images by using an image identifying model, so as to obtain a quality parameter corresponding to each candidate image, where the quality parameter is used to indicate the integrity degree of a key part of a target object in the corresponding candidate image, the image identifying model includes a plurality of task sub-models, and the task sub-models are used to identify the integrity degree of the corresponding key part;
a determining unit 906, configured to determine, as a target image, a candidate image corresponding to the quality parameter that satisfies the quality condition, where the target image is used to identify the target object.
Optionally, the identifying unit 904 processes each candidate image separately and includes:
the region module is used for determining key regions containing each key part in the candidate image;
the input module is used for identifying the key areas of each key part based on the task sub-model corresponding to each key part in the image identification model to obtain part parameters corresponding to each key part, wherein the part parameters are used for indicating the integrity degree of the corresponding key part;
And the part module is used for obtaining the quality parameters according to the part parameters of each key part.
Optionally, the area module includes:
the identification module, used for identifying the integrity level of the key part contained in the region information of the key region;
the matching module, used for acquiring the grade parameter matched with the integrity level;
and the grade module, used for taking the grade parameter as the part parameter of the key part.
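The grade-matching flow above (identify the integrity level of a key part, then take the matching grade parameter as its part parameter) can be sketched as a simple table lookup; the level names and grade values are assumptions for illustration:

```python
# Hypothetical integrity levels and their matched grade parameters.
GRADE_PARAMETERS = {
    "complete": 1.0,            # key part fully visible in the key region
    "partially_occluded": 0.5,  # key part partly blocked or truncated
    "missing": 0.0,             # key part not visible in the key region
}

def part_parameter(integrity_level):
    """Take the grade parameter matched with the integrity level as the part parameter."""
    return GRADE_PARAMETERS[integrity_level]
```

Discretizing integrity into a few levels keeps the per-part labels simple to annotate, while the later weighted sum still yields a continuous quality parameter for the whole object.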
Optionally, the part module includes:
the first acquisition module, used for acquiring the part parameters of the key parts output by each task sub-model;
the weight module, used for determining the part weight corresponding to each key part;
and the calculation module, used for performing a weighted operation on the part parameters according to the part weights to obtain the quality parameter.
Optionally, the determining unit 906 includes:
the first determining module, used for sorting the quality parameters of the candidate images based on their parameter values and determining the candidate images whose quality parameters are ranked at the designated order bits as target images;
and the second determining module, used for determining the candidate image whose quality parameter exceeds a quality parameter threshold as the target image.
Optionally, the image determining apparatus further includes a prompting unit, configured to prompt, after the quality parameters corresponding to the candidate images are obtained, that no target image has been determined in the object image set when the quality parameters of all candidate images fail to satisfy the quality condition.
Optionally, the image determining apparatus further includes a training unit, which operates before the object image set is acquired and includes:
the sample module is used for acquiring a sample image set, wherein the sample image set comprises a plurality of sample images, the sample images comprise labeling labels for key parts of a sample object, and the labeling labels comprise area labels of key areas where the key parts are located and quality labels of the key parts;
the training module is used for training the initial image recognition model by using the sample image set, wherein the region label is used for optimizing a key region of the initial image recognition model, and the quality label is used for optimizing a corresponding initial task sub-model in the initial image recognition model;
and the completion module is used for determining that the image recognition model comprising a plurality of task sub-models is acquired under the condition that the determination accuracy of the key area is higher than a first threshold value and the recognition accuracy of the initial task sub-model is higher than a second threshold value.
In the embodiment of the application, for an object image set containing a target object, each candidate image is passed through an image recognition model for integrity recognition, yielding a quality parameter for each candidate image. The task sub-models in the image recognition model recognize the integrity of each key part of the target object, so that the quality parameter indicates the integrity of the key parts of the target object in the candidate image. The candidate image whose quality parameter satisfies the quality condition is determined as the target image. In this way, a target image containing the target object with a higher degree of integrity is determined from the image set and used for recognition of the target object, which achieves the technical effect of improving the recognition success rate of the target object and solves the technical problem of a low object-recognition success rate caused by inaccurate determination of the image used for object recognition.
According to still another aspect of the embodiment of the present invention, there is also provided an electronic device for implementing the above-mentioned image determining method, which may be the terminal device or the server shown in fig. 1. The present embodiment is described taking the electronic device as a server as an example. As shown in fig. 10, the electronic device comprises a memory 1002 and a processor 1004, the memory 1002 having stored therein a computer program, the processor 1004 being arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, acquiring an object image set, wherein the object image set comprises a plurality of candidate images in which a target object is detected;
S2, identifying each candidate image in the plurality of candidate images by using an image recognition model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter is used for indicating the integrity degree of the key parts of the target object in the corresponding candidate image, the image recognition model comprises a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key parts;
S3, determining the candidate image corresponding to the quality parameter satisfying the quality condition as a target image, wherein the target image is used for identifying the target object.
Alternatively, it will be understood by those skilled in the art that the structure shown in fig. 10 is only schematic, and the electronic device may also be a terminal device such as a smartphone (e.g. an Android phone or an iOS phone), a tablet computer, a palm computer, or a mobile internet device (Mobile Internet Devices, MID). Fig. 10 does not limit the structure of the electronic device, which may include more or fewer components (e.g., network interfaces) than shown in fig. 10, or have a different configuration.
The memory 1002 may be configured to store software programs and modules, such as program instructions/modules corresponding to the image determining method and apparatus in the embodiment of the present invention, and the processor 1004 executes the software programs and modules stored in the memory 1002 to perform various functional applications and data processing, that is, implement the image determining method described above. The memory 1002 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 1002 may further include memory located remotely from the processor 1004, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1002 may be used for storing information such as a set of target images, quality conditions, and target images. As an example, as shown in fig. 10, the memory 1002 may include, but is not limited to, the acquisition unit 902, the recognition unit 904, and the determination unit 906 in the image determination device described above. In addition, other module units in the image determining apparatus may be included, but are not limited to, and are not described in detail in this example.
Optionally, the transmission device 1006 is configured to receive or transmit data via a network. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission means 1006 includes a network adapter (Network Interface Controller, NIC) that can be connected to other network devices and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 1006 is a Radio Frequency (RF) module for communicating with the internet wirelessly.
In addition, the electronic device further includes: a display 1008 for displaying the set of object images and the target object; and a connection bus 1010 for connecting the respective module parts in the above-described electronic apparatus.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting the plurality of nodes through a network communication. Among them, the nodes may form a Peer-To-Peer (P2P) network, and any type of computing device, such as a server, a terminal, etc., may become a node in the blockchain system by joining the Peer-To-Peer network.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the methods provided in various alternative implementations of the image determination aspects described above. Wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1, acquiring an object image set, wherein the object image set comprises a plurality of candidate images in which a target object is detected;
S2, identifying each candidate image in the plurality of candidate images by using an image recognition model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter is used for indicating the integrity degree of the key parts of the target object in the corresponding candidate image, the image recognition model comprises a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key parts;
S3, determining the candidate image corresponding to the quality parameter satisfying the quality condition as a target image, wherein the target image is used for identifying the target object.
Alternatively, in this embodiment, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by a program for instructing a terminal device to execute the steps, where the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units is merely a logical function division, and another division manner may be used in practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present invention and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the present invention.

Claims (9)

1. An image determining method, comprising:
acquiring an object image set, wherein the object image set comprises a plurality of candidate images for detecting a target object;
identifying each candidate image in the plurality of candidate images by utilizing an image identification model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter is used for indicating the integrity degree of a key part of the target object in the corresponding candidate image, the image identification model comprises a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key part;
Determining the candidate image corresponding to the quality parameter meeting the quality condition as a target image, wherein the target image is used for identifying the target object;
the identifying each candidate image in the plurality of candidate images by using the image identification model, and obtaining quality parameters corresponding to each candidate image includes: the following processing is performed for each candidate image: determining a key region containing each key part in the candidate image; identifying the key areas of the key parts based on task sub-models corresponding to the key parts in the image identification model to obtain part parameters corresponding to the key parts, wherein the part parameters are used for indicating the integrity degree of the corresponding key parts; obtaining the quality parameters according to the position parameters of the key positions;
the determining the key area of each key part in the candidate image comprises the following steps: taking the key areas containing two or more key parts as the key areas of the two or more key parts respectively under the condition that the area distance of the adjacent key parts is smaller than an area threshold value;
The identifying the key area of each key part based on the task sub-model corresponding to each key part in the image identification model comprises the following steps: and respectively identifying the integrity degree of the key parts contained in the key area by utilizing the task sub-models corresponding to the two or more key parts.
2. The method of claim 1, wherein the obtaining part parameters corresponding to each of the key parts comprises:
identifying the integrity level of the key part contained in the region information of the key region;
acquiring a grade parameter matched with the integrity level;
and taking the grade parameter as the part parameter of the key part.
3. The method of claim 1, wherein said deriving said quality parameter from the part parameters of each of said key parts comprises:
obtaining the part parameters of the key parts output by each task sub-model;
determining the part weight corresponding to each key part;
and performing a weighted operation on the part parameters according to the part weights to obtain the quality parameter.
4. The method according to claim 1, wherein the determining the candidate image corresponding to the quality parameter satisfying the quality condition as a target image includes:
sorting the quality parameters corresponding to the candidate images based on the parameter values of the quality parameters, and determining the candidate images whose quality parameters are ranked at designated order bits as the target images; or
determining the candidate image corresponding to the quality parameter exceeding a quality parameter threshold as the target image.
5. The method according to any one of claims 1 to 4, wherein after obtaining the quality parameter corresponding to each of the candidate images, the method further comprises:
and prompting that the target image is not determined in the object image set under the condition that the quality parameters of all the candidate images do not meet the quality conditions.
6. The method according to any one of claims 1 to 4, wherein prior to acquiring the set of object images, the method further comprises: acquiring a sample image set, wherein the sample image set comprises a plurality of sample images, the sample images comprise labeling labels of key parts of a sample object, and the labeling labels comprise area labels of key areas where the key parts are located and quality labels of the key parts;
Training an initial image recognition model by using the sample image set, wherein the region label is used for optimizing the key region of the initial image recognition model, and the quality label is used for optimizing a corresponding initial task sub-model in the initial image recognition model;
and determining that the image recognition model comprising a plurality of task sub-models is acquired under the condition that the determined accuracy rate of the key area is higher than a first threshold value and the recognition accuracy rate of the initial task sub-model is higher than a second threshold value.
7. An image determining apparatus, comprising:
an acquisition unit, configured to acquire an object image set, where the object image set includes a plurality of candidate images in which a target object is detected;
the image recognition unit is used for recognizing each candidate image in the plurality of candidate images by utilizing an image recognition model to obtain quality parameters corresponding to each candidate image, wherein the quality parameters are used for indicating the integrity degree of the key parts of the target object in the corresponding candidate images, the image recognition model is a neural network model comprising a plurality of task sub-models, and the task sub-models are used for recognizing the integrity degree of the corresponding key parts;
A determining unit, configured to determine the candidate image corresponding to the quality parameter that satisfies a quality condition as a target image, where the target image is used to identify the target object;
wherein the identification unit includes: the region module is used for determining key regions containing each key part in the candidate image; the input module is used for identifying the key areas of each key part based on the task sub-model corresponding to each key part in the image identification model to obtain part parameters corresponding to each key part, wherein the part parameters are used for indicating the integrity degree of the corresponding key part; the part module is used for obtaining quality parameters according to the part parameters of each key part;
the region module is used for determining a key region containing each key part in the candidate image through the following steps: taking the key areas containing two or more key parts as the key areas of the two or more key parts respectively under the condition that the area distance of the adjacent key parts is smaller than an area threshold value;
the input module is used for identifying the key areas of the key parts through the following steps: and respectively identifying the integrity degree of the key parts contained in the key area by utilizing the task sub-models corresponding to the two or more key parts.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program which, when run, performs the method of any one of claims 1 to 6.
9. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of the claims 1 to 6 by means of the computer program.
CN202110876241.2A 2021-07-30 2021-07-30 Image determining method and device, storage medium and electronic equipment Active CN113408496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110876241.2A CN113408496B (en) 2021-07-30 2021-07-30 Image determining method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110876241.2A CN113408496B (en) 2021-07-30 2021-07-30 Image determining method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113408496A CN113408496A (en) 2021-09-17
CN113408496B true CN113408496B (en) 2023-06-16

Family

ID=77688205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110876241.2A Active CN113408496B (en) 2021-07-30 2021-07-30 Image determining method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113408496B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142962A (en) * 2009-12-25 2011-08-03 佳能株式会社 Information processing apparatus, verification apparatus, and method of controlling the same
CN110378247A (en) * 2019-06-26 2019-10-25 腾讯科技(深圳)有限公司 Virtual objects recognition methods and device, storage medium and electronic device
CN110598562A (en) * 2019-08-15 2019-12-20 阿里巴巴集团控股有限公司 Vehicle image acquisition guiding method and device
CN110781733A (en) * 2019-09-17 2020-02-11 浙江大华技术股份有限公司 Image duplicate removal method, storage medium, network equipment and intelligent monitoring system
CN111860430A (en) * 2020-07-30 2020-10-30 浙江大华技术股份有限公司 Identification method and device of fighting behavior, storage medium and electronic device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9524426B2 (en) * 2014-03-19 2016-12-20 GM Global Technology Operations LLC Multi-view human detection using semi-exhaustive search
CN110490202B (en) * 2019-06-18 2021-05-25 腾讯科技(深圳)有限公司 Detection model training method and device, computer equipment and storage medium
CN110765913A (en) * 2019-10-15 2020-02-07 浙江大华技术股份有限公司 Human body target optimization method and device based on multiple evaluation indexes and storage medium
CN111881741A (en) * 2020-06-22 2020-11-03 浙江大华技术股份有限公司 License plate recognition method and device, computer equipment and computer-readable storage medium
CN111861998A (en) * 2020-06-24 2020-10-30 浙江大华技术股份有限公司 Human body image quality evaluation method, device and system and computer equipment
CN112749711B (en) * 2020-08-04 2023-08-25 腾讯科技(深圳)有限公司 Video acquisition method and device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142962A (en) * 2009-12-25 2011-08-03 佳能株式会社 Information processing apparatus, verification apparatus, and method of controlling the same
CN110378247A (en) * 2019-06-26 2019-10-25 腾讯科技(深圳)有限公司 Virtual objects recognition methods and device, storage medium and electronic device
CN110598562A (en) * 2019-08-15 2019-12-20 阿里巴巴集团控股有限公司 Vehicle image acquisition guiding method and device
CN110781733A (en) * 2019-09-17 2020-02-11 浙江大华技术股份有限公司 Image duplicate removal method, storage medium, network equipment and intelligent monitoring system
CN111860430A (en) * 2020-07-30 2020-10-30 浙江大华技术股份有限公司 Identification method and device of fighting behavior, storage medium and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Efficient moving object detection based on adaptive Gaussian mixture modeling; Liu Wei et al.; Journal of Image and Graphics; pp. 113-125 *

Also Published As

Publication number Publication date
CN113408496A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
CN109325964B (en) Face tracking method and device and terminal
CN112162930B (en) Control identification method, related device, equipment and storage medium
CN104094287A (en) A method, an apparatus and a computer software for context recognition
CN108579094B (en) User interface detection method, related device, system and storage medium
JP7036401B2 (en) Learning server, image collection support system for insufficient learning, and image estimation program for insufficient learning
CN106921969A (en) Terminal authenticity verification method, apparatus and system
CN111191507A (en) Safety early warning analysis method and system for smart community
CN110826646A (en) Robot vision testing method and device, storage medium and terminal equipment
CN114299546A (en) Method and device for identifying pet identity, storage medium and electronic equipment
CN113408496B (en) Image determining method and device, storage medium and electronic equipment
CN112257628A (en) Method, device and equipment for identifying identities of outdoor competition athletes
CN109919164A (en) The recognition methods of user interface object and device
CN111652158A (en) Target object detection method and device, storage medium and electronic device
CN113762382B (en) Model training and scene recognition method, device, equipment and medium
CN113674276B (en) Image quality difference scoring method and device, storage medium and electronic equipment
CN111079468A (en) Method and device for robot to recognize object
CN113408669B (en) Image determining method and device, storage medium and electronic device
CN110135519B (en) Image classification method and device
CN113936231A (en) Target identification method and device and electronic equipment
CN114332706A (en) Target event determination method and device, storage medium and electronic device
CN116977782A (en) Training method and related device for small sample detection model
CN113240822A (en) Automatic attendance checking method and device based on mobile terminal
CN114596501A (en) Image data processing method, storage medium, processor and system
CN113688726A (en) User behavior guiding method and device based on Internet of things data
CN116958732A (en) Training method and device of image recognition model, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant