CN116091971A - Target object matching method, device, computer equipment and storage medium - Google Patents

Target object matching method, device, computer equipment and storage medium

Info

Publication number
CN116091971A
CN116091971A
Authority
CN
China
Prior art keywords
candidate objects
candidate
target
target image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211725132.1A
Other languages
Chinese (zh)
Inventor
占晴
陆振善
李伟
马东星
周道利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211725132.1A priority Critical patent/CN116091971A/en
Publication of CN116091971A publication Critical patent/CN116091971A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a target object matching method, apparatus, computer device, and storage medium. The method comprises the following steps: acquiring a target image and determining candidate objects based on the target image; ranking the candidate objects based on the number and/or type of the candidate objects; and sequentially matching the candidate objects based on the ranking result to determine a target object. Because the method ranks candidates by their number and/or type instead of performing full-attribute analysis on every candidate object, it addresses the heavy computational load and low efficiency of target object matching in existing video surveillance.

Description

Target object matching method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image analysis technology, and in particular to a target object matching method, apparatus, computer device, and storage medium.
Background
In recent years, with the application and development of video surveillance systems, image analysis technology has played an increasingly important role in data analysis. When surveillance video of an area must be examined to search for a specified object, the longer the acquired video, the more image information the analysis must process; how to find a specified object in massive image data has therefore long been an important topic in the field of image analysis.
Conventional technology provides a target object retrieval method in which anomaly scores of target objects are computed and the object set is partitioned, and objects in the set are retrieved in order of their anomaly scores based on their feature information. However, evaluating an object's degree of suspicion from appearance features alone is objectively unreliable, and performing feature analysis on all extracted objects, especially when many objects appear in the same scene, consumes substantial computation and yields low retrieval efficiency.
The problems of heavy computation and low efficiency in target object matching in existing video surveillance thus remain to be solved.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a target object matching method, apparatus, computer device, and computer-readable storage medium capable of improving target object matching efficiency.
In a first aspect, the present embodiment provides a target object matching method, the method comprising: acquiring a target image and determining candidate objects based on the target image;
ranking the candidate objects based on the number and/or type of the candidate objects;
and sequentially matching the candidate objects based on the sorting result to determine a target object.
In some of these embodiments, ranking the candidate objects based on the number and/or type of the candidate objects comprises:
determining a scene type of the target image based on the number and/or type of the candidate objects;
the candidate objects are ordered based on the scene type.
In some of these embodiments, the determining the scene type of the target image based on the number and/or type of the candidate objects comprises:
if the number of the candidate objects is larger than a first preset threshold value, determining that the target image is an object dense scene;
if the number of the candidate objects is smaller than a second preset threshold, determining that the target image is an object sparse scene;
if the number of types of the candidate objects is larger than a third preset threshold, determining that the target image is a multi-type object scene;
the ranking the candidate objects based on the scene type includes:
if the target image is an object dense scene, sorting the candidate objects based on the area size of the candidate objects;
if the target image is an object sparse scene, sorting the candidate objects based on the distance between the candidate objects and the center point of the target image;
and if the target image is a multi-type object scene, sorting the candidate objects based on the types of the candidate objects.
In some embodiments, sequentially matching the candidate objects based on the ranking result to determine the target object comprises:
acquiring key information of a target object;
and sequentially matching the candidate objects based on the key information, and determining a target object.
In some embodiments, sequentially matching the candidate objects based on the ranking result to determine the target object further comprises:
and carrying out full attribute analysis on the target object and outputting complete attribute information of the target object.
In some of these embodiments, the determining a candidate object based on the target image comprises:
and identifying the target object through machine vision, and determining a candidate object.
In some of these embodiments, the acquiring the target image includes:
acquiring target video data;
determining a plurality of candidate images based on the target video data;
the target image is determined based on the temporal order of the candidate images.
In a second aspect, the present embodiment provides a target object matching apparatus, including:
the determining module is used for acquiring a target image and determining a candidate object based on the target image;
a ranking module for ranking the candidate objects based on the number and/or type of the candidate objects;
and the matching module is used for sequentially matching the candidate objects based on the sorting result and determining a target object.
In a third aspect, the present embodiment provides a computer device comprising a memory storing a computer program and a processor implementing the steps of any one of the methods described above when the processor executes the computer program.
In a fourth aspect, the present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of the preceding claims.
With the target object matching method, apparatus, computer device, and storage medium described above, a target image is acquired and candidate objects are determined from it; the candidate objects are ranked based on their number and/or type; and the candidates are matched sequentially according to the ranking result to determine a target object. Because the number and/or type of candidate objects in the target image affects how readily they can be recognized, feature analysis need not be performed on all candidates: ranking the candidates by number and/or type before feature analysis reduces computation and improves target object matching efficiency.
Drawings
FIG. 1 is an application environment diagram of a target object matching method in one embodiment;
FIG. 2 is a flow chart of a target object matching method in one embodiment;
FIG. 3 is a flow diagram of a step of ordering the candidates based on the number and/or type of the candidates in one embodiment;
FIG. 4 is a flow diagram of the step of determining the scene type of the target image based on the number and/or type of the candidate objects in one embodiment;
FIG. 5 is a flow diagram of a ranking step of the candidate objects based on the scene type in one embodiment;
FIG. 6 is a flowchart of a step of sequentially matching candidate objects based on a ranking result to determine a target object in one embodiment;
FIG. 7 is a flowchart of a target object matching method according to another embodiment;
FIG. 8 is a flow chart of a process for acquiring a target image in one embodiment;
FIG. 9 is a block diagram of a target object matching apparatus in one embodiment;
fig. 10 is an internal structural view of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The target object matching method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The terminal 102 acquires a target image and determines a candidate object based on the target image; ranking the candidate objects based on the number and/or type of the candidate objects; and sequentially matching the candidate objects based on the sorting result to determine a target object. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a target object matching method is provided, and the method is applied to the terminal 102 in fig. 1 for illustration, and includes the following steps:
step S100, a target image is acquired, and a candidate object is determined based on the target image.
The target image is an image to be processed extracted from video monitoring data, or an image obtained by capturing the image to be processed, wherein the target image comprises at least one candidate object, and the candidate object is a person or an object which is selected from the target image and possibly is the target object. When the image to be processed is extracted from the video monitoring data, the video frames serving as the image to be processed may be continuous or partially continuous, or may be selected according to a preset standard, for example, a suitable video frame may be selected according to the sharpness of the video frame. The video can be any suitable video which needs to be subjected to structural analysis, can be an original video acquired by an image acquisition device such as a camera, and can also be a video obtained after preprocessing.
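The sharpness-based frame selection mentioned above might be sketched as follows. The patent does not specify a sharpness measure; the variance-of-intensity proxy and all names here are illustrative assumptions.

```python
# Hypothetical sketch: pick the clearest frame as the target image,
# using grayscale pixel variance as a crude sharpness proxy.
# (The patent does not specify a sharpness measure; this is an assumption.)

def sharpness(frame):
    """Variance of pixel intensities; frame is a 2-D list of ints."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

def pick_clearest(frames):
    """Return the frame with the highest sharpness score."""
    return max(frames, key=sharpness)

blurry = [[128, 128], [128, 128]]   # uniform frame: zero variance
sharp = [[0, 255], [255, 0]]        # high-contrast frame: high variance
print(pick_clearest([blurry, sharp]) is sharp)  # True
```

A production system would more likely use a Laplacian-based measure on real image arrays; the selection logic stays the same.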
Determining a candidate object based on the target image refers to determining a candidate object to be analyzed from the target image. The candidate object may be determined by calibrating a coordinate frame of the candidate object in the target image, clipping the candidate object, or other determining manners for obtaining the candidate object image, which is not limited herein.
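Determining candidates by calibrated coordinate frames and clipping, as described above, can be sketched like this. The (x, y, w, h) box format and the list-of-lists image are assumptions for illustration.

```python
# Hypothetical sketch: determine candidate objects by cropping detector
# bounding boxes (x, y, w, h) out of the target image.

def crop_candidates(image, boxes):
    """Return one sub-image per bounding box; image is a 2-D list of pixels."""
    crops = []
    for (x, y, w, h) in boxes:
        crops.append([row[x:x + w] for row in image[y:y + h]])
    return crops

image = [[r * 10 + c for c in range(10)] for r in range(10)]  # toy 10x10 "image"
crops = crop_candidates(image, [(2, 3, 4, 2), (0, 0, 3, 3)])
print(len(crops), len(crops[0]), len(crops[0][0]))  # 2 2 4
```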
Step S200, sorting the candidate objects based on the number and/or type of the candidate objects.
Wherein the number of candidate objects is the number of people or objects in the target image that may be target objects. The types of the candidate objects can be set according to requirements. For example, when it is necessary to identify a vehicle and a pedestrian on a traffic road, the candidate object may be set as a type of a person, a vehicle, a non-motor vehicle, or the like according to the need.
Ranking the candidate objects based on their number means determining the ranking mode according to the number of candidate objects and then carrying out the ranking. The ranking mode includes ranking according to preset conditions, which can be set as required. Optionally, the ranking condition may be, but is not limited to, the position, image size, or color value of a candidate object; ranking conditions may also be determined from other basic features of the candidate objects, which are not detailed here.
And step S300, sequentially matching the candidate objects based on the sorting result, and determining a target object.
The matching of the candidate object means that the characteristics of the candidate object are identified, and the characteristics of the candidate object are matched with the characteristics of the target object, so as to determine whether the candidate object is the target object.
With the target object matching method above, candidate objects are determined from the target image, a ranking mode is chosen based on their number and/or type, the candidates are ranked, and matching proceeds sequentially through the ranking result. This reduces the image-data processing workload while moving the most promising candidates to the front, improving target object matching efficiency.
In one embodiment, as shown in fig. 3, the ranking the candidate objects based on the number and/or type of candidate objects includes:
step S210, determining a scene type of the target image based on the number and/or type of the candidate objects.
The scene refers to a monitoring area covered by the target image. Scene type refers to the classification taken of a scene according to the distribution of candidate objects in the current target image. The scene type can be set according to actual needs, so that the corresponding scene type is determined according to the number and/or the types of the candidate objects. For example, the determination of the scene type may be whether the number of the candidate objects is a specified value, or reaches a preset threshold, or falls within a preset range, or whether the type of the candidate objects meets a preset condition, or other conditions for determining the scene type according to the number and/or the type of the candidate objects, which is not limited herein.
Step S220, sorting the candidate objects based on the scene type.
Differences in scene type reflect differences in how the candidate objects are distributed, and different distributions present different recognition difficulties. Ranking the candidates by scene type therefore handles each distribution with a more targeted, more suitable ordering, placing the candidates with the most recognition value nearer the front of the ranking result.
According to the target object matching method, the same-picture target ordering strategy based on scene classification is used, the scene types are determined through the number and/or the types of the candidate objects, the candidate objects are ordered according to the scene types, and targeted ordering is performed according to the distribution condition of the candidate objects, so that the target objects are retrieved more quickly, and the technical effects of reducing the calculated amount and improving the target object matching efficiency are achieved.
In one embodiment, as shown in fig. 4, the determining the scene type of the target image based on the number and/or type of the candidate objects includes:
step S211, if the number of candidate objects is greater than a first preset threshold, determining that the target image is an object dense scene.
Step S212, if the number of candidate objects is smaller than a second preset threshold, determining that the target image is an object sparse scene.
Wherein the first preset threshold and the second preset threshold represent the number of candidate objects in the target image. The first preset threshold and the second preset threshold can be set according to actual needs. The first preset threshold value and the second preset threshold value can be equal or unequal, and meanwhile, the first preset threshold value is larger than or equal to the second preset threshold value, so that the situation that ordering is disordered due to the fact that the object dense scene and the object sparse scene are simultaneously judged in the same scene is avoided.
Step S213, if the number of types of the candidate objects is greater than a third preset threshold, determining that the target image is a multi-type object scene.
Wherein the third preset threshold represents the number of types covered by the candidate object in the target image. The third preset threshold may be set according to actual needs.
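The three threshold tests above might be sketched as follows; the threshold values, the dict-based candidate records, and the label names are illustrative assumptions, and (as discussed later in the text) more than one label can hold at once, so the sketch returns a set.

```python
# Hypothetical sketch of the scene-type decision: compare the candidate
# count and the number of candidate types against three preset thresholds.
# All thresholds and record formats are assumptions for illustration.

def classify_scene(candidates, t_dense=20, t_sparse=5, t_types=3):
    """Return the set of scene labels implied by the thresholds."""
    labels = set()
    if len(candidates) > t_dense:
        labels.add("object-dense")
    if len(candidates) < t_sparse:
        labels.add("object-sparse")
    if len({c["type"] for c in candidates}) > t_types:
        labels.add("multi-type")
    return labels

dense = [{"type": "vehicle"}] * 25
print(classify_scene(dense))  # {'object-dense'}
```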
As shown in fig. 5, the ranking the candidate objects based on the scene type includes:
in step S221, if the target image is an object dense scene, the candidate objects are ranked based on the area size of the candidate objects.
The size of the area of the candidate object refers to the number of pixels occupied by the candidate object in the target image, and the candidate object can be ranked from large to small by calculating the number of pixels occupied by the candidate object.
It will be appreciated that when the number of candidate objects exceeds the first preset threshold, some candidate images may be incomplete, i.e. candidates overlap and occlude one another, so a candidate may lose key feature information. For example, in a traffic-congestion scene, a vehicle in the surveillance image may be blocked by other vehicles; if the key information for matching is the license plate number, a candidate whose plate is occluded has lost its key feature information. A candidate with a larger area contains more pixels and more recognizable features, so its recognition result is more stable. Candidates lacking key feature information, by contrast, are unlikely to match the target object successfully yet still consume analysis time and computation, lowering target object matching efficiency.
Based on the above situation, the candidate objects with larger areas are ordered to the front, so that the candidate objects with more features can be matched in advance under the condition that the effective targets are not lost, the feature extraction and matching of the ineffective targets are reduced, the reliability of the target object matching result is improved, and the effect of improving the target matching efficiency is achieved.
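The dense-scene ordering just described, largest area first, might look like this; the bounding-box field names are assumptions.

```python
# Hypothetical sketch of dense-scene ordering: rank candidates by
# bounding-box pixel area, largest first, so feature-rich candidates
# are matched earlier.

def sort_by_area(candidates):
    return sorted(candidates, key=lambda c: c["w"] * c["h"], reverse=True)

cands = [{"id": "a", "w": 10, "h": 10},
         {"id": "b", "w": 40, "h": 30},
         {"id": "c", "w": 20, "h": 20}]
print([c["id"] for c in sort_by_area(cands)])  # ['b', 'c', 'a']
```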
Step S222, if the target image is an object sparse scene, sorting the candidate objects based on the distance between the candidate objects and the center point of the target image.
It will be appreciated that during image acquisition, the image formed by the candidate near the edge of the range detected by the image acquisition device may be stretched due to lens distortion inherent to the optical lens, resulting in a difference between the characteristics of the candidate in the target image and the characteristics in actual situations. Therefore, as the candidate object is closer to the center point of the target image, the imaging distortion ratio of the candidate image is lower, and the matching accuracy is higher. Furthermore, the closer the distance from the center point of the target image, the lower the likelihood that the candidate object will exceed the image boundary, and the more efficient the feature extraction of the candidate object.
The distance between the candidate object and the center point of the target image refers to the distance between the coordinate position of the candidate object in the target image and the center point of the target image. Alternatively, the coordinate positions may be in the form of coordinate points. The coordinate points can be set according to actual needs. Alternatively, the coordinate point may be determined based on a center point of the candidate object detection frame, or may be determined as a coordinate position of a point closest to the center point of the target image on the candidate object detection frame or the candidate object contour line, or may be determined according to a coordinate point of a specific feature of the candidate object, or may be other coordinate points determined based on the feature of the candidate object, which is not limited herein.
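Using the detection-box center as the coordinate point, one of the options above, the sparse-scene ordering might be sketched as follows; box format and names are assumptions.

```python
import math

# Hypothetical sketch of sparse-scene ordering: rank candidates by the
# distance from their detection-box center to the image center point,
# nearest first (less lens distortion, less boundary truncation).

def sort_by_center_distance(candidates, img_w, img_h):
    cx, cy = img_w / 2, img_h / 2
    def dist(c):
        bx, by = c["x"] + c["w"] / 2, c["y"] + c["h"] / 2
        return math.hypot(bx - cx, by - cy)
    return sorted(candidates, key=dist)

cands = [{"id": "edge", "x": 0, "y": 0, "w": 10, "h": 10},
         {"id": "mid", "x": 45, "y": 45, "w": 10, "h": 10}]
print([c["id"] for c in sort_by_center_distance(cands, 100, 100)])  # ['mid', 'edge']
```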
Step S223, if the target image is a multi-type object scene, sorting the candidate objects based on the types of the candidate objects.
The types of the candidate objects can be set according to actual needs, and meanwhile, the positions of the types of the candidate objects in the sorting can be set according to actual needs. For example, when the types of candidates include vehicles, pedestrians, and non-vehicles, the types of candidates may be ranked, e.g., the first rank is vehicles, the second rank is non-vehicles, and the third rank is pedestrians, so that when ranking the candidates based on the types of candidates, the candidates are ranked according to the order of vehicles, non-vehicles, and pedestrians and a ranking result is generated.
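The type-priority ordering in the example above, vehicles first, then non-motor vehicles, then pedestrians, can be sketched as follows; the priority table is the example's, not a fixed part of the method.

```python
# Hypothetical sketch of multi-type-scene ordering using a preset type
# priority. Unknown types are ranked last.

TYPE_PRIORITY = {"vehicle": 0, "non-motor vehicle": 1, "pedestrian": 2}

def sort_by_type(candidates):
    return sorted(candidates,
                  key=lambda c: TYPE_PRIORITY.get(c["type"], len(TYPE_PRIORITY)))

cands = [{"type": "pedestrian"}, {"type": "vehicle"}, {"type": "non-motor vehicle"}]
print([c["type"] for c in sort_by_type(cands)])
# ['vehicle', 'non-motor vehicle', 'pedestrian']
```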
Further, if the number and types of the candidate objects do not meet the ranges related to the first preset threshold, the second preset threshold and the third preset threshold, the sorting mode of the candidate objects can be set according to actual needs. Alternatively, one or more of the three sorting methods may be selected to sort the candidate objects, where the selecting method may be, but is not limited to, randomly selecting, and selecting according to a preset rule based on a distance relationship between the number/type of the candidate objects and the first preset threshold, the second preset threshold, and the third preset threshold. Alternatively, the ranking may be performed by the features of other candidate objects, for example, by the color value, brightness, etc. of the candidate objects, which is not limited herein.
Further, when the target image is simultaneously determined as an object dense scene and a multi-type object scene, or is simultaneously determined as an object sparse scene and a multi-type object scene, a unique ordering mode can be determined according to actual needs, and multiple ordering with priority can also be determined. For example, when the target image is simultaneously determined as an object-dense scene and a multi-type object scene, it may be determined whether to sort based on the size of the candidate object area or sort based on the candidate object type according to actual needs; or firstly sorting according to the area sizes of the candidate objects, and then sorting the types of the candidate objects with the same or similar area sizes; or sorting the candidate objects according to the types of the candidate objects, and sorting the candidate objects under the same type based on the area size.
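One of the combined orderings described in the preceding paragraph, type priority first, then area within each type, might look like this; the priority table and field names are again assumptions.

```python
# Hypothetical sketch of a combined ordering for a scene that is both
# object-dense and multi-type: sort by type priority, breaking ties
# within a type by bounding-box area (largest first).

TYPE_PRIORITY = {"vehicle": 0, "non-motor vehicle": 1, "pedestrian": 2}

def sort_by_type_then_area(candidates):
    return sorted(candidates,
                  key=lambda c: (TYPE_PRIORITY.get(c["type"], len(TYPE_PRIORITY)),
                                 -(c["w"] * c["h"])))

cands = [{"type": "pedestrian", "w": 50, "h": 50},
         {"type": "vehicle", "w": 10, "h": 10},
         {"type": "vehicle", "w": 30, "h": 30}]
print([(c["type"], c["w"]) for c in sort_by_type_then_area(cands)])
# [('vehicle', 30), ('vehicle', 10), ('pedestrian', 50)]
```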
With the target object matching method above, the scene type is determined from the number and types of candidate objects and a corresponding ranking mode is selected. The larger a candidate's area, the closer it is to the center point, and the more its type matches the target object's type, the greater the probability that it matches the target object successfully. Placing the candidate images with the most reference value at the front of the ranking reduces computation and limits the performance cost of invalid targets in multi-target scenes, improving target object matching efficiency.
In one embodiment, as shown in fig. 6, the sequentially matching the candidate objects based on the sorting result, and determining the target object includes:
step S310, obtaining key information of a target object;
the key information of the target object refers to part of information selected from all feature information of the target object in advance, and can be used for judging whether the candidate object is consistent with the target object. The key information of the target object can be set according to actual conditions, or can be obtained by performing feature analysis on the target object.
Step S320, sequentially matching the candidate objects based on the key information, and determining a target object.
The matching of the candidate objects based on the key information sequentially comprises the following steps: and identifying characteristic information corresponding to the key information in the candidate object, and comparing the characteristic information of the candidate object with the key information. By comparing the characteristic information with the key information, whether the candidate object has the characteristic information of the target object can be determined, so that the target object is determined.
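The sequential matching loop described above, walking the ranked candidates and stopping at the first one whose recognized features contain all the key information, can be sketched as follows; the attribute dictionaries are assumptions standing in for real feature-recognition output.

```python
# Hypothetical sketch of sequential key-information matching: return the
# first ranked candidate whose recognized attributes match every item of
# the target object's key information.

def match_first(ranked_candidates, key_info):
    for cand in ranked_candidates:
        if all(cand["attrs"].get(k) == v for k, v in key_info.items()):
            return cand
    return None  # no candidate matched the key information

ranked = [{"id": 1, "attrs": {"plate": "A123", "color": "red"}},
          {"id": 2, "attrs": {"plate": "B456", "color": "blue"}}]
target = match_first(ranked, {"plate": "B456"})
print(target["id"])  # 2
```

Because the candidates are pre-ranked, the loop tends to terminate early on the most promising candidates, which is the efficiency gain the method claims.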
With the target object matching method above, by comparing the key information of the target object with the feature information of the candidate objects, matching is first performed using preset key target attributes and full-attribute analysis is performed only afterwards. This reduces the number of targets requiring full-attribute analysis, enables fast matching between target and candidate objects, and improves target object matching efficiency.
In one embodiment, the matching the candidate objects sequentially based on the sorting result, and determining the target object further includes:
step S400, carrying out full attribute analysis on the target object and outputting complete attribute information of the target object.
The complete attribute information of the target object can be output in the form of characters, images or both characters and pictures. Optionally, if the complete attribute information includes text representation, the complete attribute information may be generated according to keywords related to the feature information of the target object; if the complete attribute information includes a representation in the form of a picture, the complete attribute information may be a plurality of target images including the target object, where the complete attribute information may include a label on the target object, or may be an image of the target object. The complete attribute information may also be other forms for representing the characteristic information of the target image, which is not limited herein.
Further, in the process of matching the target object against candidates from target images, when the number of target images is large, the amount of data the image analysis must process also grows. Moreover, even if a target image contains the target object, the object's imaging in that image may be poor and of no reference value. Since the target object in a single target image may have incomplete features, candidate matching can be performed on multiple target images; after several target objects are obtained, their full-attribute analysis results are compared, the target object with complete features is selected, and its complete attribute information is output.
According to the target object matching method, full attribute analysis of the target object yields its complete attribute information, which expands and completes the original key information. Accurate and complete target object attribute information is thus obtained, with higher reliability and availability, thereby improving both the integrity of the target object matching result and the efficiency with which that result can be used.
In one embodiment, the determining a candidate object based on the target image comprises:
and identifying the target object through machine vision, and determining a candidate object.
Machine vision here means collecting images of the monitored range through an image acquisition module and transmitting the images to be processed to an image processing system; the image processing system extracts the characteristics of the candidate objects through computation, thereby achieving automatic identification.
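As a rough illustration of this candidate-extraction step, the sketch below wraps an arbitrary object detector behind a simple function and keeps only confident detections. The detector interface, the field names, and the 0.5 confidence threshold are all assumptions for illustration, not something the application prescribes.

```python
def extract_candidates(frame, detector, min_score=0.5):
    """Run a detector over one frame and return candidate-object records.

    `detector` is assumed to be any callable returning tuples of
    (label, x, y, w, h, score); the score threshold is illustrative only.
    """
    return [
        {"type": label, "box": (x, y, w, h), "score": score}
        for label, x, y, w, h, score in detector(frame)
        if score >= min_score  # discard low-confidence detections
    ]
```

Any detection model (classical or neural) can be plugged in as `detector` as long as it honours this assumed tuple shape.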
According to the target object matching method, the target object is identified through machine vision, and automatic acquisition of candidate objects is achieved, so that the effect of improving target acquisition efficiency is achieved.
In one embodiment, as shown in fig. 7, the acquiring the target image includes:
step S1101, acquiring target video data;
the target video data may be any suitable video that needs to be subjected to structural analysis, and may be an original video acquired by an image acquisition device such as a camera, or may be a video obtained after preprocessing.
Step S1102, determining a plurality of candidate images based on the target video data;
the target video data is composed of a plurality of video frames, and the determination of a plurality of candidate images based on the target video data refers to the extraction of a plurality of video frame images from the target video data. Optionally, a plurality of video frame images possibly including the target object are extracted as candidate images based on the target video data.
Step S1103, determining the target image based on the temporal order of the candidate images.
Wherein determining the target image based on the temporal order of the candidate images includes sequentially determining a current target image based on the temporal order of the candidate images.
According to the target object matching method, after the candidate images are determined, the target images are determined sequentially in temporal order and matched against the candidate objects. Once the target object is determined, the subsequent candidate images no longer need to be matched, which reduces the amount of computation.
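The early-exit behaviour described above — walking the candidate images in temporal order and stopping as soon as a target object is found — can be sketched as follows; the function and parameter names are illustrative, not taken from the application.

```python
from typing import Callable, Iterable, Optional

def match_over_frames(frames: Iterable, find_target: Callable) -> Optional[object]:
    """Process candidate images in temporal order; stop at the first match."""
    for frame in frames:            # frames are assumed already time-ordered
        target = find_target(frame)
        if target is not None:      # target object determined:
            return target           # skip all subsequent candidate images
    return None
```

The saving is exactly the number of frames after the first match, which is what the paragraph above means by "reducing the calculated amount".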
The present application further provides a detailed embodiment for a clearer understanding of the technical solution of the present application.
As shown in fig. 8, the present embodiment provides a target object matching method, including:
Step 1, obtaining picture stream data captured by an image acquisition device, and transmitting part or all of the key attribute information obtained from the device's analysis to a database. Key attribute information includes, but is not limited to, the license plate number, license plate coordinates, and vehicle brand of a vehicle target.
Step 2, extracting the candidate objects from the target image, and acquiring information such as the candidate object types (including but not limited to people, motor vehicles, non-motor vehicles, and the like) and the coordinates of the candidate objects in the whole image.
Step 3, carrying out algorithmic detection on the scene characteristics of the target image, and judging the scene class to which the target image belongs. Scene classes can be categorized as: A. object-dense scenes; B. object-sparse scenes; C. multi-type object scenes.
Step 4, sorting the targets according to a certain strategy based on the scene characteristic analysis result of step 3. If the candidate objects in the scene are densely distributed, the scene is a class A scene, and all candidate objects in the target image are sorted by the pixel area they occupy.
Step 5, if the candidate objects in the scene are sparsely distributed, the scene is a class B scene, and all candidate objects in the target image are sorted by their distance from the center point of the target area.
Step 6, if the scene contains many types of candidate objects, the scene is a class C scene, and all candidate objects in the target image are sorted by candidate object type.
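Steps 3 to 6 can be sketched as a scene classifier plus one sorting strategy per class. The threshold values, the precedence of the three tests, the default for mid-range scenes, the image centre, and the type-priority order are all assumptions — the application states only the three conditions and the three sorting keys.

```python
def classify_scene(candidates, dense_thresh=20, sparse_thresh=5, type_thresh=3):
    """Map a candidate set to scene class A, B, or C (step 3).

    Threshold values and test precedence are illustrative assumptions.
    """
    n = len(candidates)
    n_types = len({c["type"] for c in candidates})
    if n > dense_thresh:
        return "A"           # object-dense scene
    if n < sparse_thresh:
        return "B"           # object-sparse scene
    if n_types > type_thresh:
        return "C"           # multi-type object scene
    return "B"               # mid-range fallback (assumption)

def sort_candidates(candidates, scene, center=(960, 540),
                    type_priority=("vehicle", "person", "non-motor")):
    """Order candidates by the scene-specific strategy of steps 4-6."""
    if scene == "A":         # dense: larger pixel area first
        return sorted(candidates, key=lambda c: c["w"] * c["h"], reverse=True)
    if scene == "B":         # sparse: nearer the target-area centre first
        cx, cy = center
        return sorted(candidates,
                      key=lambda c: (c["x"] - cx) ** 2 + (c["y"] - cy) ** 2)
    rank = {t: i for i, t in enumerate(type_priority)}  # class C: by type
    return sorted(candidates, key=lambda c: rank.get(c["type"], len(rank)))
```

Each candidate here is a dict with an assumed shape (`type`, centre coordinates `x`/`y`, size `w`/`h`); squared distance is used since only the ordering matters.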
Step 7, according to the order of the candidate objects in the target image and the input order of the target images, taking the candidate objects to be analyzed in batches and sending them to the OA algorithm to detect one or more pieces of key attribute information.
Step 8, judging, from the detected key attribute information of the candidate object, whether the candidate object matches the key information preset by the user. If so, proceed to step 9 and abandon the analysis of the other candidate objects in the target image. If not, return to step 7.
Step 9, if the key information matches, that is, the matched candidate object in the target image has been found, proceed to step 7; if the key information does not match, return to step 4 and select the next candidate object for further analysis.
Step 10, performing full attribute algorithm analysis on the matched target object, and outputting the complete attribute information of the optimal target object in the target image.
According to the target object matching algorithm provided by this embodiment, the target image at the current moment is obtained and the candidate objects are determined from it. The scene class of the target image is determined from the number and types of the candidate objects, and a different candidate-object sorting strategy is adopted for each scene class. The target object is obtained by sequentially comparing the preset key information with the characteristic information of the candidate objects, after which full attribute analysis is carried out on the target object to obtain its complete attribute information. A reasonable ordering of the candidate objects is thus achieved without characteristic analysis of all candidate objects in advance, the number of targets subjected to full attribute analysis is reduced, and the technical effects of reducing the amount of computation and improving the matching efficiency of the target object are achieved.
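Steps 1 to 10 above can be condensed into one control-flow sketch. The decomposition into pluggable stages and all names below are a hypothetical reading for illustration; the application does not prescribe this interface.

```python
def match_pipeline(images, detect, classify, sort, matches_key, full_attrs):
    """Steps 1-10 as one loop: first key-info match wins, then full analysis."""
    for image in images:                      # target images in input order
        candidates = detect(image)            # step 2: extract candidates
        scene = classify(candidates)          # step 3: scene class A/B/C
        for cand in sort(candidates, scene):  # steps 4-6: scene-aware order
            if matches_key(cand):             # steps 7-9: key-info match
                return full_attrs(cand)       # step 10: full attribute output
    return None                               # no candidate matched anywhere
```

Note that the expensive full-attribute stage runs at most once per call, which is the efficiency claim of the embodiment.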
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or sub-steps.
Based on the same inventive concept, an embodiment of the present application also provides a target object matching apparatus for implementing the target object matching method described above. The implementation of the solution provided by the apparatus is similar to that described for the method above, so for the specific limitations in the embodiments of the target object matching apparatus provided below, reference may be made to the limitations of the target object matching method above, which are not repeated here.
In one embodiment, as shown in fig. 9, there is provided a target object matching apparatus including: the device comprises a determining module, a sorting module and a matching module, wherein:
a determining module 100, configured to acquire a target image, and determine a candidate object based on the target image.
The determining module 100 is further configured to identify the target object through machine vision, and determine a candidate object.
The determining module 100 is further configured to:
acquiring target video data;
determining a plurality of candidate images based on the target video data;
the target image is determined based on the temporal order of the candidate images.
A ranking module 200, configured to rank the candidate objects based on the number and/or types of the candidate objects.
The sorting module 200 is further configured to:
determining a scene type of the target image based on the number and/or type of the candidate objects;
the candidate objects are ordered based on the scene type.
The sorting module 200 is further configured to:
if the number of the candidate objects is larger than a first preset threshold value, determining that the target image is an object dense scene;
if the number of the candidate objects is smaller than a second preset threshold, determining that the target image is an object sparse scene;
if the number of types of the candidate objects is larger than a third preset threshold, determining that the target image is a multi-type object scene;
if the target image is an object dense scene, sorting the candidate objects based on the area size of the candidate objects;
if the target image is an object sparse scene, sorting the candidate objects based on the distance between the candidate objects and the center point of the target image;
and if the target image is a multi-type object scene, sorting the candidate objects based on the types of the candidate objects.
And the matching module 300 is used for sequentially matching the candidate objects based on the sorting result to determine a target object.
The matching module 300 is further configured to:
acquiring key information of a target object;
and sequentially matching the candidate objects based on the key information, and determining a target object.
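One simple reading of "matching based on key information" is a subset comparison: a candidate matches when every preset key attribute (e.g. a license plate number) equals the corresponding detected attribute, and the first match in sorted order wins. The sketch below uses illustrative field names and assumes exact equality; fuzzy matching would be an equally valid reading.

```python
def matches_key_info(detected: dict, key_info: dict) -> bool:
    """True when every preset key attribute equals the detected value."""
    return all(detected.get(k) == v for k, v in key_info.items())

def first_match(sorted_candidates, key_info):
    """Walk candidates in sorted order; return the first key-info match."""
    for cand in sorted_candidates:
        if matches_key_info(cand, key_info):
            return cand
    return None
```

Because the candidates arrive pre-sorted by the scene-aware strategy, the first match is also the preferred one.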
In one embodiment, the target object matching apparatus further includes:
and the analysis module is used for carrying out full attribute analysis on the target object and outputting the complete attribute information of the target object.
The respective modules in the above target object matching apparatus may be implemented in whole or in part by software, by hardware, or by a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in the form of hardware, or may be stored in a memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be realized through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a target object matching method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, keys, a track ball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
step S100, a target image is acquired, and a candidate object is determined based on the target image.
Wherein the target image is an image to be processed that contains the target object; the target image contains one or more target objects.
step S200, sorting the candidate objects based on the number and/or type of the candidate objects.
And step S300, sequentially matching the candidate objects based on the sorting result, and determining a target object.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
step S100, a target image is acquired, and a candidate object is determined based on the target image.
Wherein the target image is an image to be processed that contains the target object; the target image contains one or more target objects.
step S200, sorting the candidate objects based on the number and/or type of the candidate objects.
And step S300, sequentially matching the candidate objects based on the sorting result, and determining a target object.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may perform the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration, and not limitation, RAM is available in various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above examples represent only a few embodiments of the present application, which are described in relative detail, but are not therefore to be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art could make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of matching a target object, the method comprising:
acquiring a target image and determining a candidate object based on the target image;
ranking the candidate objects based on the number and/or type of the candidate objects;
and sequentially matching the candidate objects based on the sorting result to determine a target object.
2. The method of claim 1, wherein the ordering the candidate objects based on the number and/or type of candidate objects comprises:
determining a scene type of the target image based on the number and/or type of the candidate objects;
the candidate objects are ordered based on the scene type.
3. The method of claim 2, wherein the determining the scene type of the target image based on the number and/or type of candidate objects comprises:
if the number of the candidate objects is larger than a first preset threshold value, determining that the target image is an object dense scene;
if the number of the candidate objects is smaller than a second preset threshold, determining that the target image is an object sparse scene;
if the number of types of the candidate objects is larger than a third preset threshold, determining that the target image is a multi-type object scene;
the ranking the candidate objects based on the scene type includes:
if the target image is an object dense scene, sorting the candidate objects based on the area size of the candidate objects;
if the target image is an object sparse scene, sorting the candidate objects based on the distance between the candidate objects and the center point of the target image;
and if the target image is a multi-type object scene, sorting the candidate objects based on the types of the candidate objects.
4. The method of claim 1, wherein the sequentially matching the candidate objects based on the ranking result, determining a target object comprises:
acquiring key information of a target object;
and sequentially matching the candidate objects based on the key information, and determining a target object.
5. The method of claim 4, wherein after the sequentially matching the candidate objects based on the ranking result and determining the target object, the method further comprises:
and carrying out full attribute analysis on the target object and outputting complete attribute information of the target object.
6. The method of claim 1, wherein the determining a candidate object based on the target image comprises:
and identifying the target object through machine vision, and determining a candidate object.
7. The method of claim 1, wherein the acquiring the target image comprises:
acquiring target video data;
determining a plurality of candidate images based on the target video data;
the target image is determined based on the temporal order of the candidate images.
8. A target object matching apparatus, the apparatus comprising:
the determining module is used for acquiring a target image and determining a candidate object based on the target image;
a ranking module for ranking the candidate objects based on the number and/or type of the candidate objects;
and the matching module is used for sequentially matching the candidate objects based on the sorting result and determining a target object.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202211725132.1A 2022-12-30 2022-12-30 Target object matching method, device, computer equipment and storage medium Pending CN116091971A (en)

Priority application: CN202211725132.1A — Target object matching method, device, computer equipment and storage medium — priority date 2022-12-30, filed 2022-12-30.
Publication: CN116091971A (en), published 2023-05-09. Family ID: 86213191. Country: CN.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination