CN111860559A - Image processing method, image processing device, electronic equipment and storage medium

Info

Publication number
CN111860559A
Authority
CN
China
Prior art keywords
image
frame
matching degree
candidate object
candidate
Legal status
Pending
Application number
CN201911404681.7A
Other languages
Chinese (zh)
Inventor
白冰
李心冉
邢腾飞
郑茂宗
顾阳
刘恒鑫
孟一平
许鹏飞
李连志
牛红太
陈凯
Current Assignee
Ditu Beijing Technology Co Ltd
Original Assignee
Ditu Beijing Technology Co Ltd
Application filed by Ditu Beijing Technology Co Ltd
Priority to CN201911404681.7A
Publication of CN111860559A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides an image processing method, an image processing device, electronic equipment and a storage medium. The method includes: acquiring multiple frame images in a video to be processed; determining a candidate object matching degree set among different frame images in the multiple frame images, where each candidate object matching degree in the set refers to the matching degree between one candidate object in one frame image and one candidate object in another frame image; selecting, according to the candidate object matching degree set, at least one target object whose corresponding matching degree meets a preset condition from the candidate objects; and for each target object in the at least one target object, selecting one frame of image from the different frame images in which the target object is located, and acquiring the feature information of the target object from the selected frame of image. The embodiment of the application can improve the utilization rate of equipment.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
After a camera captures a video within its field of view, each frame image in the video is input into a machine learning model for processing, to obtain the objects included in each frame image.
When a machine learning model processes each frame image in a video, moving objects and stationary objects in the video sequence are often processed simultaneously. When multiple stationary objects exist in the video sequence, the same stationary object may appear in different frame images. The machine learning model then processes each of the different images containing the same stationary object separately and obtains the related information of the stationary object from each of those images. This wastes the computing performance of the device and reduces the utilization rate of the device.
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, which are used to improve the utilization rate of the device.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a multi-frame image in a video to be processed;
determining a candidate object matching degree set among different frame images in the multi-frame image; each candidate object matching degree in the candidate object matching degree set refers to the matching degree between one candidate object in one frame image and one candidate object in the other frame image;
selecting, according to the candidate object matching degree set, at least one target object whose corresponding matching degree meets a preset condition from the candidate objects;
and aiming at each target object in the at least one target object, selecting one frame of image from different frame images of the target object, and acquiring the characteristic information of the target object from the selected frame of image.
In one embodiment, the matching degree includes an image matching degree and a distance matching degree, and selecting at least one target object whose corresponding matching degree meets a preset condition from candidate objects according to the candidate object matching degree set includes:
and determining the candidate object of which the corresponding image matching degree is greater than a first matching degree threshold value and the corresponding distance matching degree is greater than a second matching degree threshold value as the at least one target object.
In one embodiment, the distance matching degree is determined according to the following steps:
determining the overlapping area between the image position areas corresponding to one candidate object in one frame image and one candidate object in the other frame image;
determining the sum of the areas of image position areas corresponding to a candidate object in one frame image and a candidate object in the other frame image;
and determining the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image based on the overlapping area and the sum of the areas.
In one embodiment, determining a distance matching degree between one candidate object located in one frame image and one candidate object located in another frame image based on the overlapping area and the sum of the areas comprises:
taking the ratio of the overlapping area to the sum of the areas as the distance matching degree between a candidate object in one frame image and a candidate object in another frame image; or,
determining a difference between the sum of the areas and the overlap area;
and taking the ratio of the overlapping area to the difference value as the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image.
In one embodiment, the image matching degree is determined according to the following steps:
extracting the characteristic vector of the candidate object in each frame of image in the multi-frame images;
and determining the image matching degree of the candidate object in the image of one frame and the candidate object in the image of the other frame based on the feature vector of the candidate object in the image of one frame and the feature vector of the candidate object in the image of the other frame.
In one embodiment, selecting one frame of image from different frame of images in which the target object is located includes:
respectively determining the area of the target object in each frame of image in different frame images;
and determining the corresponding frame image with the largest area as the selected frame image.
In one embodiment, the method further comprises:
determining a non-target object which does not meet the preset condition;
and respectively extracting the characteristic information of the non-target object from different images where the non-target object is located.
In one embodiment, the method further comprises:
determining a first object with an image matching degree larger than a first matching degree threshold value and a distance matching degree smaller than or equal to a second matching degree threshold value from the non-target objects;
selecting one frame of image from different frame images of the first object, and determining the characteristic information of the first object from the selected frame of image.
In one embodiment, the method further comprises:
acquiring an information query request of a user side; the information query request carries an image corresponding to an object to be queried;
and generating a query result based on the image corresponding to the object to be queried and the pre-extracted characteristic information of each target object and non-target object.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring multi-frame images in a video to be processed;
the determining module is used for determining a candidate object matching degree set among different frame images in the multi-frame images; each candidate object matching degree in the candidate object matching degree set refers to the matching degree between one candidate object in one frame image and one candidate object in the other frame image;
the selecting module is used for selecting at least one target object of which the matching degree meets a preset condition from the candidate objects according to the candidate object matching degree set;
and the extraction module is used for selecting one frame of image from different frame images of the target object aiming at each target object in the at least one target object and acquiring the characteristic information of the target object from the selected frame of image.
In an embodiment, the matching degree includes an image matching degree and a distance matching degree, and the selecting module is configured to select at least one target object, of which the matching degree meets a preset condition, from the candidate objects according to the following steps:
determining the candidate object whose corresponding image matching degree is greater than a first matching degree threshold and whose corresponding distance matching degree is greater than a second matching degree threshold as the at least one target object.
In one embodiment, the determining module is configured to determine the distance matching degree according to the following steps:
determining the overlapping area between the image position areas corresponding to one candidate object in one frame image and one candidate object in the other frame image;
determining the sum of the areas of image position areas corresponding to a candidate object in one frame image and a candidate object in the other frame image;
and determining the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image based on the overlapping area and the sum of the areas.
In one embodiment, the determining module is configured to determine a distance matching degree between a candidate object located in one frame image and a candidate object located in another frame image according to the following steps:
taking the ratio of the overlapping area to the sum of the areas as the distance matching degree between a candidate object in one frame image and a candidate object in another frame image; or,
determining a difference between the sum of the areas and the overlapping area;
and taking the ratio of the overlapping area to the difference value as the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image.
In one embodiment, the determining module is configured to determine the image matching degree according to the following steps:
extracting the characteristic vector of the candidate object in each frame of image in the multi-frame images;
and determining the image matching degree of the candidate object in the image of one frame and the candidate object in the image of the other frame based on the feature vector of the candidate object in the image of one frame and the feature vector of the candidate object in the image of the other frame.
In one embodiment, the selecting module is configured to select one frame of image from different frame of images in which the target object is located according to the following steps:
respectively determining the area of the target object in each frame of image in different frame images;
and determining the corresponding frame image with the largest area as the selected frame image.
In one embodiment, the determining module is further configured to:
determining a non-target object which does not meet the preset condition;
The extraction module is further configured to:
and respectively extracting the characteristic information of the non-target object from different images where the non-target object is located.
In one embodiment, the determining module is further configured to:
determining a first object with an image matching degree larger than a first matching degree threshold value and a distance matching degree smaller than or equal to a second matching degree threshold value from the non-target objects;
the extraction module is further configured to:
selecting one frame of image from different frame images of the first object, and determining the characteristic information of the first object from the selected frame of image.
In one embodiment, the apparatus further comprises a generation module,
the acquisition module is further configured to:
acquiring an information query request of a user side; the information query request carries an image corresponding to an object to be queried;
the generation module is configured to generate a query result based on the image corresponding to the object to be queried and the pre-extracted feature information of each target object and non-target object.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a storage medium and a bus. The storage medium stores machine-readable instructions executable by the processor. When the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to perform the steps of the image processing method.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the steps of the image processing method.
The image processing method provided by the embodiment of the application acquires multiple frame images in a video to be processed and determines a candidate object matching degree set among different frame images in the multiple frame images, where each candidate object matching degree in the set is the matching degree between one candidate object in one frame image and one candidate object in another frame image. According to the candidate object matching degree set, at least one target object whose corresponding matching degree meets a preset condition is selected from the candidate objects. For each target object in the at least one target object, one frame of image is selected from the different frame images in which the target object is located, and the feature information of the target object is acquired from the selected frame of image. In this way, the feature information of the target object is extracted from only one frame of image containing the target object rather than from every image containing it, which improves the utilization rate of the device and also improves the efficiency of extracting the feature information of the target object.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 shows a first flowchart of an image processing method provided in an embodiment of the present application;
FIG. 2A is a diagram illustrating a first exemplary image showing candidate objects according to an embodiment of the present disclosure;
FIG. 2B is a second schematic diagram illustrating candidate objects displayed in an image according to an embodiment of the present disclosure;
Fig. 3 is a second flowchart illustrating an image processing method according to an embodiment of the present application;
fig. 4 is a third schematic flowchart illustrating an image processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an overlap region provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram illustrating an image processing apparatus according to an embodiment of the present application;
fig. 7 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
The image processing method of the embodiment of the application can be applied to terminal equipment (such as a server and the like) for image processing, and can also be applied to any other computing equipment with a processing function. In some embodiments, the server or computing device may include a processor. The processor may process information and/or data related to the service request to perform one or more of the functions described herein.
In the related art, when a machine learning model is used to process the images in a video, the model is preset with a maximum number of detectable objects. When the number of objects to be detected in the video exceeds this preset number, the machine learning model cannot identify the excess objects, which causes missed detections.
In view of this, the present application provides an image processing method: multiple frame images in a video to be processed are acquired, and a candidate object matching degree set among different frame images in the multiple frame images is determined, where each candidate object matching degree in the set is the matching degree between one candidate object in one frame image and one candidate object in another frame image; according to the candidate object matching degree set, at least one target object whose corresponding matching degree meets a preset condition is selected from the candidate objects; and for each target object in the at least one target object, one frame of image is selected from the different frame images in which the target object is located, and the feature information of the target object is acquired from the selected frame of image. In this way, the feature information of the target object is extracted from only one frame of image containing the target object rather than from every image containing it, which improves the utilization rate of the device and the efficiency of extracting the feature information.
An embodiment of the present application provides an image processing method, as shown in fig. 1, where the method is applied to a terminal device, and the method specifically includes the following steps:
S101, acquiring a plurality of frame images in a video to be processed.
The video to be processed can be captured by a camera device installed in a building or by a camera device installed on a signpost along a road. The building may be a teaching building, a shopping mall, an office building, etc., and the camera may be installed inside or outside the building, as determined by the actual situation.
The video to be processed comprises continuous multiple frame images, which form an image sequence, and each frame image in the video to be processed contains multiple candidate objects. The types of candidate objects in videos shot in different shooting scenes are generally different; the candidate objects may be human faces, animals, vehicles, etc. For example, the candidate objects in a video shot on a road may be vehicles and their license plate numbers, and the candidate objects in a video shot in a shopping mall may be human faces.
S102, determining a candidate object matching degree set among different frame images in the multi-frame image; each candidate matching degree in the candidate matching degree set refers to a matching degree between one candidate object in one frame image and one candidate object in the other frame image.
The candidate object matching degree set includes the images, the candidate objects contained in the images, and the candidate object matching degrees between the candidate objects, where each candidate object matching degree is the matching degree between objects in two different images. The candidate objects in the set may be stationary objects or moving objects. A stationary object is the same candidate object contained in different images whose positions in those images are close (refer to fig. 2A); a moving object is the same candidate object contained in different images whose positions in those images differ greatly (refer to fig. 2B).
The candidate object matching degree includes an image matching degree and a distance matching degree. The image matching degree represents the similarity between one candidate object in one frame image and one candidate object in another frame image; the higher the similarity, the higher the probability that the two candidate objects are the same object. The distance matching degree represents the distance between one candidate object in one frame image and one candidate object in another frame image; the closer the distance, the closer the positions of the two candidate objects in their corresponding images and the higher the probability that the candidate object is a stationary object; otherwise, the candidate object is a moving object.
The determination processes of the distance matching degree and the image matching degree are described below, respectively.
As shown in fig. 3, the image matching degree is determined according to the following steps:
S301, extracting the feature vector of the candidate object in each frame of image in the multi-frame image.
S302, determining the image matching degree of a candidate object in one frame image and a candidate object in the other frame image based on the feature vector of the candidate object in the one frame image and the feature vector of the candidate object in the other frame image.
In S301, feature vectors of candidate objects are used to characterize the structure and semantics of the candidate objects, one feature vector for each candidate object.
In S302, similarity measures such as the Euclidean distance, Manhattan distance, Mahalanobis distance and Hamming distance may be used to determine the image matching degree (also referred to as the similarity). In the specific implementation process, the Euclidean distance is used to determine the image matching degree between one candidate object and another candidate object.
In a specific implementation process, after the multiple frame images are obtained, each frame image is input into a preset feature vector extraction model to obtain the feature vector of each candidate object contained in the image. The feature vector extraction model may be a convolutional neural network model, a long short-term memory (LSTM) network model, or the like.
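As a rough illustration of this step, the sketch below wires up such a feature vector extraction model in Python. The ResNet-18 backbone, the 224×224 input size and the normalization constants are assumptions for illustration; the patent does not specify which model is used.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Hypothetical stand-in for the "preset feature vector extraction model":
# a ResNet-18 backbone whose classification head is replaced by an
# identity, leaving a 512-dimensional embedding per candidate object.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),                 # PIL image -> float tensor in [0, 1]
    T.Resize((224, 224)),         # assumed input size
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_feature(candidate_crop):
    """candidate_crop: a PIL image of one candidate object cut out of a
    frame. Returns the candidate's feature vector as a 1-D numpy array."""
    batch = preprocess(candidate_crop).unsqueeze(0)  # add a batch dimension
    return backbone(batch).squeeze(0).numpy()
```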
Each time the feature vectors of the candidate objects contained in one frame image are obtained, that frame image is taken as the current frame image. For each candidate object in the current frame image, the image matching degree between the candidate object and each candidate object in the previous frame image is calculated using the feature vector of the candidate object, the feature vectors of the candidate objects in the previous frame image and the Euclidean distance formula; that is, for each candidate object in the current frame image, its feature vector and the feature vector of each candidate object in the previous frame image are input into the Euclidean distance formula to obtain the image matching degree between the candidate object and each candidate object in the previous frame image. The process of calculating the image matching degree with the Euclidean distance formula is not described here; image matching degrees between candidate objects are calculated starting from the second frame image.
For example, take the first frame image and the second frame image in the video to be processed. The first frame image contains three candidate objects, A1, A2 and A3, and the second frame image contains two candidate objects, B1 and B2. After the feature vectors of A1, A2, A3, B1 and B2 are extracted, for B1 in the second frame image, the feature vector of B1 and the feature vectors of A1, A2 and A3 are respectively input into the Euclidean distance formula to calculate the image matching degrees between B1 and each of A1, A2 and A3; similarly, for B2 in the second frame image, the feature vector of B2 and the feature vectors of A1, A2 and A3 are respectively input into the Euclidean distance formula to calculate the image matching degrees between B2 and each of A1, A2 and A3.
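A minimal sketch of this pairwise computation follows. The patent only states that the Euclidean distance formula is used; mapping the distance d to a matching degree 1/(1 + d), so that a higher value means a closer match, is our assumption.

```python
import numpy as np

def image_matching_degree(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Image matching degree derived from the Euclidean distance between
    two candidate feature vectors; the 1/(1 + d) mapping is assumed."""
    d = float(np.linalg.norm(feat_a - feat_b))
    return 1.0 / (1.0 + d)

def match_frames(curr_feats: dict, prev_feats: dict) -> dict:
    """Image matching degree for every (current, previous) candidate pair,
    e.g. curr_feats = {"B1": ..., "B2": ...} against
    prev_feats = {"A1": ..., "A2": ..., "A3": ...}."""
    return {
        (c, p): image_matching_degree(fc, fp)
        for c, fc in curr_feats.items()
        for p, fp in prev_feats.items()
    }
```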
When determining the image matching degree, the image matching degree may be determined each time the feature vectors of the candidate objects contained in one frame image are extracted, or it may be determined after the feature vectors of the candidate objects in every frame image of the video to be processed have been extracted; the application does not limit the execution order.
After the feature vectors of the candidate objects in each frame image are extracted, the image matching degree may be determined using the feature vectors of the candidate objects contained in the current frame image and the feature vectors of the candidate objects contained in the previous frame image, or using the feature vectors of the candidate objects contained in the current frame image and the feature vectors of the candidate objects contained in each frame image after the current frame image; this can be determined according to the actual situation. When the matching degree is determined using the current frame image and every other frame image, this may be performed after the feature vectors of the candidate objects in each frame image of the video to be processed have been extracted. The process of determining the image matching degree between every two candidate objects is similar to the above process and is not described in detail here.
When the image matching degree is determined using the current frame image and the previous frame image, a candidate object in the current frame image may be absent from the previous frame image yet appear in an image before the previous frame image. Therefore, after the image matching degrees corresponding to the candidate objects in the current frame image are obtained, if the image matching degree corresponding to a first candidate object is less than a first set matching degree threshold, the image matching degrees between the first candidate object and the candidate objects contained in the images before the previous frame image are calculated using the feature vector of the first candidate object and the feature vectors of those candidate objects. The process of calculating the image matching degree is not repeated. In this way, when a candidate object is occluded in an earlier image but reappears in a later image, its image matching degree can still be obtained, which reduces duplicated candidate objects in the resulting candidate object matching degree set.
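The search back through earlier frames might look like the sketch below, reusing image_matching_degree from the sketch above. Walking the earlier frames from newest to oldest and returning the first sufficiently close match are assumptions about details the text leaves open.

```python
def rematch_earlier_frames(first_feat, earlier_frames, alpha0):
    """first_feat: feature vector of the first candidate object, whose best
    match against the previous frame fell below the threshold alpha0.
    earlier_frames: per-frame dicts of candidate feature vectors for the
    frames before the previous frame, in chronological order.
    Returns the key of the matched earlier candidate, or None."""
    for feats in reversed(earlier_frames):   # newest earlier frame first
        scores = {k: image_matching_degree(first_feat, f)
                  for k, f in feats.items()}
        if scores:
            best = max(scores, key=scores.get)
            if scores[best] > alpha0:
                return best   # occluded object has reappeared: reuse it
    return None               # genuinely new candidate object
```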
As shown in fig. 4, the distance matching degree is determined according to the following steps:
S401, determining the overlapping area between the image position areas corresponding to one candidate object in one frame image and one candidate object in the other frame image;
S402, determining the sum of the areas of the image position areas corresponding to a candidate object in one frame image and a candidate object in the other frame image;
S403, determining the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image based on the overlapping area and the sum of the areas.
In S401, the image position area is used to characterize the position of the candidate object in the image and the area occupied by the candidate object at the corresponding position of the image; the overlapping area is used to characterize the area of the region where the image position regions corresponding to the two candidate objects in the two images overlap, and the overlapping region can refer to fig. 5.
In S402, the sum of the areas may be a sum of the area of the image position region of one candidate in one frame image and the area of the image position region of one candidate in another frame image.
In S403, the following two implementation manners are specifically included:
taking the ratio of the overlapping area to the sum of the areas as the distance matching degree between a candidate object in one frame image and a candidate object in another frame image; or,
determining a difference between the sum of the areas and the overlap area;
and taking the ratio of the overlapping area to the difference value as the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image.
In a specific implementation process, when the distance matching degree is calculated, it may be calculated for every two frame images in the video to be processed, or only for the images corresponding to candidate objects whose image matching degrees have been obtained; the application does not limit this. To reduce the amount of calculation, the images corresponding to candidate objects with obtained image matching degrees are generally selected. These images are taken as a first image set, and the first images in the set are sorted in ascending order of their time points in the video to be processed.
When the distance matching degree is calculated using the images in the first image set, two adjacent frames of first images may be selected, or the distance matching degree between every two frame images in the first image set may be calculated, as determined by the actual situation. In the implementation process, considering that multiple images may contain the same candidate object, calculating the distance matching degree between every two frame images involves a relatively large amount of calculation. To reduce the amount of calculation, the distance matching degree between candidate objects may be calculated using two adjacent frame images. The calculation process of the distance matching degree is described below.
The first way of determining the distance matching degree is as follows:
After the image matching degrees are obtained, two adjacent frame images are selected from the first image set. The overlapping area between the image position areas corresponding to one candidate object in one first image and one candidate object in the other first image is calculated, the sum of the areas of those image position areas is calculated, the difference between the sum of the areas and the overlapping area is determined, and the ratio of the overlapping area to the difference is taken as the distance matching degree between the candidate object in one first image and the candidate object in the other first image.
For example, the two first images are T1 and T2, image T1 contains candidate object a1 and image T2 contains candidate object a2. The area of the image position region corresponding to candidate object a1 in image T1 is S1, the area of the image position region corresponding to candidate object a2 in image T2 is S2, and the overlapping area between candidate object a1 and candidate object a2 is S0. The distance matching degree between candidate object a1 and candidate object a2 is then S0/(S1 + S2 - S0).
The second way of determining the distance matching degree is as follows:
After the image matching degrees are obtained, two adjacent frame images are selected from the first image set. The overlapping area between the image position areas corresponding to one candidate object in one first image and one candidate object in the other first image is calculated, the sum of the areas of those image position areas is calculated, and the ratio of the overlapping area to the sum of the areas is taken as the distance matching degree between the candidate object in one first image and the candidate object in the other first image.
For example, the two first images are T1 and T2, image T1 contains candidate object a1 and image T2 contains candidate object a2. The area of the image position region corresponding to candidate object a1 in image T1 is S1, the area of the image position region corresponding to candidate object a2 in image T2 is S2, and the overlapping area between candidate object a1 and candidate object a2 is S0. The distance matching degree between candidate object a1 and candidate object a2 is then S0/(S1 + S2).
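Both variants reduce to simple area arithmetic. The sketch below assumes axis-aligned bounding boxes in (x1, y1, x2, y2) form, which the patent does not mandate; note that the first variant, S0/(S1 + S2 - S0), is the familiar intersection-over-union.

```python
def box_area(box):
    """Area of an axis-aligned box (x1, y1, x2, y2)."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def overlap_area(box1, box2):
    """Overlapping area S0 of the two image position areas."""
    w = min(box1[2], box2[2]) - max(box1[0], box2[0])
    h = min(box1[3], box2[3]) - max(box1[1], box2[1])
    return max(0.0, w) * max(0.0, h)

def distance_matching_degree(box1, box2, over_union=True):
    """S0 / (S1 + S2 - S0) when over_union is True, else S0 / (S1 + S2)."""
    s0 = overlap_area(box1, box2)
    s_sum = box_area(box1) + box_area(box2)   # S1 + S2
    denom = s_sum - s0 if over_union else s_sum
    return s0 / denom if denom > 0 else 0.0
```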
S103, selecting at least one target object with the matching degree meeting preset conditions from the candidate objects according to the candidate object matching degree set;
The preset condition is that the image matching degree is greater than a first matching degree threshold and the distance matching degree is greater than a second matching degree threshold. The first matching degree threshold and the second matching degree threshold are generally preset and can be set according to the actual situation.
The target object meeting the preset condition is generally a stationary object.
In S103, a candidate object whose corresponding image matching degree is greater than the first matching degree threshold and whose corresponding distance matching degree is greater than the second matching degree threshold may be determined as the at least one target object.
In a specific implementation process, after the candidate object matching degree set is determined, at least one candidate object of which the corresponding image matching degree is greater than a first matching degree threshold value and the corresponding distance matching degree is greater than a second matching degree threshold value is selected from the candidate object matching degree set, and the selected at least one candidate object is determined to be at least one target object.
For example, suppose the image matching degrees included in the candidate object matching degree set are α1, α2, α3, …, α10, the distance matching degrees are β1, β2, β3, …, β10, the first matching degree threshold is α0 and the second matching degree threshold is β0. If α1, α2 and α3 are all greater than α0, β1, β2 and β3 are all greater than the second matching degree threshold β0, and the candidate objects corresponding to α1, α2, α3, β1, β2 and β3 are A1, A2, A3 and A4, then A1, A2, A3 and A4 are determined as the target objects.
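A sketch of this selection step, assuming the two matching degrees are stored per candidate pair:

```python
def select_target_objects(pair_scores, alpha0, beta0):
    """pair_scores: {(obj_a, obj_b): (image_match, distance_match)}.
    A pair exceeding both thresholds is taken to be the same stationary
    object seen in two frames, so both members become target objects."""
    targets = set()
    for (obj_a, obj_b), (alpha, beta) in pair_scores.items():
        if alpha > alpha0 and beta > beta0:
            targets.update((obj_a, obj_b))
    return targets
```

In the example above, three pairs pass both thresholds, and the four objects they involve, A1 to A4, become the target objects.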
S104, aiming at each target object in the at least one target object, selecting a frame of image from different frame images of the target object, and acquiring the characteristic information of the target object from the selected frame of image.
Here, the feature information includes the image block corresponding to the target object, the attribute information of the target object, the position in the video to be processed of the image containing the target object, the acquisition time of the video to be processed, the position information of the image pickup device that acquired the video to be processed, and the like. The image block corresponding to the target object may be an image containing the target object. The attribute information of the target object includes the shape, color, structure and similar properties of the target object. The position in the video to be processed of the image containing the target object is the time point of that image in the video. The acquisition time of the video to be processed is the time at which the image pickup device captured the video. The position information of the image pickup device that acquired the video to be processed may be the GPS information of the location of the image pickup device.
The target object meeting the preset condition is a candidate object whose position remains essentially unchanged in the video to be processed; a candidate object that only shakes slightly in the video to be processed is still considered a stationary object.
When one frame image is selected from the different frame images in which a target object is located, any frame can in principle be chosen from the different images containing the target object. However, to improve the accuracy of the extracted feature information of the target object, and considering that the closer the target object is to the image pickup device, the larger the captured target object and the higher the definition of the image containing it, the image in which the target object occupies the largest area is selected. That is, one frame image is selected from the different images as follows:
determining the area of the target object in each frame image in different frame images;
and determining the corresponding frame image with the largest area as the selected frame image.
Specifically, the area is the area of the region occupied by the target object in the image.
In a specific implementation process, after at least one target object is obtained, different frame images (at least two frame images) containing the target object are determined for each target object, the areas of the target object in each frame image in the different frame images are respectively determined, and the frame image corresponding to the largest area is determined as the selected frame image.
For example, take target object A. Suppose the images containing target object A are T1, T2, T3 and T4, and the areas of target object A in T1, T2, T3 and T4 are S1, S2, S3 and S4 respectively. If S3 is the largest area, then image T3 is the selected frame image.
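The frame selection is a single arg-max over areas, sketched below with box_area from the earlier sketch (the bounding-box representation is again an assumption):

```python
def select_frame(detections):
    """detections: {frame_id: box} for one target object across the frames
    in which it appears. Returns the frame id whose image position area
    is the largest, e.g. "T3" when S3 is the largest of S1..S4."""
    return max(detections, key=lambda fid: box_area(detections[fid]))
```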
After the one frame image is determined, the feature information of the target object is extracted from it, either by a machine learning model or by an information extraction algorithm; this is not limited in the application.
After the candidate objects meeting the preset condition are determined as target objects, the candidate objects not meeting the preset condition may be determined as non-target objects, and the feature information of each non-target object may be extracted from the images in which it is located. The feature information of a non-target object includes the image block corresponding to the non-target object, the attribute information of the non-target object, the position in the video to be processed of the image containing the non-target object, the acquisition time of the video to be processed, the position information of the image pickup device that acquired the video to be processed, and the like.
A non-target object is generally a moving object. Its image matching degree is less than or equal to the first matching degree threshold and its distance matching degree is greater than the second matching degree threshold, or its image matching degree is less than or equal to the first matching degree threshold and its distance matching degree is less than the second matching degree threshold.
A non-target object may be contained in one frame image or in multiple frame images. A candidate object whose image matching degree is less than or equal to the first matching degree threshold and whose distance matching degree is greater than the second matching degree threshold is contained in multiple frame images of the video to be processed; a candidate object whose image matching degree is less than or equal to the first matching degree threshold and whose distance matching degree is less than the second matching degree threshold is contained in only one frame image.
For the case that the non-target object is contained in one frame image, the feature information of the non-target object is extracted from that frame image, either by a machine learning model or by an information extraction algorithm; this is not limited in the application.
For the case that the non-target object is contained in multiple frame images, a first object whose image matching degree is greater than the first matching degree threshold and whose distance matching degree is less than or equal to the second matching degree threshold is determined from the non-target objects; one frame image is selected from the different frame images in which the first object is located, and the feature information of the first object is determined from the selected frame image.
For a non-target object (moving object) contained in multiple frame images, the feature information of the non-target object can be extracted separately from each of the different frame images in which it is located; when the non-target object is a license plate or a human face, any frame image can be selected from those different frame images.
When one frame image is selected from the different frame images in which a non-target object is located, consider that in a multi-frame sequence containing a moving object, the candidate object generally moves from far to near toward the image pickup device, and the closer the candidate object is to the image pickup device, the clearer it is captured; that is, the larger the area of the non-target object in the image, the higher the accuracy of the extracted feature information. Therefore, when selecting one frame image from the different frame images containing the non-target object, the area of the non-target object contained in each frame image is determined first, and the frame image with the largest area is determined as the image from which the feature information is finally extracted, so that the accuracy of the extracted feature information of the non-target object (the first object) is higher. For the extraction process of the feature information, refer to the extraction process of the feature information of the target object.
When the image in which the first object has the largest area is taken as the image from which the feature information is finally extracted, that image may contain only part of the first object; that is, the first object may be incomplete in that image. To ensure the integrity of the first object, the area and the image matching degree are combined when selecting the frame image: the image with the larger area is selected from the two images with the highest image matching degrees, and the finally selected frame image is taken as the image from which the feature information is extracted.
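A sketch of this combined rule follows, reusing box_area from the earlier sketch; representing each appearance of the first object as (frame_id, box, image_matching_degree) is bookkeeping we assume for illustration.

```python
def select_frame_for_first_object(appearances):
    """appearances: list of (frame_id, box, image_matching_degree) tuples
    for the first object. Keep the two frames with the highest image
    matching degree (the most likely to show the complete object), then
    return the one whose image position area is larger."""
    top_two = sorted(appearances, key=lambda a: a[2], reverse=True)[:2]
    return max(top_two, key=lambda a: box_area(a[1]))[0]
```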
After the feature information of the target objects and the feature information of the non-target objects are extracted in the terminal device, they can be stored in a data table or a database of the terminal device, so that the feature information in the database and the features in the data table can be used for searching for suspects, lost children, lost vehicles and the like, as determined by the actual situation.
An application scenario of the feature information is introduced below:
acquiring an information query request of a user side; the information query request carries an image corresponding to an object to be queried;
And generating a query result based on the image corresponding to the object to be queried and the pre-extracted characteristic information of each target object and non-target object.
Here, the object to be queried may be a suspect, a missing person, a lost vehicle or the like; the object to be queried differs according to the application scenario.
In the specific implementation process, after the image of the object to be queried is obtained from the user side, an object matching the object to be queried is determined from the target objects and non-target objects using the image of the object to be queried and the pre-extracted image blocks of each target object and each non-target object. That is, the image features of the object to be queried are compared with the image features of the target objects and non-target objects in the database; if the image features of a target object or non-target object in the database are the same as those of the object to be queried, a matched object is determined for the object to be queried. Further, the feature information corresponding to the matched object is extracted from the database, the extracted feature information is taken as the final query result, and the query result is fed back to the user side. In this way, the user side can quickly locate the positions where the object to be queried appeared at historical time points, and the related information of the object to be queried is provided to the user side.
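The query flow then reduces to a feature lookup over the stored records, as in the sketch below, which reuses image_matching_degree from the earlier sketch; the similarity threshold and the shape of the stored records are assumptions for illustration.

```python
def answer_query(query_feat, feature_db, threshold=0.9):
    """feature_db: {object_id: (feature_vector, feature_info)}, where
    feature_info carries the stored image block, attribute information,
    time point in the video and camera GPS. Returns the feature_info of
    every stored object whose features match the queried image."""
    return [
        info
        for obj_id, (feat, info) in feature_db.items()
        if image_matching_degree(query_feat, feat) >= threshold
    ]
```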
The image processing method provided by the embodiment of the application acquires multiple frame images in a video to be processed and determines a candidate object matching degree set among different frame images in the multiple frame images, where each candidate object matching degree in the set is the matching degree between one candidate object in one frame image and one candidate object in another frame image. According to the candidate object matching degree set, at least one target object whose corresponding matching degree meets a preset condition is selected from the candidate objects. For each target object in the at least one target object, one frame of image is selected from the different frame images in which the target object is located, and the feature information of the target object is acquired from the selected frame of image. In this way, the feature information of the target object is extracted from only one frame of image containing the target object rather than from every image containing it, which improves the utilization rate of the device and also improves the efficiency of extracting the feature information of the target object.
Based on the same inventive concept, an image processing apparatus corresponding to the image processing method is also provided in the embodiments of the present application, and since the principle of the apparatus in the embodiments of the present application for solving the problem is similar to the image processing method described above in the embodiments of the present application, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
An embodiment of the present application provides an image processing apparatus, as shown in fig. 6, the apparatus including:
the acquiring module 61 is configured to acquire a plurality of frames of images in a video to be processed;
a determining module 62, configured to determine a set of candidate object matching degrees between different frame images in the multiple frame images; each candidate object matching degree in the candidate object matching degree set refers to the matching degree between one candidate object in one frame image and one candidate object in the other frame image;
a selecting module 63, configured to select, according to the candidate object matching degree set, at least one target object whose corresponding matching degree meets a preset condition from the candidate objects;
the extracting module 64 is configured to, for each target object of the at least one target object, select one frame of image from different frame images in which the target object is located, and obtain feature information of the target object from the selected frame of image.
In an embodiment, the matching degrees include an image matching degree and a distance matching degree, and the selecting module 63 is configured to select at least one target object, of which the matching degree meets a preset condition, from the candidate objects according to the following steps:
determining the candidate object whose corresponding image matching degree is greater than a first matching degree threshold and whose corresponding distance matching degree is greater than a second matching degree threshold as the at least one target object.
In one embodiment, the determining module 62 is configured to determine the distance matching degree according to the following steps:
determining the overlapping area between the image position areas corresponding to one candidate object in one frame image and one candidate object in the other frame image;
determining the sum of the areas of image position areas corresponding to a candidate object in one frame image and a candidate object in the other frame image;
and determining the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image based on the overlapping area and the sum of the areas.
In one embodiment, the determining module 62 is configured to determine a distance matching degree between a candidate object located in one frame image and a candidate object located in another frame image according to the following steps:
taking the ratio of the overlapping area to the sum of the areas as the distance matching degree between a candidate object in one frame image and a candidate object in the other frame image; or, alternatively,
determining a difference between the sum of the areas and the overlapping area;
and taking the ratio of the overlapping area to the difference value as the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image.
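Both alternatives may be sketched as follows, under the assumption of this illustration that each image position area is an axis-aligned box (x1, y1, x2, y2); note that the second alternative is the familiar intersection-over-union ratio:

    def distance_matching_degree(box_a, box_b, variant="ratio_to_difference"):
        """Distance matching degree between two candidates, each described by
        the box (x1, y1, x2, y2) of its image position area."""
        # Overlapping area between the two image position areas.
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        overlap = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        # Sum of the areas of the two image position areas.
        area_sum = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
                    + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]))
        if area_sum == 0:
            return 0.0
        if variant == "ratio_to_sum":
            return overlap / area_sum            # first alternative
        return overlap / (area_sum - overlap)    # second alternative (IoU)

Under this box assumption the first ratio lies in [0, 0.5] and the second in [0, 1], which may be worth bearing in mind when choosing the second matching degree threshold.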
In one embodiment, the determining module 62 is configured to determine the image matching degree according to the following steps:
extracting the feature vector of the candidate object in each frame of image in the multi-frame images;
and determining the image matching degree of the candidate object in the image of one frame and the candidate object in the image of the other frame based on the feature vector of the candidate object in the image of one frame and the feature vector of the candidate object in the image of the other frame.
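A minimal sketch, assuming cosine similarity as the comparison between the two feature vectors (the disclosure does not fix a particular similarity measure):

    import numpy as np  # assumption of this sketch: NumPy is available

    def image_matching_degree(vector_a, vector_b):
        """Image matching degree of two candidates from their feature vectors,
        computed here as cosine similarity (an illustrative choice)."""
        a = np.asarray(vector_a, dtype=float)
        b = np.asarray(vector_b, dtype=float)
        denominator = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denominator) if denominator else 0.0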
In one embodiment, the selecting module 63 is configured to select one frame of image from different frame of images in which the target object is located according to the following steps:
respectively determining the area of the target object in each frame of image in different frame images;
and determining the corresponding frame image with the largest area as the selected frame image.
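For example, given the area of one target object's image position area in each frame where it appears (the dictionary layout is an assumption of this illustration):

    def select_frame_by_area(areas_by_frame):
        """Given {frame index: area of the target object's image position area
        in that frame}, return the frame index with the largest area."""
        return max(areas_by_frame, key=areas_by_frame.get)

    # e.g. select_frame_by_area({3: 1200, 7: 2048, 9: 640}) returns 7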
In one embodiment, the determining module 62 is further configured to:
determining a non-target object which does not meet the preset condition;
The extraction module 64 is further configured to:
and extracting the characteristic information of the non-target object from the image in which the non-target object is positioned.
In one embodiment, the determining module 62 is further configured to:
determining a first object with an image matching degree larger than a first matching degree threshold value and a distance matching degree smaller than or equal to a second matching degree threshold value from the non-target objects;
the extraction module 64 is further configured to:
selecting one frame of image from different frame images of the first object, and determining the characteristic information of the first object from the selected frame of image.
In one embodiment, the apparatus further comprises a generation module 65;
the obtaining module 61 is further configured to:
acquiring an information query request of a user side; the information query request carries an image corresponding to an object to be queried;
the generation module 65 is configured to generate a query result based on the image corresponding to the object to be queried and the pre-extracted characteristic information of each target object and non-target object.
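As a non-limiting sketch of this query flow, reusing the image_matching_degree sketch above; the layout of the feature store and the top-k cut-off are assumptions of the illustration:

    def generate_query_result(query_vector, feature_store, top_k=5):
        """Rank the pre-extracted characteristic information (feature vectors)
        of each target object and non-target object against the feature vector
        of the object to be queried, returning the best-matching identifiers."""
        scored = sorted(
            ((image_matching_degree(query_vector, vector), object_id)
             for object_id, vector in feature_store.items()),
            key=lambda pair: pair[0], reverse=True)
        return [object_id for _, object_id in scored[:top_k]]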
An embodiment of the present application further provides an electronic device 70, as shown in fig. 7, which is a schematic structural diagram of the electronic device 70 provided in the embodiment of the present application, and which includes: a processor 71, a memory 72, and a bus 73. The memory 72 stores machine-readable instructions executable by the processor 71 (for example, the execution instructions corresponding to the obtaining module 61, the determining module 62, the selecting module 63, and the extracting module 64 in the apparatus in fig. 6). When the electronic device 70 operates, the processor 71 and the memory 72 communicate via the bus 73, and the processor 71 executes the following processes:
acquiring a multi-frame image in a video to be processed;
determining a candidate object matching degree set among different frame images in the multi-frame image; each candidate object matching degree in the candidate object matching degree set refers to the matching degree between one candidate object in one frame image and one candidate object in the other frame image;
selecting, according to the candidate object matching degree set, at least one target object whose corresponding matching degree meets a preset condition from the candidate objects;
and, for each target object in the at least one target object, selecting one frame of image from the different frame images in which the target object is located, and acquiring the characteristic information of the target object from the selected frame of image.
In a possible implementation manner, in the instructions executed by the processor 71, the matching degrees include an image matching degree and a distance matching degree, and according to the candidate object matching degree set, selecting at least one target object from the candidate objects, where the corresponding matching degree meets a preset condition, includes:
and determining the candidate object of which the corresponding image matching degree is greater than a first matching degree threshold value and the corresponding distance matching degree is greater than a second matching degree threshold value as the at least one target object.
In one possible embodiment, the processor 71 executes instructions that determine the distance matching degree according to the following steps:
determining the overlapping area between the image position areas corresponding to one candidate object in one frame image and one candidate object in the other frame image;
determining the sum of the areas of image position areas corresponding to a candidate object in one frame image and a candidate object in the other frame image;
and determining the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image based on the sum of the overlapping area and the area.
In a possible implementation, the processor 71 executes instructions for determining the distance matching degree between a candidate object located in one frame image and a candidate object located in another frame image based on the overlapping area and the sum of the areas, including:
taking the ratio of the overlapping area to the sum of the areas as the distance matching degree between a candidate object in one frame image and a candidate object in the other frame image; or, alternatively,
determining a difference between the sum of the areas and the overlapping area;
and taking the ratio of the overlapping area to the difference value as the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image.
In one possible embodiment, processor 71 executes instructions that determine the image matching degree according to the following steps:
extracting the feature vector of the candidate object in each frame of image in the multi-frame images;
and determining the image matching degree of the candidate object in the image of one frame and the candidate object in the image of the other frame based on the feature vector of the candidate object in the image of one frame and the feature vector of the candidate object in the image of the other frame.
In one possible embodiment, the instructions executed by the processor 71 for selecting one frame of image from the different frame images in which the target object is located include:
respectively determining the area of the target object in each frame of image in different frame images;
and determining the corresponding frame image with the largest area as the selected frame image.
In a possible implementation, the instructions executed by the processor 71 further include:
determining a non-target object which does not meet the preset condition;
and extracting the characteristic information of the non-target object from the image in which the non-target object is positioned.
In one possible embodiment, processor 71 executes instructions to determine, from the non-target objects, a first object whose image matching degree is greater than a first matching degree threshold and whose distance matching degree is less than or equal to a second matching degree threshold;
selecting one frame of image from different frame images of the first object, and determining the characteristic information of the first object from the selected frame of image.
In a possible implementation, the instructions executed by the processor 71 further include:
acquiring an information query request of a user side; the information query request carries an image corresponding to an object to be queried;
and generating a query result based on the image corresponding to the object to be queried and the pre-extracted characteristic information of each target object and non-target object.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the image processing method described above are executed.
Specifically, the storage medium can be a general-purpose storage medium, such as a removable disk, a hard disk, and the like. When the computer program on the storage medium is executed, the above-mentioned image processing method can be performed, so as to solve the problem of low equipment utilization in the prior art: a multi-frame image in a video to be processed is acquired, and a candidate object matching degree set among different frame images in the multi-frame image is determined; according to the candidate object matching degree set, at least one target object whose corresponding matching degree meets a preset condition is selected from the candidate objects; and for each target object in the at least one target object, one frame of image is selected from the different frame images in which the target object is located, and the characteristic information of the target object is acquired from the selected frame of image. In this way, the characteristic information of the target object is extracted from only one frame of image containing the target object, rather than from every image containing the target object, which improves the utilization rate of the equipment and the efficiency of extracting the characteristic information of the target object.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to corresponding processes in the method embodiments, and are not described in detail in this application. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and there may be other divisions in actual implementation, and for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some communication interfaces, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units; they may be located in one place, or may be distributed over a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application, or the portion thereof contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art can easily conceive within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. An image processing method, characterized in that the method comprises:
acquiring a multi-frame image in a video to be processed;
determining a candidate object matching degree set among different frame images in the multi-frame image; each candidate object matching degree in the candidate object matching degree set refers to the matching degree between one candidate object in one frame image and one candidate object in the other frame image;
selecting, according to the candidate object matching degree set, at least one target object whose corresponding matching degree meets a preset condition from the candidate objects;
and, for each target object in the at least one target object, selecting one frame of image from the different frame images in which the target object is located, and acquiring the characteristic information of the target object from the selected frame of image.
2. The image processing method according to claim 1, wherein the matching degree includes an image matching degree and a distance matching degree, and selecting at least one target object whose matching degree meets a preset condition from the candidate objects according to the candidate object matching degree set includes:
and determining the candidate object of which the corresponding image matching degree is greater than a first matching degree threshold value and the corresponding distance matching degree is greater than a second matching degree threshold value as the at least one target object.
3. The image processing method of claim 2, wherein the distance matching degree is determined according to the following steps:
determining the overlapping area between the image position areas corresponding to one candidate object in one frame image and one candidate object in the other frame image;
determining the sum of the areas of image position areas corresponding to a candidate object in one frame image and a candidate object in the other frame image;
and determining the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image based on the overlapping area and the sum of the areas.
4. The image processing method of claim 3, wherein determining the distance matching degree between a candidate object in one frame image and a candidate object in another frame image based on the overlapping area and the sum of the areas comprises:
taking the ratio of the overlapping area to the sum of the areas as the distance matching degree between a candidate object in one frame image and a candidate object in the other frame image; or, alternatively,
Determining a difference between the sum of the areas and the overlap area;
and taking the ratio of the overlapping area to the difference value as the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image.
5. The image processing method of claim 2, wherein the image matching degree is determined according to the following steps:
extracting the feature vector of the candidate object in each frame of image in the multi-frame images;
and determining the image matching degree of the candidate object in the image of one frame and the candidate object in the image of the other frame based on the feature vector of the candidate object in the image of one frame and the feature vector of the candidate object in the image of the other frame.
6. The image processing method of claim 1, wherein selecting one frame of image from different frames of images in which the target object is located comprises:
respectively determining the area of the target object in each frame of image in different frame images;
and determining the corresponding frame image with the largest area as the selected frame image.
7. The image processing method according to claim 1, further comprising:
determining a non-target object which does not meet the preset condition;
and extracting the characteristic information of the non-target object from the image in which the non-target object is positioned.
8. The image processing method according to claim 7, further comprising:
determining a first object with an image matching degree larger than a first matching degree threshold value and a distance matching degree smaller than or equal to a second matching degree threshold value from the non-target objects;
selecting one frame of image from different frame images of the first object, and determining the characteristic information of the first object from the selected frame of image.
9. The image processing method according to claim 7, further comprising:
acquiring an information query request of a user side; the information query request carries an image corresponding to an object to be queried;
and generating a query result based on the image corresponding to the object to be queried and the pre-extracted characteristic information of each target object and non-target object.
10. An image processing apparatus, characterized by comprising:
the acquisition module is used for acquiring multi-frame images in a video to be processed;
the determining module is used for determining a candidate object matching degree set among different frame images in the multi-frame images; each candidate object matching degree in the candidate object matching degree set refers to the matching degree between one candidate object in one frame image and one candidate object in the other frame image;
the selecting module is used for selecting, according to the candidate object matching degree set, at least one target object whose corresponding matching degree meets a preset condition from the candidate objects;
and the extraction module is used for, for each target object in the at least one target object, selecting one frame of image from the different frame images in which the target object is located, and acquiring the characteristic information of the target object from the selected frame of image.
11. The image processing apparatus according to claim 10, wherein the matching degree includes an image matching degree and a distance matching degree, and the selecting module is configured to select at least one target object from the candidate objects, where the matching degree meets a preset condition, according to the following steps:
and determining the candidate object of which the corresponding image matching degree is greater than a first matching degree threshold value and the corresponding distance matching degree is greater than a second matching degree threshold value as the at least one target object.
12. The image processing apparatus of claim 11, wherein the determining module is configured to determine the distance matching degree according to the following steps:
determining the overlapping area between the image position areas corresponding to one candidate object in one frame image and one candidate object in the other frame image;
determining the sum of the areas of image position areas corresponding to a candidate object in one frame image and a candidate object in the other frame image;
and determining the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image based on the overlapping area and the sum of the areas.
13. The image processing apparatus of claim 12, wherein the determining module is configured to determine the distance matching degree between a candidate object located in one frame of image and a candidate object located in another frame of image according to the following steps:
taking the ratio of the overlapping area to the sum of the areas as the distance matching degree between a candidate object in one frame image and a candidate object in the other frame image; or, alternatively,
determining a difference between the sum of the areas and the overlap area;
and taking the ratio of the overlapping area to the difference value as the distance matching degree between one candidate object in one frame image and one candidate object in the other frame image.
14. The image processing apparatus of claim 11, wherein the determining module is configured to determine the image matching degree according to:
extracting the feature vector of the candidate object in each frame of image in the multi-frame images;
and determining the image matching degree of the candidate object in the image of one frame and the candidate object in the image of the other frame based on the feature vector of the candidate object in the image of one frame and the feature vector of the candidate object in the image of the other frame.
15. The image processing apparatus as claimed in claim 10, wherein the selecting module is configured to select one frame of image from different frames of image in which the target object is located according to the following steps:
respectively determining the area of the target object in each frame of image in different frame images;
and determining the corresponding frame image with the largest area as the selected frame image.
16. The image processing apparatus of claim 10, wherein the determination module is further to:
determining a non-target object which does not meet the preset condition;
the extraction module is further configured to:
and extracting the characteristic information of the non-target object from the image in which the non-target object is positioned.
17. The image processing apparatus of claim 16, wherein the determination module is further to:
determining a first object with an image matching degree larger than a first matching degree threshold value and a distance matching degree smaller than or equal to a second matching degree threshold value from the non-target objects;
the extraction module is further configured to:
selecting one frame of image from different frame images of the first object, and determining the characteristic information of the first object from the selected frame of image.
18. The image processing apparatus according to claim 16, further comprising a generation module, wherein:
the acquisition module is further configured to:
acquiring an information query request of a user side; the information query request carries an image corresponding to an object to be queried;
the generation module is configured to generate a query result based on the image corresponding to the object to be queried and the pre-extracted characteristic information of each target object and non-target object.
19. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the image processing method according to any one of claims 1 to 9.
20. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to one of claims 1 to 9.
CN201911404681.7A 2019-12-31 2019-12-31 Image processing method, image processing device, electronic equipment and storage medium Pending CN111860559A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911404681.7A CN111860559A (en) 2019-12-31 2019-12-31 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911404681.7A CN111860559A (en) 2019-12-31 2019-12-31 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111860559A true CN111860559A (en) 2020-10-30

Family

ID=72970795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911404681.7A Pending CN111860559A (en) 2019-12-31 2019-12-31 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111860559A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2018073848A1 (en) * 2016-10-19 2019-06-27 日本電気株式会社 Image processing apparatus, staying object tracking system, image processing method and recording medium
CN109886117A (en) * 2019-01-21 2019-06-14 青岛海信网络科技股份有限公司 A kind of method and apparatus of goal behavior detection
CN110287778A (en) * 2019-05-15 2019-09-27 北京旷视科技有限公司 A kind of processing method of image, device, terminal and storage medium
CN110610510A (en) * 2019-08-29 2019-12-24 Oppo广东移动通信有限公司 Target tracking method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378958A (en) * 2021-06-24 2021-09-10 北京百度网讯科技有限公司 Automatic labeling method, device, equipment, storage medium and computer program product
CN113838110A (en) * 2021-09-08 2021-12-24 重庆紫光华山智安科技有限公司 Target detection result verification method and device, storage medium and electronic equipment
CN113838110B (en) * 2021-09-08 2023-09-05 重庆紫光华山智安科技有限公司 Verification method and device for target detection result, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN110147717B (en) Human body action recognition method and device
US11200682B2 (en) Target recognition method and apparatus, storage medium, and electronic device
JP7282851B2 (en) Apparatus, method and program
CN107808111B (en) Method and apparatus for pedestrian detection and attitude estimation
Vieira et al. On the improvement of human action recognition from depth map sequences using space–time occupancy patterns
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
CN105005777A (en) Face-based audio and video recommendation method and face-based audio and video recommendation system
TW202135002A (en) Action recognition method, electronic equipment, and computer readable storage medium
CN111445526A (en) Estimation method and estimation device for pose between image frames and storage medium
CN109063776B (en) Image re-recognition network training method and device and image re-recognition method and device
CN111598067B (en) Re-recognition training method, re-recognition method and storage device in video
CN111914775A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
WO2023168957A1 (en) Pose determination method and apparatus, electronic device, storage medium, and program
CN111400550A (en) Target motion trajectory construction method and device and computer storage medium
CN112818955A (en) Image segmentation method and device, computer equipment and storage medium
CN112381071A (en) Behavior analysis method of target in video stream, terminal device and medium
CN111860559A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111429476A (en) Method and device for determining action track of target person
CN113610918A (en) Pose calculation method and device, electronic equipment and readable storage medium
JP2022549661A (en) IMAGE PROCESSING METHOD, APPARATUS, DEVICE, STORAGE MEDIUM AND COMPUTER PROGRAM
CN111881740A (en) Face recognition method, face recognition device, electronic equipment and medium
CN113610967B (en) Three-dimensional point detection method, three-dimensional point detection device, electronic equipment and storage medium
CN111476189A (en) Identity recognition method and related device
CN111104915B (en) Method, device, equipment and medium for peer analysis
CN112949539A (en) Pedestrian re-identification interactive retrieval method and system based on camera position

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination