CN113378005B - Event processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113378005B
CN113378005B (application CN202110622066.4A)
Authority
CN
China
Prior art keywords: information, event, target, target object, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110622066.4A
Other languages
Chinese (zh)
Other versions
CN113378005A (en)
Inventor
甘露
付琰
周洋杰
陈亮辉
彭玉龙
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110622066.4A
Publication of CN113378005A
Application granted
Publication of CN113378005B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/70: Information retrieval of video data
    • G06F 16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783: Retrieval using metadata automatically derived from the content
    • G06F 16/73: Querying
    • G06F 16/738: Presentation of query results
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/30: Subject of image; context of image processing
    • G06T 2207/30232: Surveillance

Landscapes

  • Engineering & Computer Science
  • Theoretical Computer Science
  • Multimedia
  • Physics & Mathematics
  • General Physics & Mathematics
  • Library & Information Science
  • Data Mining & Analysis
  • Databases & Information Systems
  • General Engineering & Computer Science
  • Computational Linguistics
  • Computer Vision & Pattern Recognition
  • Image Analysis

Abstract

The disclosure provides an event processing method, apparatus, electronic device, and storage medium, relating to the fields of deep learning and big data and applicable to smart city scenarios. The specific implementation scheme is as follows: acquire an image to be detected and perform feature extraction on it to obtain a plurality of pieces of feature information of the image; determine event information of a target event; search a pre-established object information base according to the plurality of pieces of feature information, and rank the search results according to the event information of the target event; acquire object information of the target object in the image to be detected according to the ranking result; and track and locate the target object according to the object information. The method improves the accuracy of the object information base, reduces the cost of manually screening candidate objects, and improves event-processing efficiency.

Description

Event processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing, and in particular, to the field of deep learning and big data, and more particularly, to an event processing method, apparatus, electronic device, and storage medium.
Background
As artificial intelligence (AI) increasingly penetrates smart-city construction, municipal departments are actively identifying pain points and exploring solutions in cooperation with internet companies or traditional suppliers. After decomposing traditional office workflows, optimization generally covers the following links: intelligent data fusion, intelligent application, intelligent process advancement, and intelligent analysis and evaluation, all aimed at improving office efficiency and quality.
At present, intelligent data fusion in some scenarios still suffers from low accuracy and low efficiency because manual screening is required.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, and storage medium for event processing, which are applicable in a smart city scenario.
According to a first aspect of the present disclosure, there is provided an event processing method, including:
acquiring an image to be detected, and performing feature extraction on the image to obtain a plurality of pieces of feature information of the image;
determining event information of a target event, the event information including at least one of occurrence place information and occurrence time information;
searching a pre-established object information base according to the plurality of pieces of feature information of the image to be detected, and ranking the search results according to the event information of the target event;
acquiring object information of a target object in the image to be detected according to the ranking result;
and tracking and locating the target object according to the object information.
According to a second aspect of the present disclosure, there is provided an event processing apparatus comprising:
an image processing module, configured to acquire an image to be detected and perform feature extraction on the image to obtain a plurality of pieces of feature information of the image;
a first determining module, configured to determine event information of a target event, the event information including at least one of occurrence place information and occurrence time information;
a retrieval module, configured to search a pre-established object information base according to the plurality of pieces of feature information of the image to be detected, and to rank the search results according to the event information of the target event;
a second determining module, configured to determine object information of the target object in the image to be detected according to the ranking result;
and a positioning module, configured to track and locate the target object according to the object information.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect described above.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect described above.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect described above.
According to this technical scheme, extracting a plurality of pieces of feature information from the image to be detected and searching a pre-established object information base with them reduces the number of search results. Ranking the search results by the event information of the target event introduces the relevance between the results and the event, so that the results can be further screened, their accuracy improved, and the time for manual elimination effectively shortened. In addition, the target object is tracked and located according to the object information acquired for it, and tracking and positioning analysis over comprehensive, multi-faceted data improves both the accuracy and the efficiency of event processing.
It should be understood that this section is not intended to identify key or critical features of the embodiments of the disclosure, nor to limit its scope. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a method of event processing according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of creating an object information library according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of establishing object information for each target object according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of acquiring candidates and their ordering according to an embodiment of the present disclosure;
FIG. 5 is a flow chart of another method for obtaining candidates and their ordering according to an embodiment of the disclosure;
FIG. 6 is a flow chart of tracking and locating a target object according to an embodiment of the present disclosure;
fig. 7 is a block diagram of an event processing apparatus according to an embodiment of the present disclosure;
FIG. 8 is a block diagram of another event processing device according to an embodiment of the present disclosure;
Fig. 9 is a block diagram of an electronic device for implementing an event processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments are included to facilitate understanding and should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
The term "and/or" in the embodiments of the present disclosure describes an association relationship between objects and indicates that three relationships may exist: for example, "A and/or B" may mean that A exists alone, that both A and B exist, or that B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
In existing data fusion schemes, only the face information of the target is used for data fusion, so the accuracy and recall of the target information base are not high enough. Moreover, when the target information base is searched using a face image of the target object, many similar candidates are easily retrieved, making manual screening laborious and inefficient.
In view of the foregoing, the present disclosure proposes an event processing method, apparatus, device, and storage medium.
Fig. 1 is a flowchart of an event processing method according to an embodiment of the present disclosure. It should be noted that the event processing method according to the embodiments of the present disclosure may be applied to the event processing apparatus according to the embodiments of the present disclosure, and the event processing apparatus may be configured in an electronic device. As shown in fig. 1, the method comprises the steps of:
step 101, obtaining an image to be detected, and extracting features of the image to be detected to obtain a plurality of feature information of the image to be detected.
In order to retrieve the target object as accurately as possible from the image to be detected, feature extraction is performed on the image to obtain a plurality of pieces of feature information. These pieces of feature information can serve as clues for further retrieval of the target object, thereby improving retrieval efficiency.
It should be noted that the plurality of feature information of the image to be detected may include at least two of the first feature information, the second feature information, the vehicle feature information, and the spatio-temporal feature information, and may also include feature information not mentioned in other embodiments of the present disclosure, as required by the scenario; the disclosure does not limit this.
In a given scenario, the first feature information may be facial feature information and the second feature information may be body feature information. As an example, the body feature information may include a body feature vector, clothing color, gender, whether glasses are worn, whether a hat is worn, and the like. As for vehicle feature information, if the target object in the image to be detected is in a vehicle, information such as the license plate number and vehicle color can be extracted. The spatio-temporal feature information can include the capture time and place.
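To make the multi-kind feature information above concrete, the sketch below models one detection as a single record. All class and field names are illustrative assumptions for this sketch, not taken from the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureRecord:
    """One detection's multi-modal feature information (hypothetical layout)."""
    face_vector: list            # first feature information (facial features)
    body_vector: list            # second feature information (body features)
    clothing_color: str = ""     # example body attribute
    wears_glasses: Optional[bool] = None
    plate_number: str = ""       # vehicle feature information, if any
    vehicle_color: str = ""
    capture_time: str = ""       # spatio-temporal feature information
    capture_place: str = ""

    def has_vehicle_info(self) -> bool:
        """True when any vehicle feature was extracted for this detection."""
        return bool(self.plate_number or self.vehicle_color)

rec = FeatureRecord(face_vector=[0.1, 0.2], body_vector=[0.3, 0.4],
                    clothing_color="red", capture_time="2021-06-03 10:00",
                    capture_place="Station A")
print(rec.has_vehicle_info())  # False: no vehicle features were extracted
```

A record like this would carry all the clues the retrieval step can use, even when some modalities are absent.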
Step 102, determining event information of a target event, wherein the event information comprises at least one of occurrence place information and occurrence time information.
It can be understood that at least one of the occurrence place information and the occurrence time information of the target event may serve as a clue for further determining the target object. For example, the occurrence place information of the target event corresponds to the capture place in the object information base, and the occurrence time information corresponds to the capture time; the object information base itself is described below.
Step 103, searching the pre-established object information base according to the plurality of pieces of feature information of the image to be detected, and ranking the search results according to the event information of the target event.
That is, the plurality of pieces of feature information of the image to be detected are used as screening conditions to search the pre-established object information base, and the correlation between each search result and at least one of the occurrence place information and occurrence time information of the target event is calculated, so that the search results can be ranked by correlation.
The pre-established object information base may be a feature information base for each object, obtained by converting video captured by surveillance cameras into images, performing feature extraction, and clustering the feature information belonging to the same object. Searching this base with the plurality of feature information of the image to be detected yields well-matched results, which improves the accuracy of the search results while reducing their number. Ranking the results by the event information of the target event amounts to automatic screening of the results, which improves screening efficiency and reduces the cost of manual screening.
Step 104, acquiring object information of the target object in the image to be detected according to the ranking result.
It will be appreciated that the ranking result indicates which result or results best match the image to be detected, and those results correspond to the target object in the image.
Step 105, tracking and locating the target object according to the object information.
Because the object information includes the place information and behavior information of the target object at different times, tracking and positioning analysis can be performed on the target object accordingly. In addition, place and behavior information of the target object at different times can be obtained from related databases according to the object information, thereby tracking and locating the target object.
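As a rough illustration of step 105, the hypothetical sketch below assembles a time-ordered trajectory from an object's stored (time, place) records. The data layout is an assumption for illustration only:

```python
# Hypothetical sketch of step 105: assemble a time-ordered trajectory
# from an object's stored (timestamp, place) sightings.
def build_trajectory(object_info):
    """object_info: dict with a 'sightings' list of (timestamp, place) pairs."""
    # Sorting the pairs orders them by timestamp first.
    return sorted(object_info.get("sightings", []))

info = {"sightings": [("2021-06-03 10:30", "Station B"),
                      ("2021-06-03 09:00", "Station A"),
                      ("2021-06-03 12:00", "Mall C")]}
trajectory = build_trajectory(info)
print([place for _, place in trajectory])
# ['Station A', 'Station B', 'Mall C']
```

A real system would of course join such records across cameras and databases, but the ordering step is the core of turning sightings into a track.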
It should be noted that, in the technical solution of the present disclosure, the acquisition, storage, and application of the feature information and trajectory behavior information of the target objects involved all comply with the relevant laws and regulations and do not violate public order and good morals.
According to the event processing method of this embodiment of the disclosure, a plurality of pieces of feature information are extracted from the image to be detected and used to search a pre-established object information base, so that introducing multiple clues reduces the number of search results. The search results are then ranked according to the event information of the target event, introducing the relevance between the results and the event; the results can thus be further screened, their accuracy improved, and the time for manual elimination effectively shortened. In addition, the target object is tracked and located according to the object information acquired for it, and tracking and positioning analysis over comprehensive, multi-faceted data improves both the accuracy and the efficiency of event processing.
In order to further describe the manner in which the object information library is created in detail, this disclosure proposes yet another embodiment.
Fig. 2 is a flowchart of creating an object information base according to an embodiment of the present disclosure. As shown in fig. 2, the object information base may be previously established by:
step 201, acquiring a monitoring video stream shot by a monitoring camera, and sampling the monitoring video stream to obtain N video frames; wherein N is a positive integer.
The monitoring camera may be a monitoring camera of a plurality of different scenes, for example: monitoring traffic roads and monitoring public places such as subway stations or stations.
Step 202, performing object detection on each video frame to determine M object samples in each video frame; wherein M is a positive integer.
It will be appreciated that each video frame corresponds to an image, and object detection is performed on each video frame to detect the M target object samples it contains, i.e., the M person images in that frame.
In step 203, an image of each target object sample is obtained from N video frames, and feature extraction is performed on the image to obtain a plurality of feature information of each target object sample.
That is, all the target object samples corresponding to the N video frames are extracted as images, and then feature extraction is performed on the images, so as to obtain a plurality of feature information of each target object sample.
In the embodiment of the disclosure, feature extraction from the image may cover at least two of the first feature information, the second feature information, the vehicle feature information, and the spatio-temporal feature information, so that the acquired information has wide coverage; extracting as many kinds of features as possible improves the quality of the object information base. In a given scenario, the first feature information may be facial feature information and the second feature information may be body feature information. As an example, the body feature information may include a body feature vector, clothing color, gender, whether glasses are worn, whether a hat is worn, and the like. As for vehicle feature information, if the target object in the sample image is in a vehicle, information such as the license plate number and vehicle color can be extracted. The spatio-temporal feature information may include the capture time and place.
In step 204, object information of each target object sample is established according to the plurality of feature information of each target object sample.
That is, by performing feature extraction on the images, a plurality of pieces of feature information of each target object sample (first feature information, second feature information, vehicle information, spatio-temporal information, and so on) are obtained, and these pieces of feature information serve as the object information of each target object sample.
It should be noted that, since different target object samples may refer to the same target object, a determination must be made based on each target object sample and its feature information, so that samples referring to the same object, together with their feature information, are merged, yielding the object information corresponding to each distinct target object.
Step 205, building a library from the object information of each target object sample to obtain the object information base.
According to the event processing method of this embodiment of the disclosure, when the object information base is established, feature extraction is performed separately for each target object sample to obtain its plurality of feature information. This effectively broadens the data coverage of the object information base, greatly improves its accuracy and recall, and provides a foundation for accurately acquiring the target object's information from the image to be detected and for tracking and locating it.
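The library-building flow of steps 201 through 205 can be sketched as follows. The detector and feature extractor here are toy stand-ins, not the actual models used by the disclosure:

```python
# Illustrative sketch of steps 201-205. `detect` and `extract` are stand-in
# stubs for the real object detector and feature extractor.
def sample_frames(video_stream, step):
    """Step 201: sample every `step`-th frame from the stream."""
    return video_stream[::step]

def build_object_library(video_stream, detect, extract, step=2):
    library = []
    for frame in sample_frames(video_stream, step):   # N sampled frames
        for obj in detect(frame):                     # step 202: M objects/frame
            library.append(extract(obj))              # steps 203-204: features
    return library                                    # step 205: the library

# Toy stand-ins: four frames, two detections per frame.
frames = ["f0", "f1", "f2", "f3"]
detect = lambda f: [f + "-person0", f + "-person1"]
extract = lambda o: {"id_hint": o}
lib = build_object_library(frames, detect, extract, step=2)
print(len(lib))  # 4: two sampled frames x two detections each
```

Merging entries that refer to the same object (steps 301 to 304) would then happen on top of this raw library.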
To further illustrate the creation of object information for each target object sample in the above embodiments, the present disclosure proposes another embodiment.
Fig. 3 is a flowchart of establishing object information of each target object according to an embodiment of the present disclosure.
As shown in fig. 3, an implementation manner of establishing object information of each target object includes:
step 301, acquiring a pre-established discrimination model; wherein the discriminant model is trained using a plurality of characteristic information of the subject sample.
The pre-established discrimination model is used for judging whether the plurality of target object samples are the same object according to the plurality of characteristic information of the plurality of target object samples.
Step 302, grouping each target object sample, and inputting a plurality of feature information of each target object sample in each group into a discrimination model to determine whether each target object sample in each group is the same object.
It will be understood that, among the target object samples obtained by the sampling above, different samples may refer to the same object. Therefore, to put object information and objects in one-to-one correspondence, the target object samples must be discriminated in groups.
As an example, all target object samples may be paired two by two to obtain multiple groups of target object samples. The plural feature information of each sample in a group is then input into the discrimination model to judge whether the samples in the group are the same object.
Step 303, in response to each target object sample in each group being the same object, combining the plurality of feature information of each target object sample in each group to obtain object information of the same object.
That is, if the target object samples in each group are the same object, the plurality of feature information of the target object samples in each group all belong to the same object, so that the plurality of feature information of the target object samples in each group are combined to obtain the object information corresponding to the same object.
In step 304, in response to the target object samples in each group not being the same object, object information of the target object samples is established according to the feature information of the target object samples in each group.
That is, if the target object samples in each group are not the same object, it is explained that the plurality of feature information of the target object samples in each group are information of different objects, so that the corresponding object information is respectively established for the target object samples.
According to the event processing method of this embodiment of the disclosure, when establishing object information, the discrimination model judges whether each group of target object samples is the same object, and the plural feature information of the samples of one object is merged into that object's information. This avoids multiple object-information entries corresponding to the same object, further improving the accuracy and recall of the object information.
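A minimal sketch of the pairwise discrimination and merging of steps 301 through 304, with the trained discrimination model replaced by a simple stand-in predicate:

```python
# Sketch of steps 301-304: compare each sample against the objects found so
# far; merge feature info when the discrimination model says "same object",
# otherwise start a new object entry. `same_object` stands in for the model.
def merge_samples(samples, same_object):
    merged = []                       # one entry per distinct object
    for s in samples:
        for m in merged:
            if same_object(m, s):     # step 302: model discriminates the pair
                m["features"].update(s["features"])   # step 303: merge info
                break
        else:
            merged.append({"features": dict(s["features"])})  # step 304
    return merged

samples = [
    {"features": {"face": "A", "place": "x"}},
    {"features": {"face": "A", "clothes": "red"}},
    {"features": {"face": "B"}},
]
# Toy predicate: same face feature means same object.
same_object = lambda a, b: a["features"].get("face") == b["features"].get("face")
objects = merge_samples(samples, same_object)
print(len(objects))  # 2 distinct objects
```

The first two samples share the face feature, so their place and clothing information end up in one merged object entry.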
In the event processing method of the above embodiment, searching is performed in a pre-established object information base according to a plurality of feature information of an image to be detected, and searching results are ordered according to event information of a target event. To further describe the specific implementation of this section, the present disclosure proposes yet another embodiment for this section.
Fig. 4 is a flowchart of acquiring a candidate object and its ordering according to an embodiment of the disclosure. As shown in fig. 4, a specific implementation of obtaining a candidate object and its ordering may include:
step 401, retrieving in an object information base according to first feature information among a plurality of feature information of an image to be detected, to obtain at least one candidate object.
In the embodiment of the present disclosure, the plurality of feature information of the image to be detected and the object information base are taken to include first feature information, second feature information, vehicle feature information, and spatio-temporal feature information as an example. The first feature information may be facial feature information and the second feature information may be body feature information. As an example, searching the object information base according to the first feature information may proceed as follows: acquire the facial feature information of the image to be detected; acquire the centroid face vector of each object in the object information base; compute, from the facial feature information of the image to be detected, the similarity between the image and each object's centroid face vector; and take the objects whose similarity meets expectations as candidate objects. Since each object may have multiple face feature vectors extracted from different images, the centroid face vector is the average of those vectors; that is, the average of an object's face feature vectors serves as its centroid face vector. This reduces the amount of facial-similarity computation and the resources consumed.
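The centroid face vector and similarity computation described above can be sketched as follows. Plain cosine similarity is assumed here as the similarity measure; the disclosure does not specify one:

```python
import math

def centroid(vectors):
    """Mean of an object's face feature vectors: the 'centroid face vector'."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Assumed similarity measure between a query vector and a centroid."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Two stored face vectors for one object; the query is compared to their
# centroid instead of to both vectors, halving the similarity computations.
stored = [[1.0, 0.0], [0.8, 0.2]]
c = centroid(stored)               # one vector per object
query = [1.0, 0.1]
sim = cosine(query, c)
print(c, round(sim, 3))
```

Comparing against one centroid per object rather than every stored vector is what reduces the computation and resource cost noted above.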
Step 402, acquiring space-time characteristic information from object information of each candidate object.
Step 403, calculating a first correlation between each candidate object and the target event according to the event information of the target event and the space-time characteristic information of each candidate object.
It will be appreciated that in order to narrow down the range of candidates, further matching may be achieved by adding cues.
In the embodiment of the present disclosure, the event information of the target event may include at least one of its occurrence place information and occurrence time information. From this information and the spatio-temporal feature information of each candidate object, the likelihood that each candidate participated in the target event can be computed over time and place and expressed as a score, yielding the first correlation between each candidate object and the target event.
Step 404, obtaining corresponding feature information from the object information of each candidate object according to at least one feature information of the second feature information, the vehicle feature information and the space-time feature information among the plurality of feature information of the image to be detected.
That is, for each kind of feature information of the image to be detected, feature information of the corresponding category is acquired from the object information of each candidate object. For example, if the plurality of feature information of the image to be detected includes the second feature information, the vehicle feature information and the spatiotemporal feature information, the corresponding second feature information, vehicle feature information and spatiotemporal feature information need to be obtained from the object information of each candidate object. In some scenarios, the second feature information may be human body feature information.
Step 405, inputting the at least one piece of feature information and the corresponding feature information into a pre-established discrimination model to obtain a second correlation between each candidate object and the target event.
It can be understood that the pre-established discrimination model can determine whether the target object and the candidate object in the image to be detected are the same object according to the feature information of the image to be detected and the corresponding feature information in the object information, so as to obtain the similarity score of each candidate object.
Step 406, ranking the at least one candidate object according to the first correlation and the second correlation.
In order to comprehensively consider at least one of the occurrence place information and the occurrence time information of the target event, together with clues such as the feature information in the image to be detected, so as to further screen the candidate objects, the at least one candidate object is ranked according to the first correlation and the second correlation. In the embodiment of the present disclosure, the score of the first correlation and the score of the second correlation may be weighted, and the candidates ranked according to the final weighted score.
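The weighted ranking step above can be sketched as follows. The weights are free parameters chosen here only for illustration; the disclosure does not fix their values.

```python
def rank_candidates(candidates, w1=0.5, w2=0.5):
    """candidates: list of (object_id, first_corr, second_corr) tuples.
    Weight the two correlation scores into a final score and sort the
    candidates by that final score, highest first."""
    scored = [(oid, w1 * c1 + w2 * c2) for oid, c1, c2 in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)
```

For example, with weights 0.7 and 0.3, a candidate with a strong spatiotemporal match can outrank one with a slightly better appearance match, reflecting how much each clue is trusted.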
According to the event processing method of the embodiment of the present disclosure, when object information retrieval is performed, not only is the event information of the target event introduced, but also the correlations of the second feature information, the vehicle feature information and the spatiotemporal feature information are introduced. The likelihood that a candidate object participated in the target event can thus be combined with its similarity to the target object in the image to be detected, so that candidate objects are screened accurately, labor cost is saved, and candidate screening efficiency is improved.
In order to further improve the candidate object checking efficiency, based on the above embodiments, another way to obtain the candidate objects and the ordering thereof is proposed in the embodiments of the present disclosure. Fig. 5 is a flowchart of another method for obtaining candidates and ordering thereof according to an embodiment of the present disclosure. As shown in fig. 5, on the basis of the above embodiment, the implementation further includes:
Step 507, determining whether the candidate object has participated in a specific event. If the candidate object has not participated in the specific event, go to step 506; if the candidate object has participated in the specific event, go to step 508.
It will be appreciated that if the candidate object has a record in the related event database and the recorded event has a degree of coincidence with the target event, the likelihood of the candidate object being the target object will increase.
As an example, a query may be performed in the related event database based on the candidate object; if a specific event in which the candidate object participated can be found in the related event database, this indicates that the candidate object has participated in the specific event. Otherwise, the candidate object has not participated in a specific event.
Step 508, in response to the candidate object having participated in the specific event, obtaining description information of the specific event.
Step 509, obtaining a clue description keyword of the target event.
Step 510, calculating a third correlation between the candidate object and the target event according to the description information of the specific event and the clue description keywords of the target event.
It can be understood that, according to the description information of the specific event and the clue description keywords of the target event, the degree of coincidence between the specific event and the target event can be calculated, thereby obtaining the third correlation between the candidate object and the target event.
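One simple way to sketch such a coincidence measure is keyword overlap. The fraction-of-keywords-matched rule below is an assumption for illustration; the disclosure does not specify how the degree of coincidence is computed.

```python
def third_correlation(specific_event_description, clue_keywords):
    """Degree of coincidence between the description of a specific
    event the candidate participated in and the clue description
    keywords of the target event, sketched as the fraction of clue
    keywords that appear in the description text."""
    words = set(specific_event_description.lower().split())
    keys = {k.lower() for k in clue_keywords}
    if not keys:
        return 0.0
    return len(keys & words) / len(keys)
```

A score of 1.0 means every clue keyword appears in the specific event's description; 0.0 means no overlap.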
Step 511, ranking the at least one candidate object according to the first correlation, the second correlation and the third correlation.
In order to comprehensively consider the occurrence place information and/or the occurrence time information of the target event, the feature information in the image to be detected, the correlation with the specific event and other clues, so as to further screen the candidate objects, the at least one candidate object is ranked according to the first correlation, the second correlation and the third correlation. In the embodiment of the present disclosure, the scores of the first, second and third correlations may be weighted, and the candidates ranked according to the final weighted score.
It should be noted that steps 501 to 506 in fig. 5 are identical in implementation to steps 401 to 406 in fig. 4, and are not repeated here.
According to the event processing method of the embodiment of the present disclosure, when object information retrieval is performed, the correlation between the specific event in which a candidate object has participated and the target event is additionally considered. That is, if a candidate object has participated in a specific event related to the target event, the likelihood that the candidate object is the target object increases, so candidate objects can be further screened and candidate screening efficiency can be further improved.
With respect to the specific manner of tracking and positioning the target object according to the object information in the above embodiments, the present disclosure proposes yet another embodiment.
Fig. 6 is a flowchart of tracking and positioning a target object according to an embodiment of the present disclosure. As shown in fig. 6, an implementation manner of tracking and positioning the target object may include:
Step 601, obtaining the motion track of the target object according to the object information; the motion track comprises at least one of a snapshot track of the monitoring camera and an identity ID (identity document, identity number) track.
It should be noted that the motion track of the target object may be included in the object information; that is, the snapshot track of the monitoring camera is obtained based on the snapshot times and places of the monitoring camera recorded in the object information. In addition, the motion track of the target object can be queried in a track database according to the object information, where the track database contains the motion track of each object. Examples include: track points obtained by base-station access dotting (for example, when a user's terminal SIM card (Subscriber Identity Module) accesses a base station, the base station can report the location information, yielding track information of the user reaching that location); tracks obtained by WiFi access dotting; the network IP address used when a user logs into a social application; dotting records of using an identity card to board and leave buses; check-in and check-out records of hotels handled with an identity card; or other track points obtained by ID registration.
Step 602, merging at least one of the snapshot track of the monitoring camera and the identity ID track of the target object, and performing conflict detection analysis on the merged motion track.
In the embodiment of the present disclosure, after at least one of the snapshot track of the monitoring camera and the identity ID track of the target object is merged, abnormal track points may exist, so conflict detection analysis needs to be performed on the merged motion track. As an example, the merged motion track can be smoothed by speed to find abnormal points, and the causes of the anomalies can be analyzed from the information of the abnormal points and the object information, so that clustering errors in the object information base can be corrected in time. In addition, a confidence may be calculated for each track point according to the object information; for track points whose confidence is below a threshold, the related first feature information, identity ID and other information can be retrieved, so that a worker can conveniently and manually verify the key information points and modify the object information and ID association information in time, thereby obtaining a highly accurate motion track of the target object.
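The speed-smoothing check described above can be sketched as follows. The maximum-plausible-speed threshold is a hypothetical parameter; a real system would also factor in per-point confidence and the cause analysis described in the text.

```python
import math

def detect_conflicts(track, max_speed=50.0):
    """After merging snapshot and ID tracks, flag points whose implied
    speed from the previous point exceeds a plausible maximum (in m/s),
    or whose timestamps collide. Illustrative smoothing rule only."""
    track = sorted(track, key=lambda p: p["t"])
    anomalies = []
    for prev, cur in zip(track, track[1:]):
        dt = cur["t"] - prev["t"]
        if dt <= 0:
            anomalies.append(cur)  # duplicate or conflicting timestamp
            continue
        dist = math.hypot(cur["x"] - prev["x"], cur["y"] - prev["y"])
        if dist / dt > max_speed:
            anomalies.append(cur)  # implied speed is implausible
    return anomalies
```

Flagged points would then be handed to a worker for manual verification, as the embodiment describes.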
Step 603, tracking and positioning the target object according to the motion track after conflict detection analysis.
It can be understood that after conflict detection analysis is performed on the motion track of the target object, the staff track and position the target object according to the analyzed motion track, so as to process the target event.
According to the event processing method provided by the embodiment of the present disclosure, the motion track of the target object is obtained from the object information as at least one of the snapshot track of the monitoring camera and the identity ID track, so that acquisition of the target object's motion track by data fusion is realized. In addition, conflict detection analysis is performed on the motion track of the target object, and key track points are manually verified, so that the accuracy of the motion track of the target object is improved.
In order to implement the above method, the present disclosure proposes an event processing apparatus.
Fig. 7 is a block diagram of an event processing device according to an embodiment of the present disclosure. As shown in fig. 7, the apparatus includes:
the image processing module 710 is configured to obtain an image to be detected, and perform feature extraction on the image to be detected to obtain a plurality of feature information of the image to be detected;
a first determining module 720 for determining event information of a target event, the event information including at least one of occurrence place information and occurrence time information;
the retrieval module 730 is configured to retrieve from a pre-established object information base according to a plurality of feature information of the image to be detected, and rank the retrieval results according to the event information of the target event;
a second determining module 740, configured to determine object information of the target object in the image to be detected according to the sorting result;
and the positioning module 750 is used for tracking and positioning the target object according to the object information.
In some embodiments of the present disclosure, the retrieval module 730 includes:
a search obtaining unit 730-1, configured to search in an object information base according to first feature information among a plurality of feature information of an image to be detected, to obtain at least one candidate object;
a first acquisition unit 730-2 for acquiring spatiotemporal feature information from object information of each candidate object;
a first calculating unit 730-3 for calculating a first correlation between each candidate object and the target event according to the event information of the target event and the space-time characteristic information of each candidate object;
a second obtaining unit 730-4, configured to obtain corresponding feature information from the object information of each candidate object according to at least one feature information of second feature information, vehicle feature information, and space-time feature information among the plurality of feature information of the image to be detected;
a second calculation unit 730-5, configured to input at least one feature information and corresponding feature information into a pre-established discrimination model, to obtain a second correlation between each candidate object and the target event;
a ranking unit 730-6 for ranking the at least one candidate object according to the first correlation and the second correlation.
Furthermore, in the embodiment of the present disclosure, the retrieving module 730 further includes:
a determining unit 730-7 for determining whether the candidate object has participated in the specific event;
a third acquiring unit 730-8, configured to acquire description information of a specific event in response to participation of the candidate object in the specific event;
a fourth obtaining unit 730-9 for obtaining a cue description keyword of the target event;
a third calculating unit 730-10, configured to calculate a third relativity between the candidate object and the target event according to the description information of the specific event and the clue description keyword of the target event;
wherein, the sorting unit 730-6 is specifically configured to:
at least one candidate object is ranked according to the first correlation, the second correlation, and the third correlation.
In the embodiment of the present disclosure, the positioning module 750 is specifically configured to:
acquiring a motion track of a target object according to object information; the motion track comprises at least one of a snapshot track of the monitoring camera and an identity ID track;
Merging at least one of the snap-shot track and the identity ID track of the monitoring camera of the target object, and performing conflict detection analysis on the motion track obtained after merging;
and tracking and positioning the target object according to the motion trail after conflict detection and analysis.
According to the event processing device of the embodiment of the present disclosure, a plurality of feature information of the image to be detected is extracted, and retrieval is performed in a pre-established object information base according to that feature information, so that the number of retrieval results is reduced by introducing multiple clues. In addition, the retrieval results are ranked according to the occurrence place information and/or occurrence time information of the target event, which introduces the correlation between the retrieval results and the target event; that is, the retrieval results can be further screened, the accuracy of the retrieval results is improved, and the time spent on manual elimination is effectively shortened. In addition, the target object is tracked and positioned according to the acquired object information of the target object in the image to be detected, and tracking and positioning analysis is performed by synthesizing data from multiple aspects, so that both the accuracy and the efficiency of event processing can be improved.
Fig. 8 is a block diagram illustrating another event processing apparatus according to an embodiment of the present disclosure. As shown in fig. 8, the apparatus further includes:
the establishing module 860 is configured to pre-establish an object information base: the establishing module 860 is specifically configured to:
acquiring a monitoring video stream shot by a monitoring camera, and sampling the monitoring video stream to obtain N video frames; wherein N is a positive integer;
performing target detection on each video frame to determine M target object samples in each video frame; wherein M is a positive integer;
acquiring images of each target object sample from N video frames, and carrying out feature extraction on the images to obtain a plurality of feature information of each target object sample;
establishing object information of each target object sample according to a plurality of characteristic information of each target object sample;
and building a library according to the object information of each target object sample to obtain an object information library.
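The library-building pipeline above can be sketched as follows. `detect` and `extract` are hypothetical stand-ins for the target detector and the feature extractor; the dictionary layout of each object info entry is an assumption for illustration.

```python
def build_object_info_base(video_frames, detect, extract):
    """Sketch of the library-building pipeline: run target detection
    on each sampled video frame, extract a plurality of feature
    information for every detected target object sample, and collect
    per-sample object information into a base."""
    base = []
    for frame in video_frames:              # N sampled video frames
        for sample_image in detect(frame):  # M target object samples
            features = extract(sample_image)
            base.append({"image": sample_image, "features": features})
    return base
```

Merging samples that belong to the same object (via the discrimination model) would then be applied on top of this per-sample base.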
In some embodiments of the present disclosure, the creation module 860 is specifically configured to:
acquiring a pre-established judging model; the discrimination model is trained by adopting a plurality of characteristic information of the object sample;
grouping each target object sample, inputting a plurality of characteristic information of each target object sample in each group into a judging model, and judging whether each target object sample in each group is the same object or not;
in response to each target object sample in each group being the same object, combining a plurality of feature information of each target object sample in each group to obtain object information of the same object;
and establishing object information of each target object sample according to the characteristic information of each target object sample in each group in response to each target object sample in each group not being the same object.
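The group-and-merge logic above can be sketched as follows. The `same_object` callback is a hypothetical stand-in for the pre-established discrimination model, and each sample is assumed to be a dictionary mapping feature categories to lists of feature values.

```python
def merge_object_samples(groups, same_object):
    """For each group of target object samples, decide via a
    discrimination function whether the samples are the same object;
    if so, merge their feature information into one object info entry,
    otherwise keep per-sample object information."""
    object_infos = []
    for group in groups:
        if len(group) > 1 and same_object(group):
            merged = {}
            for sample in group:
                for key, feats in sample.items():
                    merged.setdefault(key, []).extend(feats)
            object_infos.append(merged)  # one entry for the same object
        else:
            object_infos.extend(group)   # one entry per sample
    return object_infos
```

Merging in this way avoids multiple object info entries corresponding to the same real-world object, which the disclosure notes improves the accuracy and recall of the object information base.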
It should be noted that 810 to 850 in fig. 8 have the same functions and structures as 710 to 750 in fig. 7, and are not described here again.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be repeated here.
According to the event processing device provided by the embodiment of the disclosure, when the object information base is established, the characteristic extraction is respectively carried out for each target object sample so as to obtain a plurality of characteristic information corresponding to each target object sample, so that the data information coverage of the object information base can be effectively improved, the accuracy and recall rate of the object information base are greatly improved, and a basic guarantee is provided for accurately acquiring the target object information and tracking and positioning in the image to be detected. In addition, the target object samples of the same object are combined, so that the situation that object information of a plurality of target objects corresponds to the same object can be avoided, and the accuracy and recall rate of an object information base can be further improved.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
Fig. 9 is a block diagram of an electronic device for the event processing method according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device includes: one or more processors 901, a memory 902, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used with multiple memories, if desired. Likewise, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). In fig. 9, one processor 901 is taken as an example.
Memory 902 is a non-transitory computer-readable storage medium provided by the present disclosure. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the methods of event handling provided by the present disclosure. The non-transitory computer readable storage medium of the present disclosure stores computer instructions for causing a computer to perform the method of event processing provided by the present disclosure.
The memory 902 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules (e.g., the image processing module 710, the first determination module 720, the retrieval module 730, the second determination module 740, and the positioning module 750 shown in fig. 7) corresponding to the method of event processing in the embodiments of the present disclosure. The processor 901 executes various functional applications of the server and data processing, i.e., implements the event processing method in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 902. The present disclosure provides a computer program product comprising a computer program which, when executed by a processor 901, implements the event processing method in the above-described method embodiments.
The memory 902 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for a function; the storage data area may store data created according to the use of the electronic device for event processing, etc. In addition, the memory 902 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 902 optionally includes memory remotely located relative to processor 901, which may be connected to the event processing electronics via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of event processing may further include: an input device 903 and an output device 904. The processor 901, memory 902, input devices 903, and output devices 904 may be connected by a bus or other means, for example in fig. 9.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the event-handling electronic device, such as a touch screen, keypad, mouse, trackpad, touchpad, pointer stick, one or more mouse buttons, trackball, joystick, and the like. The output means 904 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present application may be performed in parallel or sequentially or in a different order, provided that the desired results of the disclosed embodiments are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (9)

1. An event processing method, comprising:
acquiring an image to be detected, and carrying out feature extraction on the image to be detected to acquire a plurality of feature information of the image to be detected;
determining event information of a target event, wherein the event information comprises at least one of occurrence place information and occurrence time information;
searching in a pre-established object information base according to a plurality of characteristic information of the image to be detected, and sequencing search results according to event information of the target event;
acquiring object information of a target object in the image to be detected according to the sequencing result;
tracking and positioning the target object according to the object information;
the searching in the pre-established object information base according to the plurality of characteristic information of the image to be detected, and sorting the searching results according to the event information of the target event comprises the following steps:
retrieving in the object information base according to first characteristic information among the plurality of characteristic information of the image to be detected to obtain at least one candidate object;
acquiring spatiotemporal feature information from the object information of each candidate object;
calculating a first correlation between each candidate object and the target event according to the event information of the target event and the spatiotemporal feature information of each candidate object;
acquiring corresponding feature information from the object information of each candidate object according to at least one of second feature information, vehicle feature information, and spatiotemporal feature information among the plurality of feature information of the image to be detected;
inputting the at least one piece of feature information and the corresponding feature information into a pre-established discriminant model to obtain a second correlation between each candidate object and the target event;
determining whether the candidate object has participated in a specific event;
in response to the candidate object having participated in a specific event, acquiring description information of the specific event;
acquiring clue description keywords of the target event;
calculating a third correlation between the candidate object and the target event according to the description information of the specific event and the clue description keywords of the target event;
weighting the scores of the first correlation, the second correlation, and the third correlation to obtain a final score, and ranking the at least one candidate object according to the final score;
wherein tracking and positioning the target object according to the object information comprises:
acquiring a motion track of the target object according to the object information, wherein the motion track comprises at least one of a snapshot track from the monitoring camera and an identity (ID) track;
merging the at least one of the snapshot track and the ID track of the target object, and performing conflict detection analysis on the merged motion track;
tracking and positioning the target object according to the motion track after the conflict detection analysis;
wherein performing conflict detection analysis on the merged motion track comprises:
acquiring abnormal points in the merged motion track through speed smoothing, and correcting the object information library according to information of the abnormal points and the object information.
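The conflict-detection step described above — finding abnormal points in a merged motion track through speed smoothing — can be sketched as follows. This is a minimal illustration, not the patented implementation: the claim does not specify a speed threshold or coordinate system, so the `TrackPoint` record, the planar metre coordinates, and the `max_speed` bound are all assumptions for the example.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class TrackPoint:
    t: float  # timestamp in seconds
    x: float  # position in metres (e.g. east)
    y: float  # position in metres (e.g. north)

def find_abnormal_points(track, max_speed=42.0):
    """Return indices of points whose implied speed from the previous
    point exceeds max_speed (m/s) -- physically impossible jumps that
    suggest two different objects were merged into one track."""
    abnormal = []
    for i in range(1, len(track)):
        dt = track[i].t - track[i - 1].t
        dist = hypot(track[i].x - track[i - 1].x,
                     track[i].y - track[i - 1].y)
        # A non-positive time gap or an excessive speed both mark a conflict.
        if dt <= 0 or dist / dt > max_speed:
            abnormal.append(i)
    return abnormal
```

The flagged indices would then drive the correction of the object information library, e.g. by splitting the track at each abnormal point.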
2. The method of claim 1, wherein the object information library is pre-established by:
acquiring a monitoring video stream shot by a monitoring camera, and sampling the monitoring video stream to obtain N video frames; wherein N is a positive integer;
performing object detection on each video frame to determine M target object samples in each video frame; wherein M is a positive integer;
acquiring an image of each target object sample from the N video frames, and performing feature extraction on the images to obtain a plurality of feature information of each target object sample;
establishing the object information of each target object sample according to the plurality of feature information of each target object sample;
and building a library from the object information of each target object sample to obtain the object information library.
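The library-building pipeline of claim 2 (sample the stream, detect objects per frame, extract features per sample) can be sketched as below. The `detect` and `extract` callables are hypothetical stand-ins for the detector and feature extractor, which the patent does not name; the fixed-step sampling is likewise only one plausible reading of "sampling the monitoring video stream".

```python
def sample_frames(stream, step):
    """Take every step-th frame from an in-memory sequence of frames."""
    return list(stream)[::step]

def build_object_library(frames, detect, extract):
    """Build an object-information library from sampled video frames.

    frames  : iterable of video frames
    detect  : frame -> list of cropped object images (hypothetical detector)
    extract : crop -> dict mapping feature name to value (hypothetical extractor)
    Returns a list of object-information records, one per detected sample.
    """
    library = []
    for frame_idx, frame in enumerate(frames):
        for crop in detect(frame):
            features = extract(crop)            # e.g. body/face/vehicle features
            features["frame_index"] = frame_idx  # spatiotemporal context
            library.append(features)
    return library
```

In a real system `frames` would come from a decoded surveillance stream and `extract` would produce embedding vectors rather than raw crops.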
3. The method of claim 2, wherein establishing the object information of each target object sample according to the plurality of feature information of each target object sample comprises:
acquiring a pre-established discriminant model; wherein the discriminant model is trained using a plurality of feature information of object samples;
grouping the target object samples, inputting the plurality of feature information of each target object sample in each group into the discriminant model, and determining whether the target object samples in each group are the same object;
in response to the target object samples in a group being the same object, merging the plurality of feature information of the target object samples in the group to obtain the object information of that object;
and in response to the target object samples in a group not being the same object, establishing the object information of each target object sample according to its respective feature information.
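The grouping-and-merging logic of claim 3 can be illustrated as follows. This is a sketch under stated assumptions: feature records are shown as plain dicts, and `same_object` is a hypothetical callable standing in for the trained discriminant model, whose architecture the patent does not disclose.

```python
def merge_group(group, same_object):
    """Merge the feature records of one group of object samples.

    group       : list of dicts, each mapping feature name -> value
    same_object : callable taking the group and returning True when the
                  discriminant model judges all samples to be one object
    Returns a single merged record when the samples are the same object,
    otherwise the records unchanged, one per sample.
    """
    if same_object(group):
        merged = {}
        for record in group:
            # Later samples only fill in features the earlier ones lacked.
            for name, value in record.items():
                merged.setdefault(name, value)
        return [merged]
    return list(group)
```

Merging complementary records (say, a face feature from one camera and a gait feature from another) is what lets one library entry accumulate several kinds of feature information for the same object.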
4. An event processing apparatus comprising:
an image processing module configured to acquire an image to be detected, and perform feature extraction on the image to be detected to obtain a plurality of feature information of the image to be detected;
a first determining module configured to determine event information of a target event, the event information including at least one of occurrence place information and occurrence time information;
a retrieval module configured to retrieve in a pre-established object information library according to the plurality of feature information of the image to be detected, and rank the retrieval results according to the event information of the target event;
a second determining module configured to determine object information of a target object in the image to be detected according to the ranking result;
a positioning module configured to track and position the target object according to the object information;
wherein the retrieval module comprises:
a retrieval unit configured to retrieve in the object information library according to first feature information among the plurality of feature information of the image to be detected, to obtain at least one candidate object;
a first acquisition unit configured to acquire spatiotemporal feature information from the object information of each candidate object;
a first calculation unit configured to calculate a first correlation between each candidate object and the target event according to the event information of the target event and the spatiotemporal feature information of each candidate object;
a second acquisition unit configured to acquire corresponding feature information from the object information of each candidate object according to at least one of second feature information, vehicle feature information, and spatiotemporal feature information among the plurality of feature information of the image to be detected;
a second calculation unit configured to input the at least one piece of feature information and the corresponding feature information into a pre-established discriminant model to obtain a second correlation between each candidate object and the target event;
a ranking unit configured to rank the at least one candidate object according to the first correlation and the second correlation;
wherein the retrieval module further comprises:
a determining unit configured to determine whether the candidate object has participated in a specific event;
a third acquisition unit configured to acquire description information of the specific event in response to the candidate object having participated in the specific event;
a fourth acquisition unit configured to acquire clue description keywords of the target event;
a third calculation unit configured to calculate a third correlation between the candidate object and the target event according to the description information of the specific event and the clue description keywords of the target event;
wherein the ranking unit is specifically configured to:
weight the scores of the first correlation, the second correlation, and the third correlation to obtain a final score, and rank the at least one candidate object according to the final score;
wherein the positioning module is specifically configured to:
acquire a motion track of the target object according to the object information, wherein the motion track comprises at least one of a snapshot track from the monitoring camera and an identity (ID) track;
merge the at least one of the snapshot track and the ID track of the target object, and perform conflict detection analysis on the merged motion track;
track and position the target object according to the motion track after the conflict detection analysis;
wherein performing conflict detection analysis on the merged motion track comprises:
acquiring abnormal points in the merged motion track through speed smoothing, and correcting the object information library according to information of the abnormal points and the object information.
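The ranking unit's weighted scoring over the three correlations can be sketched as below. The weights are purely illustrative assumptions; the claims require only that the three correlations be weighted into a final score, not any particular weight values.

```python
def rank_candidates(candidates, weights=(0.5, 0.3, 0.2)):
    """Rank candidate objects by a weighted sum of three correlation scores.

    candidates : list of (name, first_corr, second_corr, third_corr) tuples,
                 each correlation assumed to lie in [0, 1]
    weights    : illustrative weights for the spatiotemporal, feature-match,
                 and clue-keyword correlations respectively
    Returns the candidate names ordered by descending final score.
    """
    w1, w2, w3 = weights
    scored = [(w1 * c1 + w2 * c2 + w3 * c3, name)
              for name, c1, c2, c3 in candidates]
    scored.sort(reverse=True)  # highest final score first
    return [name for _, name in scored]
```

The top-ranked candidate is then taken as the target object whose object information drives the subsequent tracking and positioning.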
5. The apparatus of claim 4, further comprising:
an establishing module configured to pre-establish the object information library, the establishing module being specifically configured to:
acquire a monitoring video stream shot by a monitoring camera, and sample the monitoring video stream to obtain N video frames; wherein N is a positive integer;
perform object detection on each video frame to determine M target object samples in each video frame; wherein M is a positive integer;
acquire an image of each target object sample from the N video frames, and perform feature extraction on the images to obtain a plurality of feature information of each target object sample;
establish the object information of each target object sample according to the plurality of feature information of each target object sample;
and build a library from the object information of each target object sample to obtain the object information library.
6. The apparatus of claim 5, wherein the establishing module is further specifically configured to:
acquire a pre-established discriminant model; wherein the discriminant model is trained using a plurality of feature information of object samples;
group the target object samples, input the plurality of feature information of each target object sample in each group into the discriminant model, and determine whether the target object samples in each group are the same object;
in response to the target object samples in a group being the same object, merge the plurality of feature information of the target object samples in the group to obtain the object information of that object;
and in response to the target object samples in a group not being the same object, establish the object information of each target object sample according to its respective feature information.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 3.
8. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1 to 3.
9. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 3.
CN202110622066.4A 2021-06-03 2021-06-03 Event processing method, device, electronic equipment and storage medium Active CN113378005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110622066.4A CN113378005B (en) 2021-06-03 2021-06-03 Event processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110622066.4A CN113378005B (en) 2021-06-03 2021-06-03 Event processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113378005A CN113378005A (en) 2021-09-10
CN113378005B true CN113378005B (en) 2023-06-02

Family

ID=77575808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110622066.4A Active CN113378005B (en) 2021-06-03 2021-06-03 Event processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113378005B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115431174B (en) * 2022-09-05 2023-11-21 昆山市恒达精密机械工业有限公司 Method and system for controlling grinding of middle plate

Citations (4)

Publication number Priority date Publication date Assignee Title
US7970240B1 (en) * 2001-12-17 2011-06-28 Google Inc. Method and apparatus for archiving and visualizing digital images
CN110717414A (en) * 2019-09-24 2020-01-21 青岛海信网络科技股份有限公司 Target detection tracking method, device and equipment
CN110888877A (en) * 2019-11-13 2020-03-17 深圳市超视智慧科技有限公司 Event information display method and device, computing equipment and storage medium
WO2020248386A1 (en) * 2019-06-14 2020-12-17 平安科技(深圳)有限公司 Video analysis method and apparatus, computer device and storage medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN103020303B (en) * 2012-12-31 2015-08-19 中国科学院自动化研究所 Based on the historical events extraction of internet cross-media terrestrial reference and the searching method of picture concerned
US11417128B2 (en) * 2017-12-22 2022-08-16 Motorola Solutions, Inc. Method, device, and system for adaptive training of machine learning models via detected in-field contextual incident timeline entry and associated located and retrieved digital audio and/or video imaging
CN108932509A (en) * 2018-08-16 2018-12-04 新智数字科技有限公司 A kind of across scene objects search methods and device based on video tracking
CN109145931B (en) * 2018-09-03 2019-11-05 百度在线网络技术(北京)有限公司 Object detecting method, device and storage medium
CN110705476A (en) * 2019-09-30 2020-01-17 深圳市商汤科技有限公司 Data analysis method and device, electronic equipment and computer storage medium
CN110942036B (en) * 2019-11-29 2023-04-18 深圳市商汤科技有限公司 Person identification method and device, electronic equipment and storage medium
CN112084939A (en) * 2020-09-08 2020-12-15 深圳市润腾智慧科技有限公司 Image feature data management method and device, computer equipment and storage medium

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US7970240B1 (en) * 2001-12-17 2011-06-28 Google Inc. Method and apparatus for archiving and visualizing digital images
WO2020248386A1 (en) * 2019-06-14 2020-12-17 平安科技(深圳)有限公司 Video analysis method and apparatus, computer device and storage medium
CN110717414A (en) * 2019-09-24 2020-01-21 青岛海信网络科技股份有限公司 Target detection tracking method, device and equipment
CN110888877A (en) * 2019-11-13 2020-03-17 深圳市超视智慧科技有限公司 Event information display method and device, computing equipment and storage medium

Non-Patent Citations (1)

Title
Research on a Cross-Media Information Retrieval System Based on Emergency Events; Zi Lingling; Du Junping; Computer Simulation (06); full text *

Also Published As

Publication number Publication date
CN113378005A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN111967302B (en) Video tag generation method and device and electronic equipment
CN109284729B (en) Method, device and medium for acquiring face recognition model training data based on video
US9704046B2 (en) Discovering object pathways in a camera network
CN111259751B (en) Human behavior recognition method, device, equipment and storage medium based on video
US20210201161A1 (en) Method, apparatus, electronic device and readable storage medium for constructing key-point learning model
CN111582185B (en) Method and device for recognizing images
CN111611903B (en) Training method, using method, device, equipment and medium of motion recognition model
CN111598164A (en) Method and device for identifying attribute of target object, electronic equipment and storage medium
CN113033458B (en) Action recognition method and device
CN109902681B (en) User group relation determining method, device, equipment and storage medium
JP2021034003A (en) Human object recognition method, apparatus, electronic device, storage medium, and program
US11557120B2 (en) Video event recognition method, electronic device and storage medium
CN112507090A (en) Method, apparatus, device and storage medium for outputting information
JP2021520015A (en) Image processing methods, devices, terminal equipment, servers and systems
CN112148908A (en) Image database updating method and device, electronic equipment and medium
CN113378005B (en) Event processing method, device, electronic equipment and storage medium
CN112507833A (en) Face recognition and model training method, device, equipment and storage medium
CN111783619A (en) Human body attribute identification method, device, equipment and storage medium
US20220300774A1 (en) Methods, apparatuses, devices and storage media for detecting correlated objects involved in image
CN110889392B (en) Method and device for processing face image
CN111949820B (en) Video associated interest point processing method and device and electronic equipment
CN116403285A (en) Action recognition method, device, electronic equipment and storage medium
CN111241225A (en) Resident area change judgment method, resident area change judgment device, resident area change judgment equipment and storage medium
CN111985298B (en) Face recognition sample collection method and device
CN113283410B (en) Face enhancement recognition method, device and equipment based on data association analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant