CN111461031B - Object recognition system and method

Info

Publication number
CN111461031B
Authority
CN
China
Prior art keywords
gait
data
information
module
target
Prior art date
Legal status
Active
Application number
CN202010260664.7A
Other languages
Chinese (zh)
Other versions
CN111461031A (en)
Inventor
黄永祯
史伟康
肖渝洋
高东霞
蒲澍
田德宽
张居昌
金振亚
姚亮
韩春静
高峰
李晓莹
秦跃
张燃
刘富文
陈永华
Current Assignee
Yinhe Shuidi Technology Ningbo Co ltd
Original Assignee
Yinhe Shuidi Technology Ningbo Co ltd
Priority date
Filing date
Publication date
Application filed by Yinhe Shuidi Technology Ningbo Co ltd
Priority to CN202010260664.7A
Publication of CN111461031A
Application granted
Publication of CN111461031B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G06V40/25: Recognition of walking or running movements, e.g. gait recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Abstract

The application provides an object recognition system and method. The system includes: a data acquisition module for acquiring gait data of first objects in the environment where the data acquisition module is located; a data processing module for receiving the gait data transmitted by the data acquisition module, determining gait feature information of each first object based on the gait data and a preset gait recognition model, determining a target object and gait image information of the target object from the plurality of first objects based on the gait feature information and a preset historical video stream or gait feature database, and sending the gait image information to the display module; and a display module for displaying the gait image information.

Description

Object recognition system and method
Technical Field
The application relates to the technical field of data processing, in particular to an object recognition system and method.
Background
When identifying an object (such as a pedestrian or an animal), particularly in the security field, face recognition or fingerprint recognition is generally used. However, these technologies are limited to short-distance identification: fingerprint recognition requires physical contact with the human body to succeed, and face recognition fails when the object's face is blocked. As a result, accurate and efficient long-distance, non-contact, full-view identification cannot be performed, and the identification information of the object cannot be displayed.
Disclosure of Invention
In view of the above, an object of the present application is to provide an object recognition system and method for improving the recognition efficiency of objects.
In a first aspect, an embodiment of the present application provides an object recognition system, including: a data acquisition module, a data processing module and a display module;
the data acquisition module is used for acquiring gait data of a first object in the environment where the data acquisition module is located;
the data processing module is used for receiving the gait data transmitted by the data acquisition module, determining the gait characteristic information of each first object based on the gait data and a preset gait recognition model, determining a target object and the gait image information of the target object from a plurality of first objects based on the gait characteristic information and a preset historical video stream or a gait characteristic database, and transmitting the gait image information to the display module;
the display module is used for displaying the gait image information.
In one embodiment, the gait data comprises at least one of a video stream and a gait image sequence.
In one embodiment, the data processing module includes a data analysis module for:
determining gait feature information of a second object included in each of the historical video streams based on the gait recognition model;
and determining a target object from a plurality of first objects based on the gait feature information corresponding to the first objects and the gait feature information corresponding to the second objects or the gait feature database.
In one embodiment, the data processing module includes a region collision module for:
receiving gait feature information of a second object transmitted by the data analysis module;
determining a first historical video stream belonging to the same location area from a plurality of historical video streams based on source information of the video streams;
for different location areas, determining a second object corresponding to the first historical video stream belonging to the location area as a third object;
determining the target object from a plurality of first objects based on gait feature information corresponding to the first objects and gait feature information corresponding to a third object corresponding to each location area; or, alternatively:
determining a second historical video stream belonging to the same preset time period from a plurality of historical video streams;
determining a second object corresponding to a second historical video stream belonging to different preset time periods as a fourth object;
and determining the target object from a plurality of first objects based on the gait feature information corresponding to the first objects and the gait feature information corresponding to the fourth objects corresponding to each preset time period.
In one embodiment, the data processing module includes a data retrieval module for:
gait image information of the target object is determined from gait data and/or historical video streams comprising the target object based on the gait feature information of the target object.
In one embodiment, the data processing module is further configured to:
acquiring gait feature information of a target suspect;
determining the similarity between the target suspect and the target object based on the gait feature information of the target suspect and the gait feature information of the target object;
if the similarity is determined to be greater than a preset similarity threshold, determining target gait image information of the target suspect based on the gait image information of the target object, and sending the target gait image information to the display module;
the display module is further configured to:
display the target gait image information of the target suspect.
In one embodiment, the system further comprises a warning module, and the data processing module is further configured to: after determining that the similarity is greater than the preset similarity threshold, generate warning information and send the warning information to the warning module and the display module respectively;
the warning module is used for warning based on the warning information;
the display module is also used for displaying the warning information.
In one embodiment, the data processing module is further configured to:
determining target gait data corresponding to the target object from the gait data;
the system further comprises a case management module and a storage module, and the case management module is configured to:
acquiring case data corresponding to a plurality of cases respectively, and target gait data and gait characteristic information of a target object transmitted by the data processing module;
based on the case data and gait feature information of the target object, determining associated gait data respectively associated with a plurality of cases from the target gait data;
and indicating the storage module to store the cases, the case data and the corresponding associated gait data in an associated manner.
In one embodiment, the storage module is further configured to:
store gait feature information of the target object.
In a second aspect, an embodiment of the present application provides an object recognition method, where the method is applied to the object recognition system in any one of the first aspect, and the object recognition system includes: the system comprises a data acquisition module, a data processing module and a display module;
the data acquisition module acquires gait data of a first object in an environment where the data acquisition module is located;
the data processing module receives the gait data, determines gait characteristic information of each first object based on the gait data and a preset gait recognition model, and determines a target object and gait image information of the target object from a plurality of first objects based on the gait characteristic information and a preset historical video stream or a gait characteristic database;
the display module displays the gait image information.
The embodiment of the application provides an object recognition system, which comprises a data acquisition module, a data processing module and a display module; the data acquisition module is used for acquiring gait data corresponding to the first objects respectively, the data processing module is used for receiving the gait data transmitted by the data acquisition module, determining gait characteristic information of each first object based on the gait data and a preset gait recognition model, determining a target object and gait image information of the target object from the first objects based on the gait characteristic information and a preset historical video stream or gait characteristic database, and sending the gait image information to the display module so as to display the gait image information by the display module, so that the information of the object can be recognized efficiently without short-distance contact.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a first structure of an object recognition system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a second configuration of an object recognition system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a third configuration of an object recognition system according to an embodiment of the present application;
fig. 4 is a schematic flow chart of an object recognition method according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for the purpose of illustration and description only and are not intended to limit the scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this disclosure, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to or removed from the flow diagrams by those skilled in the art under the direction of the present disclosure.
In addition, the described embodiments are only some, but not all, embodiments of the application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
It should be noted that the term "comprising" will be used in embodiments of the application to indicate the presence of the features stated hereafter, but not to exclude the addition of other features.
The embodiment of the application provides an object recognition system, which comprises: the system comprises a data acquisition module, a data processing module and a display module; the data acquisition module is used for acquiring gait data corresponding to the first objects respectively, the data processing module is used for receiving the gait data transmitted by the data acquisition module, determining gait characteristic information of each first object based on the gait data and a preset gait recognition model, determining a target object and gait image information of the target object from the first objects based on the gait characteristic information and a preset historical video stream or gait characteristic database, and sending the gait image information to the display module so as to display the gait image information by the display module, so that the object data can be recognized efficiently without short-distance contact.
An embodiment of the present application provides an object recognition system, as shown in fig. 1, where the system specifically includes: a data acquisition module 11, a data processing module 12 and a display module 13.
The data acquisition module 11 is used for acquiring gait data of a first object in the environment where the data acquisition module is located;
the data processing module 12 is configured to receive the gait data transmitted by the data acquisition module 11, determine gait feature information of each first object based on the gait data and a preset gait recognition model, determine a target object from a plurality of first objects based on the gait feature information and a preset historical video stream or a gait feature database, and send the gait image information to the display module;
the display module 13 is configured to display the gait image information.
The object may be a pedestrian, an animal, or the like, and may be determined according to actual conditions.
The data acquisition module 11 can be applied in scenes such as a shopping mall, a transportation hub or a school, and the data acquisition module 11 at least comprises a video camera and a gait recognition camera. The ordinary video camera can acquire a video stream of an object but cannot extract gait images from the video stream, so when the data acquisition module is a video camera, the gait data it acquires is the video stream of the object.
The gait recognition camera has embedded gait detection and gait tracking algorithms; it may be a gait snapshot camera, an all-in-one gait recognition camera, or the like. It has video structural analysis capability and can extract gait information and perform gait recognition.
The gait data acquired by the data acquisition module 11 may be a video stream or a gait image sequence, determined according to the actual situation. The gait information of the object, the area to which the object belongs, the time at which the camera acquired the gait data, and the device identifier of that camera may be acquired at the same time.
The data processing module 12 includes a data analysis module 121, a region collision module 122, and a data retrieval module 123, and the functions of the data analysis module 121, the functions of the region collision module 122, and the functions of the data retrieval module 123 are described below with reference to fig. 2, respectively.
The data analysis module 121 may determine gait feature information of the second object included in each of the historical video streams based on the gait recognition model, and determine a target object from among the plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information or gait feature database corresponding to the second object.
Here, the gait recognition model is a preset, pre-trained model. The historical video stream may be a video stream from a video acquisition device not connected to the system, a video stream from a video acquisition device connected to the system, or a historical case video captured from various platforms. A large number of sample objects and their corresponding gait feature information are pre-stored in the gait feature database. Video acquisition devices not connected to the system include a mobile terminal carried by the object or a portable camera device; camera devices connected to the system include the video camera and the gait recognition camera.
The gait feature information may be information such as the magnitude, direction and point of application of force when the object walks.
In a specific implementation, after receiving the gait data transmitted by the data acquisition module, when the gait data is a video stream, the data analysis module 121 extracts, through a gait detection algorithm, the gait images corresponding to each of the first objects from the video stream. For each first object, it inputs that object's gait images into a gait tracking algorithm to obtain the gait image sequence corresponding to the first object, and then inputs the gait image sequence into the gait recognition model to obtain the gait feature information of the first object. The gait detection algorithm and the gait tracking algorithm may be embedded in a gait recognition box, which is arranged in the data processing module.
When the gait data is a gait image sequence, the gait image sequence corresponding to each first object is directly input into the gait recognition model to obtain the gait feature information corresponding to each first object.
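As an illustrative, non-limiting sketch of this pipeline (not part of the patent text), the following Python fragment shows the detection, tracking and recognition steps; the detector, tracker and model objects and their detect/track/infer methods are hypothetical stand-ins for the gait detection algorithm, gait tracking algorithm and preset gait recognition model described above.

```python
from collections import defaultdict

def extract_gait_features(gait_data, detector, tracker, model):
    """Return {object_id: gait_feature_info} for every first object in the gait data.

    gait_data, detector, tracker and model are hypothetical placeholders for the
    inputs and algorithms described in the paragraphs above.
    """
    if gait_data["kind"] == "video_stream":
        # Gait detection: extract per-frame gait images of each first object.
        detections = detector.detect(gait_data["frames"])    # -> [(object_id, gait_image), ...]
        # Gait tracking: assemble a gait image sequence per first object.
        sequences = defaultdict(list)
        for object_id, gait_image in tracker.track(detections):
            sequences[object_id].append(gait_image)
    else:
        # The data acquisition module already produced gait image sequences.
        sequences = gait_data["sequences"]                    # {object_id: [gait_image, ...]}
    # Gait recognition model: image sequence -> gait feature information.
    return {object_id: model.infer(seq) for object_id, seq in sequences.items()}
```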
After obtaining the gait feature information of the first object, the data analysis module 121 is further configured to perform recognition processing on each historical video stream by using the gait recognition model, so as to obtain gait feature information of the second object included in each historical video stream. Before the historical video stream is processed by the gait recognition model, the historical video stream can be processed by a gait detection algorithm and a gait tracking algorithm to obtain a gait image sequence corresponding to each second object, so that the gait image sequences of the second objects are input into the gait recognition model to obtain gait characteristic information of the second objects.
After the gait feature information corresponding to the first object and the second object is obtained, feature extraction can be performed on the gait feature information of the first object to obtain a gait feature vector corresponding to the first object, and feature extraction is performed on the gait feature information of the second object to obtain a gait feature vector corresponding to the second object.
For each first object, the gait feature vector of that first object and the gait feature vector of each second object are input into a similarity calculation algorithm to obtain the first gait similarity between the first object and each second object. The similarity calculation algorithm may be a cosine similarity algorithm, a Euclidean distance algorithm, or the like.
When, among the plurality of first gait similarities corresponding to a first object, there are first gait similarities greater than a preset similarity threshold (indicating that the gait feature information of the first object is highly similar to that of some second objects), the first number of first gait similarities greater than the preset similarity threshold can be counted. The similarity threshold may be determined according to the actual situation; for example, the preset similarity threshold may be 90%, 95% or 98%.
The first objects are then sorted in descending order of the first number, and a preset number of the first objects ranked first are determined as target objects. The preset number may be determined according to the actual situation; for example, it may be 5, 8 or 10.
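The matching and ranking step can be sketched as follows; this is a hedged illustration rather than the patent's implementation, using cosine similarity (one of the algorithms named above) and example values for the threshold and the preset number.

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_target_objects(first_features, second_features, threshold=0.95, preset_number=5):
    """Count, per first object, how many second objects exceed the similarity
    threshold (the "first number"), then return the top preset_number object ids."""
    counts = {}
    for first_id, first_vec in first_features.items():
        sims = (cosine_similarity(first_vec, second_vec)
                for second_vec in second_features.values())
        counts[first_id] = sum(s > threshold for s in sims)
    ranked = sorted((fid for fid, c in counts.items() if c > 0),
                    key=lambda fid: counts[fid], reverse=True)
    return ranked[:preset_number]
```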
In the present application, the target object is determined from the historical video streams, and valuable data is mined from historical video data; for example, useful information can be obtained from unsolved cold cases, so that criminals can be found and the utilization rate of video resources is improved.
In addition to determining the target object in the above manner, the target object may also be determined using the first objects and the gait feature information in the gait feature database, which may include the following steps:
Feature extraction is performed on the gait feature information in the gait feature database to obtain the gait feature vector corresponding to each sample object.
For each first object, the gait feature vector of that first object and the gait feature vector of each sample object are input into the similarity calculation algorithm to obtain the second gait similarity between the first object and each sample object.
When, among the plurality of second gait similarities corresponding to a first object, there are second gait similarities greater than the preset similarity threshold (indicating that the gait feature information of the first object is highly similar to that of some sample objects), the second number of second gait similarities greater than the preset similarity threshold can be counted; the first objects are then sorted in descending order of the second number, and a preset number of the first objects ranked first are determined as target objects.
When determining the target object from the first objects and the second objects, the present disclosure may further use the source information of the video capture devices that captured the video streams to determine the target object, as described in detail below.
After determining the gait feature information of the second object, the data analysis module 121 may further transmit the gait feature information of the second object to the area collision module 122, where the area collision module 122 may determine, from the plurality of historical video streams, a first historical video stream belonging to the same location area based on source information of the video stream, determine, for different location areas, a second object corresponding to the first historical video stream belonging to the location area as a third object, and determine, from the plurality of first objects, a target object based on the gait feature information corresponding to the first object and the gait feature information corresponding to the third object corresponding to each location area.
Here, the source information of a video stream includes the device identifier of the image capturing device that captured the video stream and the device location information. The device identifier may be a device serial number or a preset code, and the device location information may be the installation location (such as GPS coordinates) of the device that captured the historical video stream.
In a specific implementation, the historical video streams are further divided: the actual distance between each pair of image capturing devices corresponding to the historical video streams is calculated according to the device location information in the source information of the video streams; if the actual distance is smaller than a preset distance threshold, the corresponding historical video streams are determined to belong to the same location area, and the historical video streams belonging to the same location area are taken as first historical video streams.
In addition to determining the first historical video streams belonging to the same location area in the above manner, the device location information corresponding to each historical video stream may be marked on a map, and the historical video streams belonging to the same administrative area may be determined as first historical video streams.
For example, suppose there are five historical video streams Q1, Q2, Q3, Q4 and Q5. If the actual distance between the devices of Q1 and Q2 and the actual distance between the devices of Q2 and Q3 are both smaller than the preset distance threshold, then Q1, Q2 and Q3 are determined to be first historical video streams belonging to the same location area.
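One possible way to implement this grouping into location areas (an illustrative sketch only; the distance function and stream records are assumptions supplied by the caller) is a simple union-find over pairwise device distances:

```python
from collections import defaultdict

def group_streams_by_location(streams, distance_fn, distance_threshold):
    """streams: [{'id': 'Q1', 'location': (lat, lon)}, ...].
    Streams whose devices are closer than distance_threshold end up in one location area."""
    parent = {s["id"]: s["id"] for s in streams}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i, a in enumerate(streams):
        for b in streams[i + 1:]:
            if distance_fn(a["location"], b["location"]) < distance_threshold:
                parent[find(a["id"])] = find(b["id"])   # merge into the same location area

    areas = defaultdict(list)
    for s in streams:
        areas[find(s["id"])].append(s["id"])
    return list(areas.values())
```

With the Q1 to Q5 example above and, say, a haversine distance_fn, this would return [['Q1', 'Q2', 'Q3'], ['Q4'], ['Q5']], so Q1, Q2 and Q3 form the first historical video streams of one location area.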
For each location area, a second object corresponding to the first historical video stream belonging to the location area is determined, and the determined second object is used as a third object.
For each first object, the gait feature vector of that first object and the gait feature vectors of the third objects corresponding to each location area are input into the similarity calculation algorithm to obtain the third gait similarity between the first object and each third object.
When, among the plurality of third gait similarities corresponding to a first object, there are third gait similarities greater than the preset similarity threshold (indicating that the gait feature information of the first object is highly similar to that of some third objects), the number of third gait similarities greater than the preset similarity threshold can be counted.
The first objects are then sorted in descending order of this number, and a preset number of the first objects ranked first are determined as target objects.
In addition to determining the target object using the areas to which the image capturing devices that captured the video streams belong, second historical video streams belonging to the same preset time period may be determined from the plurality of historical video streams; the second objects corresponding to second historical video streams belonging to different preset time periods are determined as fourth objects; and the target object is determined from the plurality of first objects based on the gait feature information corresponding to the first objects and the gait feature information corresponding to the fourth objects corresponding to each preset time period.
Here, the preset time period may be a commute time period (e.g., 7:00-9:00, 17:00-19:00), and a plurality of time periods may be preset.
In a specific implementation, for each historical video stream, the second historical video stream corresponding to each preset time period is clipped from that historical video stream, the second objects corresponding to each second historical video stream are determined, and the determined second objects are taken as fourth objects.
For each first object, the gait feature vector of that first object and the gait feature vector of each fourth object are input into the similarity calculation algorithm to obtain the fourth gait similarity between the first object and each fourth object.
When, among the plurality of fourth gait similarities corresponding to a first object, there are fourth gait similarities greater than the preset similarity threshold (indicating that the gait feature information of the first object is highly similar to that of some fourth objects), the third number of fourth gait similarities greater than the preset similarity threshold can be counted.
The first objects are then sorted in descending order of the third number, and a preset number of the first objects ranked first are determined as target objects.
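For the time-period variant, the clipping of second historical video streams by preset periods could look like the following minimal sketch; the commute periods are the example values from the text, and the timestamped frame records are hypothetical.

```python
from collections import defaultdict
from datetime import time

PRESET_PERIODS = [(time(7, 0), time(9, 0)), (time(17, 0), time(19, 0))]  # example commute periods

def clip_by_periods(frames_with_timestamps, periods=PRESET_PERIODS):
    """frames_with_timestamps: [(datetime_ts, frame), ...].
    Returns {period_index: [frame, ...]}, one second historical video stream per period."""
    clipped = defaultdict(list)
    for ts, frame in frames_with_timestamps:
        for idx, (start, end) in enumerate(periods):
            if start <= ts.time() <= end:
                clipped[idx].append(frame)
    return clipped
```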
After the data analysis module 121 or the area collision module 122 determines the target object, gait feature information of the target object is transmitted to the data retrieval module 123, and the data retrieval module 123 may determine gait image information of the target object from the gait data and/or the historical video stream corresponding to the target object based on the gait feature information of the target object.
The gait image information includes a gait image, position information, time information, identity information of the object, and activity track information. The position information may be GPS coordinates and is the position of the data acquisition module that acquired the gait data of the target object; the time information is the time when the target object passed the position of that data acquisition module; when the target object is a user, the identity information is identity information authorized by the user, and the activity track information is track information authorized by the user.
In a specific implementation, after the target object is determined, the gait image sequence or current video stream corresponding to the target object may be acquired, the historical video stream including the target object may be acquired, or both the gait data and the historical video stream including the target object may be acquired at the same time, as determined by the actual situation.
Based on the gait feature information of the target object, the position information of the target object, the time information when the target object passed the corresponding position, and the gait image of the target object at that position are determined from the current video stream or the historical video stream including the target object; the position information, time information, gait image, identity information, activity track information and the like of the target object are taken as the gait image information.
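The gait image information described above could be represented by a simple record such as the following; this structure is an assumption for illustration and is not mandated by the patent.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, List, Tuple

@dataclass
class GaitImageInfo:
    gait_image: Any                                   # gait image of the target object
    position: Tuple[float, float]                     # GPS coordinates of the acquisition module
    timestamp: datetime                               # when the target passed that position
    identity: str = ""                                # only identity information authorized by the user
    activity_track: List[Tuple[float, float]] = field(default_factory=list)  # authorized track
```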
The system of the present application can also provide early warning. During early warning, the data processing module 12 can acquire the gait feature information of a target suspect and determine the similarity between the target suspect and the target object based on the gait feature information of the target suspect and the gait feature information of the target object; if the similarity is determined to be greater than the preset similarity threshold, the target gait image information of the target suspect is determined based on the gait image information of the target object and sent to the display module 13.
Here, the gait feature information of the target suspect is authorized gait feature information.
In a specific implementation, after the gait feature information of the target suspect is obtained, feature extraction can be performed on it to obtain the gait feature vector corresponding to the target suspect, and feature extraction is performed on the gait feature information of the target object to obtain the gait feature vector of the target object.
The gait feature vector of the target suspect and the gait feature vector of the target object are input into the similarity calculation algorithm to obtain the similarity between the target suspect and the target object.
When the similarity between the target suspect and the target object is greater than the preset similarity threshold, the gait image information of the target object may be taken as the target gait image information of the target suspect, so as to be displayed in the display module 13.
When the system determines that the similarity between the target suspect and the target object is greater than the preset similarity threshold, warning information can also be generated and sent to the warning module, and the warning module issues a warning based on the warning instruction in the warning information; for example, the warning may be given by a warning light (such as a red strobe light) or by voice.
The display module 13 may display the warning information while the warning module is warning; the warning information includes information such as the number of target suspects and the identity information of each target suspect.
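The early-warning comparison can be sketched as follows; this is an illustrative assumption, with the warning fields and record shapes invented for the example rather than taken from the patent.

```python
import numpy as np

def early_warning(suspect_feature, target_feature, target_info, threshold=0.95):
    """Compare an authorized suspect feature vector against a target object's feature vector.
    Returns (target_gait_image_info, warning_info) on a match, otherwise (None, None)."""
    a = np.asarray(suspect_feature, dtype=float)
    b = np.asarray(target_feature, dtype=float)
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))  # cosine similarity
    if similarity <= threshold:
        return None, None
    warning_info = {
        "suspect_count": 1,
        "identities": [getattr(target_info, "identity", "")],
        "alert_modes": ["red strobe light", "voice"],   # example warning modes mentioned in the text
    }
    return target_info, warning_info                    # sent to the display and warning modules
```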
The system disclosed in the present application can also associate and store cases authorized by a third party. This association and storage can be realized through the case management module 15, as shown in Fig. 3.
When determining the gait image information, the data processing module 12 may also determine the target gait data corresponding to the target object from the gait data, that is, determine the gait image sequence or video stream including the target object, take that video stream or gait image sequence as the target gait data, and transmit the target gait data to the case management module 15.
The case management module 15 may acquire case data corresponding to each of the plurality of cases, and the target gait data transmitted by the data processing module 12.
Here, the case data includes data such as the case name, the case content and the case video; the case content includes detailed information about the case (such as gait feature information of persons involved in the case), and the case video includes related video from the time the case occurred.
The case management module 15 may determine, from the target gait data and based on the case data, the associated gait data respectively associated with the plurality of cases, and instruct the storage module 14 to store the cases, the case data and the corresponding associated gait data in association.
In a specific implementation, the case management module 15 may extract the gait feature information of persons involved in a case from the case video, or extract it from the case content. For each case, the case management module 15 may calculate the similarity between the target object and the case according to the gait feature information of the persons involved in the case and the gait feature information of the target object; if the similarity is greater than the preset similarity threshold, the target gait data of the target object may be taken as the associated gait data associated with that case, and the case identifier, the corresponding case data and the corresponding associated gait data are stored. Meanwhile, the storage module 14 may also store the gait feature information and the gait image information of the target object.
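The case-association step can be illustrated with the sketch below; the case records and field names are assumptions for the example and do not reflect the patent's actual data model.

```python
import numpy as np

def _cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def associate_cases(cases, target_feature, target_gait_data, threshold=0.95):
    """cases: [{'case_id': ..., 'involved_feature': vector, 'case_data': {...}}, ...].
    Returns records the storage module would store in association with each matching case."""
    associations = []
    for case in cases:
        if _cosine(case["involved_feature"], target_feature) > threshold:
            associations.append({
                "case_id": case["case_id"],
                "case_data": case["case_data"],
                "associated_gait_data": target_gait_data,   # target gait data linked to this case
            })
    return associations
```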
In addition, the data processing module 12 may periodically extract the gait feature information corresponding to each case from the storage module and use it, together with the gait feature information extracted from video streams acquired in real time, to monitor the cases; if the gait feature information extracted from a video stream matches the gait feature information of a case, a warning may be issued and the position information and movement track of the persons involved determined.
The embodiment of the application provides an object recognition method, which is applied to an object recognition system, as shown in fig. 4, wherein the object recognition system comprises a data acquisition module, a data processing module and a display module; comprising the following steps:
s401, the data acquisition module acquires gait data of a first object in an environment where the data acquisition module is located;
s402, the data processing module receives the gait data, determines gait feature information of each first object based on the gait data and a preset gait recognition model, and determines a target object and gait image information of the target object from a plurality of first objects based on the gait feature information and a preset historical video stream or a gait feature database;
s403, the display module displays the gait image information.
In one embodiment, the gait data comprises at least one of a video stream and a gait image sequence.
In one embodiment, the data processing module includes a data analysis module, and the data processing module determines a target object from a plurality of first objects based on the gait feature information and a preset historical video stream or gait feature database, including:
the data analysis module determines gait feature information of a second object included in each of the historical video streams based on the gait recognition model; the method comprises the steps of,
and determining a target object from the plurality of first objects based on the gait feature information corresponding to the first objects and the gait feature information or the gait feature database corresponding to the second objects.
In one embodiment, the data processing module includes a region collision module, and the data processing module determines a target object from a plurality of first objects based on the gait feature information and a preset historical video stream, including:
the regional collision module receives gait characteristic information of the second object transmitted by the data analysis module; the method comprises the steps of,
determining a first historical video stream belonging to the same location area from a plurality of historical video streams based on source information of the video streams;
for different location areas, determining a second object corresponding to the first historical video stream belonging to the location area as a third object;
determining the target object from a plurality of first objects based on gait feature information corresponding to the first objects and gait feature information corresponding to a third object corresponding to each location area; or, alternatively:
determining a second historical video stream belonging to the same preset time period from a plurality of historical video streams;
determining a second object corresponding to a second historical video stream belonging to different preset time periods as a fourth object;
and determining the target object from a plurality of first objects based on the gait feature information corresponding to the first objects and the gait feature information corresponding to the fourth objects corresponding to each preset time period.
In one embodiment, the data processing module includes a data retrieval module that determines gait image information of a target object, comprising:
the data retrieval module determines gait image information of a target object from gait data including the target object and/or a historical video stream based on gait feature information of the target object.
In one embodiment, the method further comprises:
the data processing module acquires gait feature information of a target suspect;
the data processing module determines the similarity between the target suspect and the target object based on the gait feature information of the target suspect and the gait feature information of the target object;
if the data processing module determines that the similarity is greater than a preset similarity threshold, it determines target gait image information of the target suspect based on the gait image information of the target object, and sends the target gait image information to the display module;
and the display module displays the target gait image information of the target suspect.
In one embodiment, the system further comprises a warning module, and the method further comprises:
after the data processing module determines that the similarity is larger than a preset similarity threshold, generating warning information, and respectively sending the warning information to a warning module and a display module;
the warning module warns based on the warning information;
the display module also displays the warning information.
In one embodiment, the system further comprises a case management module and a storage module, and the method further comprises:
the data processing module determines target gait data corresponding to the target object from the gait data;
the case management module acquires case data corresponding to a plurality of cases respectively, and target gait data and gait characteristic information of a target object transmitted by the data processing module;
the case management module determines associated gait data respectively associated with a plurality of cases from the target gait data based on the case data and the gait characteristic information of the target object;
the case management module instructs the storage module to store the cases, the case data and the corresponding associated gait data in an associated manner.
In one embodiment, the method further comprises: the storage module stores gait feature information of the target object.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the method embodiments, and are not repeated in the present disclosure. In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, and the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, and for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, indirect coupling or communication connection of devices or modules, electrical, mechanical, or other form.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily appreciate variations or alternatives within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (8)

1. An object recognition system, the system comprising: the system comprises a data acquisition module, a data processing module and a display module, wherein the data processing module comprises a data analysis module and a region collision module;
the data acquisition module is used for acquiring gait data of a first object in the environment where the data acquisition module is located;
the data processing module is used for receiving the gait data transmitted by the data acquisition module, determining the gait characteristic information of each first object based on the gait data and a preset gait recognition model, determining a target object from a plurality of first objects based on the gait characteristic information and a preset historical video stream or a gait characteristic database, taking the position information, the time information, the gait image, the identity information and the activity track information of the target object as gait image information, and sending the gait image information to the display module;
the display module is used for displaying the gait image information;
the data analysis module is used for determining gait feature information of a second object included in each historical video stream based on the gait recognition model;
determining a target object from a plurality of first objects based on gait feature information corresponding to the first object and gait feature information corresponding to the second object or the gait feature database;
the area collision module is used for receiving gait characteristic information of the second object transmitted by the data analysis module;
determining a first historical video stream belonging to the same location area from a plurality of historical video streams based on source information of the video streams;
for different location areas, determining a second object corresponding to the first historical video stream belonging to the location area as a third object;
determining the target object from a plurality of first objects based on gait feature information corresponding to the first objects and gait feature information corresponding to a third object corresponding to each location area; or, alternatively:
determining a second historical video stream belonging to the same preset time period from a plurality of historical video streams;
determining a second object corresponding to a second historical video stream belonging to different preset time periods as a fourth object;
and determining the target object from a plurality of first objects based on the gait feature information corresponding to the first objects and the gait feature information corresponding to the fourth objects corresponding to each preset time period.
2. The system of claim 1, wherein the gait data comprises at least one of a video stream and a sequence of gait images.
3. The system of claim 1, wherein the data processing module comprises a data retrieval module to:
gait image information of the target object is determined from gait data and/or historical video streams comprising the target object based on the gait feature information of the target object.
4. The system of claim 1, wherein the data processing module is further to:
acquiring gait feature information of a target suspect;
determining the similarity between the target suspect and the target object based on the gait feature information of the target suspect and the gait feature information of the target object;
if the similarity is determined to be greater than a preset similarity threshold, determining target gait image information of the target suspect based on the gait image information of the target object, and sending the target gait image information to the display module;
the display module is further configured to:
display the target gait image information of the target suspect.
5. The system of claim 4, wherein the system further comprises a warning module, and the data processing module is further configured to: after determining that the similarity is greater than the preset similarity threshold, generate warning information and send the warning information to the warning module and the display module respectively;
the warning module is used for warning based on the warning information;
the display module is also used for displaying the warning information.
6. The system of claim 1, wherein the data processing module is further to:
determining target gait data corresponding to the target object from the gait data;
the system further comprises a case management module and a storage module, and the case management module is configured to:
acquiring case data corresponding to a plurality of cases respectively, and target gait data and gait characteristic information of a target object transmitted by the data processing module;
based on the case data and gait feature information of the target object, determining associated gait data respectively associated with a plurality of cases from the target gait data;
and indicating the storage module to store the cases, the case data and the corresponding associated gait data in an associated manner.
7. The system of claim 6, wherein the storage module is further to:
and storing gait characteristic information of the target object.
8. An object recognition method, characterized in that the method is applied in an object recognition system according to any one of claims 1-7, said object recognition system comprising: the system comprises a data acquisition module, a data processing module and a display module, wherein the data processing module comprises a data analysis module and a region collision module;
the data acquisition module acquires gait data of a first object in an environment where the data acquisition module is located;
the data processing module receives the gait data, determines gait characteristic information of each first object based on the gait data and a preset gait recognition model, determines a target object from a plurality of first objects based on the gait characteristic information and a preset historical video stream or a gait characteristic database, and takes position information, time information, a gait image, identity information and activity track information of the target object as gait image information;
the display module displays the gait image information;
the data analysis module determines gait feature information of a second object included in each of the historical video streams based on the gait recognition model;
determining a target object from a plurality of first objects based on gait feature information corresponding to the first object and gait feature information corresponding to the second object or the gait feature database;
the regional collision module receives gait characteristic information of the second object transmitted by the data analysis module;
determining a first historical video stream belonging to the same location area from a plurality of historical video streams based on source information of the video streams;
for different location areas, determining a second object corresponding to the first historical video stream belonging to the location area as a third object;
determining the target object from a plurality of first objects based on gait feature information corresponding to the first objects and gait feature information corresponding to a third object corresponding to each location area; or, alternatively:
determining a second historical video stream belonging to the same preset time period from a plurality of historical video streams;
determining a second object corresponding to a second historical video stream belonging to different preset time periods as a fourth object;
and determining the target object from a plurality of first objects based on the gait feature information corresponding to the first objects and the gait feature information corresponding to the fourth objects corresponding to each preset time period.
CN202010260664.7A 2020-04-03 2020-04-03 Object recognition system and method Active CN111461031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010260664.7A CN111461031B (en) 2020-04-03 2020-04-03 Object recognition system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010260664.7A CN111461031B (en) 2020-04-03 2020-04-03 Object recognition system and method

Publications (2)

Publication Number Publication Date
CN111461031A CN111461031A (en) 2020-07-28
CN111461031B (en) 2023-10-24

Family

ID=71680451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010260664.7A Active CN111461031B (en) 2020-04-03 2020-04-03 Object recognition system and method

Country Status (1)

Country Link
CN (1) CN111461031B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506684A (en) * 2016-06-14 2017-12-22 中兴通讯股份有限公司 Gait recognition method and device
CN108108693A (en) * 2017-12-20 2018-06-01 深圳市安博臣实业有限公司 Intelligent identification monitoring device and recognition methods based on 3D high definition VR panoramas
CN109508645A (en) * 2018-10-19 2019-03-22 银河水滴科技(北京)有限公司 Personal identification method and device under monitoring scene
CN109544751A (en) * 2018-11-23 2019-03-29 银河水滴科技(北京)有限公司 A kind of Door-access control method and device
CN109634981A (en) * 2018-12-11 2019-04-16 银河水滴科技(北京)有限公司 A kind of database expansion method and device
CN110139075A (en) * 2019-05-10 2019-08-16 银河水滴科技(北京)有限公司 Video data handling procedure, device, computer equipment and storage medium
CN110765984A (en) * 2019-11-08 2020-02-07 北京市商汤科技开发有限公司 Mobile state information display method, device, equipment and storage medium
CN110781711A (en) * 2019-01-21 2020-02-11 北京嘀嘀无限科技发展有限公司 Target object identification method and device, electronic equipment and storage medium
CN110874568A (en) * 2019-09-27 2020-03-10 银河水滴科技(北京)有限公司 Security check method and device based on gait recognition, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111461031A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
WO2019153193A1 (en) Taxi operation monitoring method, device, storage medium, and system
CN109766755B (en) Face recognition method and related product
CN108540751A (en) Monitoring method, apparatus and system based on video and electronic device identification
CN101404107A (en) Internet bar monitoring and warning system based on human face recognition technology
CN108563651B (en) Multi-video target searching method, device and equipment
CN108540750A (en) Based on monitor video and the associated method, apparatus of electronic device identification and system
CN108540756A (en) Recognition methods, apparatus and system based on video and electronic device identification
CN111539338A (en) Pedestrian mask wearing control method, device, equipment and computer storage medium
CN109426785A (en) A kind of human body target personal identification method and device
CN111368619A (en) Method, device and equipment for detecting suspicious people
CN112419639A (en) Video information acquisition method and device
CN112949439A (en) Method and system for monitoring invasion of personnel in key area of oil tank truck
CN113505704B (en) Personnel safety detection method, system, equipment and storage medium for image recognition
CN110363180A (en) A kind of method and apparatus and equipment that statistics stranger's face repeats
CN114067396A (en) Vision learning-based digital management system and method for live-in project field test
CN114170272A (en) Accident reporting and storing method based on sensing sensor in cloud environment
CN113901946A (en) Abnormal behavior detection method and device, electronic equipment and storage medium
CN111461031B (en) Object recognition system and method
CN113627321A (en) Image identification method and device based on artificial intelligence and computer equipment
CN115457449B (en) Early warning system based on AI video analysis and monitoring security protection
CN108540748A (en) Monitor video and the associated method, apparatus of electronic device identification and system
CN110751125A (en) Wearing detection method and device
CN112419638B (en) Method and device for acquiring alarm video
CN111353477B (en) Gait recognition system and method
CN114743026A (en) Target object orientation detection method, device, equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210201

Address after: 315000 9-3, building 91, 16 Buzheng lane, Haishu District, Ningbo City, Zhejiang Province

Applicant after: Yinhe shuidi Technology (Ningbo) Co.,Ltd.

Address before: 0701, 7 / F, 51 Xueyuan Road, Haidian District, Beijing 100191

Applicant before: Watrix Technology (Beijing) Co.,Ltd.

GR01 Patent grant