CN111461031A - Object recognition system and method - Google Patents


Publication number
CN111461031A
Authority
CN
China
Prior art keywords
gait
data
target
module
feature information
Prior art date
Legal status
Granted
Application number
CN202010260664.7A
Other languages
Chinese (zh)
Other versions
CN111461031B (en)
Inventor
黄永祯
史伟康
肖渝洋
高东霞
蒲澍
田德宽
张居昌
金振亚
姚亮
韩春静
高峰
李晓莹
秦跃
张燃
刘富文
陈永华
Current Assignee
Yinhe Shuidi Technology Ningbo Co ltd
Original Assignee
Watrix Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Watrix Technology Beijing Co Ltd
Priority to CN202010260664.7A
Publication of CN111461031A
Application granted
Publication of CN111461031B
Status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • G06V40/25Recognition of walking or running movements, e.g. gait recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an object recognition system and method. The system includes: a data acquisition module for acquiring gait data of first objects in the environment where the module is located; a data processing module for receiving the gait data transmitted by the data acquisition module, determining gait feature information of each first object based on the gait data and a preset gait recognition model, determining a target object and its gait image information from the plurality of first objects based on the gait feature information and a preset historical video stream or gait feature database, and sending the gait image information to the display module; and a display module for displaying the gait image information.

Description

Object recognition system and method
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an object recognition system and method.
Background
When identifying the identity of an object (such as a pedestrian or an animal), especially in the security field, face recognition or fingerprint recognition is generally used. However, these technologies are limited to short-range recognition: fingerprint recognition requires physical contact with the human body, and face recognition fails when the object's face is occluded. As a result, accurate and efficient long-range, contactless, full-view-angle identity recognition cannot be achieved, and the object's identification information cannot be displayed.
Disclosure of Invention
In view of the above, the present application is directed to an object recognition system and method that improve the recognition efficiency of an object.
In a first aspect, an embodiment of the present application provides an object recognition system. The system includes a data acquisition module, a data processing module, and a display module;
the data acquisition module is used for acquiring gait data of a first object in the environment where the data acquisition module is located;
the data processing module is used for receiving the gait data transmitted by the data acquisition module, determining gait feature information of each first object based on the gait data and a preset gait recognition model, determining a target object and gait image information of the target object from a plurality of first objects based on the gait feature information and a preset historical video stream or a gait feature database, and transmitting the gait image information to the display module;
and the display module is used for displaying the gait image information.
In one embodiment, the gait data comprises at least one of a video stream and a sequence of gait images.
In one embodiment, the data processing module comprises a data analysis module for:
determining gait feature information of a second object included in each historical video stream based on the gait recognition model;
and determining a target object from a plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information corresponding to the second object or the gait feature database.
In one embodiment, the data processing module comprises a region collision module for:
receiving gait feature information of a second object transmitted by the data analysis module;
determining first historical video streams belonging to the same location area from a plurality of historical video streams based on source information of the video streams;
for each location area, determining second objects corresponding to the first historical video streams belonging to that location area as third objects;
determining the target object from a plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information corresponding to the third objects for each location area; or, alternatively:
determining a second historical video stream belonging to the same preset time period from a plurality of historical video streams;
determining a second object corresponding to a second historical video stream belonging to different preset time periods as a fourth object;
and determining the target object from a plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information corresponding to a fourth object corresponding to each preset time period.
In one embodiment, the data processing module comprises a data retrieval module for:
determining gait image information of a target object from gait data and/or a historical video stream comprising the target object based on gait feature information of the target object.
In one embodiment, the data processing module is further configured to:
acquiring gait feature information of a target suspect;
determining the similarity between the target suspect and the target object based on the gait feature information of the target suspect and the gait feature information of the target object;
if the similarity is determined to be larger than a preset similarity threshold, determining target gait image information of the target suspect based on the gait image information of the target object, and sending the target gait image information to the display module;
the display module is further configured to:
and displaying the target gait image information of the target suspect.
In one embodiment, the system further comprises: the warning module, the data processing module is still used for: after the similarity is determined to be larger than a preset similarity threshold value, generating warning information, and respectively sending the warning information to the warning module and the display module;
the warning module is used for warning based on the warning information;
the display module is also used for displaying the warning information.
In one embodiment, the data processing module is further configured to:
determining target gait data corresponding to the target object from the gait data;
the system further comprises: the case management module is used for:
acquiring case data corresponding to a plurality of cases respectively, and target gait data and gait feature information of a target object transmitted by the data processing module;
determining associated gait data respectively associated with a plurality of cases from the target gait data based on the case data and the gait feature information of the target object;
and instructing the storage module to store the plurality of cases, the case data and the corresponding associated gait data in an associated manner.
In one embodiment, the storage module is further configured to:
and storing the gait feature information of the target object.
In a second aspect, an embodiment of the present application provides an object identification method, which is applied to the object identification system described in any one of the first aspect, where the object identification system includes: the data acquisition module, the data processing module and the display module;
the data acquisition module acquires gait data of a first object in the environment where the data acquisition module is located;
the data processing module receives the gait data, determines gait feature information of each first object based on the gait data and a preset gait recognition model, and determines a target object and gait image information of the target object from a plurality of first objects based on the gait feature information and a preset historical video stream or a gait feature database;
and the display module displays the gait image information.
The embodiment of the application provides an object identification system comprising a data acquisition module, a data processing module, and a display module. The data acquisition module acquires gait data corresponding to each of the first objects; the data processing module receives the gait data transmitted by the data acquisition module, determines the gait feature information of each first object based on the gait data and a preset gait recognition model, and sends the gait image information to the display module based on the gait feature information and a preset historical video stream or gait feature database, so that the display module displays the gait image information.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a first schematic structural diagram of an object recognition system according to an embodiment of the present application;
fig. 2 is a second schematic structural diagram of an object recognition system according to an embodiment of the present application;
fig. 3 is a third schematic structural diagram of an object recognition system according to an embodiment of the present application;
fig. 4 shows a flowchart of an object identification method according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
An embodiment of the present application provides an object recognition system including a data acquisition module, a data processing module, and a display module. The data acquisition module acquires gait data corresponding to each of the first objects; the data processing module receives the gait data transmitted by the data acquisition module, determines the gait feature information of each first object based on the gait data and a preset gait recognition model, and sends the gait image information to the display module based on the gait feature information and a preset historical video stream or gait feature database, so that the display module displays the gait image information.
An embodiment of the present application provides an object identification system, as shown in fig. 1, the system specifically includes: a data acquisition module 11, a data processing module 12 and a display module 13.
The data acquisition module 11 is used for acquiring gait data of a first object in the environment where the data acquisition module is located;
the data processing module 12 is configured to receive the gait data transmitted by the data acquisition module 11, determine gait feature information of each first object based on the gait data and a preset gait recognition model, determine a target object and gait image information of the target object from the plurality of first objects based on the gait feature information and a preset historical video stream or a gait feature database, and send the gait image information to the display module;
the display module 13 is configured to display the gait image information.
The object can be a pedestrian, an animal or the like, and can be determined according to actual conditions.
The data acquisition module 11 can be deployed in scenes such as shopping malls, traffic scenes, or schools. The data acquisition module 11 includes at least a video camera and a gait recognition camera. The video camera can acquire a video stream of an object but cannot extract gait images from it; therefore, when the data acquisition module is a video camera, the gait data it collects is the video stream of the object.
The gait recognition camera has embedded gait detection and gait tracking algorithms; it may be a gait snapshot camera, a gait recognition camera, or the like, has video structural analysis capability, and can extract gait information and recognize gait.
The gait data collected by the data acquisition module 11 may be a video stream or a gait image sequence, as determined by actual conditions. Along with the gait data, the module may obtain gait information of the object, the region to which the object belongs, the time at which the camera device collected the gait data, and the device identifier of the camera device that collected it.
The data processing module 12 includes a data analysis module 121, a region collision module 122, and a data retrieval module 123. Referring to fig. 2, the functions of these three modules are described below in turn.
The data analysis module 121 may determine, based on the gait recognition model, gait feature information of a second object included in each of the historical video streams, and determine a target object from the plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information or the gait feature database corresponding to the second object.
Here, the gait recognition model is a preset model that has completed training. The historical video stream may be a video stream collected by video acquisition equipment not connected to the system, a video stream collected by camera equipment connected to the system, or historical case video captured from various platforms. A large number of sample objects and their corresponding gait feature information are stored in advance in the gait feature database. Video acquisition equipment not connected to the system includes a mobile terminal carried by an object or a portable camera device; camera equipment connected to the system includes the video camera and the gait recognition camera.
The gait feature information may be information such as the magnitude, direction, and action point of the force when the subject walks.
In a specific implementation, when the gait data received from the data acquisition module is a video stream, the data analysis module 121 extracts a plurality of gait images corresponding to the plurality of first objects from the video stream through a gait detection algorithm; for each first object, it inputs that object's gait images into a gait tracking algorithm to obtain the object's gait image sequence, and inputs the gait image sequence into the gait recognition model to obtain the object's gait feature information. The gait detection algorithm and the gait tracking algorithm may be embedded in a gait recognition box, which is arranged in the data processing module.
And when the gait data is a gait image sequence, directly inputting the gait image sequence corresponding to each first object into the gait recognition model to obtain the gait feature information corresponding to each first object.
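As an illustrative sketch only, the video-stream branch above can be viewed as a three-stage pipeline (detection, tracking, recognition). The stage functions below are placeholders standing in for the patent's embedded algorithms; the record layout and the trivial length-based "model" are assumptions:

```python
def detect_gaits(video_stream):
    """Placeholder gait detection: extract per-frame gait images,
    each tagged with the first object it belongs to."""
    return [(frame["object_id"], frame["gait_image"]) for frame in video_stream]

def track_gaits(detections):
    """Placeholder gait tracking: group gait images into one ordered
    sequence per first object."""
    sequences = {}
    for object_id, gait_image in detections:
        sequences.setdefault(object_id, []).append(gait_image)
    return sequences

def recognize(sequence):
    """Placeholder gait recognition model: map an image sequence to a
    feature vector (here a trivial length-based stand-in)."""
    return [float(len(sequence))]

def extract_features(video_stream):
    """Detection -> tracking -> recognition, one feature vector per object."""
    detections = detect_gaits(video_stream)
    sequences = track_gaits(detections)
    return {obj: recognize(seq) for obj, seq in sequences.items()}

stream = [
    {"object_id": "p1", "gait_image": "img_a"},
    {"object_id": "p2", "gait_image": "img_b"},
    {"object_id": "p1", "gait_image": "img_c"},
]
features = extract_features(stream)
```

When the gait data already arrives as a gait image sequence, only the final recognition stage is needed.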
After obtaining the gait feature information of the first object, the data analysis module 121 is further configured to perform recognition processing on each historical video stream by using a gait recognition model, so as to obtain the gait feature information of the second object included in each historical video stream. Before the gait recognition model is used for processing the historical video stream, the gait detection algorithm and the gait tracking algorithm can be used for processing the historical video stream to obtain a gait image sequence corresponding to each second object, and therefore the gait image sequence of the second object is input into the gait recognition model to obtain the gait feature information of the second object.
After obtaining the gait feature information corresponding to the first object and the second object respectively, feature extraction may be performed on the gait feature information of the first object to obtain a gait feature vector corresponding to the first object, and feature extraction may be performed on the gait feature information of the second object to obtain a gait feature vector corresponding to the second object.
For each first object, the gait feature vector of the first object and the gait feature vector corresponding to each second object are input into a similarity calculation algorithm to obtain the first gait similarity between the first object and each second object. The similarity calculation algorithm may be a cosine similarity method, a Euclidean distance algorithm, or the like.
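The similarity step can be sketched as follows, assuming the cosine-similarity variant (Euclidean distance is the stated alternative); the vectors and their values are illustrative only:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two gait feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# One first object's vector compared against each second object's vector.
first_vec = [0.8, 0.1, 0.6]
second_vecs = [[0.8, 0.1, 0.6], [0.1, 0.9, 0.2]]
similarities = [cosine_similarity(first_vec, v) for v in second_vecs]
```

An identical vector scores 1.0; dissimilar gaits score lower.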
When one or more of the first gait similarities corresponding to a first object are greater than a preset similarity threshold, the gait feature information of that first object is highly similar to that of the corresponding second objects, and the first number of first gait similarities greater than the preset similarity threshold may be counted. The similarity threshold may be determined according to actual conditions; for example, the preset similarity threshold may be 90%, 95%, or 98%.
The plurality of first objects are sorted in descending order of their first numbers, and a preset number of top-ranked first objects are determined as target objects. The preset number may be determined according to actual conditions; for example, the preset number may be 5, 8, or 10.
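The counting-and-ranking step above can be sketched as follows; the function name, table layout, and values are assumptions for illustration:

```python
def rank_first_objects(similarity_table, threshold=0.90, top_k=5):
    """similarity_table maps each first-object id to its list of gait
    similarities against the second objects.  Count, per first object,
    how many similarities exceed the threshold, then return the top_k
    first objects ordered by that count, largest first."""
    counts = {
        obj_id: sum(1 for s in sims if s > threshold)
        for obj_id, sims in similarity_table.items()
    }
    ranked = sorted(counts, key=counts.get, reverse=True)
    return ranked[:top_k]

table = {
    "A": [0.99, 0.92, 0.40],  # two similarities above 0.90
    "B": [0.95, 0.10, 0.20],  # one similarity above 0.90
    "C": [0.50, 0.60, 0.70],  # none above 0.90
}
targets = rank_first_objects(table, threshold=0.90, top_k=2)
```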
Determining the target object through the historical video stream allows valuable data to be found in historical video: for example, useful information can be obtained from an unsolved old case, criminals can be tracked down, and the utilization rate of video resources is improved.
In addition to determining the target object in the above manner, the determining the target object by using the first object and the gait feature information in the gait feature database may also include:
and performing feature extraction on the gait feature information in the gait feature database to obtain a gait feature vector corresponding to each sample object.
For each first object, the gait feature vector of the first object and the gait feature vector corresponding to each sample object are input into the similarity calculation algorithm to obtain the second gait similarity between the first object and each sample object.
When one or more of the second gait similarities corresponding to a first object are greater than the preset similarity threshold, the gait feature information of that first object is highly similar to that of the corresponding sample objects; the second number of second gait similarities greater than the preset similarity threshold may be counted, the plurality of first objects sorted in descending order of their second numbers, and a preset number of top-ranked first objects determined as target objects.
When determining the target object from the first object and the second object, the present disclosure may also use the source information of the video streams captured by the video acquisition devices, as described in detail below.
After determining the gait feature information of the second objects, the data analysis module 121 may transmit it to the region collision module 122. The region collision module 122 may determine, from the plurality of historical video streams, the first historical video streams belonging to the same location area; for each location area, determine the second objects corresponding to the first historical video streams of that area as third objects; and determine the target object from the plurality of first objects based on the gait feature information of the first object and the gait feature information of the third objects for each location area.
Here, the source information of the video stream includes a device identifier of the image pickup device that has acquired the video stream, and device location information, the device identifier may be a device serial number, a code (preset), or the like, and the device location information may be installation location information (such as GPS coordinates) of the device that has acquired the history video stream.
In a specific implementation, the historical video streams are further divided: the actual distance between the camera devices corresponding to each pair of historical video streams is calculated from the device position information in the video streams' source information. If the actual distance is smaller than a preset distance threshold, the corresponding historical video streams are determined to belong to the same location area, and the historical video streams belonging to the same location area are taken as first historical video streams.
In addition to determining the first historical video streams belonging to the same location area in the above manner, the device location information corresponding to each historical video stream may be marked in the map, and the historical video streams belonging to the same administrative area are determined as the first historical video streams.
For example, suppose there are five historical video streams Q1, Q2, Q3, Q4, and Q5. If the actual distance between the devices that captured Q1 and Q2, and between the devices that captured Q2 and Q3, are both smaller than the preset distance threshold, then Q1, Q2, and Q3 are determined to be first historical video streams.
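The location-area grouping could be sketched as transitive distance clustering over the device positions (a union-find). Planar coordinates stand in for the GPS-based device position information, and the threshold is illustrative:

```python
def group_by_distance(positions, threshold):
    """Group camera devices into location areas: two devices fall in
    the same area when their distance is below the threshold, applied
    transitively.  positions maps device id -> (x, y); a real system
    would use GPS coordinates and a geodesic distance."""
    parent = {d: d for d in positions}

    def find(d):
        # Find the root of d's area, with path halving.
        while parent[d] != d:
            parent[d] = parent[parent[d]]
            d = parent[d]
        return d

    devices = list(positions)
    for i, a in enumerate(devices):
        for b in devices[i + 1:]:
            (x1, y1), (x2, y2) = positions[a], positions[b]
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < threshold:
                parent[find(a)] = find(b)  # merge the two areas

    areas = {}
    for d in devices:
        areas.setdefault(find(d), []).append(d)
    return list(areas.values())

# The Q1..Q5 example: Q1-Q2 and Q2-Q3 are close; Q4 and Q5 are far away.
positions = {"Q1": (0, 0), "Q2": (1, 0), "Q3": (2, 0),
             "Q4": (50, 50), "Q5": (90, 90)}
areas = group_by_distance(positions, threshold=2.0)
```

Note that Q1 and Q3 end up in the same area through Q2 even though their direct distance equals the threshold.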
And for each position area, determining a second object corresponding to the first historical video stream belonging to the position area, and taking the determined second object as a third object.
For each first object, the gait feature vector of the first object and the gait feature vectors of the third objects corresponding to each location area are input into the similarity calculation algorithm to obtain the third gait similarity between the first object and each third object.
When one or more of the third gait similarities corresponding to a first object are greater than the preset similarity threshold, the gait feature information of that first object is highly similar to that of the corresponding third objects, and the second number of third gait similarities greater than the preset similarity threshold may be counted.
The plurality of first objects are sorted in descending order of their second numbers, and a preset number of top-ranked first objects are determined as target objects.
Besides determining the target object using the areas to which the camera devices belong, second historical video streams belonging to the same preset time period may be determined from the plurality of historical video streams; the second objects corresponding to second historical video streams belonging to different preset time periods are determined as fourth objects; and the target object is determined from the plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information corresponding to the fourth objects for each preset time period.
Here, the preset time period may be a commute time period (e.g., 7:00-9:00, 17:00-19:00), and a plurality of time periods may be preset.
In a specific implementation process, for each historical video stream, a second historical video stream corresponding to a preset time period is intercepted from the historical video stream, a second object corresponding to each second historical video stream is determined, and the determined second object is used as a fourth object.
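The time-period slicing could be sketched as below; the frame-record layout and the commute periods are assumptions based on the example above:

```python
from datetime import time

# Preset commute periods, per the example (7:00-9:00 and 17:00-19:00).
PERIODS = [(time(7, 0), time(9, 0)), (time(17, 0), time(19, 0))]

def slice_by_periods(frames, periods=PERIODS):
    """Return, per preset period, the sub-stream of frames whose
    timestamps fall inside that period.  A frame is a hypothetical
    (timestamp, object_id) record; only the timestamp matters here."""
    return [
        [f for f in frames if start <= f[0] <= end]
        for start, end in periods
    ]

frames = [(time(8, 30), "obj1"),   # morning commute
          (time(12, 0), "obj2"),   # outside both periods
          (time(18, 15), "obj3")]  # evening commute
morning, evening = slice_by_periods(frames)
```

The objects appearing in each sub-stream then become the fourth objects for that period.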
And for each first object, inputting the gait feature vector of the first object and the gait feature vector corresponding to each fourth object into a similarity calculation algorithm to obtain fourth gait similarity between the first object and each fourth object.
When one or more of the fourth gait similarities corresponding to a first object are greater than the preset similarity threshold, the gait feature information of that first object is highly similar to that of the corresponding fourth objects, and the third number of fourth gait similarities greater than the preset similarity threshold can be counted.
The plurality of first objects are sorted in descending order of their third numbers, and a preset number of top-ranked first objects are determined as target objects.
After the data analysis module 121 or the region collision module 122 determines the target object, the gait feature information of the target object is transmitted to the data retrieval module 123, which may determine the gait image information of the target object from the gait data and/or the historical video stream that includes the target object, based on the target object's gait feature information.
The gait image information includes the object's gait images, position information, time information, identity information, and activity track information. The position information may be GPS coordinates of the data acquisition module that collected the target object's gait data; the time information is the time at which the target object passed the module's location; when the object is a user, the identity information is identity information authorized by the user, and the activity track information is track information authorized by the user.
In a specific implementation process, after the target object is determined, a gait image sequence or a current video stream corresponding to the target object may be acquired, a historical video stream including the target object may also be acquired, gait data and the historical video stream including the target object may also be acquired at the same time, and the determination may be performed according to an actual situation.
According to the gait feature information of the target object, the position information of the target object, the time at which the target object passed the corresponding position, and the gait image of the target object at that position are determined from the current video stream or from the historical video streams including the target object; the position information, time information, gait image, identity information, activity track information, and the like of the target object are then taken together as the gait image information.
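As a rough illustration of how the gait image information fields listed above might be grouped into a single record, the following sketch uses a Python dataclass. All field names and sample values are hypothetical; the patent does not prescribe a data structure.

```python
# Illustrative record for the gait image information described above:
# gait images, position (GPS), time, identity, and activity track.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class GaitImageInfo:
    gait_images: List[str]            # e.g. frame identifiers from the stream
    position: Tuple[float, float]     # GPS coordinates of the capture module
    timestamp: str                    # time the target passed that position
    identity: str                     # user-authorized identity information
    trajectory: List[Tuple[float, float]] = field(default_factory=list)

info = GaitImageInfo(
    gait_images=["frame_0012", "frame_0013"],
    position=(29.87, 121.54),
    timestamp="2020-04-03T10:15:00",
    identity="authorized-user-001",
)
```

Such a record could then be transmitted as a unit to the display module for rendering.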
The system of the present application can also provide early warning. During early warning, the data processing module 12 can acquire gait feature information of a target suspect and determine the similarity between the target suspect and the target object based on the gait feature information of the target suspect and the gait feature information of the target object; if the similarity is determined to be greater than the preset similarity threshold, the target gait image information of the target suspect is determined based on the gait image information of the target object, and the target gait image information is sent to the display module 13.
Here, the gait feature information of the target suspect is authorized gait feature information.
In a specific implementation, after the gait feature information of the target suspect is acquired, feature extraction may be performed on it to obtain the gait feature vector of the target suspect; likewise, feature extraction may be performed on the gait feature information of the target object to obtain the gait feature vector of the target object.
And inputting the gait feature vector of the target suspect and the gait feature vector of the target object into a similarity calculation algorithm to obtain the similarity between the target suspect and the target object.
When the similarity between the target suspect and the target object is greater than the preset similarity threshold, the gait image information of the target object can be used as the target gait image information of the target suspect, for display in the display module 13.
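The patent does not specify which similarity calculation algorithm is used. A common choice for comparing feature vectors is cosine similarity, sketched below with an assumed threshold of 0.9; the function names and threshold are illustrative, not the patent's.

```python
# Cosine similarity between two gait feature vectors, plus a threshold
# check, as one plausible instance of the "similarity calculation
# algorithm" mentioned in the text.
import math

def cosine_similarity(u, v):
    """Dot product of u and v divided by the product of their norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def is_match(suspect_vec, target_vec, threshold=0.9):
    """True if the two feature vectors exceed the preset threshold."""
    return cosine_similarity(suspect_vec, target_vec) > threshold

suspect = [0.2, 0.8, 0.4]
target = [0.21, 0.79, 0.41]   # nearly identical direction
match = is_match(suspect, target)
```

Near-identical feature vectors yield a similarity close to 1.0 and therefore match; orthogonal vectors yield 0.0 and do not.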
When the system determines that the similarity between the target suspect and the target object is greater than the preset similarity threshold, it can also generate warning information and send the warning information to the warning module; the warning module then warns based on the warning instruction in the warning information, for example by means of a warning lamp (such as a strobing red light) or by voice.
While the warning module warns, the display module 13 can display the warning information, which includes the number of target suspects and the identity information of each target suspect.
The system disclosed in the present application can also store, in association, cases authorized by a third party; this associated storage can be implemented by the case management module 15, as shown in fig. 3.
When determining the gait image information, the data processing module 12 may further determine target gait data corresponding to the target object from the gait data, that is, determine a gait image sequence or video stream including the target object, take it as the target gait data, and transmit the target gait data to the case management module 15.
The case management module 15 may obtain case data corresponding to a plurality of cases, and target gait data transmitted by the data processing module 12.
Here, the case data includes the case name, case-file content, case-file video, and the like; the file content includes detailed information about the case (such as the gait feature information of the persons involved), and the file video includes related video recorded when the case occurred.
The case management module 15 may determine, from the target gait data and based on the case data, the associated gait data associated with each of the plurality of cases, and instruct the storage module 14 to store, in association, each case's data and its corresponding associated gait data.
In a specific implementation, the case management module 15 may extract the gait feature information of the persons involved from the file video, or from the file content. For each case, the case management module 15 may calculate the similarity between the target object and the case according to the gait feature information of the persons involved in that case and the gait feature information of the target object. If the similarity is greater than the preset similarity threshold, the target gait data of the target object may be taken as the associated gait data for that case, and the case identifier, the corresponding case data, and the corresponding associated gait data may be stored; at the same time, the storage module 14 may also store the gait feature information and the gait image information of the target object.
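The per-case matching loop described above might look like the following sketch. The similarity function, the threshold, and all names are illustrative assumptions, not the patent's actual algorithm.

```python
# Hypothetical case-association step: link the target object's gait data
# to every case whose involved-person features match above a threshold.

def associate_cases(case_features, target_feature, target_gait_data,
                    threshold, similarity_fn):
    """Return a mapping of case id -> target gait data for each case whose
    involved-person feature vector matches the target object's features."""
    associated = {}
    for case_id, feature in case_features.items():
        if similarity_fn(feature, target_feature) > threshold:
            associated[case_id] = target_gait_data
    return associated

def simple_similarity(u, v):
    """Toy similarity: 1 minus the mean absolute difference."""
    return 1.0 - sum(abs(a - b) for a, b in zip(u, v)) / len(u)

cases = {"case-01": [0.5, 0.5], "case-02": [0.9, 0.1]}
target_feat = [0.5, 0.52]
linked = associate_cases(cases, target_feat, "gait_seq_007",
                         threshold=0.95, similarity_fn=simple_similarity)
```

With these sample features, only "case-01" is close enough to the target object to be stored in association with its gait data.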
In addition, the data processing module 12 may periodically extract the gait feature information corresponding to each case from the storage module and monitor each case by comparing it with the gait feature information extracted from the video stream acquired in real time. If the gait feature information extracted from the video stream matches that of a case, the data processing module may issue a warning and determine the position information and movement trajectory of the relevant person.
The embodiment of the application provides an object identification method, as shown in fig. 4, the method is applied to an object identification system, and the object identification system comprises a data acquisition module, a data processing module and a display module; the method comprises the following steps:
S401, the data acquisition module acquires gait data of a first object in the environment where the data acquisition module is located;
S402, the data processing module receives the gait data, determines gait feature information of each first object based on the gait data and a preset gait recognition model, and determines a target object and gait image information of the target object from a plurality of first objects based on the gait feature information and a preset historical video stream or a gait feature database;
and S403, the display module displays the gait image information.
In one embodiment, the gait data comprises at least one of a video stream and a sequence of gait images.
In one embodiment, the data processing module includes a data analysis module, and the data processing module determines a target object from a plurality of first objects based on the gait feature information and a preset historical video stream or a gait feature database, including:
the data analysis module determines gait feature information of a second object included in each historical video stream based on the gait recognition model; and
and determining a target object from a plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information or the gait feature database corresponding to the second object.
In one embodiment, the data processing module includes a region collision module, and the data processing module determines a target object from a plurality of first objects based on the gait feature information and a preset historical video stream, including:
the region collision module receives the gait feature information of the second object transmitted by the data analysis module; and
determining the first historical video streams belonging to the same position area from a plurality of historical video streams based on the source information of the video streams;
for each different position area, determining the second object corresponding to the first historical video streams belonging to that position area as a third object;
determining the target object from a plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information corresponding to the third object corresponding to each position area; or,
determining a second historical video stream belonging to the same preset time period from a plurality of historical video streams;
determining a second object corresponding to a second historical video stream belonging to different preset time periods as a fourth object;
and determining the target object from a plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information corresponding to a fourth object corresponding to each preset time period.
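The first branch of the region collision step, grouping historical video streams by their source position area, can be illustrated as follows; the stream representation and field names are assumed for the sketch and do not come from the patent.

```python
# Hypothetical grouping of historical video streams by position area,
# collecting the second objects observed in each area. Each stream is
# represented as a dict with illustrative "area" and "object_ids" keys.
from collections import defaultdict

def group_streams_by_area(streams):
    """Map each position area to the set of object ids seen in the
    historical video streams originating from that area."""
    by_area = defaultdict(set)
    for stream in streams:
        by_area[stream["area"]].update(stream["object_ids"])
    return dict(by_area)

streams = [
    {"area": "gate-1", "object_ids": {"p1", "p2"}},
    {"area": "gate-1", "object_ids": {"p3"}},
    {"area": "lobby",  "object_ids": {"p2", "p4"}},
]
grouped = group_streams_by_area(streams)
```

The per-area object sets produced here would then feed the per-area similarity comparison against the first objects' gait features.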
In one embodiment, the data processing module comprises a data retrieval module, the data processing module determining gait image information of a target object, comprising:
the data retrieval module determines gait image information of a target object from gait data and/or historical video streams including the target object based on gait feature information of the target object.
In one embodiment, the method further comprises:
the data processing module acquires gait feature information of a target suspect;
the data processing module determines the similarity between the target suspect and the target object based on the gait feature information of the target suspect and the gait feature information of the target object;
if the data processing module determines that the similarity is greater than a preset similarity threshold, the data processing module determines target gait image information of the target suspect based on the gait image information of the target object and sends the target gait image information to the display module;
and the display module displays the target gait image information of the target suspect.
In one embodiment, the system comprises a warning module, and the method further comprises:
after the data processing module determines that the similarity is greater than a preset similarity threshold, warning information is generated and sent to a warning module and a display module respectively;
the warning module warns based on the warning information;
the display module is also used for displaying the warning information.
In one embodiment, the system further comprises a case management module and a storage module, and the method further comprises:
the data processing module determines target gait data corresponding to the target object from the gait data;
the case management module acquires case data corresponding to a plurality of cases respectively, and target gait data and gait feature information of a target object transmitted by the data processing module;
the case management module determines associated gait data respectively associated with a plurality of cases from the target gait data based on the case data and the gait feature information of the target object;
the case management module instructs the storage module to store the plurality of cases, the case data and the corresponding associated gait data in an associated manner.
In one embodiment, the method further comprises: the storage module stores gait feature information of the target object.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the method embodiments and are not described in detail again in this application.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into modules is merely a logical division, and other divisions are possible in actual implementation: a plurality of modules or components may be combined or integrated into another system, or some features may be omitted or not executed.

In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or modules through communication interfaces, and may be electrical, mechanical or in another form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An object recognition system, the system comprising: the data acquisition module, the data processing module and the display module;
the data acquisition module is used for acquiring gait data of a first object in the environment where the data acquisition module is located;
the data processing module is used for receiving the gait data transmitted by the data acquisition module, determining gait feature information of each first object based on the gait data and a preset gait recognition model, determining a target object and gait image information of the target object from a plurality of first objects based on the gait feature information and a preset historical video stream or a gait feature database, and transmitting the gait image information to the display module;
and the display module is used for displaying the gait image information.
2. The system of claim 1, wherein the gait data comprises at least one of a video stream and a sequence of gait images.
3. The system of claim 1, wherein the data processing module comprises a data analysis module to:
determining gait feature information of a second object included in each historical video stream based on the gait recognition model;
and determining a target object from a plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information corresponding to the second object or the gait feature database.
4. The system of claim 3, wherein the data processing module comprises a zone collision module to:
receiving gait feature information of a second object transmitted by the data analysis module;
determining a first target historical video stream belonging to the same position area from a plurality of historical video streams based on the source information of the video streams;
determining a second object corresponding to the first historical video stream belonging to the position area as a third object aiming at different position areas;
determining the target object from a plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information corresponding to the third object corresponding to each position area; or,
determining a second historical video stream belonging to the same preset time period from a plurality of historical video streams;
determining a second object corresponding to a second historical video stream belonging to different preset time periods as a fourth object;
and determining the target object from a plurality of first objects based on the gait feature information corresponding to the first object and the gait feature information corresponding to a fourth object corresponding to each preset time period.
5. The system of claim 4, wherein the data processing module comprises a data retrieval module to:
determining gait image information of a target object from gait data and/or a historical video stream comprising the target object based on gait feature information of the target object.
6. The system of claim 1, wherein the data processing module is further to:
acquiring gait feature information of a target suspect;
determining the similarity between the target suspect and the target object based on the gait feature information of the target suspect and the gait feature information of the target object;
if the similarity is determined to be larger than a preset similarity threshold, determining target gait image information of the target suspect based on the gait image information of the target object, and sending the target gait image information to the display module;
the display module is further configured to:
and displaying the target gait image information of the target suspect.
7. The system of claim 6, further comprising: a warning module; the data processing module is further configured to: after determining that the similarity is greater than a preset similarity threshold, generate warning information and send the warning information to the warning module and the display module respectively;
the warning module is used for warning based on the warning information;
the display module is also used for displaying the warning information.
8. The system of claim 1, wherein the data processing module is further to:
determining target gait data corresponding to the target object from the gait data;
the system further comprises: the case management module is used for:
acquiring case data corresponding to a plurality of cases respectively, and target gait data and gait feature information of a target object transmitted by the data processing module;
determining associated gait data respectively associated with a plurality of cases from the target gait data based on the case data and the gait feature information of the target object;
and instructing the storage module to store the plurality of cases, the case data and the corresponding associated gait data in an associated manner.
9. The system of claim 8, wherein the storage module is further to:
and storing the gait feature information of the target object.
10. An object recognition method applied to an object recognition system according to any one of claims 1 to 9, the object recognition system comprising: the data acquisition module, the data processing module and the display module;
the data acquisition module acquires gait data of a first object in the environment where the data acquisition module is located;
the data processing module receives the gait data, determines gait feature information of each first object based on the gait data and a preset gait recognition model, and determines a target object and gait image information of the target object from a plurality of first objects based on the gait feature information and a preset historical video stream or a gait feature database;
and the display module displays the gait image information.
CN202010260664.7A 2020-04-03 2020-04-03 Object recognition system and method Active CN111461031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010260664.7A CN111461031B (en) 2020-04-03 2020-04-03 Object recognition system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010260664.7A CN111461031B (en) 2020-04-03 2020-04-03 Object recognition system and method

Publications (2)

Publication Number Publication Date
CN111461031A true CN111461031A (en) 2020-07-28
CN111461031B CN111461031B (en) 2023-10-24

Family

ID=71680451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010260664.7A Active CN111461031B (en) 2020-04-03 2020-04-03 Object recognition system and method

Country Status (1)

Country Link
CN (1) CN111461031B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506684A (en) * 2016-06-14 2017-12-22 中兴通讯股份有限公司 Gait recognition method and device
CN108108693A (en) * 2017-12-20 2018-06-01 深圳市安博臣实业有限公司 Intelligent identification monitoring device and recognition methods based on 3D high definition VR panoramas
CN109508645A (en) * 2018-10-19 2019-03-22 银河水滴科技(北京)有限公司 Personal identification method and device under monitoring scene
CN109544751A (en) * 2018-11-23 2019-03-29 银河水滴科技(北京)有限公司 A kind of Door-access control method and device
CN109634981A (en) * 2018-12-11 2019-04-16 银河水滴科技(北京)有限公司 A kind of database expansion method and device
CN110139075A (en) * 2019-05-10 2019-08-16 银河水滴科技(北京)有限公司 Video data handling procedure, device, computer equipment and storage medium
CN110765984A (en) * 2019-11-08 2020-02-07 北京市商汤科技开发有限公司 Mobile state information display method, device, equipment and storage medium
CN110781711A (en) * 2019-01-21 2020-02-11 北京嘀嘀无限科技发展有限公司 Target object identification method and device, electronic equipment and storage medium
CN110874568A (en) * 2019-09-27 2020-03-10 银河水滴科技(北京)有限公司 Security check method and device based on gait recognition, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111461031B (en) 2023-10-24

Similar Documents

Publication Publication Date Title
CN107305627B (en) Vehicle video monitoring method, server and system
WO2019153193A1 (en) Taxi operation monitoring method, device, storage medium, and system
US20220092881A1 (en) Method and apparatus for behavior analysis, electronic apparatus, storage medium, and computer program
CN109325429B (en) Method, device, storage medium and terminal for associating feature data
CN109766755B (en) Face recognition method and related product
CN108540751A (en) Monitoring method, apparatus and system based on video and electronic device identification
Bertoni et al. Perceiving humans: from monocular 3d localization to social distancing
CN108540750A (en) Based on monitor video and the associated method, apparatus of electronic device identification and system
CN108540756A (en) Recognition methods, apparatus and system based on video and electronic device identification
CN111539338A (en) Pedestrian mask wearing control method, device, equipment and computer storage medium
CN111460985A (en) On-site worker track statistical method and system based on cross-camera human body matching
JP5718632B2 (en) Part recognition device, part recognition method, and part recognition program
RU2315352C2 (en) Method and system for automatically finding three-dimensional images
CN110147731A (en) Vehicle type recognition method and Related product
KR20220000873A (en) Safety control service system unsing artifical intelligence
CN113505704B (en) Personnel safety detection method, system, equipment and storage medium for image recognition
CN114170272A (en) Accident reporting and storing method based on sensing sensor in cloud environment
CN111383248A (en) Method and device for judging red light running of pedestrian and electronic equipment
CN115457449B (en) Early warning system based on AI video analysis and monitoring security protection
CN111461031A (en) Object recognition system and method
CN108540748A (en) Monitor video and the associated method, apparatus of electronic device identification and system
CN111898434B (en) Video detection and analysis system
CN109819207B (en) Target searching method and related equipment
CN114783097A (en) Hospital epidemic prevention management system and method
CN114333079A (en) Method, device, equipment and storage medium for generating alarm event

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210201

Address after: 315000 9-3, building 91, 16 Buzheng lane, Haishu District, Ningbo City, Zhejiang Province

Applicant after: Yinhe shuidi Technology (Ningbo) Co.,Ltd.

Address before: 0701, 7 / F, 51 Xueyuan Road, Haidian District, Beijing 100191

Applicant before: Watrix Technology (Beijing) Co.,Ltd.

GR01 Patent grant
GR01 Patent grant