CN112270257A - Motion trajectory determination method and device and computer readable storage medium - Google Patents

Motion trajectory determination method and device and computer readable storage medium

Info

Publication number
CN112270257A
Authority
CN
China
Prior art keywords: groups, vehicle, sets, images, close
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011162643.8A
Other languages
Chinese (zh)
Inventor
王维治 (Wang Weizhi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Infineon Information Co ltd
Original Assignee
Shenzhen Infinova Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Infinova Ltd filed Critical Shenzhen Infinova Ltd
Priority to CN202011162643.8A
Publication of CN112270257A
Legal status: Pending

Classifications

    • G06V 40/161 (Human faces: detection; localisation; normalisation)
    • G06F 18/22 (Pattern recognition: matching criteria, e.g. proximity measures)
    • G06N 3/045 (Neural networks: combinations of networks)
    • G06N 3/08 (Neural networks: learning methods)
    • G06V 20/52 (Scene-specific elements: surveillance or monitoring of activities, e.g. for recognising suspicious objects)
    • G06V 40/168 (Human faces: feature extraction; face representation)
    • G06V 2201/07 (Indexing scheme for image/video recognition: target detection)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of security monitoring, and provides a motion trajectory determination method, a motion trajectory determination device and a computer-readable storage medium. The method comprises: performing face feature recognition and vehicle feature recognition on N groups of close-up images to obtain N groups of face feature sets and N groups of vehicle feature sets, wherein the N groups of close-up images are obtained by one monitoring device acquiring N different areas at N acquisition angles within a preset time period; and determining the motion trajectory of a target person and the motion trajectory of a target vehicle based on the comparison results among the N groups of face feature sets and among the N groups of vehicle feature sets, respectively. Because the N groups of close-up images share a consistent time sequence, the method can obtain the complete motion trajectories of the target person and the target vehicle.

Description

Motion trajectory determination method and device and computer readable storage medium
Technical Field
The application belongs to the technical field of monitoring and security protection, and particularly relates to a motion trail determination method and device and a computer readable storage medium.
Background
Currently, to monitor large sites in real time, monitoring devices are installed throughout the site. The conventional solution is to install a monitoring device at regular intervals across the site and to determine, from these devices, the movement tracks of the pedestrians and vehicles that appear in their footage. However, because different monitoring devices capture video images on different time sequences, the motion tracks of pedestrians and/or vehicles within the shooting ranges of multiple monitoring devices can be missing for certain time periods, and the complete motion tracks of the pedestrians and/or vehicles cannot be obtained.
Disclosure of Invention
The embodiments of the application provide a motion trajectory determination method and device and a computer-readable storage medium, which can solve the problem in the existing scheme that, because different monitoring devices capture video images on different time sequences, the motion trajectories of pedestrians and/or vehicles within the shooting ranges of multiple monitoring devices are missing for certain time periods and complete motion trajectories cannot be obtained.
In a first aspect, an embodiment of the present application provides a motion trajectory determination method, including:
performing face feature recognition and vehicle feature recognition on the N groups of close-up images respectively to obtain N groups of face feature sets and N groups of vehicle feature sets; wherein the N groups of close-up images are obtained by one monitoring device acquiring N different areas at N acquisition angles within a preset time period;
and respectively determining the motion trail of the target person and the motion trail of the target vehicle based on the comparison results among the N groups of human face feature sets and the comparison results among the N groups of vehicle feature sets.
Further, the step of respectively performing face feature recognition and vehicle feature recognition on the N groups of close-up images to obtain N groups of face feature sets and N groups of vehicle feature sets includes:
respectively inputting the N groups of close-up images into a preset first feature extraction model for processing to obtain N groups of face feature sets corresponding to the N groups of close-up images;
and respectively inputting the N groups of close-up images into a preset second feature extraction model for processing to obtain N groups of vehicle feature sets corresponding to the N groups of close-up images.
Further, before the inputting the N groups of close-up images into a preset first feature extraction model for processing, the method further includes:
acquiring N groups of first image sets;
inputting N groups of first image sets into a preset face detection model and a preset vehicle detection model respectively to obtain face score values and vehicle score values corresponding to each first image in the N groups of first image sets; wherein the face score value is used to represent the integrity of the face in the first image; the vehicle score value is used to represent the integrity of the vehicle in the first image;
and determining N groups of close-up images according to a preset strategy and the face score value and the vehicle score value respectively corresponding to each first image in the N groups of first image sets.
Further, before acquiring the N sets of first images, the method further includes:
acquiring N groups of video image sets;
inputting N groups of video image sets into a preset target detection model for target recognition to obtain N groups of target recognition result sets corresponding to the N groups of video image sets;
and determining N groups of video image sets including faces and/or vehicles in the N groups of target recognition result sets as N groups of first image sets.
Further, the determining the motion trajectory of the target person and the motion trajectory of the target vehicle based on the comparison results between the N groups of face feature sets and the comparison results between the N groups of vehicle feature sets respectively includes:
if a first face feature set is detected among the N groups of face feature sets, determining the motion trajectory of the target person according to the first face feature set; wherein the first face feature set is a face feature set whose similarity to the target face feature set corresponding to the target person is greater than or equal to a first preset similarity threshold;
if a first vehicle feature set is detected among the N groups of vehicle feature sets, determining the motion trajectory of the target vehicle according to the first vehicle feature set; wherein the first vehicle feature set is a vehicle feature set whose similarity to the target vehicle feature set corresponding to the target vehicle is greater than or equal to a second preset similarity threshold.
Further, after determining the motion trajectory of the target person and the motion trajectory of the target vehicle, the method further includes:
and sending the motion trail of the target person and the motion trail of the target vehicle to a target terminal.
In a second aspect, an embodiment of the present application provides a motion trajectory determination apparatus, including:
the recognition unit is used for respectively carrying out face feature recognition and vehicle feature recognition on the N groups of close-up images to obtain N groups of face feature sets and N groups of vehicle feature sets; wherein, N groups of close-up images are obtained by acquiring N different areas within a preset time period at N acquisition angles by one monitoring device;
and the first determining unit is used for determining the motion trail of the target person and the motion trail of the target vehicle respectively based on the comparison results among the N groups of human face feature sets and the comparison results among the N groups of vehicle feature sets.
In a third aspect, an embodiment of the present application provides a motion trajectory determination apparatus, including:
a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the motion trajectory determination method according to any one of the above first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the motion trajectory determination method according to any one of the above first aspects.
In a fifth aspect, the present application provides a computer program product, which when run on a motion trajectory determination apparatus, causes the motion trajectory determination apparatus to execute the motion trajectory determination method according to any one of the above first aspects.
Compared with the prior art, the embodiment of the application has the advantages that:
according to the motion trail determining method provided by the embodiment of the application, N groups of face feature sets and N groups of vehicle feature sets are obtained by respectively carrying out face feature recognition and vehicle feature recognition on N groups of close-up images; the monitoring equipment acquires N different areas within a preset time period at N acquisition angles by using N groups of close-up images; and respectively determining the motion trail of the target person and the motion trail of the target vehicle based on the comparison results among the N groups of human face feature sets and the comparison results among the N groups of vehicle feature sets. In the motion trail determination method, the N groups of close-up images are obtained by acquiring N different areas within a preset time period at N acquisition angles through one monitoring device, so that the time sequences of the N groups of close-up images are consistent, and complete motion trails of the target person and the target vehicle in the N different areas can be obtained through the determination method.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of an implementation of a motion trajectory determination method provided in an embodiment of the present application;
fig. 2 is a flowchart of a specific implementation of S101 in a motion trajectory determination method provided in an embodiment of the present application;
fig. 3 is a flowchart of an implementation of a motion trajectory determination method according to another embodiment of the present application;
fig. 4 is a flowchart of an implementation of a motion trajectory determination method according to yet another embodiment of the present application;
fig. 5 is a flowchart of an implementation of a motion trajectory determination method according to another embodiment of the present application;
fig. 6 is a schematic structural diagram of a motion trajectory determination apparatus provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a motion trajectory determination device according to another embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a motion trajectory determination method according to an embodiment of the present application. In the embodiment of the application, the execution subject of the motion trail determination method is a motion trail determination device. The motion trail determination device may comprise a terminal or a server, and may also be a chip in the terminal or a processor in the server. Here, the terminal and the server may be a smart phone, a tablet computer, a desktop computer, or the like.
As shown in fig. 1, the motion trajectory determination method may include steps S101 to S102, which are detailed as follows:
in S101, respectively carrying out face feature recognition and vehicle feature recognition on the N groups of close-up images to obtain N groups of face feature sets and N groups of vehicle feature sets; wherein, N groups of close-up images are obtained by acquiring N different areas within a preset time period at N acquisition angles by one monitoring device.
In this embodiment of the application, the monitoring device may be a gigapixel-class (hundred-million-pixel) imaging device employing an array-type multi-scale ultra-high-definition imaging technique, i.e., a cross-scale imaging technique combining one panoramic main lens with an array of N close-up lenses. Each of the N close-up lenses is responsible for clear imaging of its own area; the gigapixel-class device uses the panoramic main lens for positioning and stitches the N close-up views together, thereby achieving array-type multi-scale ultra-high-definition imaging.
On this basis, taking a gigapixel-class device as an example, the N groups of close-up images may be obtained by the device capturing N different regions at N capture angles within a preset time period; that is, the N groups of close-up images respectively correspond to the close-up content of N sub-regions within one main region during the preset time period.
It should be noted that each group of close-up images describes the close-up content of its corresponding sub-region within the preset time period; that is, the time sequences of the N groups of close-up images are consistent, so the motion trajectory determination device can determine the motion trajectory of a given pedestrian and/or vehicle within the preset time period from the N groups of close-up images. The preset time period may be set according to actual needs and is not limited here.
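As a minimal illustration of this shared time base (not part of the patent text; the stream layout and timestamps are assumptions made for the example), the following Python sketch groups timestamped frames from the N close-up streams into per-region sets covering the same preset time period:

```python
from collections import defaultdict

def group_frames_by_time(streams, period_start, period_end):
    # `streams` maps a sub-region index to a list of (timestamp, frame)
    # pairs; this layout is an assumption made for illustration.
    groups = defaultdict(list)
    for region, frames in streams.items():
        for ts, frame in frames:
            if period_start <= ts <= period_end:
                groups[region].append((ts, frame))
    # One device captures all N sub-regions, so the N groups share a
    # single clock and can be ordered on the same time axis.
    return {r: sorted(fs, key=lambda p: p[0]) for r, fs in groups.items()}
```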
In an embodiment of the application, the motion trail determination device may be preset with N recognition modules, and the N recognition modules are controlled to respectively perform face feature recognition and vehicle feature recognition on the N groups of close-up images, so that the working efficiency of the motion trail determination device is improved.
When the motion trajectory determination device performs face feature recognition and vehicle feature recognition on the N groups of close-up images and obtains the N groups of face feature sets and N groups of vehicle feature sets, it may associate each group of close-up images with its corresponding face feature set and with its corresponding vehicle feature set. The contents of the face feature set and the vehicle feature set may be set according to actual needs and are not limited here; for example, the face feature set may include, but is not limited to, the proportions of the facial features, whether a mask is worn, whether a beard is present, and whether glasses are worn, and the vehicle feature set may include, but is not limited to, vehicle color, vehicle brand, and license plate number.
If a close-up image includes only pedestrians, the vehicle feature set for that close-up image is empty; if a close-up image includes only vehicles, the face feature set for that close-up image is empty.
In the embodiment of the present application, each group of close-up images may include multiple pedestrians and/or multiple vehicles; therefore, the face feature set corresponding to a group of close-up images may include multiple face feature subsets, and the vehicle feature set may include multiple vehicle feature subsets. That is, the motion trajectory determination device may associate each pedestrian image in each group of close-up images with its own face feature subset, and each vehicle image with its own vehicle feature subset, so that every face image corresponds to a face feature subset and every vehicle image corresponds to a vehicle feature subset.
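The grouping just described can be pictured with a small data-structure sketch (the class and field names are assumptions for illustration, not terms from the patent):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FaceFeatureSubset:
    values: List[float]  # one probability per face feature, e.g. [A1, B1, C1]

@dataclass
class VehicleFeatureSubset:
    values: List[float]  # one probability per vehicle feature, e.g. [A2, B2, C2]

@dataclass
class CloseUpGroup:
    region_index: int
    # One face feature subset per pedestrian image and one vehicle feature
    # subset per vehicle image detected in this group of close-up images.
    face_feature_set: List[FaceFeatureSubset] = field(default_factory=list)
    vehicle_feature_set: List[VehicleFeatureSubset] = field(default_factory=list)
```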
In the embodiment of the present application, the motion trajectory determination device stores a plurality of feature extraction models in advance, including: a first feature extraction model and a second feature extraction model.
The first feature extraction model is used for extracting features of the pedestrians in a close-up image to obtain the face feature set corresponding to each pedestrian in the close-up image. The first feature extraction model may be obtained by training a pre-constructed first deep learning model on a first preset sample set, where each sample comprises a first sample close-up image and the face feature set corresponding to that image. The value of each element in the face feature set represents the probability that a pedestrian in the first sample close-up image has the face feature to which the element corresponds. During training, the first sample close-up image of each sample is used as the input of the first deep learning model and the corresponding face feature set as its output; through training, the model learns the correspondence between close-up images and face feature sets, and the trained model is used as the first feature extraction model.
Illustratively, assuming the face feature set is [A1, B1, C1], where elements A1, B1 and C1 correspond to face features 1, 2 and 3 respectively, the value of A1 indicates the probability that a pedestrian in a group of close-up images has face feature 1, the value of B1 the probability of face feature 2, and the value of C1 the probability of face feature 3.
The second feature extraction model is used for extracting features of the vehicles in a close-up image to obtain the vehicle feature set corresponding to each vehicle in the close-up image. It may be obtained by training a pre-constructed second deep learning model on a second preset sample set, where each sample comprises a second sample close-up image and the vehicle feature set corresponding to that image. The value of each element in the vehicle feature set represents the probability that a vehicle in the second sample close-up image has the vehicle feature to which the element corresponds. During training, the second sample close-up image of each sample is used as the input of the second deep learning model and the corresponding vehicle feature set as its output; through training, the model learns the correspondence between close-up images and vehicle feature sets, and the trained model is used as the second feature extraction model.
For example, assuming the vehicle feature set is [A2, B2, C2], where elements A2, B2 and C2 correspond to vehicle features 1, 2 and 3 respectively, the value of A2 indicates the probability that a vehicle in a group of close-up images has vehicle feature 1, the value of B2 the probability of vehicle feature 2, and the value of C2 the probability of vehicle feature 3.
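The patent specifies only the interface of these models (a close-up image in, a set of per-feature probabilities out), not their architecture. The following PyTorch sketch shows one plausible realization as a multi-label classifier; the backbone layers, sigmoid head, and binary cross-entropy loss are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class FeatureExtractionModel(nn.Module):
    """Sketch of a feature extraction model: a small CNN backbone followed by
    a sigmoid head that outputs one probability per feature, e.g. [A1, B1, C1]."""
    def __init__(self, num_features: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_features)

    def forward(self, x):
        return torch.sigmoid(self.head(self.backbone(x)))

def train_step(model, images, feature_sets, optimizer):
    # Input: sample close-up images; target: their labelled feature sets
    # (per-feature probabilities in [0, 1]).
    loss = nn.functional.binary_cross_entropy(model(images), feature_sets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A model of the same shape, trained on second sample close-up images and their vehicle feature sets, would serve as the second feature extraction model.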
On this basis, in an embodiment of the present application, the motion trajectory determination device may obtain the N groups of face feature sets and N groups of vehicle feature sets through steps S201 to S202, which are detailed as follows:
in S201, the N groups of close-up images are respectively input into a preset first feature extraction model to be processed, and N groups of face feature sets corresponding to the N groups of close-up images are obtained.
In this embodiment, the motion trajectory determining apparatus may further store a plurality of detection models in advance, including: a face detection model and a vehicle detection model.
The face detection model is used to detect the integrity of a face contained in a first image. It may be obtained by training a pre-constructed third deep learning model on a third preset sample set, where each sample comprises a first sample image and the face score value corresponding to that image. The face score value represents the completeness and clarity of the facial features of each pedestrian in the first sample image: the higher the score, the more complete and the clearer the facial features. During training, the first sample image of each sample is used as the input of the third deep learning model and the corresponding face score value as its output; through training, the model learns the correspondence between sample images and face score values, and the trained model is used as the face detection model.
The vehicle detection model is used to detect the integrity of a vehicle contained in a first image. It may be obtained by training a pre-constructed fourth deep learning model on a fourth preset sample set, where each sample comprises a second sample image and the vehicle score value corresponding to that image. The vehicle score value represents the completeness and clarity of the body of each vehicle in the second sample image: the higher the score, the more complete and the clearer the vehicle body. During training, the second sample image of each sample is used as the input of the fourth deep learning model and the corresponding vehicle score value as its output; through training, the model learns the correspondence between sample images and vehicle score values, and the trained model is used as the vehicle detection model.
Based on this, in another embodiment of the present application, the motion trajectory determination means may determine N groups of close-up images through S301 to S303 as shown in fig. 3, which is detailed as follows:
in S301, N sets of first images are acquired.
In this embodiment, since a video image may include other living objects and/or inanimate objects besides pedestrians and vehicles, the motion trajectory determination device may be preset with a target detection model. The target detection model is used for detecting the target objects in an image and identifying their types, and may be an existing convolutional-neural-network-based target detection model.
Based on this, in a further embodiment of the present application, the motion trajectory determining apparatus may determine the video image set as the first image set through S401 to S403 shown in fig. 4, which is detailed as follows:
in S401, N sets of video images are acquired.
In S402, inputting N sets of video image sets into a preset target detection model for target recognition, so as to obtain N sets of target recognition result sets corresponding to the N sets of video image sets.
In S403, N groups of video image sets including faces and/or vehicles in N groups of the target recognition result sets are determined as N groups of the first image sets.
In this embodiment, the target recognition result set is used to describe the type of the target object included in each video image in the video image set.
Types of target objects may include, but are not limited to, pedestrians, vehicles, and non-pedestrian objects; by way of example, a non-pedestrian object may be a cat, a dog, a mouse, a building, or the like.
When the target recognition result corresponding to a video image includes a pedestrian and/or a vehicle, that video image contains a pedestrian and/or a vehicle; therefore, the motion trajectory determination device may determine the video image sets whose target recognition result sets include pedestrians and/or vehicles as the first image sets.
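A minimal sketch of this filtering step (the `detect` interface and the type labels are assumptions for illustration):

```python
def select_first_image_sets(video_image_sets, detect):
    # `detect` stands in for the preset target detection model; it is assumed
    # to return the set of recognised object types for one video image,
    # e.g. {"pedestrian", "vehicle", "cat"}.
    first_image_sets = {}
    for region, images in video_image_sets.items():
        first_image_sets[region] = [
            img for img in images
            if detect(img) & {"pedestrian", "vehicle"}  # face and/or vehicle present
        ]
    return first_image_sets
```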
In S302, inputting N groups of first image sets into a preset face detection model and a preset vehicle detection model, respectively, to obtain a face score value and a vehicle score value corresponding to each first image in the N groups of first image sets; wherein the face score value is used to represent the integrity of the face in the first image; the vehicle score value is used to represent the integrity of the vehicle in the first image.
In S303, N groups of close-up images are determined according to a preset strategy and the face score value and vehicle score value corresponding to each first image in the N groups of first image sets.
In this embodiment, the preset strategy may be set according to actual needs and is not limited here. Illustratively, the preset strategy may include three sub-strategies. The first sub-strategy: if the first image set includes only pedestrians, determine each group of close-up images according to the face score values corresponding to the first image set. The second sub-strategy: if the first image set includes only vehicles, determine each group of close-up images according to the corresponding vehicle score values. The third sub-strategy: if the first image set includes both pedestrians and vehicles, determine each group of close-up images according to both the corresponding face score values and vehicle score values.
Specifically, if the motion trajectory determination device detects that a first image set includes only pedestrians, the first image with the largest face score value is determined as that group's close-up image; if it detects that the first image set includes only vehicles, the first image with the largest vehicle score value is determined as that group's close-up image; and if it detects that the first image set includes both pedestrians and vehicles, the first images whose face score values exceed the face score threshold and whose vehicle score values exceed the vehicle score threshold are determined as that group's close-up images.
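The three sub-strategies can be sketched as follows; the parallel-list layout, the use of None for "not detected", and the numeric thresholds are assumptions for illustration, not values from the patent:

```python
def pick_close_up_images(first_images, face_scores, vehicle_scores,
                         face_threshold=0.5, vehicle_threshold=0.5):
    # `face_scores` / `vehicle_scores` are assumed parallel to `first_images`;
    # None means no face / no vehicle was detected in that image.
    has_faces = any(s is not None for s in face_scores)
    has_vehicles = any(s is not None for s in vehicle_scores)
    if has_faces and not has_vehicles:
        # Sub-strategy 1: pedestrians only -> the largest face score wins.
        best = max(range(len(first_images)), key=lambda i: face_scores[i] or 0.0)
        return [first_images[best]]
    if has_vehicles and not has_faces:
        # Sub-strategy 2: vehicles only -> the largest vehicle score wins.
        best = max(range(len(first_images)), key=lambda i: vehicle_scores[i] or 0.0)
        return [first_images[best]]
    # Sub-strategy 3: both present -> keep every image clearing both thresholds.
    return [img for img, f, v in zip(first_images, face_scores, vehicle_scores)
            if (f or 0.0) > face_threshold and (v or 0.0) > vehicle_threshold]
```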
After the motion trail determination device determines the N groups of close-up images, the N groups of close-up images may be respectively input into a first feature extraction model which is constructed in advance to be processed, so as to obtain N groups of face feature sets corresponding to the N groups of close-up images.
In this embodiment, after obtaining N groups of face feature sets corresponding to the N groups of close-up images, the motion trajectory determination device may compare the value of each element in each group of face feature sets with a first preset probability threshold corresponding to the element. And if the motion trail determination device detects that the value of at least one element in the face feature set is greater than a first preset probability threshold corresponding to the element, determining that the face feature corresponding to the element exists in the pedestrian in the close-up image corresponding to the group of face feature sets.
For example, assuming the face feature set [A1, B1, C1] takes the values [0.9, 0.7, 0.95] and the first preset probability threshold corresponding to each element is 0.9, then, since the probability value of element C1 is greater than 0.9, the motion trajectory determination device may determine that the pedestrian in the close-up image corresponding to this face feature set has face feature 3, which corresponds to element C1.
It should be noted that the first preset probability threshold corresponding to each element may be the same or different, and the first preset probability threshold may be set according to actual needs, which is not limited herein.
Specifically, the motion trajectory determination device may input each group of close-up images into a plurality of first feature extraction sub-models that detect different face features; each first feature extraction sub-model outputs one probability value, and the face feature set is formed from these probability values, where each probability value represents the probability that a pedestrian in the close-up image has a certain face feature.
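A compact sketch of this per-element thresholding (the sub-model call signature and the parallel-list layout are assumptions for illustration):

```python
def face_features_present(close_up_image, submodels, thresholds):
    # Each first feature extraction sub-model detects one face feature and
    # outputs one probability; `submodels` and `thresholds` are assumed to be
    # parallel lists.
    feature_set = [model(close_up_image) for model in submodels]
    present = [p > t for p, t in zip(feature_set, thresholds)]
    return feature_set, present

# With the example from the text: feature_set = [0.9, 0.7, 0.95] and a 0.9
# threshold per element gives present = [False, False, True], i.e. only the
# face feature corresponding to element C1 is judged present.
```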
In S202, the N groups of close-up images are respectively input into a preset second feature extraction model to be processed, and N groups of vehicle feature sets corresponding to the N groups of close-up images are obtained.
In this embodiment, after obtaining N groups of vehicle feature sets corresponding to the N groups of close-up images, the motion trajectory determination device may compare the value of each element in each group of vehicle feature sets with a second preset probability threshold corresponding to the element. And if the motion trail determination device detects that the value of at least one element in the vehicle feature set is greater than a second preset probability threshold corresponding to the element, determining that the vehicle feature corresponding to the element exists in the vehicle in the close-up image corresponding to the group of vehicle feature sets.
For example, assuming the vehicle feature set [A2, B2, C2] takes the values [0.9, 0.7, 0.95] and the second preset probability threshold corresponding to each element is 0.9, then, since the probability value of element C2 is greater than 0.9, the motion trajectory determination device may determine that the vehicle in the close-up images corresponding to this vehicle feature set has vehicle feature 3, which corresponds to element C2.
It should be noted that the second preset probability threshold corresponding to each element may be the same or different, and the second preset probability threshold may be set according to actual needs, which is not limited herein.
Specifically, the motion trajectory determination device may input each group of close-up images into a plurality of second feature extraction sub-models that detect different vehicle features; each second feature extraction sub-model outputs one probability value, and the vehicle feature set is formed from these probability values, where each probability value represents the probability that a vehicle in the close-up image has a certain vehicle feature.
In S102, a motion trajectory of the target person and a motion trajectory of the target vehicle are determined based on comparison results between the N sets of face feature sets and comparison results between the N sets of vehicle feature sets, respectively.
In the embodiment of the application, after obtaining the N groups of face feature sets and N groups of vehicle feature sets, the motion trajectory determination device may compare the groups with one another: each face feature subset in each group of face feature sets is compared one by one with every face feature subset in the other groups, and each vehicle feature subset in each group of vehicle feature sets is compared one by one with every vehicle feature subset in the other groups. The motion trajectories of the target person and the target vehicle are then determined from the resulting comparison results.
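A sketch of this exhaustive comparison; the list-of-lists layout and the use of cosine similarity are assumptions, since the patent does not fix the similarity measure:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def pairwise_comparisons(groups):
    # `groups` is assumed to be a list with one entry per sub-region, each a
    # list of feature vectors (one per pedestrian or vehicle image).
    results = []
    for gi, group in enumerate(groups):
        for si, subset in enumerate(group):
            for gj, other in enumerate(groups):
                if gj == gi:
                    continue
                for sj, candidate in enumerate(other):
                    results.append(((gi, si), (gj, sj),
                                    cosine_similarity(subset, candidate)))
    return results
```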
Specifically, in another embodiment of the present application, the motion trajectory determination device may determine the motion trajectories of the target person and the target vehicle through steps S501 to S502 shown in fig. 5, which are detailed as follows:
in S501, if it is detected that a first face feature set exists in the N groups of face feature sets, determining a motion trajectory of the target person according to the first face feature set; the first face feature set is a face feature set of which the similarity of a target face feature set corresponding to the target person is greater than or equal to a first preset similarity threshold.
In this embodiment, as described in S102, each group of face feature sets includes a plurality of face feature subsets; therefore, the motion trajectory determination device may take the pedestrian shown in the pedestrian image corresponding to any face feature subset, in any group of face feature sets, as the target person, and take that face feature subset as the target face feature set of the target person.
On this basis, the motion trajectory determination device may compare the face feature subsets in the remaining N-1 groups of face feature sets, one by one, with the target face feature set corresponding to the target person. If it detects, among the face feature subsets of the remaining N-1 groups, a first face feature subset whose similarity to the target face feature set is greater than or equal to the first preset similarity threshold, the pedestrian in the pedestrian image corresponding to that first face feature set is determined to be the target person; this indicates that the target person moved from the target sub-region, corresponding to the close-up image of the target face feature set, to the first sub-region, corresponding to the close-up image of the first face feature set, and the motion trajectory of the target person is determined accordingly. The first preset similarity threshold may be set according to actual needs and is not limited here.
It should be noted that, in this embodiment, the similarity between the target face feature set of the target person and the first face feature set being greater than or equal to the first preset similarity threshold specifically means that the similarity between each face feature in the target face feature set and the corresponding face feature in the first face feature set is greater than or equal to the first preset similarity threshold.
In S502, if it is detected that a first vehicle feature set exists among the N groups of vehicle feature sets, the motion trajectory of the target vehicle is determined according to the first vehicle feature set; the first vehicle feature set is a vehicle feature set whose similarity to the target vehicle feature set corresponding to the target vehicle is greater than or equal to a second preset similarity threshold.
In this embodiment, as described in S102, each group of vehicle feature sets includes a plurality of vehicle feature subsets; therefore, the motion trajectory determination device may take the vehicle shown in the vehicle image corresponding to any vehicle feature subset, in any group of vehicle feature sets, as the target vehicle, and take that vehicle feature subset as the target vehicle feature set of the target vehicle.
On this basis, the motion trajectory determination device may compare the vehicle feature subsets in the remaining N-1 groups of vehicle feature sets, one by one, with the target vehicle feature set corresponding to the target vehicle. If it detects, among the vehicle feature subsets of the remaining N-1 groups, a first vehicle feature subset whose similarity to the target vehicle feature set is greater than or equal to the second preset similarity threshold, the vehicle in the vehicle image corresponding to that first vehicle feature set is determined to be the target vehicle; this indicates that the target vehicle moved from the target sub-region, corresponding to the close-up image of the target vehicle feature set, to the first sub-region, corresponding to the close-up image of the first vehicle feature set, and the motion trajectory of the target vehicle is determined accordingly. The second preset similarity threshold may be set according to actual needs and is not limited here.
It should be noted that, in this embodiment, the similarity between the target vehicle feature set of the target vehicle and the first vehicle feature set being greater than or equal to the second preset similarity threshold specifically means that the similarity between each vehicle feature in the target vehicle feature set and the corresponding vehicle feature in the first vehicle feature set is greater than or equal to the second preset similarity threshold.
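A sketch combining this per-feature criterion with trajectory construction; the scalar similarity measure and the region-to-subsets mapping are illustrative assumptions:

```python
def matches_target(target_set, candidate_set, threshold):
    # Per-feature criterion of S501/S502: every feature in the candidate set
    # must be at least `threshold`-similar to the corresponding target
    # feature. Taking similarity as 1 - |a - b| for scalar features is an
    # illustrative stand-in for whatever measure an implementation uses.
    return all(1.0 - abs(a - b) >= threshold
               for a, b in zip(target_set, candidate_set))

def build_trajectory(target_set, region_feature_sets, threshold):
    # `region_feature_sets` is assumed to map a sub-region index to the
    # feature subsets extracted there; the sub-regions in which the target
    # is matched, taken on the shared time base, form its motion trajectory.
    return [region for region, subsets in region_feature_sets.items()
            if any(matches_target(target_set, c, threshold) for c in subsets)]
```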
In another embodiment of the application, after determining the motion trajectory of the target person and the motion trajectory of the target vehicle, the motion trajectory determination device may further send the motion trajectory of the target person and the motion trajectory of the target vehicle to the target terminal, and the target terminal stores the motion trajectory of the target person and the motion trajectory of the target vehicle, so that the relevant person can determine the motion trajectories of the target person and the target vehicle through the target terminal conveniently. The target terminal may be a tablet computer or a desktop computer.
In another embodiment of the present application, the motion trajectory determination device may further associate the motion trajectory of the target person with the target human face feature set of the target person, and send the association to the target terminal; and associating the motion track of the target vehicle with the target vehicle feature set of the target vehicle, and sending the motion track to the target terminal, so that related personnel can browse the motion track of any pedestrian or vehicle through the human face feature set and the vehicle feature set.
As can be seen from the above, in the motion trajectory determination method provided in this embodiment, face feature recognition and vehicle feature recognition are performed on N groups of close-up images respectively to obtain N groups of face feature sets and N groups of vehicle feature sets, wherein the N groups of close-up images are obtained by one monitoring device acquiring N different areas at N acquisition angles within a preset time period; the motion trajectory of the target person and the motion trajectory of the target vehicle are then determined based on the comparison results among the N groups of face feature sets and among the N groups of vehicle feature sets, respectively. Because one monitoring device acquires all N areas, the time sequences of the N groups of close-up images are consistent, and the complete motion trajectories of the target person and the target vehicle across the N different areas can be obtained.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a block diagram of a motion trajectory determination device provided in an embodiment of the present application, and for convenience of description, only the relevant parts of the embodiment of the present application are shown. Referring to fig. 6, the motion trajectory determination device 600 includes: a recognition unit 61 and a first determination unit 62. Wherein:
the recognition unit 61 is configured to perform face feature recognition and vehicle feature recognition on the N groups of close-up images respectively to obtain N groups of face feature sets and N groups of vehicle feature sets; wherein, N groups of close-up images are obtained by acquiring N different areas within a preset time period at N acquisition angles by one monitoring device.
The first determining unit 62 is configured to determine a motion trajectory of the target person and a motion trajectory of the target vehicle based on comparison results between the N sets of face feature sets and comparison results between the N sets of vehicle feature sets, respectively.
In an embodiment of the present application, the recognition unit 61 specifically includes: a first processing unit and a second processing unit. Wherein:
and the first processing unit is used for respectively inputting the N groups of close-up images into a preset first feature extraction model for processing to obtain N groups of face feature sets corresponding to the N groups of close-up images.
And the second processing unit is used for respectively inputting the N groups of close-up images into a preset second feature extraction model for processing to obtain N groups of vehicle feature sets corresponding to the N groups of close-up images.
In an embodiment of the present application, the motion trajectory determining apparatus 600 further includes: the device comprises a first acquisition unit, a first detection unit and a second determination unit. Wherein:
the first acquisition unit is used for acquiring N groups of first image sets.
The first detection unit is used for respectively inputting N groups of first image sets into a preset face detection model and a preset vehicle detection model to obtain a face score value and a vehicle score value which respectively correspond to each first image in the N groups of first image sets; wherein the face score value is used to represent the integrity of the face in the first image; the vehicle score value is used to represent the integrity of the vehicle in the first image.
The second determining unit is used for determining N groups of close-up images according to a preset strategy and the face score value and the vehicle score value respectively corresponding to each first image in the N groups of first image sets.
In an embodiment of the present application, the motion trajectory determining apparatus 600 further includes: the device comprises a second acquisition unit, a target identification unit and a third determination unit. Wherein:
the second acquisition unit is used for acquiring the N groups of video image sets.
And the target identification unit is used for inputting the N groups of video image sets into a preset target detection model for target identification to obtain N groups of target identification result sets corresponding to the N groups of video image sets.
The third determining unit is used for determining N groups of video image sets including human faces and/or vehicles in the N groups of target recognition result sets as N groups of first image sets.
In one embodiment of the present application, the first determination unit 62 further includes: a fourth determination unit and a fifth determination unit. Wherein:
the fourth determining unit is used for determining the motion track of the target person according to the first face feature set if the first face feature set is detected to exist in the N groups of face feature sets; the first face feature set is a face feature set of which the similarity of a target face feature set corresponding to the target person is greater than or equal to a first preset similarity threshold.
The fifth determining unit is used for determining the motion track of the target vehicle according to the first vehicle feature set if the first vehicle feature set is detected to exist in the N groups of vehicle feature sets; the first vehicle feature set is a vehicle feature set of which the similarity of a target vehicle feature set corresponding to the target vehicle is greater than or equal to a second preset similarity threshold.
In one embodiment of the present application, the motion trajectory determination device further includes: and a sending unit.
The sending unit is used for sending the motion trail of the target person and the motion trail of the target vehicle to a target terminal.
As can be seen from the above, the motion trajectory determination device provided in the embodiment of the present application obtains N groups of face feature sets and N groups of vehicle feature sets by performing face feature recognition and vehicle feature recognition on N groups of close-up images, wherein the N groups of close-up images are obtained by one monitoring device acquiring N different areas at N acquisition angles within a preset time period, and determines the motion trajectory of the target person and the motion trajectory of the target vehicle based on the comparison results among the N groups of face feature sets and among the N groups of vehicle feature sets, respectively. Because one monitoring device acquires all N areas, the time sequences of the N groups of close-up images are consistent, and the complete motion trajectories of the target person and the target vehicle across the N different areas can be obtained.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 7 is a schematic structural diagram of a motion trajectory determination device according to an embodiment of the present application. As shown in fig. 7, the motion trajectory determination device 7 of this embodiment includes: at least one processor 70 (only one shown in fig. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70, wherein the processor 70 implements the steps of any of the above-mentioned motion trajectory determination method embodiments when executing the computer program 72.
The motion trajectory determination device 7 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The motion trajectory determination device may include, but is not limited to, the processor 70 and the memory 71. It will be understood by those skilled in the art that fig. 7 is only an example of the motion trajectory determination device 7 and does not constitute a limitation of it; the device may include more or fewer components than shown, combine some components, or use different components, and may, for example, further include input/output devices, network access devices, and the like.
The processor 70 may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may, in some embodiments, be an internal storage unit of the motion trajectory determination device 7, such as a hard disk or memory of the device. In other embodiments, the memory 71 may be an external storage device of the motion trajectory determination device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the device. Further, the memory 71 may include both an internal storage unit and an external storage device of the motion trajectory determination device 7. The memory 71 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program, and may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application also provide a computer program product which, when run on the motion trajectory determination device, causes the motion trajectory determination device to perform the steps of the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to a terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, a computer-readable medium may not include electrical carrier signals or telecommunications signals.
In the above embodiments, each embodiment is described with its own emphasis; for parts not described or detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed motion trajectory determination apparatus and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A motion trajectory determination method is characterized by comprising the following steps:
respectively carrying out face feature recognition and vehicle feature recognition on N groups of close-up images to obtain N groups of face feature sets and N groups of vehicle feature sets; wherein the N groups of close-up images are obtained by one monitoring device acquiring N different areas at N acquisition angles within a preset time period;
and respectively determining the motion trajectory of a target person and the motion trajectory of a target vehicle based on comparison results among the N groups of face feature sets and comparison results among the N groups of vehicle feature sets.
2. The motion trajectory determination method according to claim 1, wherein the respectively carrying out face feature recognition and vehicle feature recognition on the N groups of close-up images to obtain the N groups of face feature sets and the N groups of vehicle feature sets comprises:
respectively inputting the N groups of close-up images into a preset first feature extraction model for processing to obtain N groups of face feature sets corresponding to the N groups of close-up images;
and respectively inputting the N groups of close-up images into a preset second feature extraction model for processing to obtain N groups of vehicle feature sets corresponding to the N groups of close-up images.
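For illustration, the two preset feature extraction models of claim 2 could be any pair of backbone networks. The following sketch uses untrained torchvision ResNet-18 trunks purely as hypothetical stand-ins; architecture, framework, and weights are all assumptions, not details recited in the claims.

```python
import torch
from torchvision import models

# Hypothetical stand-ins for the preset first and second feature extraction
# models: ResNet-18 trunks with the classification head removed, producing a
# 512-dimensional feature vector per image.
face_extractor = torch.nn.Sequential(*list(models.resnet18(weights=None).children())[:-1])
vehicle_extractor = torch.nn.Sequential(*list(models.resnet18(weights=None).children())[:-1])

@torch.no_grad()
def extract_feature_sets(close_up_groups, extractor):
    # One feature set per group: a (num_images, 512) tensor for each of the
    # N groups of close-up images (each group batched as a 4-D tensor).
    return [extractor(group).flatten(1) for group in close_up_groups]
```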
3. The motion trajectory determination method according to claim 2, wherein before the respectively inputting the N groups of close-up images into the preset first feature extraction model for processing, the method further comprises:
acquiring N groups of first image sets;
respectively inputting the N groups of first image sets into a preset face detection model and a preset vehicle detection model to obtain a face score value and a vehicle score value corresponding to each first image in the N groups of first image sets; wherein the face score value is used to represent the integrity of a face in the first image, and the vehicle score value is used to represent the integrity of a vehicle in the first image;
and determining the N groups of close-up images according to a preset strategy and the face score value and the vehicle score value corresponding to each first image in the N groups of first image sets.
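The "preset strategy" is left open by the claim; one conceivable instance, sketched below, keeps the top-k images whose best completeness score (face or vehicle) clears a minimum. The min_score and top_k values and the function name are illustrative assumptions only.

```python
def select_close_ups(first_image_set, face_scores, vehicle_scores,
                     min_score=0.6, top_k=5):
    # Keep the images whose best completeness score (face or vehicle)
    # reaches min_score, then take the top_k by that score.
    scored = [(max(f, v), img)
              for img, f, v in zip(first_image_set, face_scores, vehicle_scores)
              if max(f, v) >= min_score]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [img for _, img in scored[:top_k]]
```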
4. The motion trajectory determination method according to claim 3, wherein before the acquiring N groups of first image sets, the method further comprises:
acquiring N groups of video image sets;
inputting N groups of video image sets into a preset target detection model for target recognition to obtain N groups of target recognition result sets corresponding to the N groups of video image sets;
and determining, as the N groups of first image sets, the video image sets whose corresponding target recognition result sets include a face and/or a vehicle.
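A hedged sketch of this selection step, assuming a detector whose detect method returns the set of recognized labels per image (an interface invented here for illustration):

```python
def filter_first_image_sets(video_image_sets, detector):
    # Keep the video image sets whose target recognition results contain a
    # face and/or a vehicle; these become the groups of first image sets.
    first_image_sets = []
    for image_set in video_image_sets:
        labels = set()
        for image in image_set:
            labels.update(detector.detect(image))  # e.g. {"face", "vehicle"}
        if labels & {"face", "vehicle"}:
            first_image_sets.append(image_set)
    return first_image_sets
```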
5. The motion trajectory determination method according to claim 1, wherein the respectively determining the motion trajectory of the target person and the motion trajectory of the target vehicle based on the comparison results among the N groups of face feature sets and the comparison results among the N groups of vehicle feature sets comprises:
if it is detected that a first face feature set exists in the N groups of face feature sets, determining the motion trajectory of the target person according to the first face feature set; wherein the first face feature set is a face feature set whose similarity to a target face feature set corresponding to the target person is greater than or equal to a first preset similarity threshold;
if it is detected that a first vehicle feature set exists in the N groups of vehicle feature sets, determining the motion trajectory of the target vehicle according to the first vehicle feature set; wherein the first vehicle feature set is a vehicle feature set whose similarity to a target vehicle feature set corresponding to the target vehicle is greater than or equal to a second preset similarity threshold.
6. The motion trajectory determination method according to claim 1, wherein after the determining the motion trajectory of the target person and the motion trajectory of the target vehicle, the method further comprises:
sending the motion trajectory of the target person and the motion trajectory of the target vehicle to a target terminal.
7. A motion trajectory determination device characterized by comprising:
the recognition unit is used for respectively carrying out face feature recognition and vehicle feature recognition on N groups of close-up images to obtain N groups of face feature sets and N groups of vehicle feature sets; wherein the N groups of close-up images are obtained by one monitoring device acquiring N different areas at N acquisition angles within a preset time period;
and the first determining unit is used for respectively determining the motion trajectory of a target person and the motion trajectory of a target vehicle based on comparison results among the N groups of face feature sets and comparison results among the N groups of vehicle feature sets.
8. The motion trajectory determination device according to claim 7, wherein the recognition unit includes:
the first processing unit is used for respectively inputting the N groups of close-up images into a preset first feature extraction model for processing to obtain N groups of face feature sets corresponding to the N groups of close-up images;
and the second processing unit is used for respectively inputting the N groups of close-up images into a preset second feature extraction model for processing to obtain N groups of vehicle feature sets corresponding to the N groups of close-up images.
9. A motion trajectory determination device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
CN202011162643.8A 2020-10-27 2020-10-27 Motion trajectory determination method and device and computer readable storage medium Pending CN112270257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011162643.8A CN112270257A (en) 2020-10-27 2020-10-27 Motion trajectory determination method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112270257A true CN112270257A (en) 2021-01-26

Family

ID=74342912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011162643.8A Pending CN112270257A (en) 2020-10-27 2020-10-27 Motion trajectory determination method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112270257A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180173965A1 (en) * 2015-06-17 2018-06-21 Zhejiang Dahua Technology Co., Ltd. Methods and systems for video surveillance
CN109753920A (en) * 2018-12-29 2019-05-14 深圳市商汤科技有限公司 A kind of pedestrian recognition method and device
WO2020000196A1 (en) * 2018-06-26 2020-01-02 深圳齐心集团股份有限公司 Face recognition method and apparatus, and access control attendance machine

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117493434A (en) * 2023-11-03 2024-02-02 青岛以萨数据技术有限公司 Face image storage method, equipment and medium
CN117493434B (en) * 2023-11-03 2024-05-03 青岛以萨数据技术有限公司 Face image storage method, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230105

Address after: 518000 Yingfei Haocheng Science Park, Guansheng 5th Road, Luhu Community, Guanhu Street, Longhua District, Shenzhen, Guangdong 1515

Applicant after: Shenzhen Infineon Information Co.,Ltd.

Address before: 3 / F, building H-3, East Industrial Zone, Huaqiaocheng, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN INFINOVA Ltd.