WO2018228218A1 - Identity recognition method, computing device and storage medium - Google Patents


Info

Publication number
WO2018228218A1
WO2018228218A1 (PCT/CN2018/089499, CN2018089499W)
Authority
WO
WIPO (PCT)
Prior art keywords
sample
individual
trajectory
identity
motion trajectory
Prior art date
Application number
PCT/CN2018/089499
Other languages
English (en)
Chinese (zh)
Inventor
王达峰
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2018228218A1 publication Critical patent/WO2018228218A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Definitions

  • the embodiments of the present application relate to the field of image analysis technologies, and in particular, to an identity recognition method, a computing device, and a storage medium.
  • Identifying an individual means determining the identity of the individual.
  • the identity of an individual can be, for example, the individual's name.
  • in a method for identifying an individual based on a face, a face image of the individual to be identified and a face image of a target individual are first obtained, and the similarity between the two face images is then calculated by feature matching; when the similarity is greater than a preset threshold, it is determined that the individual to be identified is the target individual.
  • the embodiments of the present application provide an identity identification method, a computing device, and a storage medium, to improve the accuracy of identity recognition.
  • the technical solution is as follows:
  • an identification method is provided, which is applied to a computing device, the method comprising: acquiring a video recording a target motion of an individual to be identified; acquiring a motion trajectory of a feature point of the target motion based on the video; and determining, according to the trajectory feature of the motion trajectory and sample data, the identity of the to-be-identified individual, wherein the sample data includes: an identity of at least one sample individual and a trajectory feature of the corresponding sample motion trajectory.
  • a computing device comprising: a processor and a memory; the memory storing computer readable instructions that cause the processor to perform an identification method according to the present application.
  • a non-volatile storage medium is provided, storing a data processing program, the data processing program comprising instructions that, when executed by a computing device, cause the computing device to perform the identification method according to the present application.
  • FIG. 1A shows a schematic diagram of an application scenario according to some embodiments of the present application
  • FIG. 1B is a flowchart of an identity recognition method provided by some embodiments of the present application.
  • FIG. 2 is a schematic diagram of a sequence of frames in an action cycle provided by some embodiments of the present application.
  • FIG. 3 is a schematic diagram of feature points provided by some embodiments of the present application.
  • FIG. 4 is a schematic diagram of motion trajectories of feature points provided by some embodiments of the present application.
  • FIG. 5 is a block diagram of an identity recognition apparatus provided by some embodiments of the present application.
  • FIG. 6 is a schematic structural diagram of a computing device provided by some embodiments of the present application.
  • in a technical solution for identifying an individual based on an action, an action refers to the movement posture of a body part of the individual, such as the walking posture, the running posture, or the swing-arm posture. Since the actions of different individuals differ to some extent, individuals can be identified based on actions. For example, during walking, different individuals may differ in step size, stride length, knee flexion, swing-arm height, elbow curvature, and so on; because these traits arise from personal habit, they are difficult to change deliberately and can therefore be used as features for identifying individuals.
  • a video recording the motion of the individual to be identified is acquired, the motion features of the individual to be identified are obtained by processing and analyzing the video, and the identity of the individual to be identified is then determined based on these motion features.
  • the technical solution provided by the embodiments of the present application can assist public security departments in identifying criminal suspects.
  • when a criminal suspect commits a crime and flees, the suspect usually disguises the face (for example by wearing a hat, a mask, or a face covering), so it is difficult for a surveillance camera to capture a clear and complete face image of the suspect; in this case, the suspect cannot be identified from a face image.
  • however, the surveillance camera will still record the suspect's actions during the crime, such as the gait and the swing-arm posture while escaping; the suspect can therefore be identified based on these actions.
  • the technical solution provided by the embodiment of the present application has high practical application value in the field of public security criminal investigation.
  • the technical solution provided by the embodiment of the present application is also applicable to other application scenarios that have an identification requirement for an individual identity, which is not limited by the embodiment of the present application.
  • in the embodiments of the present application, individuals are identified based on actions. Because faces are easy to make up and disguise, while individual actions are shaped by personal habit and are difficult to imitate, identifying individuals based on actions is more accurate than identifying individuals based on faces.
  • the execution subject of each step is an identity recognition device.
  • the identification device can be a server or a computer.
  • the server can be a single server, a server cluster consisting of multiple servers, or a cloud computing service center.
  • FIG. 1A shows a schematic diagram of an application scenario in accordance with some embodiments of the present application.
  • application scenario 100 can include terminal devices (e.g., 108-a, 108-b, and 108-c, etc.) and identity recognition system 102.
  • the terminal device may be, for example, various smart terminals such as a mobile phone, a tablet computer, a handheld game console, and a video camera.
  • the identity system 102 can include one or more servers.
  • the terminal device can communicate with the identity recognition system 102 over the network 106.
  • the terminal device can obtain a video about the individual to be identified. Based on this, the terminal device can perform an identity recognition method based on the video to determine the identity of the individual to be identified.
  • the terminal device can upload a video about the individual to be identified to the identity recognition system 102.
  • the identification system 102 can perform an identification method on the video to determine the identity of the individual to be identified.
  • the identification system 102 can include an identification application 104.
  • the identity recognition application 104 can perform an identification method.
  • the identity application 104 can be an independent application or a distributed application, which is not limited in this application.
  • the identification system 102 can transmit the identification results to the terminal device.
  • FIG. 1B shows a flowchart of an identification method provided by some embodiments of the present application.
  • the identification method can be performed in a computing device.
  • the computing device may be, for example, a terminal device or a server in the identity recognition system 102, but is not limited thereto.
  • the method can include the following steps.
  • Step 101 Acquire a video recording a target action of the individual to be identified.
  • An individual to be identified refers to an individual who needs to identify and determine his or her identity.
  • the action refers to the movement posture of the body part of the individual, such as the walking posture, the running posture, the swing arm posture, and the like.
  • a target action is a specific action, such as a target action being a walking posture.
  • the video of the target action may be a video recording the walking posture of the individual to be identified.
  • Step 102 Acquire a motion trajectory of a feature point of the target motion based on the video.
  • step 102 can obtain a sequence of frames within one or more target action periods.
  • step 102 may extract a sequence of frames within any one of the target action periods from the video to be identified.
  • the action cycle is the time taken to perform a complete action.
  • the target action cycle is the time taken to perform a complete target action.
  • the individual's walking posture is repetitive: for example, left foot, then right foot, then left foot, then right foot, and so on.
  • the action cycle of the walking posture is the time of the complete motion flow from lifting the left foot, stepping forward with the left foot, lowering the left foot, lifting the right foot, stepping forward with the right foot, and lowering the right foot, back to lifting the left foot again.
  • the frame sequence within one target action period contains multiple frames of pictures.
  • a multi-frame picture as shown in FIG. 2 is included in a target action cycle.
  • the time of each target action cycle is substantially the same, that is, the number of pictures included in the sequence of frames in each target action cycle is substantially the same.
  • step 102 may include the following sub-steps: dividing the video into a plurality of target action cycles; extracting a sequence of frames within any one of the target action cycles .
  • the computing device may identify, from the video, target pictures in which a specified action step of the target action is recorded, and take the frames between two adjacent target pictures as the frame sequence of one target action period.
  • taking the target action as the walking posture as an example, the specified action step of the target action is any action step included in a complete walking posture, such as lifting the left foot, stepping forward with the left foot, lowering the left foot, lifting the right foot, or stepping forward with the right foot.
  • for example, frames 2 to 7 (6 frames) form one target action cycle, frames 7 to 12 (6 frames) form another target action cycle, frames 12 to 17 (6 frames) form another, frames 17 to 23 (7 frames) form another, frames 23 to 28 (6 frames) form another, and so on.
  • the computing device can automatically divide the target action cycle.
  • the computing device may also divide the video into multiple target action cycles according to the operation result for labeling the target action cycle.
  • the video records other actions of the individual to be identified in addition to the target action of the individual to be identified.
  • the computing device can obtain a video segment selected from the video that only records the target motion of the individual to be identified. Taking the target action as the walking posture as an example, if both the walking posture and the running posture of the individual to be identified are recorded in the to-be-identified video, the computing device may acquire a video segment selected from the video that records only the walking posture of the individual to be identified. The computing device may divide the video segment into multiple target action cycles and extract a sequence of frames within any one of the target action cycles.
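As a rough illustration of automatic cycle division, one possible approach (the function, its name, and the choice of signal below are assumptions for illustration, not specified in the patent) is to track a periodic per-frame quantity, such as the horizontal distance between the two ankles, and cut the video at its local maxima:

```python
def split_into_cycles(signal, min_gap=3):
    """Split a per-frame signal (e.g. horizontal distance between the
    two ankles) into action cycles, using local maxima as cycle
    boundaries. Returns (start, end) frame-index pairs, end exclusive.
    `min_gap` suppresses spurious peaks that are too close together."""
    peaks = []
    for i in range(1, len(signal) - 1):
        # a local maximum: not smaller than the previous sample,
        # strictly larger than the next one
        if signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return [(a, b) for a, b in zip(peaks, peaks[1:])]
```

With a periodic signal, each returned pair delimits the frame sequence of one action cycle, matching the "adjacent target pictures" division described above.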
  • any target action cycle may be selected as the analysis target of the identity recognition.
  • Step 103 Acquire a motion trajectory of a feature point of the target motion in a sequence of frames.
  • the feature point of the target action refers to the feature point of the body part involved in performing the target action.
  • taking the target action as the walking posture as an example, the feature points may include: several feature points of the thigh (such as the junction of the thigh and the hip, the outer side of the thigh, the inner side of the thigh, and the junction of the thigh and the knee), several feature points of the knee, several feature points of the calf, and several feature points of the foot. As shown in FIG. 3, taking the target motion as the walking posture, each feature point of the leg is represented by a small black dot. The number and location of the feature points can be set according to actual needs.
  • for example, the number of feature points may be 30 to 40, and the positions of the feature points may be the positions described above.
  • the motion trajectory of the feature point in the frame sequence is used to reflect the action feature.
  • the step includes the following substeps: identifying each feature point from each frame of the frame sequence; obtaining the position of each feature point in each frame of the frame sequence; The position of each feature point in the sequence of frames is determined at the position in each frame of the frame sequence.
  • step 103 can employ a uniform coordinate system to represent the location of each feature point in each frame of the picture. For example, taking the lower left corner of each frame as the origin, the bottom edge of the picture as the horizontal axis, and the left edge of the picture (perpendicular to the bottom edge and passing through the origin) as the vertical axis, a two-dimensional Cartesian coordinate system is established; the position of each feature point in any frame can then be represented by the combination of the abscissa and the ordinate of that feature point in this Cartesian coordinate system.
  • the computing device may acquire the horizontal and vertical coordinates of each feature point in each frame of the frame sequence, and sequentially connect the horizontal and vertical coordinates according to the order of the pictures in the frame sequence to obtain each The trajectory of the feature point in the sequence of frames.
  • for example, if a target action cycle includes the 2nd to 6th frame pictures, and the coordinates of the feature point located at the ankle position in the above five frames are (x1, y1), (x2, y2), (x3, y3), (x4, y4) and (x5, y5), the above coordinate points are sequentially connected to obtain the motion trajectory of the feature point located at the ankle position within one target action period.
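The step of connecting per-frame feature-point positions into trajectories can be sketched as follows; the frame/dict layout and the feature-point names are illustrative assumptions, since the patent does not fix a data format:

```python
def build_trajectories(frames):
    """Collect each feature point's per-frame (x, y) position into an
    ordered motion trajectory, following the frame order of the
    sequence. `frames` is a list of dicts mapping a feature-point
    name to its (x, y) coordinate in that frame."""
    trajectories = {}
    for frame in frames:
        for name, xy in frame.items():
            # appending in frame order yields the sequentially
            # connected trajectory described in the text
            trajectories.setdefault(name, []).append(xy)
    return trajectories
```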
  • the algorithm used for feature point location is not limited in the embodiments of the present application; related algorithms used for locating facial feature points may be referred to, for example: a feature point localization algorithm based on statistical learning, a feature point localization algorithm based on principal component analysis, a feature point localization algorithm based on a Point Distribution Model (PDM), a feature point localization algorithm using shape estimation based on gray-scale information, and so on.
  • Step 104 extracting trajectory features of the above motion trajectory.
  • the trajectory feature refers to the characteristics of the motion trajectory.
  • the trajectory feature includes at least one of the following: coordinates of a plurality of feature points on the motion trajectory, curvature of the motion trajectory, length of the motion trajectory, and the like.
  • the coordinates of a feature point may be the coordinates of the feature point of the target action in each frame picture; the curvature of the motion trajectory may be extracted at each arc position of the motion trajectory; and the length of the motion trajectory may be represented by the number of pixels the motion trajectory passes through.
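A minimal sketch of the trajectory features just listed, under assumed concrete definitions (the patent does not give formulas): polyline length as the trajectory length, and total turning angle as a crude curvature proxy.

```python
import math

def trajectory_features(points):
    """Extract simple trajectory features: the raw coordinates, an
    approximate trajectory length (sum of segment lengths), and a
    crude curvature estimate (total absolute turning angle along the
    polyline). These definitions are illustrative choices."""
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    turning = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        turning += abs(a2 - a1)
    return {"coords": list(points), "length": length, "curvature": turning}
```

A straight trajectory yields zero turning; a winding one yields a larger value, which is the sense in which this stands in for "curvature of the motion trajectory".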
  • Step 105 Determine the identity of the individual to be identified according to the trajectory feature of the motion trajectory and the sample data.
  • the sample data includes: an identity of at least one sample individual and a trajectory feature of the corresponding sample motion trajectory.
  • the sample motion trajectory refers to the motion trajectory of a feature point of the target motion within a frame sequence recording one action period in which the sample individual performs the target motion. For example, a sample video of a sample individual is obtained in advance, the target motion of the sample individual is recorded in the sample video, multiple target motion cycles are extracted from the sample video, and a sample motion trajectory can be extracted from the frame sequence within one target motion cycle.
  • a plurality of sample motion trajectories corresponding to the sample individual are usually acquired.
  • step 105 includes the following sub-steps: detecting whether there is a sample motion trajectory matching the motion trajectory according to the trajectory feature of the motion trajectory and the trajectory feature of the sample motion trajectory corresponding to each sample individual; When the motion trajectory matches the sample motion trajectory, the identity of the sample individual corresponding to the sample motion trajectory matching the motion trajectory is determined as the identity of the individual to be identified.
  • the trajectory feature of the sample motion trajectory corresponding to the sample individual is the trajectory feature extracted from the motion trajectory of the sample.
  • the trajectory features may be extracted from each sample motion trajectory separately, and the extracted trajectory features may then be integrated (for example, by taking the average of each trajectory feature) to obtain the trajectory feature of the sample motion trajectory corresponding to the sample individual.
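The feature-integration step just described (averaging each trajectory feature over an individual's several sample trajectories) can be sketched as follows; the dict layout of a feature set is an assumption for illustration:

```python
def average_features(feature_dicts):
    """Integrate trajectory features extracted from several sample
    motion trajectories of one individual by averaging each feature,
    producing that individual's representative feature set."""
    keys = feature_dicts[0].keys()
    return {k: sum(d[k] for d in feature_dicts) / len(feature_dicts)
            for k in keys}
```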
  • the computing device can calculate the similarity between the two according to the trajectory feature of the motion trajectory and the trajectory feature of the sample motion trajectory corresponding to the sample individual.
  • when the similarity is greater than the preset threshold, the computing device determines that the motion trajectory matches the sample motion trajectory.
  • when the similarity is less than the preset threshold, the computing device determines that the motion trajectory does not match the sample motion trajectory.
  • the preset threshold is an empirical value set according to requirements, for example, the preset threshold is 95%.
  • in a case where multiple sample motion trajectories have a similarity with the motion trajectory that is greater than the preset threshold, the computing device may select the sample motion trajectory with the highest similarity as the sample motion trajectory matching the motion trajectory.
  • for example, suppose the preset threshold is 95%.
  • if the number of sample individuals is one, the identity of the sample individual is Zhang San, and the similarity between the motion trajectory of the individual to be identified and the sample motion trajectory corresponding to that sample individual is 96%, then the identity of the individual to be identified is determined to be Zhang San.
  • if the number of sample individuals is three, the identities of the three sample individuals are Zhang San, Li Si, and Wang Wu, respectively, and the similarities between the motion trajectory of the individual to be identified and the sample motion trajectories of the above three sample individuals are 96%, 70%, and 99%, respectively, then the identity of the individual to be identified is determined to be Wang Wu.
  • step 105 includes using the trajectory feature of the motion trajectory as an input to the identity recognition model and using the identity recognition model to determine the identity of the individual to be identified.
  • the identity model is trained based on sample data. See below for an introduction to the training process for the identity model.
  • the computing device inputs the trajectory feature of the motion trajectory into the identity recognition model, and the trajectory feature of the motion trajectory is processed and calculated by the identity recognition model, and the output result of the model is the identity of the individual to be identified.
  • in some embodiments, the identity recognition model is a neural network, and the neural network includes an input layer, at least one hidden layer, and an output layer.
  • the input layer includes a plurality of input nodes, each of which corresponds to a trajectory feature.
  • the output layer includes at least one output node, each output node corresponding to an identity.
  • the hidden layer is located between the input layer and the output layer and is connected to the input layer and the output layer, respectively.
  • the process of using the neural network for identification is as follows: the trajectory features of the motion trajectory corresponding to the individual to be identified are input to the input layer of the neural network, and the trajectory features are combined and abstracted by the hidden layer to obtain data suitable for classification by the output layer. Finally, the identity of the individual to be identified is output by the output layer.
  • the above is only an example of constructing an identity recognition model using a neural network. In practical applications, other algorithms may be selected to construct an identity recognition model.
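As a sketch of the forward pass described above (input layer, one hidden layer, one output node per identity), with placeholder weights standing in for parameters that would in practice come from training on the sample data; the ReLU activation is an illustrative assumption:

```python
def forward(features, hidden_w, hidden_b, out_w, out_b, names):
    """Forward pass of the network structure described in the text:
    trajectory features enter the input layer, the hidden layer
    combines and abstracts them (here with a ReLU), and the output
    layer yields one score per identity; the best-scoring identity
    is returned."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(hidden_w, hidden_b)]
    scores = [sum(w * h for w, h in zip(row, hidden)) + b
              for row, b in zip(out_w, out_b)]
    return names[scores.index(max(scores))]
```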
  • in some embodiments, the identity of the individual is, for example, the name of the individual, that is, the identity of the individual is represented by a name.
  • the computing device can identify the individual identity based on the walking posture.
  • for example, the sample data includes the names of multiple sample individuals such as Zhang San, Li Si, Wang Wu, Zhao Liu, and Sun Qi, as well as the trajectory features extracted from the sample motion trajectory corresponding to each sample individual. In this way, the computing device can train the identity recognition model using the sample data described above, and the trained model can be used to determine the name of the individual to be identified.
  • in some embodiments, the sample data further includes identity association information corresponding to each sample individual, such as age, gender, contact information, occupation, and address; after the identity of the individual to be identified is determined, the identity association information of the individual to be identified can be obtained from the sample data.
  • the above identity association information may be collected in advance and stored in the sample data.
  • the body orientation of the individual to be identified in the extracted target action cycle is the same as the body orientation of the sample individual in the corresponding target action cycle, for example, both toward the left side, or All face the right side, or both face forward and so on.
  • a sample action cycle of a plurality of different body orientations is recorded in the sample video of each sample individual, and when the identification of the individual to be identified is performed, the target to be identified is first determined in the extracted target. The body orientation within the action cycle, and then the identified individual is identified using sample data (or an identification model) that is consistent with the body orientation.
  • the computing device may collect only relevant data of the leg, and may also collect relevant data of the leg and the upper limb.
  • the feature points of the upper limb may include: several feature points of the upper arm part (such as the joint position of the arm and the shoulder joint, the middle position of the upper arm, the joint position of the upper arm and the elbow joint, etc.), and several characteristic points of the forearm part (such as the forearm and The articulation position of the elbow joint, the middle of the forearm, the position of the forearm and the wrist joint, etc.).
  • the acquisition of the trajectory of the feature points of the upper limbs and the extraction of the corresponding trajectory features are the same as those of the legs, as described above.
  • the relevant data of the leg and the upper limb is integrated, and the recognition accuracy is improved compared to the relevant data considering only the leg.
  • the method provided by the embodiments of the present application acquires a video recording an action of the individual to be identified, obtains the motion features of the individual to be identified from the video, and then determines the identity of the individual to be identified based on the motion features.
  • this makes identification of an individual no longer limited to face images, provides a technical solution for identifying an individual based on actions, and enriches the technical means for identifying individuals.
  • moreover, since individual actions are shaped by personal habit and are difficult to imitate, identification based on actions is more accurate than identification based on faces.
  • the training process can include the following steps.
  • Step 201 Construct a training sample set according to the sample data, where the training sample set includes a plurality of training samples.
  • Each training sample includes: a trajectory feature extracted from a sample motion trajectory corresponding to a sample individual, and an identity of the sample individual.
  • the source data acquired by the computing device may be a sample video of the sample individual.
  • the computing device divides the sample video of the target action recorded by the sample individual into a plurality of target action cycles.
  • the computing device can extract a sequence of frames within one or more target action cycles.
  • the computing device can acquire the motion trajectory of each feature point of the target action in the frame sequence of the target action cycle, and extract the trajectory feature, thereby combining the identity of the sample individual to obtain a training sample.
  • the feature point location, the motion trajectory extraction, and the trajectory feature extraction refer to the description in the embodiment of FIG. 1B, which is not described in this embodiment.
  • the above source data may be collected in advance and stored in a computing device.
  • the identity of the individual is the name of the individual, and the identification of the individual's identity based on the walking posture is taken as an example.
  • for example, the sample data includes the names of multiple sample individuals such as Zhang San, Li Si, Wang Wu, Zhao Liu, and Sun Qi, as well as the trajectory features extracted from the sample motion trajectory corresponding to each sample individual. Taking Zhang San as an example, the computing device divides the sample video recording Zhang San's walking posture into multiple action cycles, and extracts a sequence of frames within one or more action cycles.
  • the computing device can then acquire the motion trajectory of each feature point of the walking posture in the frame sequence within an action period and extract the trajectory features, thereby combining them with the sample individual's name "Zhang San" to obtain a training sample.
  • the computing device can acquire a plurality of training samples related to Zhang San.
  • training samples of other sample individuals such as Li Si, Wang Wu, Zhao Liu, and Sun Qi are also obtained in the above manner.
  • when the name of the individual is identified, if the computing device only acquires the training samples associated with one sample individual (e.g., Zhang San), the identity recognition model obtained by subsequent training can be used to determine whether the name of the individual to be identified is Zhang San.
  • Step 202 Train on the training samples by using a machine learning algorithm to obtain an identity recognition model.
  • the machine learning algorithm may adopt a Bayesian algorithm, a support vector machine (SVM) algorithm, a decision tree algorithm, a neural network algorithm, a deep learning algorithm, and the like, which is not limited by the embodiments of the present application.
  • the computing device can input the trajectory feature of the sample motion trajectory corresponding to the sample individual and the identity of the sample individual into the identity recognition model, and train the model by using a machine learning algorithm, and finally obtain an identity recognition model whose accuracy meets the requirement.
  • step 202 can verify the identity model in the following manner.
  • Step 202 can construct a verification sample set based on the verification data.
  • the verification data includes: at least one identity of the verification individual and a corresponding trajectory feature of the verification motion trajectory.
  • the verification motion trajectory refers to a motion trajectory of each feature point of the target motion in a sequence of frames in which an action period in which the verification individual performs the target motion is recorded.
  • the verification video of the verification individual is obtained in advance
  • the target motion of the verification individual is recorded in the verification video
  • a plurality of target action cycles are extracted from the verification video
  • a verification motion track may be extracted from the frame sequence in one target action cycle.
  • the validation sample set includes multiple validation samples that are used to validate the model.
  • the verification sample is also called a test sample.
  • Each verification sample includes: a trajectory feature extracted from a verification motion trajectory corresponding to a verification individual, and the identity of the verification individual.
  • step 202 may use the trajectory feature of the verification motion trajectory corresponding to the verification sample as an input of the identity recognition model, and use the identity recognition model to determine the identity of the verification individual.
  • Step 202 may determine the accuracy of the identity recognition model according to the identity of each verification individual output by the identity recognition model and the identity of each verification individual recorded in the verification sample.
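The accuracy check in step 202 can be sketched as below; the verification-sample layout and the callable `model` interface are assumptions for illustration:

```python
def validation_accuracy(model, validation_samples):
    """Estimate model accuracy on held-out verification samples, as
    in step 202: compare the identity predicted from each sample's
    trajectory features with the identity recorded in the sample.
    `model` is any callable mapping a feature vector to an identity."""
    correct = sum(1 for features, identity in validation_samples
                  if model(features) == identity)
    return correct / len(validation_samples)
```

If the resulting accuracy meets the preset requirement, training can stop; otherwise more training samples can be used, as the text goes on to describe.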
  • when the accuracy of the identity recognition model meets the preset requirement, step 202 stops training; when the accuracy of the identity recognition model does not meet the preset requirement, step 202 may continue to train the identity recognition model with more training samples.
  • the method provided by the embodiment of the present application can obtain the identity recognition model according to the sample data, and adopt the modeling method for identity recognition, which helps to improve the accuracy of the identity recognition.
  • FIG. 5 shows a block diagram of an identification device provided by some embodiments of the present application.
  • the device has the function of implementing the above method examples.
  • the functions may be implemented by hardware, or by hardware executing corresponding software.
  • the identification device can reside, for example, in a computing device.
  • the apparatus may include: a video acquisition module 501, a frame sequence extraction module 502, a trajectory acquisition module 503, a feature extraction module 504, and an identity determination module 505.
  • the video obtaining module 501 is configured to acquire a video to be identified that records a target action of the individual to be identified.
  • the frame sequence extraction module 502 is configured to extract, from the to-be-identified video, a sequence of frames in any one of the target action periods, where a target action period refers to the time taken to perform one complete target action.
  • the trajectory obtaining module 503 is configured to acquire a motion trajectory of each feature point of the target motion in the sequence of frames.
  • the feature extraction module 504 is configured to extract a trajectory feature of the motion trajectory.
  • the identity determining module 505 is configured to determine the identity of the to-be-identified individual according to the trajectory feature of the motion trajectory and the sample data, where the sample data includes: the identity of at least one sample individual and the trajectory feature of the corresponding sample motion trajectory.
  • the identity determining module 505 is configured to detect, according to the trajectory feature of the motion trajectory and the trajectory features of the sample motion trajectories corresponding to the sample individuals, whether there is a sample motion trajectory matching the motion trajectory; and, if there is, to determine the identity of the sample individual corresponding to the matching sample motion trajectory as the identity of the to-be-identified individual.
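A minimal sketch of this matching step, assuming cosine similarity over trajectory features and a 0.95 match threshold (both are illustrative choices; the application does not fix the matching criterion):

```python
import math

def match_identity(query_feature, sample_data, threshold=0.95):
    """Return the identity of the best-matching sample motion trajectory,
    or None when no sample trajectory matches the query closely enough."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm
    best_identity, best_score = None, threshold
    for identity, feature in sample_data:
        score = cosine(query_feature, feature)
        if score >= best_score:
            best_identity, best_score = identity, score
    return best_identity

# Hypothetical sample data: (identity, trajectory feature) pairs.
samples = [("alice", (1.0, 0.1)), ("bob", (0.1, 1.0))]
matched = match_identity((0.9, 0.12), samples)   # close to alice's sample trajectory
unmatched = match_identity((1.0, 1.0), samples)  # no sample is similar enough
```

Returning `None` corresponds to the "no matching sample motion trajectory" branch, in which no identity is assigned.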
  • the identity determining module 505 is configured to use the trajectory feature of the motion trajectory as the input of an identity recognition model, and to determine the identity of the to-be-identified individual by using the identity recognition model, where the identity recognition model is trained based on the sample data.
  • the apparatus further includes: a sample building module and a model training module.
  • a sample construction module configured to construct a training sample set according to the sample data, where the training sample set includes a plurality of training samples, and each training sample includes: a trajectory feature extracted from a sample motion trajectory corresponding to a sample individual, and the identity of that sample individual.
  • a model training module configured to train on the training samples by using a machine learning algorithm to obtain the identity recognition model.
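The sample construction and model training modules might be sketched like this. The nearest-centroid classifier is a deliberately simple stand-in for the "machine learning algorithm", which the application leaves unspecified; the data values are made up.

```python
from collections import defaultdict

def build_training_set(sample_data):
    """sample_data: (identity, trajectory_feature) pairs -> training sample set."""
    return [{"identity": identity, "feature": feature}
            for identity, feature in sample_data]

def train_identity_model(training_set):
    """Average the trajectory features per sample individual, then classify a
    new trajectory feature by its nearest centroid (squared Euclidean distance)."""
    grouped = defaultdict(list)
    for sample in training_set:
        grouped[sample["identity"]].append(sample["feature"])
    centroids = {identity: tuple(sum(vals) / len(vals) for vals in zip(*features))
                 for identity, features in grouped.items()}
    def predict(feature):
        return min(centroids,
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(centroids[i], feature)))
    return predict

data = [("alice", (0.1, 0.9)), ("alice", (0.2, 0.8)), ("bob", (0.9, 0.1))]
model = train_identity_model(build_training_set(data))
predicted = model((0.12, 0.88))   # nearest the "alice" centroid
```

Any supervised classifier over the same (feature, identity) pairs could play the role of `train_identity_model`.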
  • the trajectory acquisition module 503 includes: a feature recognition unit, a location acquisition unit, and a trajectory acquisition unit.
  • a feature recognition unit configured to identify each of the feature points from each frame of the frame sequence.
  • a location acquiring unit configured to acquire a location of each feature point in each frame of the frame sequence.
  • a trajectory acquiring unit configured to determine a motion trajectory of each feature point in the frame sequence according to a position of each feature point in each frame of the frame sequence.
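The three units above amount to collecting, for each feature point, its position in every frame of the sequence. A sketch, assuming the feature points have already been detected per frame (the point names and coordinates below are made up):

```python
def build_trajectories(frames):
    """frames: one dict per frame mapping feature-point name -> (x, y) position.
    Returns feature-point name -> ordered list of positions, i.e. the motion
    trajectory of that feature point across the frame sequence."""
    trajectories = {}
    for frame in frames:
        for name, position in frame.items():
            trajectories.setdefault(name, []).append(position)
    return trajectories

frames = [
    {"left_ankle": (10, 50), "right_ankle": (30, 50)},
    {"left_ankle": (12, 48), "right_ankle": (29, 51)},
    {"left_ankle": (15, 47), "right_ankle": (27, 52)},
]
tracks = build_trajectories(frames)
# tracks["left_ankle"] is the left ankle's motion trajectory over the three frames
```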
  • the frame sequence extraction module 502 includes: a period dividing unit and a frame sequence extracting unit.
  • a period dividing unit configured to divide the to-be-identified video into multiple target action periods.
  • the frame sequence extracting unit is configured to extract a sequence of frames in any one of the target action periods.
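Once the length of one target action period is known (detecting it, e.g. by autocorrelation of the motion signal, is outside this sketch), dividing the video reduces to chunking its frame list; `period_length` below is an assumed, externally supplied value.

```python
def split_into_periods(frames, period_length):
    """Divide a frame sequence into consecutive target action periods of
    period_length frames each; a trailing incomplete period is dropped."""
    return [frames[i:i + period_length]
            for i in range(0, len(frames) - period_length + 1, period_length)]

video_frames = list(range(10))                 # stand-in for decoded frames
periods = split_into_periods(video_frames, 4)  # two complete periods; frames 8-9 dropped
```

The frame sequence extracting unit would then pick any one element of `periods`.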
  • FIG. 6 is a schematic structural diagram of a computing device provided by some embodiments of the present application.
  • the computing device is for implementing the identity recognition method provided in the above embodiments. Specifically:
  • the computing device 600 includes a central processing unit (CPU) 601, a system memory 604 including a random access memory (RAM) 602 and a read-only memory (ROM) 603, and a system bus 605 that connects the system memory 604 and the central processing unit 601.
  • the computing device 600 also includes a basic input/output system (I/O system) 606 that facilitates the transfer of information between devices within the computer, and a mass storage device 607 for storing an operating system 613, applications 614, and other program modules 616.
  • the basic input/output system 606 includes a display 608 for displaying information and an input device 609, such as a mouse or keyboard, for the user to input information.
  • the display 608 and the input device 609 are both connected to the central processing unit 601 through an input/output controller 610 coupled to the system bus 605.
  • the basic input/output system 606 may also include the input/output controller 610 for receiving and processing input from a plurality of other devices, such as a keyboard, mouse, or electronic stylus.
  • similarly, the input/output controller 610 also provides output to a display screen, a printer, or another type of output device.
  • the mass storage device 607 is connected to the central processing unit 601 by a mass storage controller (not shown) connected to the system bus 605.
  • the mass storage device 607 and its associated computer readable medium provide non-volatile storage for the computing device 600. That is, the mass storage device 607 can include a computer readable medium (not shown) such as a hard disk or a CD-ROM drive.
  • the computer readable medium can include computer storage media and communication media.
  • Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer readable instructions, data structures, program modules, or other data.
  • Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state storage technologies; CD-ROM, DVD or other optical storage; and tape cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices.
  • the computing device 600 may also run by connecting, through a network such as the Internet, to a remote computer on the network. That is, the computing device 600 may connect to the network 612 through a network interface unit 611 connected to the system bus 605, or may use the network interface unit 611 to connect to other types of networks or remote computer systems (not shown).
  • a computer readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor of the server to implement the various steps in the above method embodiments.
  • the computer readable storage medium described above may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application relate to the technical field of image analysis, and disclose an identification method, a computing device, and a storage medium. The method includes: obtaining a video that records a target action of an individual to be identified; obtaining, from the video, the motion trajectories of the feature points of the target action; and determining the identity of the individual to be identified according to the trajectory features of the motion trajectories and sample data, the sample data including: the identity of at least one sample individual and the trajectory features of a corresponding sample motion trajectory. The embodiments of the present application provide a technical solution for identifying an individual based on an action, enriching the technical means for identifying an individual.
PCT/CN2018/089499 2017-06-16 2018-06-01 Identification method, computing device and storage medium WO2018228218A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710458868.XA CN108304757A (zh) 2017-06-16 2017-06-16 Identity recognition method and apparatus
CN201710458868.X 2017-06-16

Publications (1)

Publication Number Publication Date
WO2018228218A1 true WO2018228218A1 (fr) 2018-12-20

Family

ID=62872539

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/089499 WO2018228218A1 (fr) 2017-06-16 2018-06-01 Identification method, computing device and storage medium

Country Status (2)

Country Link
CN (1) CN108304757A (fr)
WO (1) WO2018228218A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111435452A (zh) * 2019-01-11 2020-07-21 百度在线网络技术(北京)有限公司 Model training method, apparatus, device and medium
CN111639578A (zh) * 2020-05-25 2020-09-08 上海中通吉网络技术有限公司 Method, apparatus, device and storage medium for intelligently identifying illegal object throwing
WO2022038591A1 (fr) * 2020-08-20 2022-02-24 Ramot At Tel-Aviv University Ltd. Dynamic identity authentication
CN115424353A (zh) * 2022-09-07 2022-12-02 杭银消费金融股份有限公司 Service user feature recognition method and system based on an AI model
CN116884130A (zh) * 2023-08-29 2023-10-13 深圳市亲邻科技有限公司 Intelligent access control method and system based on posture recognition

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325456B (zh) * 2018-09-29 2020-05-12 佳都新太科技股份有限公司 Target recognition method and apparatus, target recognition device, and storage medium
CN109718528B (zh) * 2018-11-28 2021-06-04 浙江骏炜健电子科技有限责任公司 Identity recognition method and system based on motion feature parameters
CN109829369A (zh) * 2018-12-25 2019-05-31 深圳市天彦通信股份有限公司 Target determination method and related apparatus
CN110011741A (zh) * 2019-03-29 2019-07-12 河北工程大学 Identity recognition method and apparatus based on wireless signals
CN110059661B (zh) * 2019-04-26 2022-11-22 腾讯科技(深圳)有限公司 Action recognition method, human-computer interaction method, apparatus, and storage medium
CN111860063B (zh) * 2019-04-30 2023-08-11 杭州海康威视数字技术股份有限公司 Gait data construction system, method, and apparatus
CN110705438B (zh) * 2019-09-27 2023-07-25 腾讯科技(深圳)有限公司 Gait recognition method, apparatus, device, and storage medium
CN111539298A (zh) * 2020-04-20 2020-08-14 深知智能科技(金华)有限公司 Identity information fusion system and method based on dynamic data
CN111524164B (zh) * 2020-04-21 2023-10-13 北京爱笔科技有限公司 Target tracking method and apparatus, and electronic device
CN112164096A (zh) * 2020-09-30 2021-01-01 杭州海康威视系统技术有限公司 Object recognition method, apparatus, and device
CN112288050B (zh) * 2020-12-29 2021-05-11 中电科新型智慧城市研究院有限公司 Abnormal behavior recognition method, recognition apparatus, terminal device, and storage medium
CN113312596B (zh) * 2021-06-10 2023-04-07 重庆市勘测院 User identity recognition method based on deep learning and asynchronous trajectory data
CN113672159B (zh) * 2021-08-24 2024-03-15 数贸科技(北京)有限公司 Risk control method, apparatus, computing device, and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8160371B2 (en) * 2007-12-03 2012-04-17 Honeywell International Inc. System for finding archived objects in video data
CN104156650A (zh) * 2014-08-08 2014-11-19 浙江大学 User identity recognition method based on hand movement
CN105354468A (zh) * 2015-10-29 2016-02-24 丽水学院 User identity recognition method based on multi-axis force platform gait analysis
CN105760835A (zh) * 2016-02-17 2016-07-13 天津中科智能识别产业技术研究院有限公司 Integrated gait segmentation and gait recognition method based on deep learning
CN106845403A (zh) * 2017-01-20 2017-06-13 武汉哒呤科技有限公司 Method for determining the identity characteristics of a user through the user's behavior trajectory

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102697508B (zh) * 2012-04-23 2013-10-16 中国人民解放军国防科学技术大学 Gait recognition method using three-dimensional reconstruction from monocular vision
CN103377366A (zh) * 2012-04-26 2013-10-30 哈尔滨工业大学深圳研究生院 Gait recognition method and system
CN103942577B (zh) * 2014-04-29 2018-08-28 上海复控华龙微系统技术有限公司 Identity recognition method based on a self-built sample library and hybrid features in video surveillance

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111435452A (zh) * 2019-01-11 2020-07-21 百度在线网络技术(北京)有限公司 Model training method, apparatus, device and medium
CN111435452B (zh) * 2019-01-11 2023-11-03 百度在线网络技术(北京)有限公司 Model training method, apparatus, device and medium
CN111639578A (zh) * 2020-05-25 2020-09-08 上海中通吉网络技术有限公司 Method, apparatus, device and storage medium for intelligently identifying illegal object throwing
CN111639578B (zh) * 2020-05-25 2023-09-19 上海中通吉网络技术有限公司 Method, apparatus, device and storage medium for intelligently identifying illegal object throwing
WO2022038591A1 (fr) * 2020-08-20 2022-02-24 Ramot At Tel-Aviv University Ltd. Dynamic identity authentication
CN115424353A (zh) * 2022-09-07 2022-12-02 杭银消费金融股份有限公司 Service user feature recognition method and system based on an AI model
CN116884130A (zh) * 2023-08-29 2023-10-13 深圳市亲邻科技有限公司 Intelligent access control method and system based on posture recognition
CN116884130B (zh) * 2023-08-29 2024-02-20 深圳市亲邻科技有限公司 Intelligent access control method and system based on posture recognition

Also Published As

Publication number Publication date
CN108304757A (zh) 2018-07-20

Similar Documents

Publication Publication Date Title
WO2018228218A1 (fr) Identification method, computing device and storage medium
CN110941990B (zh) 基于骨骼关键点进行人体动作评估的方法和装置
CN108922622B (zh) 一种动物健康监测方法、装置及计算机可读存储介质
CN108205655B (zh) 一种关键点预测方法、装置、电子设备及存储介质
WO2021114892A1 (fr) Procédé de reconnaissance de mouvement corporel basé sur la compréhension sémantique environnementale, appareil, dispositif et support de stockage
US11238272B2 (en) Method and apparatus for detecting face image
WO2019200749A1 (fr) Procédé, appareil, dispositif informatique et support d'enregistrement de reconnaissance faciale
EP3447679A1 (fr) Procédé et dispositif de vérification faciale in vivo
WO2019105163A1 (fr) Procédé et appareil de recherche de personne cible, dispositif, produit-programme et support
WO2020107847A1 (fr) Procédé de détection de chute sur la base des points osseux et dispositif de détection de chute associé
WO2019033571A1 (fr) Procédé de détection de point de caractéristique faciale, appareil et support de stockage
CN108229375B (zh) 用于检测人脸图像的方法和装置
WO2022105118A1 (fr) Procédé et appareil d'identification d'état de santé basés sur une image, dispositif et support de stockage
WO2019114726A1 (fr) Procédé et dispositif de reconnaissance d'image, appareil électronique et support d'informations lisible par ordinateur
JPWO2015186436A1 (ja) Image processing apparatus, image processing method, and image processing program
CN108509994B (zh) 人物图像聚类方法和装置
CN110633004B (zh) 基于人体姿态估计的交互方法、装置和系统
WO2022001106A1 (fr) Procédé et appareil de détection de points clés, dispositif électronique et support de stockage
CN109872407B (zh) 一种人脸识别方法、装置、设备及打卡方法、装置和系统
CN108388889B (zh) 用于分析人脸图像的方法和装置
WO2019056503A1 (fr) Procédé d'évaluation de surveillance de magasin, dispositif, et support d'informations
WO2019033567A1 (fr) Procédé de capture de mouvement de globe oculaire, dispositif et support d'informations
CN112149615A (zh) 人脸活体检测方法、装置、介质及电子设备
CN110738650B (zh) 一种传染病感染识别方法、终端设备及存储介质
WO2019033568A1 (fr) Procédé de saisie de mouvement labial, appareil et support d'informations

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18818021; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18818021; Country of ref document: EP; Kind code of ref document: A1)