CN110991261A - Interactive behavior recognition method and device, computer equipment and storage medium

Interactive behavior recognition method and device, computer equipment and storage medium

Info

Publication number
CN110991261A
Authority
CN
China
Prior art keywords
pedestrian
image
detected
preset
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911100457.9A
Other languages
Chinese (zh)
Inventor
余代伟
孙皓
董昱青
庄喜阳
李永翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suning Cloud Computing Co Ltd
Original Assignee
Suning Cloud Computing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suning Cloud Computing Co Ltd filed Critical Suning Cloud Computing Co Ltd
Priority to CN201911100457.9A priority Critical patent/CN110991261A/en
Publication of CN110991261A publication Critical patent/CN110991261A/en
Priority to CA3160731A priority patent/CA3160731A1/en
Priority to PCT/CN2020/097002 priority patent/WO2021093329A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to an interactive behavior recognition method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring an image to be detected; inputting the image to be detected into a preset multitask model to obtain the key points and detection frame of each pedestrian in the image, wherein the key points are located inside the detection frame and the multitask model performs both pedestrian detection and human body key point detection; and determining the interaction behavior information of the pedestrian and the corresponding article frame according to the key points of the pedestrian and a preset article frame image corresponding to the image to be detected. With this method, interaction behavior between pedestrians and articles can be recognized effectively.

Description

Interactive behavior recognition method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to an interactive behavior recognition method, an interactive behavior recognition apparatus, a computer device, and a storage medium.
Background
With the advent of the Internet era, the retail industry has entered a stage of rapid development. The retail of the future is smart retail: technologies such as the Internet and big data are used to sense consumers' habits so as to provide diversified, personalized products and services, and recognizing human-goods interaction behavior is a problem that the smart-retail field must solve.
Traditional human-goods interaction recognition methods generally rely on acoustic, optical, electrical, and other sensor devices; they require high hardware costs, are limited in applicable scenarios, and cannot be deployed at scale in complex environments such as shopping malls and supermarkets. Surveillance equipment in malls and supermarkets generates a large amount of video data every day, and much information about human-goods interaction can be obtained by analyzing the surveillance video, but doing so manually consumes enormous manpower and is inefficient.
Disclosure of Invention
In view of the above, in order to solve the technical problems described, it is necessary to provide an interactive behavior recognition method, apparatus, computer device, and storage medium capable of efficiently recognizing interaction behavior between a human body and an article.
An interactive behavior recognition method, the method comprising:
acquiring an image to be detected;
inputting an image to be detected into a preset multitask model to obtain key points and a detection frame of pedestrians in the image to be detected, wherein the key points are located in the detection frame, and the multitask model is used for detecting the pedestrians and the key points of the human body;
and determining the interactive behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected.
In one embodiment, the preset article frame image is a preset article frame mask image, and determining the interaction behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected includes:
selecting a wrist key point from the key points of the pedestrians;
obtaining a hand area of the pedestrian according to the wrist key point and a preset radius threshold;
when the intersection area of the image of the hand area and the preset article frame mask image is larger than a preset area threshold value, judging that the pedestrian and the corresponding article frame have an interactive behavior;
and when the intersection area of the image of the hand area and the preset article frame mask image is smaller than or equal to the area threshold value, judging that no interaction occurs between the pedestrian and the corresponding article frame.
In one embodiment, the method further comprises:
selecting any point in a detection frame of the pedestrian as a positioning point, and setting the position coordinate of the positioning point in the image to be detected as a first position coordinate of the pedestrian;
mapping the first position coordinate of the pedestrian to a world coordinate system according to a preset coordinate mapping relation to obtain a second position coordinate of the pedestrian, wherein the second position coordinate is the position coordinate of the pedestrian in the world coordinate system;
and acquiring second position coordinates of the pedestrian at each time point in a preset time period to obtain a route map of the pedestrian in the preset time period.
In one embodiment, the method further comprises:
obtaining orientation information of the pedestrian according to the key points of the pedestrian;
and obtaining the article frame area that the pedestrian faces according to the orientation information of the pedestrian and the preset article frame image.
In one embodiment, obtaining the orientation information of the pedestrian according to the key points of the pedestrian comprises:
selecting shoulder key points from the key points of the pedestrians, wherein the shoulder key points comprise a left shoulder key point and a right shoulder key point;
calculating the difference between the coordinates of the left shoulder key point and the coordinates of the right shoulder key point to obtain a shoulder vector;
calculating an included angle between the shoulder vector and a preset unit vector by adopting an inverse cosine function, wherein the preset unit vector is a unit vector in the negative direction of the y axis of a coordinate system of the image to be detected;
summing the radian value of the included angle and pi to obtain the orientation angle of the pedestrian;
when the orientation angle is larger than or equal to pi and smaller than 1.5 pi, judging that the pedestrian faces one side of the image to be detected;
and when the orientation angle is larger than 1.5 pi and smaller than or equal to 2 pi, judging that the pedestrian faces the other side of the image to be detected.
In one embodiment, acquiring an image to be detected includes:
acquiring a monitoring video of a target place;
and screening out an image with pedestrians from the monitoring video to be used as an image to be detected.
In one embodiment, the method further comprises:
acquiring a sample image;
carrying out key point labeling and detection frame labeling on pedestrians in the sample image to obtain labeled image data;
inputting the labeled image data into a neural network model for training to obtain a multi-task model; preferably, the neural network model adopts a ResNet-101+FPN network model.
A human-cargo interaction behavior recognition apparatus, the apparatus comprising:
the acquisition module is used for acquiring an image to be detected;
the detection module is used for inputting the image to be detected into a preset multitask model to obtain key points and a detection frame of the pedestrian in the image to be detected, the key points are located in the detection frame, and the multitask model is used for pedestrian detection and human body key point detection;
and the identification module is used for determining the interaction behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring an image to be detected;
inputting an image to be detected into a preset multitask model to obtain key points and a detection frame of pedestrians in the image to be detected, wherein the key points are located in the detection frame, and the multitask model is used for detecting the pedestrians and the key points of the human body;
and determining the interactive behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring an image to be detected;
inputting an image to be detected into a preset multitask model to obtain key points and a detection frame of pedestrians in the image to be detected, wherein the key points are located in the detection frame, and the multitask model is used for detecting the pedestrians and the key points of the human body;
and determining the interactive behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected.
According to the interactive behavior recognition method and apparatus, computer device, and storage medium, an image to be detected is obtained and input into a preset multitask model to obtain the key points and detection frames of the pedestrians in the image. Because the multitask model performs pedestrian detection and human body key point detection together, the detection frames and key points are obtained synchronously, which improves image processing efficiency. The key points are all located inside the detection frames, so erroneous key points outside the frames can be eliminated; the detection frames and key points are thus used together, improving key point labeling accuracy. Finally, the interaction behavior information of each pedestrian and the corresponding article frame is determined according to the pedestrian's key points and the preset article frame image corresponding to the image to be detected, so interaction behavior can be recognized efficiently and recognition accuracy is improved.
Drawings
FIG. 1 is a diagram of an application environment for a method of interactive behavior recognition in one embodiment;
FIG. 2 is a flow diagram that illustrates a method for interactive behavior recognition, according to one embodiment;
FIG. 3 is a flowchart illustrating the interactive behavior determination step in one embodiment;
FIG. 4 is a flowchart illustrating an interactive behavior recognition method according to another embodiment;
FIG. 5 is a block diagram showing the structure of an interactive behavior recognition apparatus according to an embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The interactive behavior recognition method provided by the present application can be applied in the environment shown in FIG. 1, where the terminal 102 communicates with the server 104 via a network. The terminal 102 may be, but is not limited to, any image capture device; in particular, it may be existing monitoring equipment in a shopping mall, supermarket, library, or similar place. The server 104 may be implemented as an independent server or as a cluster of multiple servers.
In one embodiment, as shown in fig. 2, an interactive behavior recognition method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step 202, obtaining an image to be detected.
The image to be detected is an image containing pedestrians captured by an image acquisition device. The image acquisition device may be monitoring equipment already installed in target places such as shopping malls, supermarkets, or libraries; for example, existing cameras in the target place can be used without modification, so the deployment cost is low.
Specifically, a monitoring video is obtained through a camera, and a picture with pedestrians is screened out from the monitoring video to serve as an image to be detected.
And step 204, inputting the image to be detected into a preset multitask model to obtain key points and a detection frame of the pedestrian in the image to be detected, wherein the key points are all located inside the detection frame, and the multitask model is used for pedestrian detection and human body key point detection.
The multitask model obtains the detection frames of the pedestrians in the image to be detected through pedestrian detection while simultaneously performing human body key point detection to obtain the pedestrians' key points, so detection frames and key points are acquired synchronously. Features are shared between the different tasks, which reduces the amount of computation, lowers hardware resource usage, and shortens single-frame processing time, so images to be detected from multiple cameras can be processed at the same time, realizing parallel multi-camera processing.
Specifically, the acquired image to be detected is input into the preset multitask model, which performs pedestrian detection and human body key point detection on it. While processing the image, the multitask model can exclude key points that fall outside a detection frame, so the output key points all lie inside the detection frames; finally, the model outputs the key points and detection frames of the pedestrians in the image to be detected.
For example, the image to be detected $I \in \mathbb{R}^{H \times W \times 3}$ is input into the multitask model, and the multitask model outputs the key points

$P = \{P^i \mid i = 1, \dots, N\}$, where $P^i = \{(x^i_j, y^i_j) \mid j = 1, \dots, K\}$,

and the detection frames

$B = \{B^i \mid i = 1, \dots, N\}$, where $B^i = (x^i_1, y^i_1, x^i_2, y^i_2, \mathrm{score}^i)$.

Here, N is the number of pedestrians in the image to be detected and K is the number of key points of each pedestrian (usually K = 17); $(x^i_j, y^i_j)$ are the coordinates of the jth key point of the ith person on the image to be detected; $(x^i_1, y^i_1)$ and $(x^i_2, y^i_2)$ are the coordinates of the upper-left and lower-right corners of the detection frame of the ith person; and score is the confidence, i.e., the credibility, of the detection frame.
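To make this data layout concrete, the following is a minimal NumPy sketch, an illustration rather than code from the patent; the array shapes and the containment test are assumptions. It shows how key points falling outside a pedestrian's detection frame could be filtered out:

```python
import numpy as np

def filter_keypoints(keypoints, boxes):
    """Keep only key points that fall inside their pedestrian's detection frame.

    keypoints: (N, K, 2) array of (x, y) image coordinates, N pedestrians, K points each.
    boxes:     (N, 5) array of (x1, y1, x2, y2, score) detection frames.
    Returns a boolean mask of shape (N, K); True marks a key point inside its frame.
    """
    x, y = keypoints[..., 0], keypoints[..., 1]
    x1, y1, x2, y2 = boxes[:, 0:1], boxes[:, 1:2], boxes[:, 2:3], boxes[:, 3:4]
    return (x >= x1) & (x <= x2) & (y >= y1) & (y <= y2)

# Example: one pedestrian with K = 17 key points (hypothetical model output)
kps = np.random.rand(1, 17, 2) * 100
box = np.array([[10.0, 10.0, 90.0, 90.0, 0.98]])
valid = filter_keypoints(kps, box)  # erroneous points outside the frame map to False
```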
And step 206, determining the interaction behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected.
Here, the cameras, the layout of the target place, and the article frames are arranged in advance, and each camera is configured with a corresponding preset article frame image. Since a given image to be detected is known to be obtained by one of the cameras, images to be detected obtained by the same camera correspond to that camera, and each image to be detected therefore corresponds to the preset article frame image configured for its camera.
Specifically, some of the pedestrian's key points, such as those of a particular body part, may be selected as reference key points, and the interaction behavior between the pedestrian and the corresponding article frame is then determined from the relationship, such as the distance or the intersection area, between the reference key points and the preset article frame image.
In the interactive behavior recognition method, an image to be detected is obtained and input into a preset multitask model to obtain the key points and detection frames of the pedestrians in the image. The key points are all located inside the detection frames, and erroneous key points outside the frames can be eliminated, so the detection frames and key points are used together to improve key point labeling accuracy. The interaction behavior information of each pedestrian and the corresponding article frame is determined according to the pedestrian's key points and the preset article frame image corresponding to the image to be detected, so interaction behavior can be recognized efficiently and recognition accuracy is improved. Moreover, the method runs automatically end to end without manual intervention, greatly reducing labor cost.
In one embodiment, as shown in FIG. 3, the preset article frame image is a preset article frame mask image, which may be obtained by extracting one frame of image from the surveillance video and labeling the outline of the article frame in that image with a polygon. Determining the interaction behavior information of the pedestrian and the corresponding article frame according to the key points of the pedestrian and the preset article frame image corresponding to the image to be detected includes:
step 302, selecting a wrist key point from key points of pedestrians;
the wrist key point data comprises left wrist key point data and right wrist key point data.
304, obtaining a hand area of the pedestrian according to the wrist key points and a preset radius threshold;
Specifically, the left-hand area and the right-hand area are delimited by taking the left wrist key point and the right wrist key point as centers, respectively, and the preset radius threshold as radius, thereby obtaining an image of the left-hand area and an image of the right-hand area.
Step 306, judging whether the intersection area of the image of the hand area and the preset article frame mask image is larger than a preset area threshold value;
step 308, if yes, determining that interaction occurs between the pedestrian and the corresponding article frame;
and step 310, if not, determining that no interaction occurs between the pedestrian and the corresponding article frame.
In step 306, the hand area includes the left-hand area and the right-hand area. Specifically, when the intersection area of the image of at least one of the two hand areas with the preset article frame mask image is greater than the preset area threshold, it is determined that interaction occurs between the pedestrian and the corresponding article frame; otherwise, it is determined that no interaction occurs.
For example, let $H_L$ denote the hand area with the left wrist as center and R as radius, i.e., the left-hand area, and let $H_R$ denote the hand area with the right wrist as center and R as radius, i.e., the right-hand area. With $M_s$ denoting the preset article frame mask image and the preset area threshold set to 150 units of area: when $|H_R \cap M_s| > 150$, it is determined that interaction occurs between the pedestrian and the corresponding article frame, i.e., the pedestrian is shopping; when $|H_R \cap M_s| \leq 150$, it is determined that no interaction occurs, i.e., the pedestrian is not shopping.
This embodiment provides an interactive behavior recognition method that determines interaction by directly estimating the intersection area of the hand and the article frame. The approach is simple to implement, highly extensible, and fast, giving good real-time performance. It is typically used to recognize human-goods interaction in shopping malls and supermarkets, in which case the article frame is a store shelf.
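A minimal sketch of this intersection test follows; it is an illustration, not code from the patent. It assumes the article frame mask is a binary image, rasterizes the hand area as a filled circle around a wrist key point with OpenCV, and uses the 150-unit threshold from the example above:

```python
import numpy as np
import cv2

def is_shopping(wrist_xy, shelf_mask, radius=30, area_threshold=150):
    """Return True if the hand area around a wrist overlaps the shelf mask enough.

    wrist_xy:    (x, y) pixel coordinates of a wrist key point.
    shelf_mask:  (H, W) uint8 binary mask of the article frame (1 inside, 0 outside).
    radius:      preset radius threshold R, in pixels (value assumed for illustration).
    """
    hand = np.zeros_like(shelf_mask)
    cv2.circle(hand, (int(wrist_xy[0]), int(wrist_xy[1])), radius, 1, thickness=-1)
    intersection = int(np.logical_and(hand, shelf_mask).sum())  # |H ∩ M_s| in pixels
    return intersection > area_threshold

# Check both wrists; interaction occurs if either hand overlaps the shelf mask:
# interacting = is_shopping(left_wrist, mask) or is_shopping(right_wrist, mask)
```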
In one embodiment, the method further comprises:
selecting any point in the detection frame of the pedestrian as a positioning point, and setting the position coordinate of the positioning point in the image to be detected as the first position coordinate of the pedestrian;
Specifically, the center point of the detection frame is selected as the positioning point: it is convenient to select, and it indicates the position of the pedestrian more accurately.
Mapping the first position coordinate of the pedestrian to a world coordinate system according to a preset coordinate mapping relation to obtain a second position coordinate of the pedestrian, wherein the second position coordinate is the position coordinate of the pedestrian in the world coordinate system;
here, the preset coordinate mapping relationship is a coordinate mapping relationship between a coordinate system of the image to be detected and a world coordinate system; specifically, the position of the image acquisition device in the world coordinate system is calibrated in advance, and the coordinate position of the image to be detected acquired by the image acquisition device in the world coordinate system can be obtained through the position information of the image acquisition device, so that the coordinate mapping relation between the coordinate system of the image to be detected and the world coordinate system is deduced.
And acquiring second position coordinates of the pedestrian at each time point in a preset time period to obtain a route map of the pedestrian in the preset time period.
The preset time period is the time from when the pedestrian enters the target place to when the pedestrian leaves it, so the route map of the pedestrian in the preset time period is the route the pedestrian takes from entering the target place to leaving it, i.e., the pedestrian's movement-line map. Combined with the layout map of the target place, the movement line of a pedestrian entering the target place can be drawn on the layout map.
In this embodiment, an interactive behavior recognition method is provided, where a route map of a pedestrian in a preset time period can be obtained according to a detection frame of the pedestrian and a preset coordinate mapping relationship, so as to record a movement track of the pedestrian in a target place within a preset time.
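The patent does not specify the form of the preset coordinate mapping relation; assuming it is a ground-plane homography derived from the calibrated camera position, a sketch of the mapping step might look like the following (the matrix values are placeholders):

```python
import numpy as np
import cv2

# Hypothetical 3x3 homography from image coordinates to world (floor-plan)
# coordinates, derived in advance from the calibrated camera position.
H = np.array([[0.02, 0.0, -1.5],
              [0.0, 0.02, -2.0],
              [0.0, 0.0, 1.0]])

def to_world(first_position_xy):
    """Map a pedestrian's first position coordinate (image) to the second (world)."""
    pt = np.array([[first_position_xy]], dtype=np.float32)  # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]

# Route map: collect the world coordinate at each time point in the period, e.g.
# route = [to_world(center_of_detection_frame(t)) for t in time_points]
```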
In one embodiment, the method further comprises:
obtaining orientation information of the pedestrian according to the key points of the pedestrian;
specifically, selecting shoulder key points from key points of pedestrians;
For example, the shoulder key points include the left shoulder key point $p_{ls} = (x_{ls}, y_{ls})$ and the right shoulder key point $p_{rs} = (x_{rs}, y_{rs})$. Calculating the difference between the coordinates of the left and right shoulder key points gives the shoulder vector $\vec{v} = p_{ls} - p_{rs} = (x_{ls} - x_{rs},\, y_{ls} - y_{rs})$. The included angle between the shoulder vector and a preset unit vector is calculated with an inverse cosine function, the preset unit vector being the unit vector $\vec{u} = (0, -1)$ in the negative direction of the y axis of the coordinate system of the image to be detected. Summing the radian value of the included angle with $\pi$ gives the orientation angle of the pedestrian:

$\theta = \arccos\left(\dfrac{\vec{v} \cdot \vec{u}}{\lVert \vec{v} \rVert}\right) + \pi$
when the orientation angle is larger than or equal to pi and smaller than 1.5 pi, judging that the pedestrian faces one side of the image to be detected; and when the orientation angle is larger than 1.5 pi and smaller than or equal to 2 pi, judging that the pedestrian faces the other side of the image to be detected.
The article frame area that the pedestrian faces is obtained according to the orientation information of the pedestrian and the preset article frame image. Specifically, from the orientation of the pedestrian in the image to be detected and the preset article frame image corresponding to that image, the article frame area the pedestrian is facing can be obtained.
This embodiment provides an interactive behavior recognition method that computes the orientation of the pedestrian from the shoulder key point data, which makes the orientation result more robust; the shelf area a customer is paying attention to can thus be determined, providing a reference for goods placement in malls and supermarkets.
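By way of illustration only, here is a NumPy sketch of the orientation computation described above, assuming the shoulder key points are given as (x, y) pixel coordinates:

```python
import numpy as np

def orientation_angle(left_shoulder, right_shoulder):
    """Orientation angle from the shoulder key points, following the formula above.

    The included angle between the shoulder vector and the unit vector (0, -1)
    (negative y axis of the image coordinate system) is taken with arccos,
    then pi is added to obtain the orientation angle.
    """
    v = np.asarray(left_shoulder, float) - np.asarray(right_shoulder, float)
    u = np.array([0.0, -1.0])
    theta = np.arccos(np.dot(v, u) / np.linalg.norm(v))  # included angle, in [0, pi]
    return theta + np.pi                                 # orientation angle, in [pi, 2*pi]

angle = orientation_angle((120, 90), (100, 80))          # hypothetical key points
side = "one side" if np.pi <= angle < 1.5 * np.pi else "the other side"
```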
In one embodiment, acquiring an image to be detected includes:
acquiring a monitoring video of a target place;
Specifically, the positions of the image acquisition devices installed in the target place, such as a mall or supermarket, are calibrated, a corresponding shelf mask image is configured for each image acquisition device, and the surveillance video captured by the image acquisition devices is obtained; the image acquisition device is generally a camera.
And screening out an image with pedestrians from the monitoring video to be used as an image to be detected.
In this embodiment, an interactive behavior recognition method is provided, in which existing monitoring equipment in a target place, such as a camera in a mall or a supermarket, is directly used, and thus, the site does not need to be modified, the deployment cost is low, and the method is easy to popularize.
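A sketch of this screening step, assuming a hypothetical pedestrian detector callable `detect_pedestrians` that returns the detection frames found in a video frame; OpenCV is used for reading the video, and the sampling stride is an assumption:

```python
import cv2

def screen_frames(video_path, detect_pedestrians, stride=25):
    """Yield frames of the surveillance video that contain pedestrians.

    detect_pedestrians: hypothetical callable returning a list of detection
    frames for an image; any sampled frame with at least one detection is kept
    as an image to be detected.
    """
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0 and detect_pedestrians(frame):
            yield frame
        index += 1
    cap.release()
```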
In one embodiment, the method further comprises:
acquiring a sample image; specifically, a monitoring video of a supermarket or a shopping mall is obtained, and a large number of images with pedestrians are screened out from the monitoring video to serve as sample images.
Carrying out key point labeling and detection frame labeling on pedestrians in the sample image to obtain labeled image data; specifically, labeling a pedestrian detection frame in the sample image, labeling positions of key points of the pedestrian such as eyes, nose, ears, shoulders, elbows, wrists, hips, knees, ankles and the like, and finally obtaining labeled image data.
Inputting the labeled image data into a neural network model for training yields the multitask model. Preferably, the neural network model adopts a ResNet-101+FPN network model, a one-stage, bottom-up multitask network, which saves processing time compared with similar multi-stage algorithms; and compared with top-down algorithms, its processing time does not grow with the number of people in the picture.
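As a rough illustration of this training setup, and explicitly not the patent's model (the patent prefers a one-stage, bottom-up ResNet-101+FPN network), the sketch below trains torchvision's Keypoint R-CNN, a related multitask detector that also learns detection frames and human key points jointly; `data_loader` is an assumed loader over the labeled image data:

```python
import torch
import torchvision

# Stand-in multitask model: torchvision's Keypoint R-CNN (ResNet-50+FPN, top-down).
# Used here only to illustrate joint training on labeled boxes and key points.
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(
    weights=None, num_classes=2, num_keypoints=17)  # background + pedestrian

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# data_loader (assumed): a torch.utils.data.DataLoader yielding (images, targets),
# where each target dict holds {"boxes": (M, 4), "labels": (M,),
# "keypoints": (M, 17, 3)} for the labeled pedestrians in one image.
for images, targets in data_loader:
    loss_dict = model(images, targets)  # detection + key point losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```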
In this embodiment, an interactive behavior recognition method is provided, in which a multitask model is established and trained, an image to be detected is processed, and both training and optimization of the model are completed in the background, so that operations in places such as a mall, a supermarket, a library and the like are not affected; the model generalization capability is strong, and the deployment can be conveniently and rapidly carried out; the characteristics of different tasks of the multi-task model can be shared, the calculation amount is reduced, the hardware resource occupation is reduced, the single-frame image processing time is shortened, and the parallel processing of multiple cameras is realized.
In one embodiment, as shown in FIG. 4, the method includes the steps of:
step 402, acquiring a monitoring video of a target place;
step 404, screening an image with pedestrians from the monitoring video as an image to be detected;
step 406, inputting the image to be detected into a preset multitask model to obtain key points and a detection frame of the pedestrian in the image to be detected, wherein the key points are located inside the detection frame;
step 408, determining interaction behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and a preset article frame image corresponding to the image to be detected;
step 410, obtaining a route map of the pedestrian in a preset time period according to the detection frame of the pedestrian and a preset coordinate mapping relation;
and step 412, obtaining the orientation information of the pedestrian according to the key points of the pedestrian.
It should be understood that although the steps in the flowcharts of FIGS. 2-4 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-4 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and the order of execution of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, an interactive behavior recognition apparatus is provided, which includes an obtaining module 502, a detecting module 504, and a recognition module 506, wherein:
an obtaining module 502, configured to obtain an image to be detected;
the detection module 504 is configured to input the image to be detected into a preset multitask model, so as to obtain key points and a detection frame of a pedestrian in the image to be detected, where the key points are located inside the detection frame, and the multitask model is used for pedestrian detection and human body key point detection;
and the identification module 506 is configured to determine the interaction behavior information of the pedestrian and the corresponding article frame according to the key points of the pedestrian and the preset article frame image corresponding to the image to be detected.
In one embodiment, the preset article frame image is a preset article frame mask image, and the identification module 506 includes:
the first key point selecting unit is used for selecting a wrist key point from key points of pedestrians;
the hand area unit is used for obtaining a hand area of the pedestrian according to the wrist key point and a preset radius threshold;
the interaction judging unit is used for judging that interaction occurs between the pedestrian and the corresponding article frame when the intersection area of the hand area image and the preset article frame mask image is larger than a preset area threshold value; and judging that no interaction occurs between the pedestrian and the corresponding article frame when the intersection area of the hand area image and the preset article frame mask image is smaller than or equal to the area threshold value.
In one embodiment, the apparatus further comprises:
the first position coordinate module is used for selecting any point in a detection frame of the pedestrian as a positioning point and setting the position coordinate of the positioning point in the image to be detected as the first position coordinate of the pedestrian;
the second position coordinate module is used for mapping the first position coordinate of the pedestrian to the world coordinate system according to the preset coordinate mapping relation to obtain a second position coordinate of the pedestrian, and the second position coordinate is the position coordinate of the pedestrian in the world coordinate system;
and the route map module is used for acquiring second position coordinates of the pedestrian at each time point in a preset time period to obtain a route map of the pedestrian in the preset time period.
In one embodiment, the apparatus further comprises:
the orientation information module is used for obtaining orientation information of the pedestrian according to the key points of the pedestrian;
and the orientation area module is used for obtaining the article frame area that the pedestrian faces according to the orientation information of the pedestrian and the preset article frame image.
In one embodiment, the orientation information module includes:
the second key point selecting unit is used for selecting shoulder key points from the key points of the pedestrians, wherein the shoulder key points comprise a left shoulder key point and a right shoulder key point;
the orientation angle calculation unit is used for calculating the difference between the coordinates of the left shoulder key point and the coordinates of the right shoulder key point to obtain a shoulder vector; calculating an included angle between the shoulder vector and a preset unit vector by adopting an inverse cosine function, wherein the preset unit vector is a unit vector in the negative direction of the y axis of a coordinate system of the image to be detected; summing the radian value of the included angle and pi to obtain the orientation angle of the pedestrian;
the orientation judging unit is used for judging that the pedestrian faces one side of the image to be detected when the orientation angle is larger than or equal to pi and smaller than 1.5 pi; and when the orientation angle is larger than 1.5 pi and smaller than or equal to 2 pi, judging that the pedestrian faces the other side of the image to be detected.
In one embodiment, the obtaining module 502 includes:
the video acquisition unit is used for acquiring a monitoring video of a target place;
and the image acquisition unit is used for screening out an image with pedestrians from the monitoring video to serve as an image to be detected.
In one embodiment, the apparatus further comprises:
the sample acquisition module is used for acquiring a sample image;
the sample data module is used for carrying out key point labeling and detection frame labeling on pedestrians in the sample image to obtain labeled image data;
the model training module is used for inputting the labeled image data into the neural network model for training to obtain a multi-task model; preferably, the neural network model adopts a ResNet-101+FPN network model.
For the specific definition of the interactive behavior recognition apparatus, reference may be made to the definition of the interactive behavior recognition method above, which is not repeated here. Each module in the above interactive behavior recognition apparatus may be implemented wholly or partially by software, hardware, or a combination of the two. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an interactive behavior recognition method.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring an image to be detected; inputting an image to be detected into a preset multitask model to obtain key points and a detection frame of pedestrians in the image to be detected, wherein the key points are located in the detection frame, and the multitask model is used for detecting the pedestrians and the key points of the human body; and determining the interactive behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected.
In one embodiment, the processor, when executing the computer program, further performs the following steps: the preset article frame image is a preset article frame mask image, and the step of determining the interaction behavior information of the pedestrian and the corresponding article frame according to the key points of the pedestrian and the preset article frame image corresponding to the image to be detected includes: selecting a wrist key point from the key points of the pedestrian; obtaining a hand area of the pedestrian according to the wrist key point and a preset radius threshold; when the intersection area of the hand area image and the preset article frame mask image is greater than a preset area threshold, determining that interaction occurs between the pedestrian and the corresponding article frame; and when the intersection area of the hand area image and the preset article frame mask image is less than or equal to the area threshold, determining that no interaction occurs between the pedestrian and the corresponding article frame.
In one embodiment, the processor, when executing the computer program, further performs the steps of: selecting any point in a detection frame of the pedestrian as a positioning point, and setting the position coordinate of the positioning point in the image to be detected as a first position coordinate of the pedestrian; mapping the first position coordinate of the pedestrian to a world coordinate system according to a preset coordinate mapping relation to obtain a second position coordinate of the pedestrian, wherein the second position coordinate is the position coordinate of the pedestrian in the world coordinate system; and acquiring second position coordinates of the pedestrian at each time point in a preset time period to obtain a route map of the pedestrian in the preset time period.
In one embodiment, the processor, when executing the computer program, further performs the steps of: obtaining orientation information of the pedestrian according to the key points of the pedestrian; and obtaining the object frame area oriented by the pedestrian according to the orientation information of the pedestrian and the preset object frame image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: obtaining the orientation information of the pedestrian according to the key points of the pedestrian, comprising the following steps: selecting shoulder key points from the key points of the pedestrians, wherein the shoulder key points comprise a left shoulder key point and a right shoulder key point; calculating the difference between the coordinates of the left shoulder key point and the coordinates of the right shoulder key point to obtain a shoulder vector; calculating an included angle between the shoulder vector and a preset unit vector by adopting an inverse cosine function, wherein the preset unit vector is a unit vector in the negative direction of the y axis of a coordinate system of the image to be detected; summing the radian value of the included angle and pi to obtain the orientation angle of the pedestrian; when the orientation angle is larger than or equal to pi and smaller than 1.5 pi, judging that the pedestrian faces one side of the image to be detected; and when the orientation angle is larger than 1.5 pi and smaller than or equal to 2 pi, judging that the pedestrian faces the other side of the image to be detected.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring an image to be detected, comprising: acquiring a monitoring video of a target place; and screening out an image with pedestrians from the monitoring video to be used as an image to be detected.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a sample image; carrying out key point labeling and detection frame labeling on pedestrians in the sample image to obtain labeled image data; inputting the labeled image data into a neural network model for training to obtain a multi-task model; preferably, the neural network model adopts a ResNet-101+FPN network model.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring an image to be detected; inputting an image to be detected into a preset multitask model to obtain key points and a detection frame of pedestrians in the image to be detected, wherein the key points are located in the detection frame, and the multitask model is used for detecting the pedestrians and the key points of the human body; and determining the interactive behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected.
In one embodiment, the computer program, when executed by the processor, further performs the following steps: the preset article frame image is a preset article frame mask image, and the step of determining the interaction behavior information of the pedestrian and the corresponding article frame according to the key points of the pedestrian and the preset article frame image corresponding to the image to be detected includes: selecting a wrist key point from the key points of the pedestrian; obtaining a hand area of the pedestrian according to the wrist key point and a preset radius threshold; when the intersection area of the hand area image and the preset article frame mask image is greater than a preset area threshold, determining that interaction occurs between the pedestrian and the corresponding article frame; and when the intersection area of the hand area image and the preset article frame mask image is less than or equal to the area threshold, determining that no interaction occurs between the pedestrian and the corresponding article frame.
In one embodiment, the computer program when executed by the processor further performs the steps of: selecting any point in a detection frame of the pedestrian as a positioning point, and setting the position coordinate of the positioning point in the image to be detected as a first position coordinate of the pedestrian; mapping the first position coordinate of the pedestrian to a world coordinate system according to a preset coordinate mapping relation to obtain a second position coordinate of the pedestrian, wherein the second position coordinate is the position coordinate of the pedestrian in the world coordinate system; and acquiring second position coordinates of the pedestrian at each time point in a preset time period to obtain a route map of the pedestrian in the preset time period.
In one embodiment, the computer program when executed by the processor further performs the steps of: obtaining orientation information of the pedestrian according to the key points of the pedestrian; and obtaining the object frame area oriented by the pedestrian according to the orientation information of the pedestrian and the preset object frame image.
In one embodiment, the computer program when executed by the processor further performs the steps of: obtaining the orientation information of the pedestrian according to the key points of the pedestrian, comprising the following steps: selecting shoulder key points from the key points of the pedestrians, wherein the shoulder key points comprise a left shoulder key point and a right shoulder key point; calculating the difference between the coordinates of the left shoulder key point and the coordinates of the right shoulder key point to obtain a shoulder vector; calculating an included angle between the shoulder vector and a preset unit vector by adopting an inverse cosine function, wherein the preset unit vector is a unit vector in the negative direction of the y axis of a coordinate system of the image to be detected; summing the radian value of the included angle and pi to obtain the orientation angle of the pedestrian; when the orientation angle is larger than or equal to pi and smaller than 1.5 pi, judging that the pedestrian faces one side of the image to be detected; and when the orientation angle is larger than 1.5 pi and smaller than or equal to 2 pi, judging that the pedestrian faces the other side of the image to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring an image to be detected, comprising: acquiring a monitoring video of a target place; and screening out an image with pedestrians from the monitoring video to be used as an image to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a sample image; carrying out key point labeling and detection frame labeling on pedestrians in the sample image to obtain labeled image data; inputting the labeled image data into a neural network model for training to obtain a multi-task model; preferably, the neural network model adopts a ResNet-101+FPN network model.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An interactive behavior recognition method, the method comprising:
acquiring an image to be detected;
inputting the image to be detected into a preset multitask model to obtain key points and a detection frame of the pedestrian in the image to be detected, wherein the key points are located in the detection frame, and the multitask model is used for pedestrian detection and human body key point detection;
and determining the interactive behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected.
2. The method according to claim 1, wherein the preset article frame image is a preset article frame mask image, and the determining of the interaction behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected comprises:
selecting a wrist key point from the key points of the pedestrian;
obtaining a hand area of the pedestrian according to the wrist key point and a preset radius threshold;
when the intersection area of the image of the hand area and the preset article frame mask image is larger than a preset area threshold value, judging that the pedestrian and the corresponding article frame have an interactive behavior;
when the intersection area of the image of the hand area and the preset article frame mask image is smaller than or equal to the area threshold value, judging that no interaction occurs between the pedestrian and the corresponding article frame.
3. The method of claim 1, further comprising:
selecting any point in the detection frame of the pedestrian as a positioning point, and setting the position coordinate of the positioning point in the image to be detected as the first position coordinate of the pedestrian;
mapping the first position coordinate of the pedestrian to a world coordinate system according to a preset coordinate mapping relation to obtain a second position coordinate of the pedestrian, wherein the second position coordinate is the position coordinate of the pedestrian in the world coordinate system;
and acquiring second position coordinates of the pedestrian at each time point in a preset time period to obtain a route map of the pedestrian in the preset time period.
4. The method of claim 1, further comprising:
obtaining orientation information of the pedestrian according to the key points of the pedestrian;
and obtaining the article frame area that the pedestrian faces according to the orientation information of the pedestrian and the preset article frame image.
5. The method according to claim 4, wherein the obtaining orientation information of the pedestrian according to the key point of the pedestrian comprises:
selecting shoulder key points from the key points of the pedestrian, wherein the shoulder key points comprise a left shoulder key point and a right shoulder key point;
calculating the difference between the coordinates of the left shoulder key point and the coordinates of the right shoulder key point to obtain a shoulder vector;
calculating an included angle between the shoulder vector and a preset unit vector by adopting an inverse cosine function, wherein the preset unit vector is a unit vector in the negative direction of the y axis of the coordinate system of the image to be detected;
summing the radian value of the included angle with pi to obtain the orientation angle of the pedestrian;
when the orientation angle is larger than or equal to pi and smaller than 1.5 pi, judging that the pedestrian faces one side of the image to be detected;
and when the orientation angle is larger than 1.5 pi and smaller than or equal to 2 pi, judging that the pedestrian faces the other side of the image to be detected.
6. The method according to any one of claims 1 to 5, wherein the acquiring the image to be detected comprises:
acquiring a monitoring video of a target place;
and screening out an image with pedestrians from the monitoring video to serve as the image to be detected.
7. The method according to any one of claims 1 to 5, further comprising:
acquiring a sample image;
carrying out key point labeling and detection frame labeling on the pedestrians in the sample image to obtain labeled image data;
inputting the labeled image data into a neural network model for training to obtain the multitask model; preferably, the neural network model adopts a ResNet-101+FPN network model.
8. An interactive behavior recognition apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring an image to be detected;
the detection module is used for inputting the image to be detected into a preset multitask model to obtain key points and a detection frame of the pedestrian in the image to be detected, wherein the key points are located in the detection frame, and the multitask model is used for pedestrian detection and human body key point detection;
and the identification module is used for determining the interaction behavior information of the pedestrian and the corresponding article frame according to the key point of the pedestrian and the preset article frame image corresponding to the image to be detected.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201911100457.9A 2019-11-12 2019-11-12 Interactive behavior recognition method and device, computer equipment and storage medium Pending CN110991261A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201911100457.9A CN110991261A (en) 2019-11-12 2019-11-12 Interactive behavior recognition method and device, computer equipment and storage medium
CA3160731A CA3160731A1 (en) 2019-11-12 2020-06-19 Interactive behavior recognizing method, device, computer equipment and storage medium
PCT/CN2020/097002 WO2021093329A1 (en) 2019-11-12 2020-06-19 Interactive behavior identification method and apparatus, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN110991261A true CN110991261A (en) 2020-04-10

Family

ID=70083879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911100457.9A Pending CN110991261A (en) 2019-11-12 2019-11-12 Interactive behavior recognition method and device, computer equipment and storage medium

Country Status (3)

Country Link
CN (1) CN110991261A (en)
CA (1) CA3160731A1 (en)
WO (1) WO2021093329A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116862980B (en) * 2023-06-12 2024-01-23 上海玉贲智能科技有限公司 Target detection frame position optimization correction method, system, medium and terminal for image edge

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10853651B2 (en) * 2016-10-26 2020-12-01 Htc Corporation Virtual reality interaction method, apparatus and system
CN109934075A (en) * 2017-12-19 2019-06-25 杭州海康威视数字技术股份有限公司 Accident detection method, apparatus, system and electronic equipment
CN110991261A (en) * 2019-11-12 2020-04-10 苏宁云计算有限公司 Interactive behavior recognition method and device, computer equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105245828A (en) * 2015-09-02 2016-01-13 北京旷视科技有限公司 Item analysis method and equipment
CN106709422A (en) * 2016-11-16 2017-05-24 南京亿猫信息技术有限公司 Supermarket shopping cart hand identification method and identification system thereof
CN109993067A (en) * 2019-03-07 2019-07-09 北京旷视科技有限公司 Facial key point extracting method, device, computer equipment and storage medium

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021093329A1 (en) * 2019-11-12 2021-05-20 苏宁易购集团股份有限公司 Interactive behavior identification method and apparatus, computer device and storage medium
CN113642361B (en) * 2020-05-11 2024-01-23 杭州萤石软件有限公司 Fall behavior detection method and equipment
CN113642361A (en) * 2020-05-11 2021-11-12 杭州萤石软件有限公司 Method and equipment for detecting falling behavior
CN112307871A (en) * 2020-05-29 2021-02-02 北京沃东天骏信息技术有限公司 Information acquisition method and device, attention detection method, device and system
CN111611970A (en) * 2020-06-01 2020-09-01 城云科技(中国)有限公司 Urban management monitoring video-based disposable garbage behavior detection method
CN111611970B (en) * 2020-06-01 2023-08-22 城云科技(中国)有限公司 Urban management monitoring video-based random garbage throwing behavior detection method
CN111798341A (en) * 2020-06-30 2020-10-20 深圳市幸福人居建筑科技有限公司 Green property management method, system computer equipment and storage medium thereof
CN111783724A (en) * 2020-07-14 2020-10-16 上海依图网络科技有限公司 Target object identification method and device
CN111783724B (en) * 2020-07-14 2024-03-26 上海依图网络科技有限公司 Target object identification method and device
CN112084984A (en) * 2020-09-15 2020-12-15 山东鲁能软件技术有限公司 Escalator action detection method based on improved Mask RCNN
CN112016528A (en) * 2020-10-20 2020-12-01 成都睿沿科技有限公司 Behavior recognition method and device, electronic equipment and readable storage medium
CN112016528B (en) * 2020-10-20 2021-07-20 成都睿沿科技有限公司 Behavior recognition method and device, electronic equipment and readable storage medium
CN112528850A (en) * 2020-12-11 2021-03-19 北京百度网讯科技有限公司 Human body recognition method, device, equipment and storage medium
CN112528850B (en) * 2020-12-11 2024-06-04 北京百度网讯科技有限公司 Human body identification method, device, equipment and storage medium
CN113377192A (en) * 2021-05-20 2021-09-10 广州紫为云科技有限公司 Motion sensing game tracking method and device based on deep learning

Also Published As

Publication number Publication date
CA3160731A1 (en) 2021-05-20
WO2021093329A1 (en) 2021-05-20

Similar Documents

Publication Publication Date Title
CN110991261A (en) Interactive behavior recognition method and device, computer equipment and storage medium
CN109961009B (en) Pedestrian detection method, system, device and storage medium based on deep learning
WO2021047232A1 (en) Interaction behavior recognition method, apparatus, computer device, and storage medium
CN107358149B (en) Human body posture detection method and device
KR102189262B1 (en) Apparatus and method for collecting traffic information using edge computing
CN107784282B (en) Object attribute identification method, device and system
CN107808111B (en) Method and apparatus for pedestrian detection and attitude estimation
CN111062239A (en) Human body target detection method and device, computer equipment and storage medium
CN108447061B (en) Commodity information processing method and device, computer equipment and storage medium
CN110245611B (en) Image recognition method and device, computer equipment and storage medium
CN109522790A (en) Human body attribute recognition approach, device, storage medium and electronic equipment
JP7192143B2 (en) Method and system for object tracking using online learning
WO2019033567A1 (en) Method for capturing eyeball movement, device and storage medium
CN110796472A (en) Information pushing method and device, computer readable storage medium and computer equipment
CN110930434A (en) Target object tracking method and device, storage medium and computer equipment
CN113420682A (en) Target detection method and device in vehicle-road cooperation and road side equipment
CN112183307A (en) Text recognition method, computer device, and storage medium
CN112307864A (en) Method and device for determining target object and man-machine interaction system
CN111428743B (en) Commodity identification method, commodity processing device and electronic equipment
US20220300774A1 (en) Methods, apparatuses, devices and storage media for detecting correlated objects involved in image
CN111353429A (en) Interest degree method and system based on eyeball turning
CN108898067B (en) Method and device for determining association degree of person and object and computer-readable storage medium
CN114360182B (en) Intelligent alarm method, device, equipment and storage medium
JP2021089778A (en) Information processing apparatus, information processing method, and program
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200410