CN107871111B - Behavior analysis method and system

Info

Publication number: CN107871111B (application number CN201610860255.4A)
Authority: CN (China)
Prior art keywords: behavior, target, video, targets, data
Legal status: Active (the status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN107871111A
Inventors: 常江龙, 冯玉玺, 叶进进, 杨现
Current Assignee: SuningCom Co ltd
Original Assignee: SuningCom Co ltd
Application filed by SuningCom Co ltd
Priority to CN201610860255.4A
Publication of CN107871111A
Application granted
Publication of CN107871111B

Classifications

    • G06Q30/0201: Market modelling; market analysis; collecting market data
    • G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06V20/40: Scenes; scene-specific elements in video content
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G06T2207/30196: Human being; person
    • G06T2207/30232: Surveillance
    • G06T2207/30241: Trajectory


Abstract

The embodiment of the invention discloses a behavior analysis method and a behavior analysis system, which relate to the technical field of intelligent analysis and can reduce errors in analysis results at a low cost. The method comprises the following steps: acquiring video data shot by shooting equipment arranged in a designated space; identifying targets in the video according to the acquired video data, extracting tracking results of the targets in the video, and obtaining behavior data of the targets in the video in each behavior segment according to the tracking results; screening a first type of target and a second type of target from the targets in the video according to the behavior data of the targets in the video in each behavior segment, and performing deduplication processing on the screened first type of targets; and obtaining global behavior data of the deduplicated first type of targets according to the behavior data of the deduplicated first type of targets in each behavior segment. The invention is suitable for acquiring personnel behaviors in offline scenes.

Description

Behavior analysis method and system
Technical Field
The invention relates to the technical field of intelligent analysis, in particular to a behavior analysis method and system.
Background
With the development of mobile communication and internet technology, online trading/shopping has become a major consumption mode, and large retail enterprises have begun to combine the online shopping platforms they operate with big-data analysis systems, online marketing systems and other systems, collecting customer behavior data and operation history data in real time and feeding the collected data into subsequent marketing processes, so as to improve the efficiency of online trading/shopping and the accuracy of marketing. Compared with online shopping platforms, however, customer behavior data in traditional offline business is very difficult to obtain.
In current offline commercial places, the common method for acquiring customer behavior data is mainly to provide free wifi hotspots for customers and to track their browsing tracks based on wifi probe technology. In practical applications, however, the positioning and tracking precision of wifi probes is not high; the behavior data obtained is limited to the customer's browsing track and cannot accurately reflect the details of customer behavior.
Among the common acquisition modes of customer behavior data, more detailed data can be obtained through computer vision and video analysis technologies, for example: analyzing the approximate physical characteristics of a customer through monitoring cameras arranged in a storefront to estimate the customer's approximate age and sex, or performing statistical analysis of the people flow in front of certain shelves by means of head recognition. However, the accuracy of such analysis schemes is limited by the hardware configuration and existing analysis techniques, and the error of an individual analysis result is large, so accurate data can be obtained only when statistical analysis is performed on crowds and people flows as a whole; if the analysis targets a small number of customers or even individuals, the error of the analysis result is large. To reduce the analysis error, cameras and acquisition cards with higher resolution and definition generally have to be purchased so as to identify facial and physical features of the human body more finely, but the cost of such shooting equipment is extremely high. As a result, it is mostly applied in security systems of important places such as airports, high-speed railway stations and large exhibition centers, and is difficult to apply in common offline commercial places.
Disclosure of Invention
Embodiments of the present invention provide a behavior analysis method and system, which can reduce errors in analysis results at a low cost.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides a method, including: acquiring video data shot by shooting equipment arranged in a specified space;
identifying a target in a video according to the acquired video data, extracting a tracking result of the target in the video, and obtaining behavior data of the target in the video in each behavior segment according to the tracking result;
screening a first type of target and a second type of target from the targets in the video according to the behavior data of the targets in the video in each behavior segment, and performing deduplication processing on the screened first type of targets;
and obtaining global behavior data of the deduplicated first type of targets according to the behavior data of the deduplicated first type of targets in each behavior segment.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the method further includes:
determining a behavior template library associated with the designated space;
the screening of the first type of targets and the second type of targets from the targets in the video according to the behavior data of the targets in the video in each behavior segment comprises the following steps:
respectively reading the behavior templates of the first class of targets and the second class of targets from a behavior template library associated with the specified space;
and screening the first type of objects and the second type of objects from the objects in the video according to the read behavior template.
With reference to the first aspect or the first possible implementation manner of the first aspect, in a second possible implementation manner, the identifying a target in a video according to the acquired video data includes:
according to the acquired video data, distinguishing a background area and a motion area from a video;
screening a human body motion area from the distinguished motion areas to serve as a target obtained through identification;
tracking the identified targets to obtain motion trails, and recording the tracking results of the obtained motion trails, wherein a tracking result comprises a behavior target, a behavior segment and behavior content: the behavior target is a motion subject among the identified targets, the behavior segment is a time segment over which the behavior target is completely tracked, and the behavior content is the continuous motion process of the motion subject.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner, the extracting a tracking result of the target in the video, and obtaining behavior data of the target in the video in each behavior segment according to the tracking result includes:
segmenting the behavior segments of the target in the video according to the overall speed and the motion vectors of the target in the video, to obtain sub-behavior segments, wherein a sub-behavior segment is a time segment in which the behavior content of the behavior target is one of: fast walking, completely stationary, or stationary in place but with torso activity;
for each sub-behavior segment: extracting the motion vectors of the image frames in the sub-behavior segment, and sampling and fusing the motion vectors with the corresponding image frames to obtain a data representation of each image frame; and fusing the data representations of the image frames to obtain the behavior data of the sub-behavior segment.
With reference to the first possible implementation manner of the first aspect, in a fourth possible implementation manner, the screening a first class of targets and a second class of targets from the targets in the video according to behavior data of the targets in each behavior segment in the video includes:
determining a behavior discrimination result of each behavior segment according to the behavior data and the behavior analysis model of each behavior segment of the target in the video;
and screening the first class of targets and the second class of targets from the targets in the video according to the behavior discrimination result of each behavior segment and the matching degree of the behavior templates of the first class of targets and the second class of targets.
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner, the method further includes:
respectively obtaining global behavior data of each second type of target according to the behavior data of each second type of target in each behavior segment;
and correcting the obtained global behavior data of the first class of targets according to the obtained global behavior data of the second class of targets.
In a second aspect, embodiments of the present invention provide a system, comprising: the system comprises shooting equipment, an analysis server connected with the shooting equipment, terminal equipment connected with the analysis server and a database system connected with the analysis server;
the shooting equipment is arranged in the designated space and is used for shooting to obtain video data in the designated space;
the analysis server is used for acquiring the video data shot by the shooting equipment; identifying targets in the video according to the acquired video data, extracting tracking results of the targets in the video, and obtaining behavior data of the targets in the video in each behavior segment according to the tracking results; screening a first type of target and a second type of target from the targets in the video according to the behavior data of the targets in the video in each behavior segment, and performing deduplication processing on the screened first type of targets; obtaining global behavior data of the deduplicated first type of targets according to the behavior data of the deduplicated first type of targets in each behavior segment, uploading the global behavior data to the database system, and sending the global behavior data to the terminal equipment;
the terminal device is used for displaying the global behavior data;
and the database system is used for storing the global behavior data uploaded by the analysis server.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the database system is further configured to store the determined behavior template library associated with the specified space;
the analysis server is specifically configured to access the database system, and read the behavior templates of the first class of objects and the second class of objects from the behavior template library associated with the specified space; screening the first type of target and the second type of target from the targets in the video according to the read behavior template;
or the analysis server is further configured to store the determined behavior template library associated with the specified space, and specifically, to read the behavior templates of the first class of objects and the second class of objects from the behavior template library associated with the specified space, respectively; and screening the first type of targets and the second type of targets from the targets in the video according to the read behavior template.
With reference to the second aspect or the first possible implementation manner of the second aspect, in a second possible implementation manner, the system further includes a background server, through which the analysis server is connected with the terminal equipment and the database system respectively;
the analysis server is specifically used for determining a behavior judgment result of each behavior segment according to the behavior data and the behavior analysis model of each behavior segment of the target in the video; screening a second type of target from the targets in the video according to the behavior discrimination result of each behavior segment and the matching degree of the behavior template of the second type of target; sending the behavior judgment result of each behavior segment of the target in the video to the background server;
and the background server is used for screening the first class of targets from the targets in the video according to the behavior judgment result of each behavior segment and the matching degree of the behavior template of the first class of targets.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner, the analysis server is further configured to obtain global behavior data of each second type of target according to the behavior data of each second type of target in each behavior segment, and send the global behavior data to the background server;
and the background server is also used for correcting the obtained global behavior data of the first class of targets according to the obtained global behavior data of the second class of targets.
According to the behavior analysis method and system provided by the embodiments of the invention, a scheme for analyzing target behavior using intelligent video analysis technology is provided: data on various behaviors of targets are acquired from the video, and the targets are deduplicated. When analyzing motion trails, only the background and motion areas need to be distinguished, and cameras with high resolution and definition do not need to be configured, so the hardware configuration requirement is reduced while the accuracy of the acquired behavior data is improved, thereby reducing the error of analysis results at a lower cost.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 and 2 are schematic diagrams of system architectures provided by embodiments of the present invention;
fig. 3 is a schematic flow chart of a behavior analysis method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an analysis server according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a background server according to an embodiment of the present invention;
fig. 7 is a schematic diagram of another embodiment according to the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only, serve to explain the present invention, and are not to be construed as limiting it.

As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.

It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present; "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the prior art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The method flow in the embodiment of the present invention may be specifically executed by a system as shown in fig. 1, where the system at least includes: the system comprises shooting equipment arranged in a designated space, an analysis server connected with the shooting equipment, terminal equipment used for displaying the analysis result of the analysis server in real time, and a database system connected with the analysis server.
The designated space can be indoor space of off-line commercial places such as off-line stores, markets, supermarkets and the like;
The shooting equipment may specifically be a camera such as a surveillance camera or a security camera; its definition and resolution need only be sufficient to distinguish moving human bodies from the static background and to recognize limb movements. One or more shooting devices may be arranged in the designated space, mounted on the ceiling or at an oblique angle, so that the entire designated space is within their visible range.
The analysis server may specifically be a stand-alone server device, such as a rack, blade, tower or cabinet server, or hardware with high computing power such as a workstation or mainframe computer; it may also be a server cluster consisting of multiple server devices. The analysis server may be deployed in the indoor space, for example in the monitoring center of a commercial place, or near the indoor space, for example in an external machine room adjacent to the commercial place. Generally, the analysis server and the shooting equipment are connected by cable, and the cabling is determined by the specific scale and indoor structure of the designated space.
The terminal device may be a stand-alone device or be integrated into various media playback devices. It may be a personal computer such as a desktop or notebook computer, communicating with the analysis server by cable, the internet or a wireless network; or a mobile terminal such as a smartphone, tablet computer, laptop computer or personal digital assistant (PDA), communicating with the analysis server through a wireless network. The terminal device displays data output by the analysis server, such as analysis results, in real time.
The database system may be a stand-alone server device for data management and storage, or a server cluster consisting of multiple server devices, on whose hardware a database runs to manage and store data such as the video data and behavior data acquired and sent by the analysis server. A common database architecture such as a network database, relational database, hierarchical database or object-oriented database may be used. The database system may be deployed outside the designated space, for example in a dedicated cloud computing center or data service center when the system needs to manage multiple commercial places simultaneously; or it may be deployed within the designated space when the system only needs to manage a single commercial place.
Further, the method flow in the embodiments of the present invention may be executed by a system as shown in fig. 2, so as to manage a plurality of commercial places. The analysis server arranged in each designated space is connected with a background server, for example: the analysis servers of all commercial places are connected with the background server of a data center or management center through the internet; the terminal equipment communicates with the background server through the internet or a wireless network; and the database system is connected with the background server through the internet or cable.
An embodiment of the present invention provides a behavior analysis method, as shown in fig. 3, including:
and S1, acquiring the video data shot by the shooting equipment arranged in the designated space.
The number of the shooting devices arranged in the designated space can be one or more, installation modes such as top installation or oblique installation are adopted, and the shot video comprises people in the visible range of the whole designated space, such as: in each shop or single shop in a shopping mall or a store, video data of an area in the shop is acquired through a shooting device, and the shot video comprises customers and salespeople in the store. Real-time images are shot by shooting equipment arranged in the appointed space and transmitted to the analysis server, so that the analysis server obtains video data shot by the shooting equipment arranged in the appointed space through a built-in video acquisition unit such as a video acquisition card.
S2, identifying the target in the video according to the acquired video data, extracting the tracking result of the target in the video, and obtaining the behavior data of the target in the video in each behavior segment according to the tracking result.
The analysis server processes the video data acquired in real time, and obtains each moving target in the video through target detection and tracking, for example: searching for moving targets in the video sequence and tracking each moving target to obtain its motion trail. The analysis server also identifies the behaviors of a moving target within a certain period or a specified number of periods (behavior segments), and obtains the behavior data of the target in each behavior segment, for example: analyzing the behaviors of the behavior targets in each behavior segment to obtain more detailed behavior data.

When multiple behavior segments are used, segments continuous in time may be selected, or segments separated by a certain time interval may be selected; the specific selection manner may be determined according to the actual application scenario, which is not limited in this embodiment.
S3, screening a first type of target and a second type of target from the targets in the video according to the behavior data of the targets in the video in each behavior segment, and performing deduplication processing on the screened first type of targets.

The targets shot in the designated space may be divided into different roles according to their behavior patterns, such as: a first type of target and a second type of target. For example: if the designated space is the indoor space of a shop, a wide-angle camera is arranged on the ceiling of the shop to shoot the in-store panorama. The behaviors of all human-shaped targets shot in the shop are summarized and analyzed over all behavior segments, salesperson targets are determined as the second type of target, and customer targets as the first type of target. Deduplication processing is then performed on possibly duplicated targets among the determined first type of targets, improving the accuracy of the global behavior data. For example: deduplication analysis is performed on the shot customers, duplicate customer identities are merged according to the dynamic and visual features captured in the video, and salesperson targets are identified, so that the global behavior data of each customer is obtained accurately.
S4, obtaining global behavior data of the deduplicated first type of targets according to the behavior data of the deduplicated first type of targets in each behavior segment.

For example: the analysis server may summarize and analyze the behavior data of each target in each behavior segment, determine the salespersons, and perform deduplication analysis on the customers to obtain the global behavior data of each customer. The behavior data of current customers is aggregated and sent to the terminal equipment in the shop, where it is presented in the interface of a program or application in the form of charts or curves for the reference of the in-store salespersons. The customer and salesperson behavior data recorded by the analysis servers of multiple shops can also be transmitted to a data center, so that the data center can summarize and analyze the behavior data of all targets across the shops.
According to the behavior analysis method and system provided by the embodiments of the invention, a scheme for analyzing target behavior using intelligent video analysis technology is provided: data on various behaviors of targets are acquired from the video, the targets are deduplicated, and global behavior data is obtained based on the targets' motion trails. Only the background and motion areas need to be distinguished when analyzing motion trails, and cameras with high resolution and definition do not need to be configured, which reduces the hardware configuration requirement. In addition, compared with existing analysis of crowds and people flows, this embodiment can analyze and deduplicate the motion trail of a single target, thereby capturing the target's intention more accurately, for example: from the acquired data on a customer's behavior (including interactions with goods and with salespersons), the customer can be deduplicated and the purchase intention effectively analyzed. The scheme is also suitable for scenarios such as single stores and multiple stores, obtaining short-term and long-term customer behavior data respectively, and thus rich and accurate customer behavior data. Accordingly, the accuracy of the acquired behavior data, and of analysis based on it, is improved while cost is saved.
This embodiment further provides a specific manner of dividing the targets shot in the designated space into different roles according to their behavior patterns, including:

determining a behavior template library associated with the designated space.

The screening of the first type of target and the second type of target from the targets in the video according to the behavior data of the targets in the video in each behavior segment then comprises: respectively reading the behavior templates of the first type of target and the second type of target from the behavior template library associated with the designated space, and screening the first type of target and the second type of target from the targets in the video according to the read behavior templates.

The behavior template library may include pre-stored behavior patterns of multiple target roles, serving as behavior templates for identifying various targets. For example: after the analysis server obtains the behavior data of a target in the video in each behavior segment, it checks, against the customer behavior pattern pre-stored in the behavior template library, whether the target matches that pattern, and if so, the target is judged to be a customer.

Depending on the behavior templates, a first type and second type of target may be screened from the targets in the video, or further a third type and so on; this is determined in practice by the template types in the behavior template library. For example: if N behavior templates such as customer, salesperson, manager and cleaner are stored in the library, the analysis server can screen out at most N types of targets from the video. In this embodiment, the behavior template library may be stored in the local memory of the analysis server, or stored in the database system and obtained by the analysis server through queries.
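As a rough illustration of how such a template-library lookup and match could be organized, consider the following Python sketch, which assigns a role to a target from its sequence of coarse behavior labels. All names (BehaviorTemplate, BehaviorTemplateLibrary, classify), the label-overlap scoring and the 0.7 matching threshold are illustrative assumptions, not details given by the embodiment.

```python
from dataclasses import dataclass

@dataclass
class BehaviorTemplate:
    role: str      # e.g. "customer", "salesperson", "manager", "cleaner"
    pattern: list  # expected sequence of coarse behavior labels

class BehaviorTemplateLibrary:
    """Template library associated with one designated space (assumed structure)."""
    def __init__(self, space_id, templates):
        self.space_id = space_id
        self.templates = templates

    def classify(self, observed):
        """Return the role whose template best matches the observed behavior labels."""
        def score(t):
            matched = sum(1 for a, b in zip(t.pattern, observed) if a == b)
            return matched / max(len(t.pattern), 1)
        best = max(self.templates, key=score)
        return best.role if score(best) >= 0.7 else "unknown"
```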
In this embodiment, the specific manner of identifying the target in the video according to the acquired video data may include:
according to the acquired video data, a background area and motion areas are distinguished in the video; human motion areas are screened from the distinguished motion areas as the identified targets; and the identified targets are tracked to obtain motion trails, with the tracking results of the obtained motion trails recorded.

The tracking result comprises a behavior target, a behavior segment and behavior content: the behavior target is a motion subject among the identified targets, the behavior segment is a time segment over which the behavior target is completely tracked, and the behavior content is the continuous motion process of the motion subject.

For example: for an indoor scene, the background area can be obtained through statistics, and the motion area is obtained by subtracting the background area from the currently shot image frame. Human motion areas are then screened out according to preset identification rules (such as the transformation rules of motion areas during human motion, or other existing motion identification rules). The targets in the obtained human motion areas are tracked with a preset tracking method (such as Kalman filtering or particle filtering) to obtain the motion trail of each target; when a target leaves the shop area, its tracking process ends. The motion trail is divided into several segments in time order, and the tracking result of each segment comprises: the behavior target, the behavior segment and the behavior content.
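A minimal sketch of this detection step, assuming OpenCV-style background subtraction; the blur kernel, area threshold and the height-greater-than-width rule used to screen human-shaped motion areas are illustrative assumptions, not values from the embodiment.

```python
import cv2

# Background model learned statistically from the scene, as described above.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def human_motion_regions(frame):
    """Return bounding boxes of candidate human motion areas in one frame."""
    mask = subtractor.apply(frame)  # motion area = frame minus background model
    mask = cv2.medianBlur(mask, 5)  # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Crude screening rule for upright human shapes (an assumption):
        if cv2.contourArea(c) > 800 and h > w:
            regions.append((x, y, w, h))
    return regions
```

The boxes returned per frame would then be associated over time by a tracker such as a Kalman filter to form the motion trails mentioned above.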
This embodiment also details how the tracking results of the targets in the video are extracted and how the behavior data of the targets in each behavior segment is obtained from them: the behaviors of a target in each behavior segment are analyzed to obtain detailed behavior data. The behavior analysis may include a decision process of at least two granularities, a coarse behavior judgment and a fine behavior judgment, which may include:
and segmenting the behavior segment of the target in the video to obtain a sub-behavior segment according to the overall speed and the motion vector of the target in the video.
Wherein the child behavior segment comprises: the behavior content of the behavior target is as follows: fast walking, completely stationary or stationary in place but with time segments of torso activity. For example: according to the overall speed and the motion vector of the target, the behavior of the target can be further cut into small behavior subsections, and the judgment result of the behavior rough judgment of the behavior subsections is obtained, which comprises the following steps: fast walking, complete rest, in-situ rest but torso movement, etc. Where fast walking and complete stillness do not require continuous judgment.
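A sketch of this coarse judgment under stated assumptions: the trajectory is cut wherever the per-frame speed label changes, with the speed thresholds (1.5 m/s and 0.05 m/s) chosen purely for illustration.

```python
import numpy as np

FAST, STILL = 1.5, 0.05  # speed thresholds in metres/second (assumed values)

def coarse_segments(positions, fps):
    """positions: (N, 2) array of per-frame target centroids, N >= 2."""
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
    labels = np.where(speeds > FAST, "fast_walking",
             np.where(speeds < STILL, "stationary", "in_place_activity"))
    segments, start = [], 0
    for i in range(1, len(labels)):
        if labels[i] != labels[start]:  # cut where the coarse label changes
            segments.append((start, i, labels[start]))
            start = i
    segments.append((start, len(labels), labels[start]))
    return segments
```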
The behaviors within each sub-behavior segment are then analyzed further to complete the fine judgment. For one sub-behavior segment: the motion vectors of the image frames in the sub-behavior segment are extracted, and the motion vectors and the corresponding image frames are sampled and fused to obtain a data representation of each image frame; the data representations of the image frames are then fused to obtain the behavior data of the sub-behavior segment. For example: the analysis server first cuts the sub-behavior segment, e.g. into several uniform pieces, optionally with further sampling on that basis to speed up analysis. Second, it extracts the motion vectors of every frame, or of frames at a fixed sampling rate, as the motion description of the target, with the same number of motion-vector frames for each piece. The motion vectors and the corresponding image frames are downsampled at the same ratio and fused to obtain the data representation of each frame, and the per-frame representations are fused into the behavior data of the piece. This data is taken as the input of a behavior analysis model to obtain the behavior judgment result of the piece. The analysis server can process the pieces in parallel with multiple threads, improving processing efficiency.
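The preprocessing for the fine judgment could look like the following sketch: a fixed number of frames is sampled from the sub-behavior segment, each frame and its motion-vector field are downsampled at the same ratio and fused channel-wise, and the per-frame representations are stacked as model input. The array shapes, sample count and scale factor are assumptions for illustration.

```python
import numpy as np

def segment_representation(frames, motion_vectors, n_samples=16, scale=4):
    """frames: (T, H, W, 3) images; motion_vectors: (T, H, W, 2) flow-style fields."""
    idx = np.linspace(0, len(frames) - 1, n_samples).astype(int)  # uniform sampling
    fused = []
    for i in idx:
        f = frames[i][::scale, ::scale]           # downsample the image frame
        mv = motion_vectors[i][::scale, ::scale]  # downsample motion vectors at the same ratio
        fused.append(np.concatenate([f, mv], axis=-1))  # fused per-frame data representation
    return np.stack(fused)  # (n_samples, H/scale, W/scale, 5): input for the behavior model
```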
In this embodiment, the behavior analysis model may be a classification model based on a deep neural network, trained with a large amount of labeled behavior video data. The deep neural network is rebuilt on the basis of a two-dimensional convolutional neural network: the preferred scheme of this embodiment may adopt a three-dimensional convolutional neural network model, or keep the convolution and pooling layers of a conventional two-dimensional convolutional neural network while replacing the former fully connected layer with a new motion analysis layer.
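A minimal sketch of such a three-dimensional convolutional classifier, written with PyTorch; the layer sizes, the number of behavior classes, and the use of adaptive pooling in place of the "motion analysis layer" are all illustrative assumptions.

```python
import torch.nn as nn

class BehaviorNet3D(nn.Module):
    """Toy 3-D CNN over fused frame/motion-vector stacks (channels-first)."""
    def __init__(self, in_channels=5, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # stands in for the motion analysis layer (assumption)
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, channels, time, height, width)
        return self.classifier(self.features(x).flatten(1))
```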
This embodiment further specifies how the first type of target and the second type of target are screened from the targets in the video according to the behavior data of the targets in the video in each behavior segment:

a behavior judgment result of each behavior segment is determined according to the behavior data of each behavior segment of the target in the video and the behavior analysis model;

and the first type of target and the second type of target are screened from the targets in the video according to the behavior judgment result of each behavior segment and the matching degree with the behavior templates of the first type and second type of target.

For example: the analysis server further obtains the global behavior data of the salespersons and customers according to the results of the target behavior analysis, and summarizes it.
Discriminant analysis for salespersons: this is mainly based on analyzing the motion trail, initialized with known movement rules of salespeople, for example: on the basis of the above coarse and fine behavior judgments, the coarse judgment of a salesperson (as the second type of target) further includes determining as the salesperson the person who arrives at the store earliest and stays alone in the store for a long time.
Discriminant analysis for customers (as the first type of target): in each behavior segment, a series of image frames are sampled and binarized. The binarized image frames are input to a classifier, which judges the action state each frame may correspond to. The classifier is a multi-class classifier trained on a large number of labeled samples; its input is a binarized image frame and its output is the possible action state of the target. Further, for each behavior segment, the color images corresponding to the same action state among the sampled image frames are passed through a deep learning network to extract features, and comparison voting is performed on these features; image pairs with high similarity are regarded as the same customer. For example: if behavior segments A and B share m image pairs with the same action state, the features of the corresponding 2m images are extracted and m comparisons are made between the corresponding image pairs. For one image pair: if the mutual distance between the features of one image and the features of the other is less than or equal to a preset minimum threshold, the pair is regarded as similar and the vote count is incremented by 1; when the total number of similarity votes is higher than a preset maximum threshold, the two segments are judged to belong to the same customer.
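The comparison-voting rule reads directly as the following sketch, where the feature extractor, the minimum distance threshold and the vote threshold are assumptions; the embodiment only fixes the overall scheme of m pairwise comparisons with +1 votes.

```python
import numpy as np

def same_customer(feats_a, feats_b, d_min=0.4, votes_needed=5):
    """feats_a, feats_b: (m, D) deep features of the m corresponding image pairs
    (same action state) from behavior segments A and B."""
    votes = 0
    for fa, fb in zip(feats_a, feats_b):      # m comparisons between corresponding pairs
        if np.linalg.norm(fa - fb) <= d_min:  # mutual feature distance small enough
            votes += 1                        # regarded as similar: vote +1
    return votes > votes_needed               # above the maximum threshold: same customer
```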
Optionally, in this embodiment, the method further includes:
and respectively obtaining the global behavior data of each second class target according to the behavior data of each second class target in each behavior section. And correcting the obtained global behavior data of the first class of targets according to the obtained global behavior data of the second class of targets. For example: and the analysis server collects the behavior data of the previous targets to obtain the global behavior data of each customer and each salesman. Which comprises the following steps: and connecting the behavior data of the same customer or salesman according to the time and space continuity of the same target to obtain complete and continuous behavior data of the same customer or salesman, wherein the complete and continuous behavior data is used as global behavior data. And correcting the target behaviors with possible interaction according to the correlation of different targets on the behavior time and the behavior space. Such as: when the salesman and the customer are in a similar space at the same time and both have a conversation action, the actions of both in the action section are corrected to the conversation action.
An embodiment of the present invention further provides a behavior analysis system, as shown in fig. 1, comprising: shooting equipment, an analysis server connected with the shooting equipment, terminal equipment connected with the analysis server, and a database system connected with the analysis server. The system can be installed in the designated space or in an area adjacent to it, for example: a system as shown in fig. 1 can be installed in a single store.
The shooting equipment is arranged in the designated space and is used for shooting to obtain video data in the designated space;
the analysis server is used for acquiring the video data shot by the shooting equipment; identifying targets in the video according to the acquired video data, extracting tracking results of the targets in the video, and obtaining behavior data of the targets in the video in each behavior segment according to the tracking results; screening a first type of target and a second type of target from the targets in the video according to the behavior data of the targets in the video in each behavior segment, and performing deduplication processing on the screened first type of targets; obtaining global behavior data of the deduplicated first type of targets according to the behavior data of the deduplicated first type of targets in each behavior segment, uploading the global behavior data to the database system, and sending the global behavior data to the terminal equipment;
the terminal device is used for displaying the global behavior data;
and the database system is used for storing the global behavior data uploaded by the analysis server.
A possible specific hardware architecture of the analysis server in this embodiment is shown in fig. 4, including: at least one processor 111 such as a CPU, at least one network interface 114 or other user interface 113, memory 115, and at least one communication bus 112 used to enable connection and communication between these components. The optional user interface 113 includes a display and a keyboard or pointing device (e.g., mouse, trackball, touch pad or touch-sensitive display). The memory 115 may comprise high-speed RAM and may also include non-volatile memory such as at least one disk memory, and may optionally include at least one storage device located remotely from the processor 111.
In some embodiments, memory 115 stores elements, executable modules or data structures, or a subset or an expanded set thereof, such as an operating system 1151, which contains various system programs for implementing various underlying services and for handling hardware-based tasks; the application programs 1152 include various application programs for implementing various application services. As shown in fig. 5, the application 1152 includes, but is not limited to: a preprocessing module 51 and a summary analysis module 52.
The preprocessing module 51 is configured to identify a target in a video according to the acquired video data, extract a tracking result of the target in the video, and obtain behavior data of the target in the video in each behavior segment according to the tracking result. The preprocessing module 51 specifically includes: a target detection and tracking sub-module 511 and a target behavior analysis sub-module 512.
The target detection and tracking sub-module 511 is configured to distinguish the background area and motion areas in the video according to the acquired video data; screen human motion areas from the distinguished motion areas as the identified targets; and track the identified targets to obtain motion trails, recording the tracking results of the obtained motion trails. It thereby searches for moving targets in the video sequence and tracks each moving target to obtain its motion trail. The target behavior analysis sub-module 512 is configured to segment the behavior segments of a target in the video into sub-behavior segments according to the overall speed and motion vectors of the target, where a sub-behavior segment is a time segment in which the behavior content of the behavior target is fast walking, completely stationary, or stationary in place but with torso activity; and, for each sub-behavior segment: extract the motion vectors of the image frames in the sub-behavior segment, sample and fuse the motion vectors with the corresponding image frames to obtain a data representation of each image frame, and fuse the data representations of the image frames into the behavior data of the sub-behavior segment.
The summarizing and analyzing module 52 is used to further obtain and summarize the global behavior data of the salespersons and customers according to the behavior data output by the preprocessing module 51. The summarizing and analyzing module 52 specifically includes: a second analysis submodule 521, a first analysis submodule 522 and a data summarization submodule 523.

The second analysis submodule 521 is configured to screen a second type of target, such as salespersons, from the targets in the video according to the behavior judgment result of each behavior segment and the matching degree with the behavior template of the second type of target. The first analysis submodule 522 is configured to screen a first type of target, such as customers, from the targets in the video according to the behavior judgment result of each behavior segment and the matching degree with the behavior template of the first type of target. The data summarization submodule 523 is configured to correct the obtained global behavior data of the first type of targets according to the obtained global behavior data of the second type of targets.
In this embodiment, the database system is further configured to store the determined behavior template library associated with the specified space; the analysis server is specifically configured to access the database system, and read the behavior templates of the first class of objects and the second class of objects from the behavior template library associated with the specified space; screening the first type of target and the second type of target from the targets in the video according to the read behavior template;
or the analysis server is further configured to store the determined behavior template library associated with the specified space, and specifically, to read the behavior templates of the first class of objects and the second class of objects from the behavior template library associated with the specified space, respectively; and screening the first type of targets and the second type of targets from the targets in the video according to the read behavior template.
On the basis of the system shown in fig. 1, an embodiment of the present invention further provides a behavior analysis system, shown in fig. 2. Compared with the system of fig. 1, the shooting equipment and the analysis server connected with it may be installed in each of a plurality of designated spaces or areas adjacent to them, such as in a multi-store scenario: matching shooting equipment is arranged in each designated space, and an analysis server is arranged in each designated space or an area adjacent to it. The system further comprises: a background server, through which the analysis servers are respectively connected with the terminal equipment and the database system;
the analysis server is specifically used for determining a behavior judgment result of each behavior segment according to the behavior data and the behavior analysis model of each behavior segment of the target in the video; screening a second type of target from the targets in the video according to the behavior discrimination result of each behavior segment and the matching degree of the behavior template of the second type of target; sending the behavior judgment result of each behavior segment of the target in the video to the background server;
and the background server is used for screening the first class of targets from the targets in the video according to the behavior judgment result of each behavior segment and the matching degree of the behavior template of the first class of targets.
A possible specific hardware architecture of the background server in this embodiment is shown in fig. 6, including: at least one processor 211 such as a CPU, at least one network interface 214 or other user interface 213, memory 215, and at least one communication bus 212 used to enable connection and communication between these components. The optional user interface 213 includes a display and a keyboard or pointing device (e.g., mouse, trackball, touch pad or touch-sensitive display). The memory 215 may comprise high-speed RAM and may also include non-volatile memory such as at least one disk memory, and may optionally include at least one storage device located remotely from the processor 211.
In some embodiments, memory 215 stores elements, executable modules or data structures, or a subset or an expanded set thereof, such as an operating system 2151, which contains various system programs for implementing various underlying services and for handling hardware-based tasks; application 2152, which includes various applications for implementing various application services.
As shown in fig. 7, the application 1152 of the analysis server includes, but is not limited to, a preprocessing module 71; the analysis server itself may be structured as shown in fig. 4. The preprocessing module 71 is configured to identify the targets in the video according to the acquired video data, extract the tracking results of the targets in the video, and obtain the behavior data of the targets in each behavior segment according to the tracking results. The preprocessing module 71 specifically includes: a target detection and tracking sub-module 711, a target behavior analysis sub-module 712 and a second analysis sub-module 713. It should be noted that the analysis servers in different designated spaces may adopt the same architecture.
The target detection and tracking sub-module 711 is configured to distinguish the background area and motion areas in the video according to the acquired video data; screen human motion areas from the distinguished motion areas as the identified targets; and track the identified targets to obtain motion trails, recording the tracking results of the obtained motion trails. It thereby searches for moving targets in the video sequence and tracks each moving target to obtain its motion trail.

The target behavior analysis sub-module 712 is configured to segment the behavior segments of a target in the video into sub-behavior segments according to the overall speed and motion vectors of the target, where a sub-behavior segment is a time segment in which the behavior content of the behavior target is fast walking, completely stationary, or stationary in place but with torso activity; and, for each sub-behavior segment: extract the motion vectors of the image frames in the sub-behavior segment, sample and fuse the motion vectors with the corresponding image frames to obtain a data representation of each image frame, and fuse the data representations of the image frames into the behavior data of the sub-behavior segment.
The second analysis submodule 713 is configured to screen out the second class of targets, such as salespersons, from the targets in the video according to the behavior discrimination result of each behavior segment and the degree of matching with the behavior template of the second class of targets; and to obtain the global behavior data of each second-class target from its behavior data in each behavior segment and send the global behavior data to the background server.
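The aggregation of per-segment behavior data into a target's global behavior data could look like the sketch below; the record fields (total duration, time per behavior, segment list) are assumed for illustration, as the patent does not fix a data layout.

```python
# Sketch of assembling a target's global behavior data from its
# per-segment behavior data (submodule 713).
from collections import defaultdict

def global_behavior_data(segments):
    """segments: list of (label, start_frame, end_frame) for one target."""
    totals = defaultdict(int)
    for label, start, end in segments:
        totals[label] += end - start
    return {
        "duration": sum(totals.values()),      # total tracked frames
        "time_per_behavior": dict(totals),     # frames per behavior label
        "segments": segments,                  # raw per-segment record
    }
```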
The application 2152 of the background server includes, but is not limited to, a summary analysis module 72, which is configured to further obtain and summarize the global behavior data of salespersons and customers according to the behavior data output by the preprocessing module 71. The summary analysis module 72 specifically includes: a first analysis submodule 721 and a data summarization submodule 722.
The first analysis submodule 721 is configured to screen out the first class of targets, such as customers, from the targets in the video according to the behavior discrimination result of each behavior segment and the degree of matching with the behavior template of the first class of targets.
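One way to realize the "degree of matching with a behavior template" is to compare the distribution of a target's per-segment behavior labels against an expected distribution for the class. The sketch below does this with cosine similarity; the template values and the 0.7 threshold are assumptions, not figures from the patent.

```python
# Illustrative sketch of screening targets by template matching degree
# (submodules 713 and 721). A behavior template is modeled as the
# expected frequency of each behavior label.
from collections import Counter
import math

CUSTOMER_TEMPLATE = {"walking": 0.5, "completely_stationary": 0.1,
                     "torso_activity": 0.3, "fast_walking": 0.1}

def matching_degree(segment_labels, template):
    counts = Counter(segment_labels)
    total = sum(counts.values()) or 1
    observed = {k: counts.get(k, 0) / total for k in template}
    dot = sum(observed[k] * template[k] for k in template)
    norm = math.sqrt(sum(v * v for v in observed.values())) * \
           math.sqrt(sum(v * v for v in template.values()))
    return dot / norm if norm else 0.0

def screen_targets(discrimination_results, template, threshold=0.7):
    """discrimination_results: {target_id: [behavior label per segment]}."""
    return [tid for tid, labels in discrimination_results.items()
            if matching_degree(labels, template) >= threshold]
```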
The data summarization submodule 722 is configured to correct the obtained global behavior data of the first class of targets according to the obtained global behavior data of the second class of targets.
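The patent does not spell out the correction rule, so the following sketch shows one plausible reading under stated assumptions: a first-class (customer) track that overlaps a second-class (salesperson) track almost everywhere is treated as a misclassification and removed. The spatial radius and the 0.9 overlap threshold are hypothetical.

```python
# One plausible correction step for submodule 722, sketched under assumptions.

def overlap_ratio(track_a, track_b, radius=50):
    """Fraction of track_a points within `radius` of a same-frame point of track_b."""
    b_by_frame = {f: (x, y) for f, x, y in track_b}
    hits = sum(1 for f, x, y in track_a
               if f in b_by_frame
               and (x - b_by_frame[f][0]) ** 2
               + (y - b_by_frame[f][1]) ** 2 <= radius ** 2)
    return hits / len(track_a) if track_a else 0.0

def correct_customer_data(customers, salespersons, misclass_threshold=0.9):
    """customers / salespersons: {target_id: global track [(frame, x, y), ...]}."""
    corrected = {}
    for cid, track in customers.items():
        ratios = [overlap_ratio(track, s) for s in salespersons.values()]
        if ratios and max(ratios) >= misclass_threshold:
            continue  # drop: almost certainly a salesperson counted as a customer
        corrected[cid] = track
    return corrected
```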
The behavior analysis system provided by the embodiments of the present invention offers a scheme for analyzing target behavior using intelligent video analysis technology, such as the system shown in fig. 1, which can be arranged in a single designated space, and the system shown in fig. 2, which can be arranged across a plurality of designated spaces. In the system shown in fig. 2, the behaviors of the targets in each designated space in each behavior segment may be summarized and analyzed, the first class and second class of targets determined, and the first class of targets deduplicated to obtain the global behavior data of each target. Data on the various behaviors of a target are acquired from the video, the targets are deduplicated, and global behavior data are obtained based on the motion track of each target. Analyzing the motion track only requires distinguishing the background from the dynamic areas, so cameras with high resolution and definition need not be configured, which lowers the hardware requirements. In addition, compared with existing analysis methods aimed at people flows and crowds, this embodiment can analyze and deduplicate the motion track of a single target and thereby infer the target's intention more accurately; for example, from the acquired data on a customer's behavior (including interactions with commodities and with salespersons), the customer can be deduplicated and the customer's purchase intention effectively analyzed. The scheme also suits scenarios such as single stores and multiple stores, yielding short-term and long-term customer behavior data respectively, so that rich and accurate customer behavior data are obtained. Accordingly, the accuracy of the acquired behavior data is improved, and the accuracy of analysis based on that data is improved while costs are saved.
The embodiments in this specification are described progressively; the same or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiment is described relatively simply because it is substantially similar to the method embodiment, and reference may be made to the description of the method embodiment for the relevant points. The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any changes or substitutions that can readily be conceived by those skilled in the art within the technical scope disclosed herein fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (4)

1. A method of behavioral analysis, comprising:
acquiring video data shot by shooting equipment arranged in a designated space;
identifying a target in a video according to the acquired video data, extracting a tracking result of the target in the video, and obtaining behavior data of the target in the video in each behavior segment according to the tracking result;
screening a first class of targets and a second class of targets from the targets in the video according to the behavior data of the targets in the video in each behavior segment, and performing deduplication processing on the screened first class of targets;
obtaining global behavior data of the deduplicated first class of targets according to their behavior data in each behavior segment;
further comprising:
determining a behavior template library associated with the designated space;
wherein the screening of the first class of targets and the second class of targets from the targets in the video according to the behavior data of the targets in the video in each behavior segment comprises:
reading the behavior templates of the first class of targets and the second class of targets, respectively, from the behavior template library associated with the designated space;
screening the first class of targets and the second class of targets from the targets in the video according to the read behavior templates;
wherein the screening of the first class of targets and the second class of targets from the targets in the video according to the behavior data of the targets in the video in each behavior segment further comprises:
determining a behavior discrimination result of each behavior segment according to the behavior data of each behavior segment of the target in the video and a behavior analysis model;
screening the first class of targets and the second class of targets from the targets in the video according to the behavior discrimination result of each behavior segment and the degree of matching with the behavior templates of the first class of targets and the second class of targets;
further comprising:
obtaining global behavior data of each target of the second class according to the behavior data of each target of the second class in each behavior segment; and
correcting the obtained global behavior data of the first class of targets according to the obtained global behavior data of the second class of targets.
2. The method of claim 1, wherein the identifying a target in the video according to the acquired video data comprises:
distinguishing a background area and a motion area in the video according to the acquired video data;
screening a human-body motion area from the distinguished motion areas as the identified target;
tracking the identified target to obtain a motion track, and recording a tracking result of the obtained motion track, wherein the tracking result comprises a behavior target, a behavior segment, and behavior content; the behavior target comprises a motion subject among the identified targets, the behavior segment comprises a time segment over which the behavior target is completely tracked, and the behavior content comprises a continuous motion process of the motion subject.
3. The method according to claim 2, wherein the extracting a tracking result of the target in the video and obtaining behavior data of the target in the video in each behavior segment according to the tracking result comprises:
segmenting the behavior of the target in the video into sub-behavior segments according to the overall speed and the motion vector of the target in the video, wherein a sub-behavior segment is a time segment in which the behavior content of the behavior target is fast walking, completely stationary, or stationary in place but with torso activity;
for each sub-behavior segment: extracting the motion vectors of the image frames in the sub-behavior segment, and sampling and fusing the motion vectors with the corresponding image frames to obtain a data representation of each image frame; and fusing the data representations of the image frames to obtain the behavior data of the sub-behavior segment.
4. A behavior analysis system, characterized in that the system comprises: shooting equipment, an analysis server connected with the shooting equipment, a terminal device connected with the analysis server, and a database system connected with the analysis server;
the shooting equipment is arranged in the designated space and is used for shooting to obtain video data in the designated space;
the analysis server is used for acquiring the video data shot by the shooting equipment; identifying a target in the video according to the acquired video data, extracting a tracking result of the target in the video, and obtaining behavior data of the target in the video in each behavior segment according to the tracking result; screening a first class of targets and a second class of targets from the targets in the video according to the behavior data of the targets in the video in each behavior segment, and performing deduplication processing on the screened first class of targets; obtaining global behavior data of the deduplicated first class of targets according to their behavior data in each behavior segment, uploading the global behavior data to the database system, and sending the global behavior data to the terminal device;
the terminal device is used for displaying the global behavior data;
the database system is used for storing the global behavior data uploaded by the analysis server;
the database system is also used for storing the determined behavior template library associated with the designated space;
the analysis server is specifically configured to access the database system, read the behavior templates of the first class of targets and the second class of targets from the behavior template library associated with the designated space, and screen the first class of targets and the second class of targets from the targets in the video according to the read behavior templates;
or the analysis server is further configured to store the determined behavior template library associated with the designated space, and is specifically configured to read the behavior templates of the first class of targets and the second class of targets, respectively, from the behavior template library associated with the designated space, and screen the first class of targets and the second class of targets from the targets in the video according to the read behavior templates;
the system further comprises a background server, wherein the analysis server is connected with the terminal device and the database system respectively through the background server;
the analysis server is specifically used for determining a behavior discrimination result of each behavior segment according to the behavior data of each behavior segment of the target in the video and a behavior analysis model; screening the second class of targets from the targets in the video according to the behavior discrimination result of each behavior segment and the degree of matching with the behavior template of the second class of targets; and sending the behavior discrimination result of each behavior segment of the target in the video to the background server;
the background server is used for screening out the first class of targets from the targets in the video according to the behavior discrimination result of each behavior segment and the degree of matching with the behavior template of the first class of targets;
the analysis server is further used for obtaining global behavior data of each target of the second class according to the behavior data of each target of the second class in each behavior segment, and sending the global behavior data to the background server;
and the background server is further used for correcting the obtained global behavior data of the first class of targets according to the obtained global behavior data of the second class of targets.
CN201610860255.4A 2016-09-28 2016-09-28 Behavior analysis method and system Active CN107871111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610860255.4A CN107871111B (en) 2016-09-28 2016-09-28 Behavior analysis method and system


Publications (2)

Publication Number Publication Date
CN107871111A CN107871111A (en) 2018-04-03
CN107871111B true CN107871111B (en) 2021-11-26

Family

ID=61761385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610860255.4A Active CN107871111B (en) 2016-09-28 2016-09-28 Behavior analysis method and system

Country Status (1)

Country Link
CN (1) CN107871111B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509657A (en) * 2018-04-27 2018-09-07 深圳爱酷智能科技有限公司 Data distribute store method, equipment and computer readable storage medium
CN108830644A (en) * 2018-05-31 2018-11-16 深圳正品创想科技有限公司 A kind of unmanned shop shopping guide method and its device, electronic equipment
CN108921645B (en) * 2018-06-07 2021-07-13 深圳码隆科技有限公司 Commodity purchase judgment method and device and user terminal
CN109711320B (en) * 2018-12-24 2021-05-11 兴唐通信科技有限公司 Method and system for detecting violation behaviors of staff on duty
CN111524164B (en) * 2020-04-21 2023-10-13 北京爱笔科技有限公司 Target tracking method and device and electronic equipment
CN111563438B (en) * 2020-04-28 2022-08-12 厦门市美亚柏科信息股份有限公司 Target duplication eliminating method and device for video structuring
CN111476202B (en) * 2020-04-30 2021-05-25 浙江申汇金融服务外包有限公司 User behavior analysis method and system of security system
CN115002341A (en) * 2022-04-28 2022-09-02 中科蓝卓(北京)信息科技有限公司 Target monitoring method and system based on segmentation prevention

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021927A (en) * 2007-03-20 2007-08-22 中国移动通信集团江苏有限公司 Unified marketing supporting system based on analysis of user behaviour and habit and method thereof
CN101639922A (en) * 2008-07-31 2010-02-03 Nec九州软件株式会社 System and method for guest path analysis
CN102122346A (en) * 2011-02-28 2011-07-13 济南纳维信息技术有限公司 Video analysis-based physical storefront customer interest point acquisition method
CN102682397A (en) * 2012-05-11 2012-09-19 北京吉亚互联科技有限公司 Advertising effect proving method and system of web portals
CN103839049A (en) * 2014-02-26 2014-06-04 中国计量学院 Double-person interactive behavior recognizing and active role determining method
CN104050239A (en) * 2014-05-27 2014-09-17 重庆爱思网安信息技术有限公司 Correlation matching analyzing method among multiple objects
CN104199903A (en) * 2014-08-27 2014-12-10 上海熙菱信息技术有限公司 Vehicle data query system and method based on path correlation
CN104318578A (en) * 2014-11-12 2015-01-28 苏州科达科技股份有限公司 Video image analyzing method and system
US9053589B1 (en) * 2008-10-23 2015-06-09 Experian Information Solutions, Inc. System and method for monitoring and predicting vehicle attributes
CN105184258A (en) * 2015-09-09 2015-12-23 苏州科达科技股份有限公司 Target tracking method and system and staff behavior analyzing method and system
CN105205155A (en) * 2015-09-25 2015-12-30 珠海世纪鼎利科技股份有限公司 Big data criminal accomplice screening system and method
CN105760646A (en) * 2014-12-18 2016-07-13 中国移动通信集团公司 Method and device for activity classification
CN105809714A (en) * 2016-03-07 2016-07-27 广东顺德中山大学卡内基梅隆大学国际联合研究院 Track confidence coefficient based multi-object tracking method
CN105843919A (en) * 2016-03-24 2016-08-10 云南大学 Moving object track clustering method based on multi-feature fusion and clustering ensemble

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050071223A1 (en) * 2003-09-30 2005-03-31 Vivek Jain Method, system and computer program product for dynamic marketing strategy development
JP5870603B2 (en) * 2011-10-12 2016-03-01 富士ゼロックス株式会社 Information processing apparatus and information processing program
US10180321B2 (en) * 2014-05-31 2019-01-15 3Vr Security, Inc. Calculating duration time in a confined space
JP5720843B1 (en) * 2014-09-22 2015-05-20 富士ゼロックス株式会社 Position conversion program and information processing apparatus
CN104298974B (en) * 2014-10-10 2018-03-09 北京工业大学 A kind of Human bodys' response method based on deep video sequence
CN105678591A (en) * 2016-02-29 2016-06-15 北京时代云英科技有限公司 Video-analysis-based commercial intelligent operation decision-making support system and method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Robust Neural-Network-Based Data Association and Multiple Model-Based Tracking of Multiple Point Targets; Mukesh A. Zaveri et al.; IEEE Transactions on Systems; 2007-04-16; pp. 337-351 *
Research on Railway Intrusion Detection Technology Based on Intelligent Video Analysis (基于智能视频分析的铁路入侵检测技术研究); Dong Honghui et al.; China Railway Science (中国铁道科学); 2010-03-31; pp. 121-125 *


Similar Documents

Publication Publication Date Title
CN107871111B (en) Behavior analysis method and system
Leng et al. A survey of open-world person re-identification
Gou et al. Dukemtmc4reid: A large-scale multi-camera person re-identification dataset
Shinde et al. YOLO based human action recognition and localization
CN106776619B (en) Method and device for determining attribute information of target object
Young et al. PETS metrics: On-line performance evaluation service
CN108229456B (en) Target tracking method and device, electronic equipment and computer storage medium
Merad et al. Tracking multiple persons under partial and global occlusions: Application to customers’ behavior analysis
Patruno et al. People re-identification using skeleton standard posture and color descriptors from RGB-D data
CN106663196A (en) Computerized prominent person recognition in videos
CN102122346A (en) Video analysis-based physical storefront customer interest point acquisition method
CN108335317A (en) Shopping guide method and device under a kind of line
Abdulghafoor et al. A novel real-time multiple objects detection and tracking framework for different challenges
CN108932509A (en) A kind of across scene objects search methods and device based on video tracking
CN110717885A (en) Customer number counting method and device, electronic equipment and readable storage medium
Nakahata et al. Anomaly detection with a moving camera using spatio-temporal codebooks
Nambiar et al. Shape context for soft biometrics in person re-identification and database retrieval
Abed et al. KeyFrame extraction based on face quality measurement and convolutional neural network for efficient face recognition in videos
Elharrouss et al. Mhad: multi-human action dataset
Vennila et al. A rough set framework for multihuman tracking in surveillance video
CN112131477A (en) Library book recommendation system and method based on user portrait
Guangjing et al. Research on static image recognition of sports based on machine learning
Wu et al. Collecting public RGB-D datasets for human daily activity recognition
Kröckel et al. Customer tracking and tracing data as a basis for service innovations at the point of sale
Bouma et al. WPSS: Watching people security services

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Floors 1-5, Jinshan Building, No. 8 Shanxi Road, Nanjing, Jiangsu, 210000
Applicant after: SUNING.COM Co.,Ltd.
Address before: Suning Headquarters, No. 1 Suning Avenue, Xuanwu District, Nanjing, Jiangsu, 210042
Applicant before: SUNING COMMERCE GROUP Co.,Ltd.
GR01 Patent grant