CN107871111A - Behavior analysis method and system - Google Patents

Behavior analysis method and system

Info

Publication number
CN107871111A
CN107871111A
Authority
CN
China
Prior art keywords
target
behavior
video
data
section
Prior art date
Legal status
Granted
Application number
CN201610860255.4A
Other languages
Chinese (zh)
Other versions
CN107871111B (en)
Inventor
常江龙
冯玉玺
叶进进
杨现
Current Assignee
Suning Commerce Group Co Ltd
Original Assignee
Suning Commerce Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Suning Commerce Group Co Ltd
Priority to CN201610860255.4A
Publication of CN107871111A
Application granted
Publication of CN107871111B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/02 - Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 - Market modelling; Market analysis; Collecting market data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30241 - Trajectory

Abstract

Embodiments of the present invention disclose a behavior analysis method and system, relate to the field of intelligent analysis technology, and can reduce the error of analysis results at relatively low cost. The present invention includes: obtaining video data captured by capture devices set in a designated space; identifying targets in the video from the acquired video data, extracting tracking results for the targets in the video, and obtaining behavior data of the targets in the video in each behavior segment according to the tracking results; screening first-class targets and second-class targets out of the targets in the video according to the behavior data of the targets in each behavior segment, and de-duplicating the screened first-class targets; and obtaining the global behavior data of each de-duplicated first-class target according to the behavior data of the de-duplicated first-class targets in each behavior segment. The present invention is suitable for collecting human behavior in offline scenes.

Description

Behavior analysis method and system
Technical field
The present invention relates to the field of intelligent analysis technology, and in particular to a behavior analysis method and system.
Background art
With the development of mobile communication and Internet technology, online trading and shopping have become a mainstream consumption pattern. Major retailers have begun to combine the online shopping platforms they operate with big-data analysis systems and online marketing systems, collecting customer behavior data and operation history data in real time and feeding the collected data into subsequent marketing, in order to improve the efficiency of online trading/shopping and the accuracy of marketing. Compared with online shopping platforms, however, customer behavior data in traditional offline commercial venues is very difficult to obtain.
In current offline commercial venues, the common way to collect customer behavior data is to provide free Wi-Fi hotspots for customers and to track their browsing tracks with Wi-Fi probe techniques. In practice, however, the positioning accuracy of Wi-Fi probe tracking is not high, and the resulting customer behavior data is limited to a browsing track and cannot accurately reflect the details of customer behavior.
Another common way to collect customer behavior data is to obtain richer customer behavior data through computer vision and video analysis, for example analyzing the rough appearance of customers with monitoring cameras installed in the storefront to estimate their approximate age and gender, and statistically analyzing the flow of people in front of certain shelves by head recognition. The accuracy of such analysis schemes is limited by the hardware configuration and by existing analysis techniques: the error of the analysis result for an individual is large, and accurate data is obtained only when the flow of people or a crowd is analyzed statistically as a whole, while analysis of a small number of customers or even a single person carries a large error. To reduce the analysis error, capture devices such as cameras and capture cards with higher resolution and definition are usually required so that faces and body features can be recognized in finer detail, but such capture devices are expensive and are currently applied mainly in the security systems of important places such as airports, high-speed railway stations and exhibition centers; they are difficult to apply in ordinary offline commercial venues.
Summary of the invention
Embodiments of the present invention provide a behavior analysis method and system that can reduce the error of analysis results at relatively low cost.
To achieve the above purpose, embodiments of the present invention adopt the following technical solutions:
In a first aspect, embodiments of the present invention provide a method, including: obtaining video data captured by capture devices set in a designated space;
identifying targets in the video from the acquired video data, extracting tracking results for the targets in the video, and obtaining behavior data of the targets in the video in each behavior segment according to the tracking results;
screening first-class targets and second-class targets out of the targets in the video according to the behavior data of the targets in each behavior segment, and de-duplicating the screened first-class targets;
obtaining the global behavior data of each de-duplicated first-class target according to the behavior data of the de-duplicated first-class targets in each behavior segment.
With reference to the first aspect, a first possible implementation of the first aspect further includes:
determining a behavior template library associated with the designated space;
where screening first-class targets and second-class targets out of the targets in the video according to the behavior data of the targets in each behavior segment includes:
reading the behavior templates of the first-class targets and the second-class targets from the behavior template library associated with the designated space; and
screening the first-class targets and the second-class targets out of the targets in the video according to the behavior templates read.
With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation, identifying targets in the video from the acquired video data includes:
distinguishing the background region and the moving regions in the video from the acquired video data;
screening human-motion regions out of the moving regions obtained, as the identified targets;
tracking the identified targets to obtain motion trajectories, and recording the tracking results of the obtained motion trajectories, where a tracking result includes a behavior target, behavior segments and behavior content, the behavior target is a moving subject among the identified targets, a behavior segment is a time interval during which the behavior target is tracked completely, and the behavior content is the continuous motion process of the moving subject.
With reference to the second possible implementation of the first aspect, in a third possible implementation, extracting the tracking results of the targets in the video and obtaining the behavior data of the targets in each behavior segment according to the tracking results includes:
cutting the behavior segments of the targets in the video into sub-behavior segments according to the overall speed and motion vectors of the targets, a sub-behavior segment being a time interval in which the behavior content of the behavior target is fast walking, complete stillness, or standing still but with trunk movement;
for a sub-behavior segment: extracting the motion vectors of the image frames in the sub-behavior segment, sampling and fusing the motion vectors with the corresponding image frames to obtain a data representation of each image frame, and fusing the data representations of the image frames to obtain the behavior data of the sub-behavior segment.
With reference to the first possible implementation of the first aspect, in a fourth possible implementation, screening first-class targets and second-class targets out of the targets in the video according to the behavior data of the targets in each behavior segment includes:
determining the behavior discrimination result of each behavior segment according to the behavior data of each behavior segment of the targets in the video and a behavior analysis model;
screening first-class targets and second-class targets out of the targets in the video according to the degree of matching between the behavior discrimination results of the behavior segments and the behavior templates of the first-class and second-class targets.
With reference to the fourth possible implementation of the first aspect, a fifth possible implementation further includes:
obtaining the global behavior data of each second-class target from the behavior data of each second-class target in each behavior segment; and
correcting the obtained global behavior data of the first-class targets according to the obtained global behavior data of the second-class targets.
In a second aspect, embodiments of the present invention provide a system including: a capture device, an analysis server connected with the capture device, a terminal device connected with the analysis server, and a database system connected with the analysis server;
the capture device is set in a designated space and shoots video data in the designated space;
the analysis server obtains the video data captured by the capture device; identifies targets in the video from the acquired video data, extracts tracking results for the targets in the video, and obtains behavior data of the targets in each behavior segment according to the tracking results; screens first-class targets and second-class targets out of the targets in the video according to the behavior data of the targets in each behavior segment, and de-duplicates the screened first-class targets; then obtains the global behavior data of each de-duplicated first-class target according to the behavior data of the de-duplicated first-class targets in each behavior segment, uploads it to the database system, and sends the global behavior data to the terminal device;
the terminal device displays the global behavior data;
the database system stores the global behavior data uploaded by the analysis server.
With reference to the second aspect, in a first possible implementation of the second aspect, the database system is further configured to store the determined behavior template library associated with the designated space;
the analysis server is specifically configured to access the database system, read the behavior templates of the first-class targets and the second-class targets from the behavior template library associated with the designated space, and screen the first-class targets and the second-class targets out of the targets in the video according to the behavior templates read;
or the analysis server is further configured to store the determined behavior template library associated with the designated space, and is specifically configured to read the behavior templates of the first-class targets and the second-class targets from the behavior template library associated with the designated space, and to screen the first-class targets and the second-class targets out of the targets in the video according to the behavior templates read.
With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation, the system further includes a background server, and the analysis server is connected with the terminal device and the database system respectively via the background server;
the analysis server is specifically configured to determine the behavior discrimination result of each behavior segment according to the behavior data of each behavior segment of the targets in the video and a behavior analysis model; to screen second-class targets out of the targets in the video according to the degree of matching between the behavior discrimination results of the behavior segments and the behavior templates of the second-class targets; and to send the behavior discrimination results of the behavior segments of the targets in the video to the background server;
the background server is configured to screen first-class targets out of the targets in the video according to the degree of matching between the behavior discrimination results of the behavior segments and the behavior templates of the first-class targets.
With reference to the second possible implementation of the second aspect, in a third possible implementation, the analysis server is further configured to obtain the global behavior data of each second-class target from the behavior data of each second-class target in each behavior segment, and to send it to the background server;
the background server is further configured to correct the obtained global behavior data of the first-class targets according to the obtained global behavior data of the second-class targets.
The behavior analysis method and system provided by embodiments of the present invention use intelligent video analysis to provide a scheme for analyzing target behavior: the data of a target's various behaviors is obtained from the video, and the targets are de-duplicated. Since only the background and the dynamic regions need to be distinguished when motion trajectories are analyzed, cameras with high resolution and definition are not required, which lowers the hardware configuration requirement while improving the accuracy of the acquired behavior data, so that the error of the analysis result is reduced at relatively low cost.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative work.
Fig. 1 and Fig. 2 are schematic diagrams of system architectures provided by embodiments of the present invention;
Fig. 3 is a schematic flowchart of a behavior analysis method provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an analysis server provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of a specific example provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a background server provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of another specific example provided by an embodiment of the present invention.
Detailed description of the embodiments
To enable those skilled in the art to better understand the technical solutions of the present invention, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the drawings, where the same or similar reference numbers denote the same or similar elements, or elements with the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, are only used to explain the present invention, and cannot be construed as limiting the present invention. Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intermediate elements may also be present. In addition, "connected" or "coupled" as used herein can include wireless connection or coupling. The wording "and/or" used herein includes any unit of, and all combinations of, one or more of the associated listed items. Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by those of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood as having a meaning consistent with their meaning in the context of the prior art and, unless defined as herein, will not be interpreted in an idealized or overly formal sense.
The method flow in the embodiment of the present invention can specifically be performed by a system as shown in Fig. 1. The system at least includes: a capture device set in a designated space, an analysis server connected with the capture device, a terminal device for displaying the analysis results of the analysis server in real time, and a database system connected with the analysis server.
The designated space can specifically be the indoor space of an offline commercial venue such as an offline shop, shopping mall or supermarket;
The capture device can specifically be a camera of the monitoring-camera or security-camera type. The shooting definition and resolution of the capture device only need to be sufficient to recognize moving human bodies against a static background and to recognize limb movements. One or more capture devices can be set in the designated space, installed overhead or at an oblique angle, so that the whole designated space is within the visible range of the capture devices.
The analysis server can specifically be a stand-alone server device, such as a rack, blade, tower or cabinet server, or a hardware device with strong computing capability such as a workstation or mainframe computer; it can also be a server cluster composed of multiple server devices. The analysis server can be deployed inside the space, for example in the monitoring center of a commercial venue, or near the space, for example in an external machine room adjoining the commercial venue. Commonly the analysis server is connected with the capture devices by cable, and the cabling is determined by the specific scale and building structure of the designated space.
The terminal device can specifically be an independent desktop device or integrated in various media playback devices; it can be a personal computer such as a desktop or notebook computer that communicates with the analysis server by cable, the Internet or a wireless network, or a mobile terminal such as a smart phone, tablet personal computer, laptop computer or personal digital assistant (PDA) that communicates with the analysis server through a wireless network. The terminal device displays in real time the data output by the analysis server, such as analysis results.
The database system can specifically be a stand-alone server device for data management and storage, or a server cluster composed of multiple server devices. A database runs on the hardware of the database system to manage and store the data obtained and sent by the analysis server, such as video data and behavior data. Commonly used database schemas such as a network database, relational database, hierarchical database or object-oriented database can be used. The database system can be deployed outside the designated space, for example in a dedicated cloud computing center or data service center when the system needs to manage multiple commercial venues at the same time; it can also be deployed inside the designated space when the system only needs to manage a single commercial venue.
Further, the method flow in the embodiment of the present invention can specifically be performed by a system as shown in Fig. 2, in order to manage multiple commercial venues. The analysis server set in each designated space is connected with a background server: for example, the analysis server is deployed in a commercial venue, the analysis server of each commercial venue is connected through the Internet with the background server of a data center or management center, the terminal device communicates with the background server through the Internet or a wireless network, and the database system is connected with the background server through the Internet or a cable.
An embodiment of the present invention provides a behavior analysis method, as shown in Fig. 3, including:
S1, obtaining video data captured by the capture devices set in the designated space.
One or more capture devices can be set in the designated space, installed overhead or at an oblique angle, and the captured video covers the people within the visible range of the whole designated space. For example, in each shop of a mall, or in a single shop, video data of the shop area is obtained by the capture devices, and the captured video includes the customers and sales staff in the shop. Real-time images are shot by the capture devices set in the designated space and transmitted to the analysis server, so that the analysis server obtains, through built-in video acquisition units such as video capture cards, the video data captured by the capture devices set in the designated space. A minimal sketch of this acquisition step is given below.
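The sketch below illustrates step S1 by pulling and sampling frames from a networked camera with OpenCV. The RTSP stream URL, the sampling rate and the use of OpenCV are illustrative assumptions; the patent does not name a specific acquisition library or protocol.

```python
import cv2

def acquire_frames(stream_url, sample_every_n=5):
    """Yield (frame_index, BGR frame) pairs sampled from the camera stream."""
    cap = cv2.VideoCapture(stream_url)   # e.g. an RTSP URL exposed by the in-store camera
    if not cap.isOpened():
        raise RuntimeError("cannot open capture device: %s" % stream_url)
    idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:                    # stream ended or connection dropped
                break
            if idx % sample_every_n == 0:
                yield idx, frame
            idx += 1
    finally:
        cap.release()

# Hypothetical usage: forward sampled frames to the analysis pipeline.
# for i, frame in acquire_frames("rtsp://192.0.2.10/stream1"):
#     process(i, frame)
```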
S2, identifying targets in the video from the acquired video data, extracting tracking results for the targets in the video, and obtaining behavior data of the targets in each behavior segment according to the tracking results.
The analysis server processes the collected video data in real time and obtains each moving target in the video through target detection and tracking, for example finding the moving targets in the video sequence and tracking each moving target to obtain its motion trajectory. The analysis server also recognizes the behavior of the moving targets in certain segments or a specified number of time periods (behavior segments) and obtains the behavior data of the targets in each behavior segment, for example by analyzing the behavior of the behavior target within each behavior segment to obtain more detailed behavior data.
If there are multiple behavior segments, they can be chosen to be continuous in time, or with a certain time interval between them; the specific way the behavior segments are chosen can be determined according to the practical application scene and is not limited in this embodiment.
S3, screening first-class targets and second-class targets out of the targets in the video according to the behavior data of the targets in each behavior segment, and de-duplicating the screened first-class targets.
The targets photographed in the designated space can be divided into different roles according to their behavior patterns, for example first-class targets and second-class targets. For example, if the designated space is the indoor space of a shop, a wide-angle camera is installed on the ceiling of the shop to shoot a panorama of the shop. By aggregating and analyzing the behavior of each humanoid target photographed in the shop in each behavior segment, targets determined to be sales staff are taken as second-class targets and targets determined to be customers are taken as first-class targets. The identified first-class targets that may be repeated are then de-duplicated, so as to improve the accuracy of the global behavior data. For example, the photographed customers are de-duplicated: based on the behavior characteristics and the visual features in the camera footage, repeatedly appearing customers are recognized and merged, and sales-staff targets can be identified at the same time, so that the global behavior data of each customer is obtained accurately.
S4, obtaining the global behavior data of each de-duplicated first-class target according to the behavior data of the de-duplicated first-class targets in each behavior segment.
For example, the analysis server can aggregate and analyze the behavior data of each target in each behavior segment, determine the sales staff, de-duplicate the customers, and obtain the global behavior data of each customer. After the behavior data of the current customers is summarized, it is sent to the terminal device in the shop and presented in the form of charts or curves in a program or application interface for the sales staff in the shop to consult. The behavior data of customers and sales staff from the analysis servers of multiple shops can also be collected, recorded and sent to a data center, so that the data center can aggregate and analyze the behavior data of the targets in multiple shops.
The behavior analysis method and system provided by embodiments of the present invention use intelligent video analysis to provide a scheme for analyzing target behavior: the data of a target's various behaviors is obtained from the video, the targets are de-duplicated, and global behavior data is obtained based on the targets' motion trajectories. Since only the background and the dynamic regions need to be distinguished when motion trajectories are analyzed, cameras with high resolution and definition are not required, which lowers the hardware configuration requirement. Moreover, compared with existing analysis of the flow of people or crowds, this embodiment can analyze and de-duplicate the motion trajectory of a single target, so the target's intention can be obtained more accurately; for example, by acquiring the data of customer behaviors (including interaction with commodities and with sales staff) and de-duplicating customers, customers' purchase intentions can be effectively analyzed. The scheme is at the same time applicable to scenes such as a single shop or multiple shops, and obtains short-term and longer-term customer behavior data respectively, so that the acquired customer behavior data is rich and accurate. The accuracy of the acquired behavior data is thereby improved, and the accuracy of the analysis based on the behavior data is improved while saving cost.
This embodiment also provides a specific way of dividing the targets photographed in the designated space into different roles according to their behavior patterns, including:
determining a behavior template library associated with the designated space;
where screening first-class targets and second-class targets out of the targets in the video according to the behavior data of the targets in each behavior segment includes: reading the behavior templates of the first-class targets and the second-class targets from the behavior template library associated with the designated space, and screening the first-class targets and the second-class targets out of the targets in the video according to the behavior templates read.
The behavior template library can specifically include pre-stored behavior patterns of multiple kinds of targets, used as behavior templates for identifying targets of each class. For example, the behavior pattern of a customer is pre-stored in the behavior template library; after the analysis server obtains the behavior data of a target in the video in each behavior segment, it judges, with reference to the customer behavior pattern pre-stored in the behavior template library, whether the target in the video matches the behavior pattern of a customer, and if so, the target in the video is discriminated as a customer.
In this embodiment, first-class targets and second-class targets can be screened out of the targets in the video according to different behavior templates, and more classes of targets can also be screened out, depending on the kinds of behavior templates stored in the behavior template library in the practical application. For example, if N behavior templates such as customer, sales staff, manager and cleaner are stored in the behavior template library, the analysis server can screen out at most the 1st to Nth class targets from the targets in the video. The behavior template library can be stored in the local memory of the analysis server, or stored in the database system and queried by the analysis server from the database system to obtain the behavior templates. A simple matching sketch follows.
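The sketch below shows one simple way per-segment behavior labels could be matched against role templates. The template format (a set of expected behavior labels plus a minimum match ratio), the label names and the thresholds are assumptions for illustration and are not specified by the patent.

```python
from collections import Counter

# Hypothetical template format: expected behavior labels and a minimum match ratio per role.
TEMPLATE_LIBRARY = {
    "customer":    {"labels": {"browse_shelf", "stand_with_trunk_motion", "walk"}, "min_ratio": 0.5},
    "sales_staff": {"labels": {"approach_customer", "restock", "stand_at_counter"}, "min_ratio": 0.5},
}

def match_role(segment_labels, library=TEMPLATE_LIBRARY):
    """Return (best_role, score) for a target's list of per-segment behavior labels."""
    counts = Counter(segment_labels)
    total = sum(counts.values()) or 1
    best_role, best_score = None, 0.0
    for role, tpl in library.items():
        score = sum(c for lbl, c in counts.items() if lbl in tpl["labels"]) / total
        if score >= tpl["min_ratio"] and score > best_score:
            best_role, best_score = role, score
    return best_role, best_score

# Example: a target whose segments are mostly browsing would be screened as a customer.
print(match_role(["browse_shelf", "walk", "browse_shelf", "stand_at_counter"]))
```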
In this embodiment, the specific way of identifying targets in the video from the acquired video data can include:
distinguishing the background region and the moving regions in the video from the acquired video data; screening human-motion regions, as the identified targets, out of the moving regions obtained; and tracking the identified targets to obtain motion trajectories and recording the tracking results of the obtained motion trajectories.
The tracking result includes: a behavior target, behavior segments and behavior content, where the behavior target is a moving subject among the identified targets, a behavior segment is a time interval during which the behavior target is tracked completely, and the behavior content is the continuous motion process of the moving subject.
For example, for an indoor scene, the background region of the scene can be obtained statistically, and the moving regions are obtained by subtracting the background region from the currently captured image frame. Human-motion regions are then screened out according to preset recognition rules (such as the transformation rules of a moving region during human motion, or other existing motion recognition rules). For the resulting human-motion regions, target tracking is performed with a preset tracking algorithm (for example Kalman filtering or particle filtering) to obtain the motion trajectory of each target; when a target leaves the shop area, the tracking flow for that target ends. The motion trajectory is divided into multiple segments in time order, and the tracking result of each segment includes: the behavior target, the behavior segment and the behavior content.
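The following sketch illustrates this detection and tracking step under stated assumptions: OpenCV's MOG2 background subtractor stands in for the statistically learned background, simple contour-size and aspect-ratio rules stand in for the preset human-motion recognition rules, and a greedy nearest-centroid association stands in for the Kalman or particle filter named in the text.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
tracks = {}          # track_id -> list of (frame_index, centroid)
next_track_id = 0

def detect_human_regions(frame, min_area=1500, max_aspect=4.0):
    """Foreground contours whose size and shape roughly match a person (assumed rules)."""
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area and max(w, h) / max(1, min(w, h)) <= max_aspect:
            boxes.append((x, y, w, h))
    return boxes

def update_tracks(frame_index, boxes, max_dist=80):
    """Greedy nearest-centroid association; a simple stand-in for Kalman/particle filtering."""
    global next_track_id
    for x, y, w, h in boxes:
        cx, cy = x + w / 2.0, y + h / 2.0
        best_id, best_d = None, max_dist
        for tid, history in tracks.items():
            px, py = history[-1][1]
            d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:                       # start a new trajectory
            best_id = next_track_id
            tracks[best_id] = []
            next_track_id += 1
        tracks[best_id].append((frame_index, (cx, cy)))
```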
In this embodiment, the specific way of extracting the tracking results of the targets in the video and obtaining the behavior data of the targets in each behavior segment according to the tracking results is to analyze the behavior of the behavior target in each behavior segment and derive its detailed behavior data. The behavior analysis may include a judging process of at least two granularities, a coarse behavior judgment and a fine behavior judgment, which can include:
cutting the behavior segments of the targets in the video into sub-behavior segments according to the overall speed and motion vectors of the targets.
A sub-behavior segment is a time interval in which the behavior content of the behavior target is fast walking, complete stillness, or standing still but with trunk movement. For example, according to the overall speed and motion vectors of the target, the target's behavior can be further cut into small behavior sub-segments, and the coarse judgment result of each sub-segment is obtained, including several categories such as fast walking, complete stillness, and standing still but with trunk movement; fast walking and complete stillness need no further judgment.
The behavior within each behavior sub-segment is then analyzed further to complete the fine judgment. For a sub-behavior segment: the motion vectors of the image frames in the sub-behavior segment are extracted, the motion vectors and the corresponding image frames are sampled and fused, and the data representation of each image frame is obtained; the data representations of the image frames are then fused to obtain the behavior data of the sub-behavior segment. For example, the analysis server first cuts the sub-segment, for instance into several uniform small segments, or further samples them on that basis to speed up analysis. Next, the motion vector of each frame, or of fixed sample frames, of the segment is extracted as the motion representation of the target, with the same number of motion-vector frames per segment. The motion vectors and the corresponding image frames are down-sampled and fused at the same ratio to obtain the data representation of each frame, and the data representations of the frames are fused to obtain the behavior data of the small segment, which is used as the input of the behavior analysis model to obtain the behavior discrimination result of the small segment. The analysis server can process the small segments in parallel with multiple threads to improve processing efficiency.
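A simplified sketch of the coarse judgment and of the per-sub-segment data representation is given below. The speed thresholds, the use of dense Farneback optical flow as the "motion vector", and the fixed sampling of 8 frames per sub-segment are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

def coarse_labels(trajectory, fast_thresh=40.0, still_thresh=2.0):
    """Label each trajectory step (assumed thresholds in pixels per frame)."""
    labels = []
    for (f0, (x0, y0)), (f1, (x1, y1)) in zip(trajectory, trajectory[1:]):
        speed = np.hypot(x1 - x0, y1 - y0) / max(1, f1 - f0)
        if speed >= fast_thresh:
            labels.append("fast_walk")
        elif speed <= still_thresh:
            labels.append("still")
        else:
            labels.append("in_place_motion")   # standing still but with trunk activity
    return labels

def subsegment_representation(frames, n_samples=8, size=(64, 64)):
    """Fuse sampled gray frames with their dense optical flow into one feature vector."""
    idx = np.linspace(0, len(frames) - 2, n_samples).astype(int)
    feats = []
    for i in idx:
        g0 = cv2.cvtColor(cv2.resize(frames[i], size), cv2.COLOR_BGR2GRAY)
        g1 = cv2.cvtColor(cv2.resize(frames[i + 1], size), cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        # per-frame representation: appearance plus motion, down-sampled at the same ratio
        feats.append(np.concatenate([g0.ravel() / 255.0, flow.ravel()]))
    return np.mean(feats, axis=0)               # fuse frame representations into segment data
```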
In this embodiment, the behavior analysis model can be a classification model based on a deep neural network, for example trained with a large amount of labeled behavior video data. The deep neural network is adapted from a two-dimensional convolutional neural network; a three-dimensional convolutional neural network model can be used in the preferred scheme of this embodiment, or the convolutional and pooling layers of a conventional two-dimensional convolutional neural network can be retained while the fully connected layers are replaced with a new motion analysis layer.
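A toy version of the kind of 3D-convolutional classifier described above is sketched in PyTorch below. The layer sizes, the clip shape (8 RGB frames at 64x64) and the number of behavior classes are illustrative assumptions, not the patent's actual model.

```python
import torch
import torch.nn as nn

class Behavior3DCNN(nn.Module):
    """Small 3D CNN over (batch, channels, frames, H, W) clips; sizes are illustrative."""
    def __init__(self, num_classes=5, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),          # keep temporal length early on
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.AdaptiveAvgPool3d(1),                      # global pooling over time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Example: one clip of 8 RGB frames at 64x64 -> per-class behavior scores.
model = Behavior3DCNN(num_classes=5)
scores = model(torch.randn(1, 3, 8, 64, 64))
print(scores.shape)   # torch.Size([1, 5])
```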
In this embodiment, the specific way of screening first-class targets and second-class targets out of the targets in the video according to the behavior data of the targets in each behavior segment includes:
determining the behavior discrimination result of each behavior segment according to the behavior data of each behavior segment of the targets in the video and the behavior analysis model;
screening first-class targets and second-class targets out of the targets in the video according to the degree of matching between the behavior discrimination results of the behavior segments and the behavior templates of the first-class and second-class targets.
For example, from the results of the target behavior analysis, the analysis server further derives and summarizes the global behavior data of sales staff and customers.
Discrimination of sales staff is mainly based on analysis of the motion trajectory, initialized according to the existing movement rules of sales staff. For example, on the basis of the coarse and fine behavior judgments above, the coarse judgment for sales staff (as second-class targets) also includes: judging as sales staff those who arrive at the shop earliest and stay in the shop for a long time, as sketched below.
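A small sketch of this coarse salesperson rule (earliest arrival plus long dwell) follows; the dwell-time threshold and the track-span record format are assumptions for illustration.

```python
def flag_sales_staff(track_spans, min_dwell_s=4 * 3600):
    """track_spans: {track_id: (first_seen_ts, last_seen_ts)} in seconds.
    Flags the earliest arrival and anyone dwelling longer than the assumed threshold."""
    if not track_spans:
        return set()
    earliest = min(track_spans, key=lambda tid: track_spans[tid][0])
    staff = {earliest}
    for tid, (first, last) in track_spans.items():
        if last - first >= min_dwell_s:
            staff.add(tid)
    return staff

# Example: track 2 opened the shop and track 7 stayed all day, so both are flagged as staff.
print(flag_sales_staff({2: (28800, 64800), 5: (36000, 37800), 7: (30000, 61200)}))
```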
Discrimination of customers (as first-class targets): in each behavior segment, a series of image frames is obtained by sampling and then binarized. The binarized image frames are input into a classifier, which judges the possible action state corresponding to each image frame. The classifier is a multi-class classifier trained on a large number of correspondingly labeled samples; its input is a binarized image frame and its output is the possible action state of the target. Further, between behavior segments, for the image frames sampled above, features are extracted by a deep learning network from the corresponding color images under the same action state, the features are compared and voted on, and targets with high similarity are regarded as the same customer. For example, between behavior segment A and behavior segment B there are m pairs of images with the same action state; features are extracted for all 2m images, and m comparisons are made between the corresponding image pairs. For a pair of images, if the mutual distance between the features of one image and the features of the other image is less than or equal to a preset minimum threshold, they are considered similar and the vote is incremented by 1; when the total number of similarity votes is higher than a preset maximum threshold, the two are determined to be the same customer.
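The pairwise voting just described is sketched below: features of images sharing an action state across two behavior segments are compared, votes are counted, and a high vote total merges the two appearances into one customer. The feature extractor is abstracted as any function that returns a vector; the distance and vote thresholds are assumptions.

```python
import numpy as np

def same_customer(feats_a, feats_b, dist_thresh=0.6, vote_thresh=3):
    """feats_a, feats_b: equal-length lists of feature vectors for frames with matching
    action states (one entry per matched pair). True if enough pairs vote 'similar'."""
    votes = 0
    for fa, fb in zip(feats_a, feats_b):
        fa = np.asarray(fa, dtype=float)
        fb = np.asarray(fb, dtype=float)
        if np.linalg.norm(fa - fb) <= dist_thresh:   # mutual distance under the assumed minimum
            votes += 1
    return votes >= vote_thresh                      # assumed maximum-vote threshold

# Example with toy 2-D "features": 3 of 4 pairs are close, so the segments are merged.
a = [[0.1, 0.2], [0.5, 0.5], [0.9, 0.1], [0.3, 0.3]]
b = [[0.1, 0.25], [0.52, 0.5], [0.0, 0.9], [0.31, 0.28]]
print(same_customer(a, b))
```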
Optionally, this embodiment further includes:
obtaining the global behavior data of each second-class target from the behavior data of each second-class target in each behavior segment, and correcting the obtained global behavior data of the first-class targets according to the obtained global behavior data of the second-class targets. For example, the analysis server summarizes the behavior data of each of the preceding targets and obtains the global behavior data of each customer and salesperson. This includes: connecting the behavior data of the same customer or salesperson according to the continuity of the same target in time and space, and obtaining its complete and continuous behavior data as the global behavior data; and, according to the correlation of different targets in behavior time and action space, correcting target behaviors where interaction may exist. For example, if a salesperson and a customer are in a nearby space at the same time and both have a talking action, the behavior of both parties in this behavior segment is corrected to a conversation behavior, as sketched below.
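The interaction correction is sketched below: when a salesperson segment and a customer segment overlap in time, are spatially close, and both carry a talking-like label, both segments are relabeled as a conversation. The segment record layout, label names and distance threshold are assumptions for illustration.

```python
import numpy as np

def correct_interactions(customer_segs, staff_segs, max_dist=150.0):
    """Each segment: dict with 't0', 't1' (seconds), 'pos' (x, y) and 'label'.
    Relabels overlapping, nearby 'talk'-labeled segments as 'conversation'."""
    for c in customer_segs:
        for s in staff_segs:
            overlap = min(c["t1"], s["t1"]) - max(c["t0"], s["t0"])
            dist = np.hypot(c["pos"][0] - s["pos"][0], c["pos"][1] - s["pos"][1])
            if overlap > 0 and dist <= max_dist and c["label"] == s["label"] == "talk":
                c["label"] = s["label"] = "conversation"
    return customer_segs, staff_segs

# Example: the customer and salesperson talk near the same shelf at the same time.
cust = [{"t0": 100, "t1": 160, "pos": (320, 240), "label": "talk"}]
staff = [{"t0": 110, "t1": 150, "pos": (400, 250), "label": "talk"}]
print(correct_interactions(cust, staff))
```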
An embodiment of the present invention also provides a behavior analysis system, as shown in Fig. 1. The system includes: a capture device, an analysis server connected with the capture device, a terminal device connected with the analysis server, and a database system connected with the analysis server. The system may be installed in the designated space or in an area adjoining the designated space; for example, the system shown in Fig. 1 is installed in a single shop.
The capture device is set in the designated space and shoots video data in the designated space;
The analysis server obtains the video data captured by the capture device; identifies targets in the video from the acquired video data, extracts tracking results for the targets in the video, and obtains behavior data of the targets in each behavior segment according to the tracking results; screens first-class targets and second-class targets out of the targets in the video according to the behavior data of the targets in each behavior segment, and de-duplicates the screened first-class targets; then obtains the global behavior data of each de-duplicated first-class target according to the behavior data of the de-duplicated first-class targets in each behavior segment, uploads it to the database system, and sends the global behavior data to the terminal device;
The terminal device displays the global behavior data;
The database system stores the global behavior data uploaded by the analysis server.
A possible concrete hardware structure of the analysis server in this embodiment, as shown in Fig. 4, includes: at least one processor 111, such as a CPU, at least one network interface 114 or other user interface 113, a memory 115 and at least one communication bus 112. The communication bus 112 is used to realize connection and communication between these components. Optionally, the user interface 113 includes a display, a keyboard or a pointing device (for example a mouse, a trackball, a touch pad or a touch-sensitive display screen). The memory 115 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory. The memory 115 may optionally include at least one storage device located remotely from the aforementioned processor 111.
In some embodiments, the memory 115 stores the following elements, executable modules or data structures, or a subset or superset of them: an operating system 1151, containing various system programs, for realizing various basic services and processing hardware-based tasks; and application programs 1152, containing various application programs, for realizing various application services. As shown in Fig. 5, the application programs 1152 include, but are not limited to: a pre-processing module 51 and an aggregate analysis module 52.
The pre-processing module 51 identifies targets in the video from the acquired video data, extracts tracking results for the targets in the video, and obtains behavior data of the targets in the video in each behavior segment according to the tracking results. The pre-processing module 51 specifically includes: a target detection and tracking sub-module 511 and a target behavior analysis sub-module 512.
The target detection and tracking sub-module 511 distinguishes the background region and the moving regions in the video from the acquired video data; screens human-motion regions out of the moving regions obtained, as the identified targets; tracks the identified targets to obtain motion trajectories; and records the tracking results of the obtained motion trajectories, so as to find the moving targets in the video sequence, track each moving target and obtain the motion trajectory of each target. The target behavior analysis sub-module 512 cuts the behavior segments of the targets in the video into sub-behavior segments according to the overall speed and motion vectors of the targets, a sub-behavior segment being a time interval in which the behavior content of the behavior target is fast walking, complete stillness, or standing still but with trunk movement; and, for a sub-behavior segment, extracts the motion vectors of its image frames, samples and fuses the motion vectors with the corresponding image frames to obtain the data representation of each image frame, and fuses the data representations of the image frames to obtain the behavior data of the sub-behavior segment.
The aggregate analysis module 52 further derives and summarizes the global behavior data of sales staff and customers according to the behavior data output by the pre-processing module 51. The aggregate analysis module 52 specifically includes: a second analysis sub-module 521, a first analysis sub-module 522 and a data summary sub-module 523.
The second analysis sub-module 521 screens second-class targets, such as sales staff, out of the targets in the video according to the degree of matching between the behavior discrimination results of the behavior segments and the behavior templates of the second-class targets. The first analysis sub-module 522 screens first-class targets, such as customers, out of the targets in the video according to the degree of matching between the behavior discrimination results of the behavior segments and the behavior templates of the first-class targets. The data summary sub-module 523 corrects the obtained global behavior data of the first-class targets according to the obtained global behavior data of the second-class targets.
In this embodiment, the database system is further configured to store the determined behavior template library associated with the designated space; and the analysis server is specifically configured to access the database system, read the behavior templates of the first-class targets and the second-class targets from the behavior template library associated with the designated space, and screen the first-class targets and the second-class targets out of the targets in the video according to the behavior templates read;
or the analysis server is further configured to store the determined behavior template library associated with the designated space, and is specifically configured to read the behavior templates of the first-class targets and the second-class targets from the behavior template library associated with the designated space, and to screen the first-class targets and the second-class targets out of the targets in the video according to the behavior templates read.
In this embodiment, on the basis of the system shown in Fig. 1, an embodiment of the present invention also provides a behavior analysis system. Compared with the system shown in Fig. 1, in the system shown in Fig. 2 the capture devices and the analysis servers connected with them may be installed in multiple designated spaces or in the areas those spaces adjoin, for example in a scene arranged across multiple shops: matching capture devices are set in each designated space, and an analysis server is arranged in each designated space or in the area it adjoins. The system further includes a background server, and each analysis server is connected with the terminal device and the database system respectively via the background server;
The analysis server is specifically configured to determine the behavior discrimination result of each behavior segment according to the behavior data of each behavior segment of the targets in the video and the behavior analysis model; to screen second-class targets out of the targets in the video according to the degree of matching between the behavior discrimination results of the behavior segments and the behavior templates of the second-class targets; and to send the behavior discrimination results of the behavior segments of the targets in the video to the background server;
The background server is configured to screen first-class targets out of the targets in the video according to the degree of matching between the behavior discrimination results of the behavior segments and the behavior templates of the first-class targets.
A possible concrete hardware structure of the background server in this embodiment, as shown in Fig. 6, includes: at least one processor 211, such as a CPU, at least one network interface 214 or other user interface 213, a memory 215 and at least one communication bus 212. The communication bus 212 is used to realize connection and communication between these components. Optionally, the user interface 213 includes a display, a keyboard or a pointing device (for example a mouse, a trackball, a touch pad or a touch-sensitive display screen). The memory 215 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory. The memory 215 may optionally include at least one storage device located remotely from the aforementioned processor 211.
In some embodiments, the memory 215 stores the following elements, executable modules or data structures, or a subset or superset of them: an operating system 2151, containing various system programs, for realizing various basic services and processing hardware-based tasks; and application programs 2152, containing various application programs, for realizing various application services.
As shown in Fig. 7, the application programs 1152 of the analysis server include, but are not limited to, a pre-processing module 71; the analysis server itself can be as shown in Fig. 4. The pre-processing module 71 identifies targets in the video from the acquired video data, extracts tracking results for the targets in the video, and obtains behavior data of the targets in the video in each behavior segment according to the tracking results. The pre-processing module 71 specifically includes: a target detection and tracking sub-module 711, a target behavior analysis sub-module 712 and a second analysis sub-module 713. It should be noted that the analysis servers in different designated spaces can use the same architecture.
The target detection and tracking sub-module 711 distinguishes the background region and the moving regions in the video from the acquired video data; screens human-motion regions out of the moving regions obtained, as the identified targets; tracks the identified targets to obtain motion trajectories; and records the tracking results of the obtained motion trajectories, so as to find the moving targets in the video sequence, track each moving target and obtain the motion trajectory of each target.
The target behavior analysis sub-module 712 cuts the behavior segments of the targets in the video into sub-behavior segments according to the overall speed and motion vectors of the targets, a sub-behavior segment being a time interval in which the behavior content of the behavior target is fast walking, complete stillness, or standing still but with trunk movement; and, for a sub-behavior segment, extracts the motion vectors of its image frames, samples and fuses the motion vectors with the corresponding image frames to obtain the data representation of each image frame, and fuses the data representations of the image frames to obtain the behavior data of the sub-behavior segment.
The second analysis sub-module 713 screens second-class targets, such as sales staff, out of the targets in the video according to the degree of matching between the behavior discrimination results of the behavior segments and the behavior templates of the second-class targets; obtains the global behavior data of each second-class target from the behavior data of each second-class target in each behavior segment; and sends the results to the background server.
The application programs 2152 of the background server include, but are not limited to, an aggregate analysis module 72, which further derives and summarizes the global behavior data of sales staff and customers according to the behavior data output by the pre-processing module 71. The aggregate analysis module 72 specifically includes: a first analysis sub-module 721 and a data summary sub-module 722.
The first analysis sub-module 721 screens first-class targets, such as customers, out of the targets in the video according to the degree of matching between the behavior discrimination results of the behavior segments and the behavior templates of the first-class targets.
The data summary sub-module 722 corrects the obtained global behavior data of the first-class targets according to the obtained global behavior data of the second-class targets.
The behavior analysis system provided by this embodiment of the present invention uses intelligent video analysis to provide a scheme for analyzing target behavior, for example the system shown in Figure 1, which can be arranged in a single designated space, and the system shown in Figure 2, which can be arranged across multiple designated spaces. In the system shown in Figure 2, the behavior of each target in each behavior section in every designated space can be aggregated and analyzed to determine the first-class and second-class targets, the first-class targets can be deduplicated, and the global behavior data of each target can be obtained. The data on a target's various behaviors are obtained from the video, the targets are deduplicated, and the global behavior data are derived from each target's movement trajectory. Because trajectory analysis only needs to distinguish background from dynamic areas, there is no need to deploy cameras of high resolution and definition, which lowers the hardware configuration requirements. Compared with existing analyses aimed at people flows and crowds, this embodiment can analyze and deduplicate the movement trajectory of a single target and thereby infer the target's intention more accurately; for example, by obtaining a customer's behavioral data (including interactions with goods and with sales staff) and deduplicating customers, a customer's purchase intention can be analyzed effectively. The scheme applies equally to scenarios such as a single store or multiple stores, obtaining both short-term and longer-term behavioral data of customers, so that the acquired customer behavioral data are rich and accurate. This improves the accuracy of the acquired behavioral data and of the analysis based on it while saving cost.
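To illustrate the trajectory-based deduplication mentioned above, the sketch below merges trajectories whose end and start points are close together in time and space, on the assumption that they belong to the same person whose track was briefly broken. The gap limits are illustrative; in a multi-space deployment a cross-camera re-identification step would be needed instead, which is beyond this sketch.

```python
def deduplicate_targets(trajectories, max_frame_gap=50, max_dist=60.0):
    """trajectories: dict target_id -> non-empty list of (frame_idx, cx, cy),
    each sorted by frame. Returns merged trajectories keyed by the surviving id."""
    ids = sorted(trajectories, key=lambda t: trajectories[t][0][0])
    merged, consumed = {}, set()
    for i, tid in enumerate(ids):
        if tid in consumed:
            continue
        track = list(trajectories[tid])
        for other in ids[i + 1:]:
            if other in consumed:
                continue
            f_end, x_end, y_end = track[-1]
            f_start, x_start, y_start = trajectories[other][0]
            close_in_time = 0 <= f_start - f_end <= max_frame_gap
            close_in_space = ((x_start - x_end) ** 2 +
                              (y_start - y_end) ** 2) ** 0.5 <= max_dist
            if close_in_time and close_in_space:
                track.extend(trajectories[other])   # treat as the same person
                consumed.add(other)
        merged[tid] = track
    return merged
```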
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the device embodiments are substantially similar to the method embodiments, their description is relatively brief, and the relevant parts can be found in the description of the method embodiments. The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.

Claims (10)

  1. A behavior analysis method, characterized by comprising:
    Obtaining the video data captured by a capture apparatus set in a designated space;
    Identifying targets in the video according to the acquired video data, extracting the tracking results of the targets in the video, and obtaining, according to the tracking results, the behavioral data of the targets in the video in each behavior section;
    Filtering out first-class targets and second-class targets from the targets in the video according to the behavioral data of the targets in the video in each behavior section, and performing deduplication processing on the filtered-out first-class targets;
    Obtaining the global behavior data of each deduplicated first-class target according to the behavioral data of the deduplicated first-class targets in each behavior section.
  2. The method according to claim 1, characterized by further comprising:
    Determining the behavior template library associated with the designated space;
    wherein filtering out first-class targets and second-class targets from the targets in the video according to the behavioral data of the targets in the video in each behavior section comprises:
    Reading the behavior templates of the first-class target and the second-class target respectively from the behavior template library associated with the designated space;
    Filtering out the first-class targets and the second-class targets from the targets in the video according to the read behavior templates.
  3. The method according to claim 1 or 2, characterized in that identifying targets in the video according to the acquired video data comprises:
    Distinguishing background areas and moving regions in the video according to the acquired video data;
    Screening human motion regions from the distinguished moving regions as the identified targets;
    Tracking the identified targets to obtain movement trajectories, and recording the tracking results of the resulting movement trajectories, wherein the tracking results comprise: tracked targets, behavior sections, and behavior content; a tracked target comprises a moving subject among the identified targets, a behavior section comprises the time section over which the tracked target is completely tracked, and the behavior content comprises the continuous motion process of the moving subject.
  4. The method according to claim 3, characterized in that extracting the tracking results of the targets in the video and obtaining, according to the tracking results, the behavioral data of the targets in the video in each behavior section comprises:
    Cutting the behavior sections of a target in the video into sub-behavior sections according to the overall speed and motion vectors of the target, wherein a sub-behavior section comprises a time section in which the behavior content of the tracked target is fast walking, complete stillness, or being stationary in place but with torso activity;
    For each sub-behavior section: extracting the motion vectors of the image frames in the sub-behavior section, sampling and fusing the motion vectors with the corresponding image frames to obtain a data representation of each image frame, and fusing the data representations of the image frames to obtain the behavioral data of the sub-behavior section.
  5. The method according to claim 2, characterized in that filtering out first-class targets and second-class targets from the targets in the video according to the behavioral data of the targets in the video in each behavior section comprises:
    Determining a behavior discrimination result for each behavior section according to the behavioral data of each behavior section of the targets in the video and a behavior analysis model;
    Filtering out the first-class targets and the second-class targets from the targets in the video according to the degree to which the behavior discrimination results of the behavior sections match the behavior templates of the first-class target and the second-class target.
  6. The method according to claim 5, characterized by further comprising:
    Obtaining the global behavior data of each second-class target according to the behavioral data of each second-class target in each behavior section;
    Correcting the obtained global behavior data of the first-class targets according to the obtained global behavior data of the second-class targets.
  7. A behavior analysis system, characterized in that the system comprises: a capture apparatus, an analysis server connected with the capture apparatus, a terminal device connected with the analysis server, and a database system connected with the analysis server;
    The capture apparatus is arranged in a designated space and is used for capturing video data of the designated space;
    The analysis server is used for obtaining the video data captured by the capture apparatus; identifying targets in the video according to the acquired video data, extracting the tracking results of the targets in the video, and obtaining, according to the tracking results, the behavioral data of the targets in the video in each behavior section; filtering out first-class targets and second-class targets from the targets in the video according to the behavioral data of the targets in the video in each behavior section, and performing deduplication processing on the filtered-out first-class targets; and obtaining, according to the behavioral data of the deduplicated first-class targets in each behavior section, the global behavior data of each deduplicated first-class target, uploading the global behavior data to the database system, and sending the global behavior data to the terminal device;
    The terminal device is used for displaying the global behavior data;
    The database system is used for storing the global behavior data uploaded by the analysis server.
  8. The system according to claim 7, characterized in that the database system is further used for storing the determined behavior template library associated with the designated space;
    The analysis server is specifically used for accessing the database system, reading the behavior templates of the first-class target and the second-class target respectively from the behavior template library associated with the designated space, and filtering out the first-class targets and the second-class targets from the targets in the video according to the read behavior templates;
    Or the analysis server is further used for storing the determined behavior template library associated with the designated space, and is specifically used for reading the behavior templates of the first-class target and the second-class target respectively from the behavior template library associated with the designated space, and filtering out the first-class targets and the second-class targets from the targets in the video according to the read behavior templates.
  9. The system according to claim 7 or 8, characterized in that the system further comprises a background server, wherein the analysis server is connected with the terminal device and the database system respectively via the background server;
    The analysis server is specifically used for determining a behavior discrimination result for each behavior section according to the behavioral data of each behavior section of the targets in the video and a behavior analysis model; filtering out second-class targets from the targets in the video according to the degree to which the behavior discrimination results of the behavior sections match the behavior template of the second-class target; and sending the behavior discrimination results of the behavior sections of the targets in the video to the background server;
    The background server is used for filtering out first-class targets from the targets in the video according to the degree to which the behavior discrimination results of the behavior sections match the behavior template of the first-class target.
  10. The system according to claim 9, characterized in that the analysis server is further used for obtaining the global behavior data of each second-class target according to the behavioral data of each second-class target in each behavior section, and sending the global behavior data to the background server;
    The background server is further used for correcting the obtained global behavior data of the first-class targets according to the obtained global behavior data of the second-class targets.
CN201610860255.4A 2016-09-28 2016-09-28 Behavior analysis method and system Active CN107871111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610860255.4A CN107871111B (en) 2016-09-28 2016-09-28 Behavior analysis method and system

Publications (2)

Publication Number Publication Date
CN107871111A true CN107871111A (en) 2018-04-03
CN107871111B CN107871111B (en) 2021-11-26

Family

ID=61761385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610860255.4A Active CN107871111B (en) 2016-09-28 2016-09-28 Behavior analysis method and system

Country Status (1)

Country Link
CN (1) CN107871111B (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050071223A1 (en) * 2003-09-30 2005-03-31 Vivek Jain Method, system and computer program product for dynamic marketing strategy development
CN101021927A (en) * 2007-03-20 2007-08-22 中国移动通信集团江苏有限公司 Unified marketing supporting system based on analysis of user behaviour and habit and method thereof
CN101639922A (en) * 2008-07-31 2010-02-03 Nec九州软件株式会社 System and method for guest path analysis
US9053589B1 (en) * 2008-10-23 2015-06-09 Experian Information Solutions, Inc. System and method for monitoring and predicting vehicle attributes
CN102122346A (en) * 2011-02-28 2011-07-13 济南纳维信息技术有限公司 Video analysis-based physical storefront customer interest point acquisition method
US20130096860A1 (en) * 2011-10-12 2013-04-18 Fuji Xerox Co., Ltd. Information processing apparatus, information processing method, and computer readable medium storing program
CN102682397A (en) * 2012-05-11 2012-09-19 北京吉亚互联科技有限公司 Advertising effect proving method and system of web portals
CN103839049A (en) * 2014-02-26 2014-06-04 中国计量学院 Double-person interactive behavior recognizing and active role determining method
CN104050239A (en) * 2014-05-27 2014-09-17 重庆爱思网安信息技术有限公司 Correlation matching analyzing method among multiple objects
US20150345942A1 (en) * 2014-05-31 2015-12-03 3Vr Security, Inc. Calculation the duration time in a confined space
CN104199903A (en) * 2014-08-27 2014-12-10 上海熙菱信息技术有限公司 Vehicle data query system and method based on path correlation
US20160085297A1 (en) * 2014-09-22 2016-03-24 Fuji Xerox Co., Ltd. Non-transitory computer readable medium, information processing apparatus, and position conversion method
CN104298974A (en) * 2014-10-10 2015-01-21 北京工业大学 Human body behavior recognition method based on depth video sequence
CN104318578A (en) * 2014-11-12 2015-01-28 苏州科达科技股份有限公司 Video image analyzing method and system
CN105760646A (en) * 2014-12-18 2016-07-13 中国移动通信集团公司 Method and device for activity classification
CN105184258A (en) * 2015-09-09 2015-12-23 苏州科达科技股份有限公司 Target tracking method and system and staff behavior analyzing method and system
CN105205155A (en) * 2015-09-25 2015-12-30 珠海世纪鼎利科技股份有限公司 Big data criminal accomplice screening system and method
CN105678591A (en) * 2016-02-29 2016-06-15 北京时代云英科技有限公司 Video-analysis-based commercial intelligent operation decision-making support system and method
CN105809714A (en) * 2016-03-07 2016-07-27 广东顺德中山大学卡内基梅隆大学国际联合研究院 Track confidence coefficient based multi-object tracking method
CN105843919A (en) * 2016-03-24 2016-08-10 云南大学 Moving object track clustering method based on multi-feature fusion and clustering ensemble

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MUKESH A. ZAVERI et al.: "Robust Neural-Network-Based Data Association and Multiple Model-Based Tracking of Multiple Point Targets", IEEE TRANSACTIONS ON SYSTEMS *
DONG Honghui et al.: "Research on Railway Intrusion Detection Technology Based on Intelligent Video Analysis", China Railway Science *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509657A (en) * 2018-04-27 2018-09-07 深圳爱酷智能科技有限公司 Data distribute store method, equipment and computer readable storage medium
CN108830644A (en) * 2018-05-31 2018-11-16 深圳正品创想科技有限公司 A kind of unmanned shop shopping guide method and its device, electronic equipment
CN108921645A (en) * 2018-06-07 2018-11-30 深圳码隆科技有限公司 A kind of commodity purchasing determination method, device and user terminal
WO2019233098A1 (en) * 2018-06-07 2019-12-12 深圳码隆科技有限公司 Method and apparatus for determining commodity purchase, and user terminal
CN109711320A (en) * 2018-12-24 2019-05-03 兴唐通信科技有限公司 A kind of operator on duty's unlawful practice detection method and system
CN111524164A (en) * 2020-04-21 2020-08-11 北京爱笔科技有限公司 Target tracking method and device and electronic equipment
CN111524164B (en) * 2020-04-21 2023-10-13 北京爱笔科技有限公司 Target tracking method and device and electronic equipment
CN111563438A (en) * 2020-04-28 2020-08-21 厦门市美亚柏科信息股份有限公司 Target duplication eliminating method and device for video structuring
CN111563438B (en) * 2020-04-28 2022-08-12 厦门市美亚柏科信息股份有限公司 Target duplication eliminating method and device for video structuring
CN111476202A (en) * 2020-04-30 2020-07-31 杨九妹 User behavior analysis method and system of financial institution security system and robot

Also Published As

Publication number Publication date
CN107871111B (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN107871111A (en) A kind of behavior analysis method and system
Barris et al. A review of vision-based motion analysis in sport
CN106776619B (en) Method and device for determining attribute information of target object
US20160328604A1 (en) Systems and methods of monitoring activities at a gaming venue
CN103988232B (en) Motion manifold is used to improve images match
CN108830251A (en) Information correlation method, device and system
Patruno et al. People re-identification using skeleton standard posture and color descriptors from RGB-D data
CN106663196A (en) Computerized prominent person recognition in videos
CN106028134A (en) Detect sports video highlights for mobile computing devices
CN102122346A (en) Video analysis-based physical storefront customer interest point acquisition method
CN105608419A (en) Passenger flow video detection and analysis system
CN110532978A (en) Storage management method, device, equipment and storage medium
JP2003087771A (en) Monitoring system and monitoring method
Majd et al. Impact of machine learning on improvement of user experience in museums
Meng et al. A video information driven football recommendation system
CN113326816A (en) Offline customer behavior identification method, system, storage medium and terminal
Rahimian et al. Optical tracking in team sports: A survey on player and ball tracking methods in soccer and other team sports
US20210334758A1 (en) System and Method of Reporting Based on Analysis of Location and Interaction Between Employees and Visitors
Elharrouss et al. Mhad: multi-human action dataset
JP2017130061A (en) Image processing system, image processing method and program
Othman et al. Challenges and Limitations in Human Action Recognition on Unmanned Aerial Vehicles: A Comprehensive Survey.
CN110246280B (en) Human-cargo binding method and device, computer equipment and readable medium
CN112131477A (en) Library book recommendation system and method based on user portrait
Ding et al. A systematic survey of data mining and big data in human behavior analysis: Current datasets and models
Zeng et al. Deep learning approach to automated data collection and processing of video surveillance in sports activity prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 210000, 1-5 story, Jinshan building, 8 Shanxi Road, Nanjing, Jiangsu.

Applicant after: SUNING.COM Co.,Ltd.

Address before: 210042 Suning Headquarters, No. 1 Suning Avenue, Xuanwu District, Nanjing City, Jiangsu Province

Applicant before: SUNING COMMERCE GROUP Co.,Ltd.

GR01 Patent grant