CN111639968B - Track data processing method, track data processing device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111639968B
CN111639968B (application CN202010448982.6A)
Authority
CN
China
Prior art keywords
target
behavior
user
positioning information
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010448982.6A
Other languages
Chinese (zh)
Other versions
CN111639968A (en)
Inventor
石楷弘
陈志博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010448982.6A priority Critical patent/CN111639968B/en
Publication of CN111639968A publication Critical patent/CN111639968A/en
Application granted granted Critical
Publication of CN111639968B publication Critical patent/CN111639968B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0281Customer communication at a business location, e.g. providing product or service information, consulting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/48Matching video sequences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • General Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Economics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Navigation (AREA)

Abstract

The embodiment of the application discloses a track data processing method, a track data processing device, a computer device and a storage medium, where the track data processing method can be applied to accurate recommendation and includes the following steps: acquiring an original behavior track of a target user, the original behavior track being obtained by performing pedestrian re-identification processing for the target user on user images acquired in a behavior occurrence scene within a behavior time period, where the behavior time period comprises N behavior moments and N is a positive integer; acquiring a target positioning information set of the target user in the behavior occurrence scene, the target positioning information set comprising N pieces of target positioning information, each piece of target positioning information representing the position information of the target user at one behavior moment; and adjusting the original behavior track according to the target positioning information set to obtain a target behavior track of the target user. By adopting the application, the accuracy of the behavior track can be improved.

Description

Track data processing method, track data processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a track data processing method, a track data processing device, a computer device, and a storage medium.
Background
Accurate recommendation refers to recommending different business data to each user according to the user's interests, age and gender. Accurate recommendation is already relatively mature in the online e-commerce field, but its application in offline physical stores has only just begun.
To achieve accurate offline recommendation in physical scenes such as supermarkets and shopping malls, the behavior track of a user in the physical scene needs to be accurately acquired. At present, pedestrian re-identification technology can be used to acquire the behavior track of a user in a physical scene, but different users may be judged to be the same user because of similar clothing, and the same user may be judged to be two users because of differences between front and back appearance, so the accuracy of a behavior track acquired based on pedestrian re-identification alone is low.
Disclosure of Invention
The embodiment of the application provides a track data processing method, a track data processing device, a computer device and a storage medium, which can improve the accuracy of a behavior track.
In one aspect, an embodiment of the present application provides a track data processing method, including:
acquiring an original behavior track of a target user; the original behavior track is obtained by carrying out pedestrian re-identification processing on the target user on the user image acquired in the behavior occurrence scene within a behavior time period, wherein the behavior time period comprises N behavior moments, and N is a positive integer;
Acquiring a target positioning information set of the target user in the behavior occurrence scene; the target positioning information set comprises N pieces of target positioning information, and one piece of target positioning information represents the position information of the target user at one action moment;
and adjusting the original behavior track according to the target positioning information set to obtain a target behavior track of the target user.
The obtaining the original behavior track of the target user includes:
acquiring a user image set from a plurality of imaging devices; the plurality of imaging devices are installed at different positions in the behavior occurrence scene, and the user images in the user image set are acquired at different positions in the behavior occurrence scene by the imaging devices in the behavior time period;
selecting a plurality of target user images of the target user from the user image set;
acquiring first position information of imaging equipment corresponding to a target user image in the behavior occurrence scene, and acquiring generation time of the target user image;
and according to the generation time of each target user image, combining the first position information corresponding to each of the target user images into the original behavior track.
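As an illustrative sketch (not part of the patent; all names and types are hypothetical), the combination step above can be expressed as sorting the matched images by generation time and chaining each image's camera position into the original behavior track:

```python
from dataclasses import dataclass

@dataclass
class UserImage:
    camera_position: tuple  # first position information of the imaging device
    generated_at: int       # generation time of the user image

def build_original_track(target_user_images):
    """Order the matched images by generation time and chain each image's
    camera position into the original behavior track."""
    ordered = sorted(target_user_images, key=lambda img: img.generated_at)
    return [(img.generated_at, img.camera_position) for img in ordered]
```

Each track point here pairs a behavior moment with the position of the camera that captured the user at that moment.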
Wherein the plurality of imaging devices include an entrance imaging device installed at an entrance position of the behavior occurrence scene;
the selecting a plurality of target user images associated with the target user from the set of user images includes:
acquiring a user image of the target user from the entrance imaging device, and taking the user image acquired from the entrance imaging device as an image to be retrieved;
and searching a plurality of user images matched with the image to be searched in the user image set, and taking the searched plurality of user images as target user images.
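A minimal sketch of the retrieval step, assuming an appearance feature vector has already been extracted for each image and that matching is done by cosine similarity (the patent does not fix the matching metric; the threshold and all names are assumptions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two appearance feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve_target_user_images(query_feature, gallery, threshold=0.8):
    """Keep every gallery image whose feature is similar enough to the
    feature of the entrance-camera image to be retrieved."""
    return [image_id for image_id, feature in gallery
            if cosine_similarity(query_feature, feature) >= threshold]
```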
The obtaining the target positioning information set of the target user in the behavior occurrence scene includes:
acquiring the target signal intensity of a target terminal carried by the target user at the action moment;
determining target positioning information of the target user according to the target signal intensity;
and combining N pieces of target positioning information into the target positioning information set.
Wherein the plurality of target user images and the target terminal have a mapping relationship; the method further includes:
Dividing the user image set into a plurality of unit image sets; one unit image set corresponds to one user, the plurality of target user images belong to a target unit image set, and the target unit image set is one unit image set in the plurality of unit image sets;
extracting user images with the generation time equal to the behavior time from the unit image set, and taking the extracted user images as auxiliary user images; the distance difference between the first position information corresponding to the auxiliary user image and the target position information corresponding to the behavior moment is smaller than a distance difference threshold;
counting the number of auxiliary images of the auxiliary user images contained in each unit image set;
and if the auxiliary image quantity corresponding to the target unit image set is the maximum quantity of the auxiliary image quantities corresponding to all the unit image sets, determining that the target user images and the target terminal have a mapping relation.
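The counting rule above can be sketched as follows; the per-user unit-set layout, the distance threshold value and all names are illustrative assumptions:

```python
import math

def auxiliary_count(unit_images, target_fixes, distance_threshold):
    """Number of images in one unit image set whose generation time equals a
    behavior moment and whose camera position lies within the distance
    threshold of that moment's terminal positioning fix."""
    count = 0
    for moment, position in unit_images:
        fix = target_fixes.get(moment)
        if fix is not None and math.dist(position, fix) < distance_threshold:
            count += 1
    return count

def has_mapping(unit_sets, target_set_id, target_fixes, distance_threshold=5.0):
    """The target images map to the target terminal when the target unit
    image set holds the largest auxiliary-image count among all unit sets."""
    counts = {uid: auxiliary_count(images, target_fixes, distance_threshold)
              for uid, images in unit_sets.items()}
    return counts[target_set_id] == max(counts.values())
```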
The target signal intensity is the signal intensity of a wifi signal, and the target signal intensity is obtained from a wifi probe device;
the determining the target positioning information of the target user according to the target signal strength includes:
Determining the distance between the target terminal and the wifi probe equipment according to the target signal intensity;
and acquiring second position information of the wifi probe equipment in the behavior occurrence scene, and determining the target positioning information according to the second position information and the distance.
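The patent does not specify how signal strength is converted to distance; a common assumption is the log-distance path-loss model, sketched here with hypothetical constants:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.5):
    """Estimate the terminal-to-probe distance (metres) from the measured
    Wi-Fi signal strength with the log-distance path-loss model:
        rssi = tx_power - 10 * n * log10(d)
    tx_power_dbm is the RSSI expected at 1 m; both constants are
    environment-dependent assumptions, not values from the patent."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```

The target positioning information would then be derived by combining this distance with the probe's second position information (for example, intersecting the distances reported by several probes).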
Wherein the target signal strength is a signal strength of a bluetooth signal, the target signal strength being obtained from a bluetooth probe device;
the determining the target positioning information of the target user according to the target signal strength includes:
acquiring a plurality of position fingerprints, wherein each position fingerprint comprises a fingerprint signal intensity and fingerprint position information;
determining a target location fingerprint from the plurality of location fingerprints that matches the target signal strength; the difference between the fingerprint signal intensity in the target position fingerprint and the target signal intensity is smaller than a signal intensity difference threshold;
and determining the target positioning information according to the fingerprint position information in the target position fingerprint.
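A minimal sketch of the fingerprint-matching step, with a hypothetical fingerprint layout and signal-strength-difference threshold:

```python
def match_location_fingerprint(target_rssi, fingerprints, strength_threshold=3.0):
    """Return the fingerprint position whose recorded signal strength is
    closest to the measured Bluetooth strength, provided the gap is below
    the signal-strength-difference threshold; otherwise return None."""
    best = min(fingerprints, key=lambda fp: abs(fp["rssi"] - target_rssi))
    if abs(best["rssi"] - target_rssi) < strength_threshold:
        return best["position"]
    return None
```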
The original behavior track comprises a first behavior track and a second behavior track;
the step of adjusting the original behavior track according to the target positioning information set to obtain a target behavior track of the target user comprises the following steps:
Selecting a behavior track to be adjusted, which is matched with the target positioning information set, from the first behavior track and the second behavior track;
and adjusting the behavior track to be adjusted according to the target positioning information set to obtain the target behavior track.
The selecting the behavior track to be adjusted matched with the target positioning information set from the first behavior track and the second behavior track comprises the following steps:
acquiring a first reference position information set of the target user at the N behavior moments from the first behavior track, and acquiring a second reference position information set of the target user at the N behavior moments from the second behavior track;
determining a first degree of overlap between the first set of reference location information and the set of target location information, and determining a second degree of overlap between the second set of reference location information and the set of target location information;
if the first contact ratio is larger than a contact ratio threshold value and the second contact ratio is smaller than the contact ratio threshold value, determining the first behavior track as the behavior track to be adjusted;
and if the first contact ratio is smaller than the contact ratio threshold value and the second contact ratio is larger than the contact ratio threshold value, determining the second behavior track as the behavior track to be adjusted.
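The coincidence-degree selection can be sketched as below; the distance tolerance, the 0.5 coincidence threshold, and treating an empty reference position as a miss are illustrative assumptions:

```python
import math

def coincidence_degree(reference_positions, target_fixes, tolerance=1.0):
    """Fraction of behavior moments at which the track's reference position
    agrees (within `tolerance`) with the terminal positioning fix; moments
    whose reference position is empty (missing) count as misses."""
    hits = sum(1 for moment, fix in target_fixes.items()
               if reference_positions.get(moment) is not None
               and math.dist(reference_positions[moment], fix) <= tolerance)
    return hits / len(target_fixes)

def pick_track_to_adjust(first_refs, second_refs, target_fixes, threshold=0.5):
    """Select the candidate track whose coincidence degree exceeds the
    threshold while the other track's coincidence degree falls below it."""
    c1 = coincidence_degree(first_refs, target_fixes)
    c2 = coincidence_degree(second_refs, target_fixes)
    if c1 > threshold and c2 < threshold:
        return "first"
    if c1 < threshold and c2 > threshold:
        return "second"
    return None  # ambiguous: neither or both exceed the threshold
```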
Wherein the first behavioral locus includes a plurality of original position information, and the first reference position information set includes N first reference position information;
the obtaining the first reference position information set of the target user at the N behavior moments from the first behavior track includes:
acquiring an original generation moment corresponding to each original position information;
if the plurality of original generation moments comprise the action moment, acquiring original position information corresponding to the same original generation moment as the action moment from the first action track, and taking the acquired original position information as first reference position information of the target user at the action moment;
and if the plurality of original generation moments do not comprise the action moment, setting the first reference position information of the target user at the action moment to be empty.
The behavior track to be adjusted comprises a plurality of pieces of position information to be adjusted;
the step of adjusting the behavior track to be adjusted according to the target positioning information set to obtain the target behavior track comprises the following steps:
setting polling priority for each piece of target positioning information, and determining target positioning information to be processed for current polling from the N pieces of target positioning information according to the polling priority;
Adjusting the plurality of pieces of position information to be adjusted according to the target positioning information to be processed;
and stopping polling when each piece of target positioning information is determined to be the target positioning information to be processed, and combining the adjusted plurality of pieces of position information to be adjusted into the target behavior track.
Wherein the adjusting the plurality of position information to be adjusted according to the target positioning information to be processed includes:
acquiring a to-be-adjusted generation moment corresponding to each piece of to-be-adjusted position information and acquiring a to-be-processed behavior moment corresponding to the to-be-processed target positioning information;
if the plurality of to-be-adjusted generating moments comprise the to-be-processed behavior moment, taking the to-be-adjusted generating moment which is the same as the to-be-processed behavior moment as a target to-be-adjusted generating moment, and replacing to-be-adjusted position information of the target to-be-adjusted generating moment with the to-be-processed target positioning information;
and if the plurality of generating moments to be adjusted do not comprise the behavior moment to be processed, adding the target positioning information to be processed to the plurality of position information to be adjusted.
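The replace-or-add polling rule can be sketched as follows; representing a track as a mapping from generation moment to position and polling the fixes in chronological priority are assumptions for illustration:

```python
def adjust_track(track_points, target_fixes):
    """Poll every target positioning fix: replace the to-be-adjusted
    position at a matching behavior moment, or add the fix as a new
    point when the track has no point at that moment."""
    adjusted = dict(track_points)         # generation moment -> position
    for moment in sorted(target_fixes):   # assumed polling priority: by time
        adjusted[moment] = target_fixes[moment]
    return sorted(adjusted.items())
```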
Wherein the method further includes:
acquiring a face image of the target user;
acquiring a network card address of a target terminal carried by the target user;
And combining the face image, the network card address and the target behavior track into a processing result, and outputting the processing result.
Wherein the method further includes:
and analyzing the behavior characteristics of the target user according to the target behavior track, and outputting the recommended service object of the target user according to the behavior characteristics.
In one aspect, an embodiment of the present application provides a track data processing apparatus, including:
the first acquisition module is used for acquiring the original behavior track of the target user; the original behavior track is obtained by carrying out pedestrian re-identification processing on the target user on the user image acquired in the behavior occurrence scene within a behavior time period, wherein the behavior time period comprises N behavior moments, and N is a positive integer;
the second acquisition module is used for acquiring a target positioning information set of the target user in the behavior occurrence scene; the target positioning information set comprises N pieces of target positioning information, and one piece of target positioning information represents the position information of the target user at one action moment;
and the adjusting module is used for adjusting the original behavior track according to the target positioning information set to obtain a target behavior track of the target user.
Wherein, the first acquisition module includes:
a first acquisition unit configured to acquire a set of user images from a plurality of imaging devices; the plurality of imaging devices are installed at different positions in the behavior occurrence scene, and the user images in the user image set are acquired at different positions in the behavior occurrence scene by the imaging devices in the behavior time period;
a first selection unit, configured to select a plurality of target user images of the target user from the user image set;
the first obtaining unit is further configured to obtain first position information of an imaging device corresponding to a target user image in the behavior occurrence scene, obtain a generation time of the target user image, and combine first position information corresponding to each of the plurality of target user images into the original behavior track according to the generation time of each of the target user images.
Wherein the plurality of imaging devices include an entrance imaging device installed at an entrance position of the behavior occurrence scene;
the first selection unit is specifically configured to acquire a user image of the target user from the entrance imaging device, take the user image acquired from the entrance imaging device as an image to be retrieved, retrieve a plurality of user images matching the image to be retrieved in the user image set, and take the retrieved plurality of user images as target user images.
Wherein, the second acquisition module includes:
the second acquisition unit is used for acquiring the target signal intensity of the target terminal carried by the target user at the action moment;
a determining unit, configured to determine target positioning information of the target user according to the target signal strength;
the second obtaining unit is further configured to combine N pieces of target positioning information into the target positioning information set.
Wherein the plurality of target user images and the target terminal have a mapping relationship;
the apparatus further comprises:
the judging module is used for dividing the user image set into a plurality of unit image sets; one unit image set corresponds to one user, the plurality of target user images belong to a target unit image set, and the target unit image set is one unit image set in the plurality of unit image sets; and extracting the user image with the generation moment equal to the action moment from the unit image set, and taking the extracted user image as an auxiliary user image; the distance difference between the first position information corresponding to the auxiliary user image and the target position information corresponding to the behavior moment is smaller than a distance difference threshold; and counting the number of auxiliary images of the auxiliary user images contained in each unit image set, and if the number of auxiliary images corresponding to the target unit image set is the maximum number of the plurality of auxiliary images corresponding to all unit image sets, determining that the plurality of target user images and the target terminal have a mapping relation.
The target signal intensity is the signal intensity of a wifi signal, and the target signal intensity is obtained from a wifi probe device;
the determining unit is specifically configured to determine, according to the target signal strength, a distance between the target terminal and the wifi probe device, obtain second location information of the wifi probe device in the behavior occurrence scene, and determine, according to the second location information and the distance, the target positioning information.
Wherein the target signal strength is a signal strength of a bluetooth signal, the target signal strength being obtained from a bluetooth probe device;
the determining unit is specifically configured to obtain a plurality of location fingerprints, where each location fingerprint includes a fingerprint signal strength and a fingerprint location information; and determining a target location fingerprint from the plurality of location fingerprints that matches the target signal strength; and determining the target positioning information according to the fingerprint position information in the target position fingerprint, wherein the difference between the fingerprint signal intensity in the target position fingerprint and the target signal intensity is smaller than a signal intensity difference threshold value.
The original behavior track comprises a first behavior track and a second behavior track;
The adjustment module comprises:
the second selection unit is used for selecting a behavior track to be adjusted, which is matched with the target positioning information set, from the first behavior track and the second behavior track;
and the adjusting unit is used for adjusting the behavior track to be adjusted according to the target positioning information set to obtain the target behavior track.
Wherein the second selecting unit includes:
the first acquisition subunit is used for acquiring a first reference position information set of the target user at the N behavior moments from the first behavior track;
a second obtaining subunit, configured to obtain a second set of reference position information of the target user at the N behavior moments from the second behavior trace, determine a first degree of coincidence between the first set of reference position information and the target positioning information set, and determine a second degree of coincidence between the second set of reference position information and the target positioning information set, and if the first degree of coincidence is greater than a threshold value of degree of coincidence and the second degree of coincidence is less than the threshold value of degree of coincidence, determine the first behavior trace as the behavior trace to be adjusted; and if the first contact ratio is smaller than the contact ratio threshold value and the second contact ratio is larger than the contact ratio threshold value, determining the second behavior track as the behavior track to be adjusted.
Wherein the first behavioral locus includes a plurality of original position information, and the first reference position information set includes N first reference position information;
the first obtaining subunit is specifically configured to obtain an original generation time corresponding to each original position information, and if a plurality of original generation times include a behavior time, obtain, from the first behavior track, original position information corresponding to an original generation time that is the same as the behavior time, and use the obtained original position information as first reference position information of the target user at the behavior time; and if the plurality of original generation moments do not comprise the action moment, setting the first reference position information of the target user at the action moment to be empty.
The behavior track to be adjusted comprises a plurality of pieces of position information to be adjusted;
the adjusting unit includes:
a setting subunit, configured to set a polling priority for each piece of target positioning information, and determine target positioning information to be processed for current polling from the N pieces of target positioning information according to the polling priority;
an adjusting subunit, configured to adjust the plurality of position information to be adjusted according to the target positioning information to be processed;
The setting subunit is further configured to stop polling when each piece of target positioning information is determined to be target positioning information to be processed, and combine the adjusted plurality of pieces of position information to be adjusted into the target behavior track.
The adjustment subunit is specifically configured to obtain a to-be-adjusted generation time corresponding to each to-be-adjusted position information, and obtain a to-be-processed behavior time corresponding to the to-be-processed target positioning information, and if a plurality of to-be-adjusted generation times include the to-be-processed behavior time, take the to-be-adjusted generation time identical to the to-be-processed behavior time as a target to-be-adjusted generation time, and replace to-be-adjusted position information of the target to-be-adjusted generation time with the to-be-processed target positioning information; and if the plurality of to-be-adjusted generating moments do not include the to-be-processed behavior moment, adding the to-be-processed target positioning information to the plurality of to-be-adjusted position information.
Wherein the apparatus further includes:
and the third acquisition module is used for acquiring the face image of the target user, acquiring the network card address of the target terminal carried by the target user, combining the face image, the network card address and the target behavior track into a processing result, and outputting the processing result.
Wherein the apparatus further includes:
and the recommendation module is used for analyzing the behavior characteristics of the target user according to the target behavior track and outputting the recommended service object of the target user according to the behavior characteristics.
In one aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the computer program when executed by the processor causes the processor to perform the method in the foregoing embodiments.
An aspect of an embodiment of the present application provides a computer storage medium storing a computer program comprising program instructions which, when executed by a processor, perform the method of the above embodiments.
The method and device obtain the target behavior track by acquiring the original behavior track of the target user, acquiring the target positioning information set of the target user in the behavior occurrence scene, and adjusting the original behavior track based on the target positioning information set. According to the application, positioning information at a plurality of behavior moments is acquired and the original behavior track is adjusted based on this positioning information. Because the positioning information is more accurate than the position information in the original behavior track determined by pedestrian re-identification, the positioning information can correct the original behavior track, and the corrected target behavior track is more accurate than the original behavior track before correction. The accurate behavior track can in turn provide data support for accurate recommendation in the physical scene, improving recommendation accuracy.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the application, and a person skilled in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is a system architecture diagram for trace data processing according to an embodiment of the present application;
FIGS. 2a-2c are schematic diagrams of a scenario for track data processing according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of track data processing according to an embodiment of the present application;
fig. 4 is a schematic diagram of establishing a mapping relationship between a terminal and a user image according to an embodiment of the present application;
FIG. 5 is a hardware architecture diagram of trace data processing according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of a track data processing method according to an embodiment of the present application;
FIGS. 7a-7b are schematic diagrams illustrating adjustment of a behavior track according to an embodiment of the present application;
FIGS. 8a-8b are schematic diagrams illustrating adjustment of a behavior track according to an embodiment of the present application;
FIGS. 9a-9b are schematic diagrams illustrating adjustment of a behavior track according to an embodiment of the present application;
FIGS. 10a-10b are schematic diagrams illustrating adjustment of a behavior track according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a track data processing apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the application without inventive effort shall fall within the protection scope of the application.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline spanning a wide range of fields, covering both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems and mechatronics. Artificial intelligence software technologies mainly include directions such as computer vision, speech processing, natural language processing and machine learning/deep learning.
The scheme provided by the embodiments of the present application belongs to Computer Vision (CV) technology, which is a branch of artificial intelligence. Computer vision is the science of studying how to make machines "see"; more specifically, it refers to machine vision in which cameras and computers replace human eyes to recognize, track and measure targets.
The present application mainly relates to retrieving user images of the same user across devices so as to determine the user's positions at different moments in the same scene, and connecting these positions in series to trace the user's behavior.
Cloud technology (Cloud technology) is a general term for network technology, information technology, integration technology, management platform technology, application technology and the like based on the cloud computing business model. It can form a resource pool that is used on demand, flexibly and conveniently. Background services of technical network systems, such as video websites, picture websites and other portal websites, currently require a large amount of computing and storage resources. With the rapid development and application of the internet industry, each item may in the future have its own identification mark that needs to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong background system support, which can only be realized through cloud computing.
At present, cloud technologies are mainly divided into basic cloud technologies and cloud applications. Basic cloud technologies can be further subdivided into cloud computing, cloud storage, databases, big data and the like; cloud applications can be further subdivided into medical cloud, cloud internet of things, cloud security, cloud calling, private cloud, public cloud, hybrid cloud, cloud gaming, cloud education, cloud conferencing, cloud social networking, artificial intelligence cloud services and the like.
The data processing method of the application can relate to cloud computing and cloud storage under cloud technology:
cloud computing (cloud computing) is a computing model that distributes computing tasks across a resource pool formed by a large number of computers, enabling various application systems to acquire computing power, storage space and information services as needed. The network that provides the resources is referred to as the "cloud". From the user's perspective, resources in the cloud are infinitely expandable and can be acquired at any time, used on demand, expanded at any time and paid for according to use.
Because pedestrian re-identification involves retrieving the user images of a specific user from a large number of user images, it requires huge computing power and storage space. In the present application, the server can obtain sufficient computing power and storage space through cloud computing technology, and thereby obtain the original behavior trace of the target user.
Cloud storage (cloud storage) is a new concept extended and developed from the concept of cloud computing. A distributed cloud storage system (hereinafter referred to as a storage system) is a storage system that, through functions such as cluster applications, grid technology and distributed storage file systems, integrates a large number of storage devices of various types in a network (storage devices are also referred to as storage nodes) to work cooperatively via application software or application interfaces, and jointly provides data storage and service access functions externally.
In the application, the original behavior track of the target user and the target positioning information set can be stored on the cloud by the server through the cloud storage technology, and when the original behavior track needs to be adjusted, the original behavior track and the target positioning information set can be pulled from the cloud storage equipment so as to reduce the storage pressure of the server.
The track data processing method of the application can be applied to the following scenes: in order to realize accurate marketing of off-line stores, the original behavior track of the user in a mall or a supermarket is acquired based on pedestrian re-identification processing, a plurality of positioning information of the user in the mall or the supermarket is acquired based on an indoor positioning technology, and the original behavior track is adjusted based on the positioning information, so that a more accurate target behavior track is obtained. Subsequently, the behavior characteristics of the user can be determined based on the accurate target behavior track so as to realize accurate marketing.
In the present application, the collection and processing of relevant data (such as user images and position information) strictly comply with the requirements of relevant national laws and regulations when the embodiments are applied: the informed consent or separate consent of the personal information subject is obtained, and subsequent data use and processing are carried out within the scope authorized by laws, regulations and the personal information subject.
Fig. 1 is a system architecture diagram for track data processing according to an embodiment of the present application. The present application relates to a server 10d, a terminal device 10a and an imaging device cluster, where the imaging device cluster may include imaging device 10b, ..., imaging device 10c, and the like.
The imaging devices in the imaging device cluster acquire user images of a user at different positions and different moments, and the imaging device cluster sends these user images to the server 10d. The server 10d may then determine the original behavior trace of the user within a behavior time period and a behavior occurrence scene.
The terminal device 10a may acquire a set of target location information of the target user in the behavior time period and the behavior occurrence scene, and the terminal device may also send the set of target location information to the server 10d. The server 10d adjusts the original behavior trace according to the target positioning information to obtain a target behavior trace of the target user.
The server 10d shown in fig. 1 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Network, content delivery networks), and basic cloud computing services such as big data and artificial intelligence platforms.
The terminal device 10a shown in fig. 1 may be a smart phone, tablet, notebook, desktop computer, smart watch, or other smart device. The terminal device 10a, the imaging device cluster, and the server 10d may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
The following figures 2a-2c illustrate how the server determines the behavior trace of a user in a mall scenario. Referring to figures 2a-2b, which are schematic diagrams of a scenario of track data processing provided by an embodiment of the present application, the image 20a in fig. 2a is a plan of the mall. As can be seen from the mall plan image 20a, a plurality of cameras are installed in the mall; once a user enters a camera's coverage area, the camera captures a user image of the user, and the user's position at different moments can be determined from the generation timestamp of the user image and the position of the camera. The server may connect all the positions in series in chronological order to obtain the original behavior trace 20b of the user. As shown in fig. 2a, the original behavior trace 20b includes 6 positions, each corresponding to a different moment, and the time period of the original behavior trace 20b falls within the interval 08:00-10:20.
In addition to cameras, wifi probes can also be installed in the mall. A wifi probe periodically captures the wifi signal strength of the terminal device carried by a user; the positioning information of the terminal device can then be determined from the wifi signal strength and the position of the wifi probe in the mall, and the positioning information of the terminal device is the positioning information of the user at that moment. Determining a user's positioning information based on signal strength belongs to indoor positioning technology; besides the signal strength of wifi signals, the signal strength of Bluetooth signals can also be used, and the positioning information of a user at a certain moment can even be determined based on indoor positioning technologies such as ultrasound, RFID (radio frequency identification) or infrared.
Assume that the server obtains the set of location information 20c in the mall for the user in the 08:00-10:20 time period based on the wifi probe. As can be seen from the set of positioning information 20c, one piece of positioning information indicates the positioning of the user at one moment in time.
As can be seen from the original behavior trace 20b and the positioning information set 20c, the positioning information set 20c is sparser than the original behavior trace 20b, but it is also more accurate. The server can therefore adjust the original behavior trace 20b based on the positioning information set 20c, and the adjustment process is as follows:
As shown in fig. 2b, the positioning information set 20c contains 4 pieces of positioning information, and the server processes them in sequence. First, for the positioning information at time "08:22": since there is no 08:22 position information in the original behavior trace 20b, "08:22-(x7, y7)" can be directly added to the original behavior trace 20b.
Second, for the positioning information at time "09:10": since there is no 09:10 position information in the original behavior trace 20b, "09:10-(x8, y8)" can be directly added to the original behavior trace 20b.
Third, for the positioning information at time "09:40": there is 09:40 position information in the original behavior trace 20b, and the 09:40 position information in the original behavior trace 20b and the 09:40 positioning information in the positioning information set 20c are the same, namely (x4, y4); the 09:40 position information in the original behavior trace 20b can therefore be kept unchanged.
Finally, for the positioning information at time "10:00": there is 10:00 position information in the original behavior trace 20b, but the 10:00 position information in the original behavior trace 20b is (x5, y5) while the 10:00 positioning information in the positioning information set 20c is (x9, y9). That is, the two differ. Since the positioning information in the positioning information set 20c is more accurate than the position information in the original behavior trace 20b, the 10:00 positioning information in the positioning information set 20c replaces the 10:00 position information in the original behavior trace 20b; that is, the 10:00 position information in the original behavior trace 20b is modified from (x5, y5) to (x9, y9).
At this point the server has finished adjusting the original behavior trace 20b based on the positioning information set 20c; the adjusted original behavior trace is referred to as the target behavior trace. As shown in fig. 2b, after the adjustment is finished, the server obtains the target behavior trace 20d, which contains more positions, and more accurate ones, than the original behavior trace 20b.
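The adjustment steps above can be sketched in a few lines. The following is a minimal illustration, assuming a trace is represented as a mapping from an "HH:MM" moment to an (x, y) coordinate pair; the function name and coordinate values are illustrative, mirroring the hypothetical data of the scenario.

```python
def adjust_trace(original_trace, positioning_set):
    """Merge a sparse-but-accurate positioning set into the original trace.

    - If a behavior moment is missing from the trace, the positioning info is added.
    - If it exists but differs, the positioning info takes precedence.
    - If it exists and matches, it is kept unchanged.
    """
    target_trace = dict(original_trace)
    for moment, position in positioning_set.items():
        # A single assignment covers all three cases: add, replace, or keep.
        target_trace[moment] = position
    return target_trace


# Hypothetical data mirroring the scenario of fig. 2b.
original = {"08:00": (1, 1), "09:40": (4, 4), "10:00": (5, 5)}
positioning = {"08:22": (7, 7), "09:10": (8, 8), "09:40": (4, 4), "10:00": (9, 9)}

# "08:22" and "09:10" are added, "09:40" stays unchanged, "10:00" becomes (9, 9).
target = adjust_trace(original, positioning)
```

The rule that positioning information always prevails makes the three cases collapse into one assignment, which is why the merge is a single pass over the positioning set.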
As shown in fig. 2c, based on the target behavior trace 20d, it can be determined which counters in the mall the user passed during the 08:00-10:20 time period and how long the user stayed at each counter; based on this information, accurate commodity recommendations can then be provided to the user, improving recommendation accuracy.
The specific process of acquiring the original behavior trace (such as the original behavior trace 20b in the above embodiment), acquiring the target positioning information set (such as the positioning information set 20c in the above embodiment), and adjusting the original behavior trace according to the target positioning information set may refer to the embodiments corresponding to fig. 3-10b below.
Referring to fig. 3, a schematic flow chart of track data processing provided by the embodiment of the application is shown, and the scheme of the application can be applied to a server (for example, a background server of shopping software or a map server). The following embodiments describe embodiments with a server as an execution subject, and the track data processing may include the following steps:
Step S101, obtaining an original behavior track of a target user; the original behavior track is obtained by carrying out pedestrian re-identification processing on the target user on the user image acquired in the behavior occurrence scene within a behavior time period, wherein the behavior time period comprises N behavior moments, and N is a positive integer.
Specifically, a server (e.g., the server in the corresponding embodiment of fig. 2 a-2 b described above) obtains an original behavior trace (e.g., the original behavior trace 20b in the corresponding embodiment of fig. 2 a-2 c described above) of a currently pending user (referred to as a target user).
The original behavior trace is a moving trace of the target user in a behavior time period (e.g., 08:00-10:20 time period in the corresponding embodiment of fig. 2 a-2 c) and in a behavior occurrence scene (e.g., mall scene in the corresponding embodiment of fig. 2 a-2 c); the server acquires the original behavior track by performing pedestrian re-identification processing on the user image acquired in the behavior time period and the behavior occurrence scene with respect to the target user.
The person re-identification (Person re-identification, ReID) process uses computer vision technology to determine whether a specific pedestrian exists in an image or a video sequence, and then determines the pedestrian's appearance positions at different moments from the images containing that pedestrian. In the present application, the user images of the target user (referred to as target user images) are identified from the massive user images acquired in the behavior time period and the behavior occurrence scene; the appearance positions of the target user at different moments can then be determined from the generation moments of the target user images, and these appearance positions are connected in series to obtain the original behavior trace of the target user.
The following describes in detail how the server obtains the original behavior trace:
a plurality of imaging devices (for example, the imaging devices may be cameras, such as the cameras in the corresponding embodiments of fig. 2 a-2 c described above) are installed at different positions in the behavior occurrence scene, and the server acquires a set of user images from the plurality of imaging devices, the set of user images including a plurality of user images, and the plurality of user images being images acquired by the imaging devices at different positions in the behavior occurrence scene during a period of time.
The user image may be a physical image of the user, a face image of the user, or the like.
The plurality of imaging devices include an entrance imaging device installed at the entrance position of the behavior occurrence scene (the entrance imaging device may be a bullet camera), and a user image of the target user (referred to as the image to be retrieved) is acquired from the entrance imaging device. A plurality of user images matching the image to be retrieved are then retrieved from the user image set, where the image similarity between each retrieved user image and the image to be retrieved is greater than a similarity threshold. The server takes the retrieved images as target user images; the image to be retrieved is also a target user image. A target user image is a user image containing the target user.
In brief, in a behavior time period and in a behavior occurrence scene, a plurality of imaging devices acquire a massive user image set, and a target user image is retrieved from the user image set based on a pedestrian re-recognition technology of 'searching for images in a map'.
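The "search image by image" retrieval step can be sketched as follows, assuming each user image already has a feature vector produced by some ReID model (feature extraction itself is out of scope here); the function names, threshold value and feature vectors are all illustrative assumptions, not part of the original description.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_feature, gallery, threshold=0.8):
    """Keep gallery images whose similarity to the query exceeds the threshold.

    gallery: list of (image_id, feature_vector) pairs.
    """
    return [img_id for img_id, feat in gallery
            if cosine_similarity(query_feature, feat) > threshold]

# Hypothetical features: image "a" is close to the query, image "b" is not.
query = [1.0, 0.0, 0.2]
gallery = [("a", [0.9, 0.1, 0.2]), ("b", [0.0, 1.0, 0.0])]
matches = retrieve(query, gallery)
```

In practice the similarity measure and threshold depend on the ReID model used; the structure of the step, compare each gallery image against the query and keep those above a threshold, stays the same.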
After the server determines the multiple target user images from the user image set, for one target user image, the server acquires the generation time of that target user image and acquires the position information (referred to as first position information) of the imaging device that acquired it in the behavior occurrence scene. For the remaining target user images, the generation time and the first position information of each target user image can be determined in the same way, and the first position information of the multiple target user images is combined into the original behavior trace in order of generation time from earliest to latest.
For example, there are 3 target user images, namely target user image 1, target user image 2 and target user image 3. Suppose the first position information of the imaging device that acquired target user image 1 is (x1, y1) and the generation time of that image is t1; the first position information of the imaging device that acquired target user image 2 is (x2, y2) and the generation time of that image is t2; and the first position information of the imaging device that acquired target user image 3 is (x3, y3) and the generation time of that image is t3. If t1 > t3 > t2 on the time axis, then (x2, y2), (x3, y3) and (x1, y1) can be combined, in that order, into the original behavior trace of the target user.
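Assembling the trace amounts to sorting the (generation time, first position) pairs by ascending time and keeping the positions. A minimal sketch with illustrative timestamps chosen so that t1 > t3 > t2, as in the example:

```python
def build_trace(detections):
    """detections: list of (generation_time, position) pairs for one user.

    Returns the positions ordered by ascending generation time.
    """
    return [pos for _, pos in sorted(detections, key=lambda d: d[0])]

# Illustrative timestamps with t2 < t3 < t1.
t1, t2, t3 = 30, 10, 20
detections = [(t1, ("x1", "y1")), (t2, ("x2", "y2")), (t3, ("x3", "y3"))]
trace = build_trace(detections)
```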
Further, the entrance imaging device may be a bullet camera, and the remaining imaging devices may be dome cameras. A bullet camera can capture a user's face image and body image at the same time, while a dome camera captures a user's body image; the image to be retrieved is therefore the body image of the target user, and the target user images are all body images of the target user.
Step S102, acquiring a target positioning information set of the target user in the behavior occurrence scene; the target positioning information set comprises N pieces of target positioning information, and one piece of target positioning information is used for representing the position information of the target user at one action moment.
Specifically, the server obtains N signal strengths (referred to as target signal strengths) of the terminal carried by the target user at N behavior moments (referred to as target terminal), determines N pieces of target positioning information of the target user at the N behavior moments according to the N pieces of target signal strengths, and combines the N pieces of target positioning information into a target positioning information set (e.g., the positioning information set 20c in the corresponding embodiment of fig. 2 a-2 c described above). It can be known that the target positioning information set includes N pieces of target positioning information, one piece of target positioning information represents position information of the target user at one behavior time, and N behavior times all belong to a behavior time period.
For example, the behavioral time period is 08:00-09:00, assuming that every 20 minutes is one behavioral time, so the behavioral time period may include 3 behavioral times, where 08:20 is one behavioral time, 08:40 is one behavioral time, and 09:00 is one behavioral time.
The target signal strength may be a cellular communication network signal strength, in which case the server determines the target positioning information set using cellular network positioning technology; it may be an infrared signal strength, in which case the server uses infrared positioning technology; it may be a mechanical wave signal strength, in which case the server uses mechanical wave positioning technology; it may be a wifi signal strength, in which case the server uses wifi positioning technology; or it may be a Bluetooth signal strength, in which case the server uses Bluetooth positioning technology.
Because wifi positioning technology and Bluetooth positioning technology are low-cost and highly accurate, the following uses the target signal strength at one behavior moment as an example to illustrate how one piece of target positioning information is determined based on wifi positioning technology and Bluetooth positioning technology:
The way of determining the target positioning information based on wifi positioning technology can be subdivided into a wifi distance positioning mode and a wifi fingerprint positioning mode. (1) The wifi distance positioning mode is as follows:
When the target signal strength is a wifi signal strength, the target signal strength acquired by the server may be acquired from a wifi probe device. The distance between the target terminal and the wifi probe device at the current behavior moment is determined according to the target signal strength and a signal attenuation model; the position information of the wifi probe device in the behavior occurrence scene (referred to as second position information) is acquired, and the target positioning information of the target terminal at the current behavior moment is determined according to the second position information and the distance.
It can be seen that if there is only one wifi probe device, the target positioning information can only be narrowed down to somewhere on the circle whose center is the second position information and whose radius is the distance. To determine the target positioning information more precisely, trilateration can be used. In this case the target signal strength includes a first signal strength, a second signal strength and a third signal strength; correspondingly, the wifi probe devices include a first, a second and a third wifi probe device, where the first signal strength is acquired by the first wifi probe device, the second signal strength by the second, and the third signal strength by the third; the distance likewise includes a first distance corresponding to the first signal strength, a second distance corresponding to the second signal strength, and a third distance corresponding to the third signal strength. The server determines a first circle with the position information of the first wifi probe device as its center and the first distance as its radius, a second circle with the position information of the second wifi probe device as its center and the second distance as its radius, and a third circle with the position information of the third wifi probe device as its center and the third distance as its radius, and takes the position information of the intersection point of the first, second and third circles as the target positioning information.
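The distance positioning mode can be sketched as follows. The signal attenuation model is assumed here to be the common log-distance path-loss model (the application does not fix a particular model, so the constants are illustrative), and the intersection of the three circles is obtained by subtracting the circle equations pairwise, which yields two linear equations in (x, y):

```python
import math

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exponent=2.0):
    """Assumed log-distance path-loss model: RSSI = RSSI@1m - 10*n*log10(d)."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exponent))

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Intersection point of three circles (probe position, estimated distance).

    Subtracting circle 2 from circle 1 and circle 3 from circle 2 cancels the
    quadratic terms, leaving two linear equations solved by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Probes at three known positions; exact distances to the point (1, 1)
# are used here so the example is self-checking.
p1, p2, p3 = (0.0, 0.0), (4.0, 0.0), (0.0, 4.0)
r1 = math.hypot(1 - 0, 1 - 0)
r2 = math.hypot(1 - 4, 1 - 0)
r3 = math.hypot(1 - 0, 1 - 4)
position = trilaterate(p1, r1, p2, r2, p3, r3)
```

With noisy measured distances the three circles generally do not meet in a single point, so practical systems solve the same linear system in a least-squares sense; the algebra above is the noise-free core of that computation.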
(2) The wifi fingerprint positioning mode is as follows:
When the target signal strength is a wifi signal strength, the server acquires a fingerprint library comprising a plurality of auxiliary fingerprints, where one auxiliary fingerprint comprises a wifi signal strength and one piece of position information in the behavior occurrence scene. Among the wifi signal strengths contained in the fingerprint library, those matching the target signal strength are determined; the auxiliary fingerprints corresponding to the determined wifi signal strengths are taken as target auxiliary fingerprints, and the average value of the position information in the target auxiliary fingerprints is taken as the target positioning information. The fingerprint library is built offline in advance.
The way of determining the target positioning information based on Bluetooth positioning technology can likewise be divided into a Bluetooth distance positioning mode and a Bluetooth fingerprint positioning mode. (1) The Bluetooth distance positioning mode is as follows:
when the target signal strength is bluetooth signal strength, the target signal strength acquired by the server may be acquired from the bluetooth probe apparatus. And determining the distance between the target terminal and the Bluetooth probe device at the current action moment according to the target signal intensity and the signal attenuation model, acquiring the position information (called third position information) of the Bluetooth probe device in the action occurrence scene, and determining the target positioning information at the current action moment according to the third position information and the distance.
(2) Bluetooth fingerprint positioning mode:
when the target signal strength is bluetooth signal strength, the server acquires a plurality of location fingerprints, each including one bluetooth signal strength (referred to as fingerprint signal strength) and one location information in a behavior occurrence scene (referred to as fingerprint location information). A location fingerprint (referred to as a target location fingerprint) that matches the target signal strength is determined from a plurality of location fingerprints, wherein an amount of difference between the fingerprint signal strength in the target location fingerprint and the target signal strength is less than a preset signal difference amount threshold. Taking the average value of the fingerprint position information in the target position fingerprint as target positioning information.
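The fingerprint positioning described above works the same way for wifi and Bluetooth: fingerprints whose recorded signal strength differs from the target strength by less than a threshold are matched, and their recorded positions are averaged. A minimal sketch, with an illustrative threshold and hypothetical fingerprint library:

```python
def fingerprint_locate(target_strength, fingerprints, diff_threshold=3.0):
    """fingerprints: list of (signal_strength_dBm, (x, y)) pairs.

    Returns the average position of the matched fingerprints, or None if no
    fingerprint is close enough to the target strength in signal space.
    """
    matched = [pos for strength, pos in fingerprints
               if abs(strength - target_strength) < diff_threshold]
    if not matched:
        return None
    xs = [p[0] for p in matched]
    ys = [p[1] for p in matched]
    return (sum(xs) / len(matched), sum(ys) / len(matched))

# Hypothetical offline fingerprint library: (strength in dBm, position).
library = [(-50.0, (2.0, 2.0)), (-52.0, (4.0, 2.0)), (-70.0, (10.0, 10.0))]
# A target strength of -51 dBm matches the first two entries only.
location = fingerprint_locate(-51.0, library)
```

Real fingerprint systems usually store a vector of strengths from several probes per fingerprint and match by vector distance; the scalar version above keeps the structure of the method visible.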
As can be seen from the foregoing, the original behavior trace is obtained based on the multiple imaging devices, while the target positioning information set is obtained based on the wifi probe devices (or Bluetooth probe devices). If there are multiple users in the behavior time period and the behavior occurrence scene, the imaging devices installed at different positions will collect user images of multiple users, and the wifi probe devices will also capture the signal strengths of multiple terminals. It is therefore necessary to determine that the multiple target user images of the target user and the target terminal have a mapping relationship, which is done as follows:
Based on pedestrian re-identification technology, the multiple user images in the user image set are divided into multiple unit image sets, where one unit image set corresponds to one user. The unit image set to which the multiple target user images belong is called the target unit image set; it is one of the multiple unit image sets.
For one unit image set, the user images whose generation time equals a behavior moment are extracted; an extracted user image is taken as an auxiliary user image if the distance between its corresponding first position information and the target positioning information at that behavior moment is smaller than a preset distance difference threshold (the threshold may, for example, be 8 m). In brief, the generation time of an auxiliary user image equals a behavior moment, and the distance between the first position information of the imaging device that acquired the auxiliary user image and the target positioning information at that behavior moment is less than 8 m.
According to the above mode, the auxiliary user images in each unit image set are determined respectively, and the number of auxiliary user images (referred to as the auxiliary image number) in each unit image set is counted. If the auxiliary image number corresponding to the target unit image set is the maximum among the auxiliary image numbers of all unit image sets, it is determined that the multiple target user images and the target terminal have a mapping relationship.
In this same manner, a mapping relationship can be established for the user image of each user and the terminal carried by each user.
Fig. 4 is a schematic diagram of establishing a mapping relationship between a terminal and a user image. Network card address 1 corresponds to one terminal, and the server captures the positioning information of this terminal at times t1, t2, t3, ..., tn, then searches for user images within 5-8 m of the positioning information at each time ti (1 ≤ i ≤ n). As shown in fig. 4, in the user image set there are, within 5-8 m of the positioning information at time t1, a user image of user 1 (i.e., user image 1 in fig. 4), a user image of user 2 (i.e., user image 2 in fig. 4), a user image of user 3 (i.e., user image 3 in fig. 4) and a user image of user 4 (i.e., user image 4 in fig. 4). Within 5-8 m of the positioning information at time t2 there are a user image of user 3 (i.e., user image 3 in fig. 4), a user image of user 4 (i.e., user image 4 in fig. 4), a user image of user 5 (i.e., user image 5 in fig. 4) and a user image of user 6 (i.e., user image 6 in fig. 4).
Combining network card address 1 with user image i yields a large number of (network card address 1, user image i) pairs. The number of occurrences of each pair is counted, and the pair with the largest count is bound. In addition to counting occurrences, the percentage ci of each (network card address 1, user image i) pair may be calculated as ci = mi / n × 100%, where mi is the number of occurrences of the (network card address 1, user image i) pair and n is the total number of occurrences of network card address 1.
As shown in fig. 4, (network card address 1, user image 3) occurs most often, so that a mapping relationship can be established for the network card address 1 and the user image of the user 3.
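Assuming the search step has already been reduced to a list of user-image ids seen near the terminal at each sampled time, the counting-and-binding rule above can be sketched as follows (the function name and the data are illustrative, not from the patent):

```python
from collections import Counter

def bind_terminal_to_user(mac, sightings):
    """For one terminal (network card address `mac`), `sightings` is a list of
    user-image ids found within the search radius of the terminal's
    positioning information at each sampled time. Returns the user-image id
    that co-occurs most often with the terminal, plus its percentage
    ci = mi / n * 100%."""
    pair_counts = Counter(sightings)              # occurrences of (mac, user image i)
    n = len(sightings)                            # total occurrences of the mac
    image_id, m_i = pair_counts.most_common(1)[0]  # pair with the largest count
    c_i = m_i / n * 100
    return image_id, c_i

# Fig. 4-style data: users 1-4 near time t1, users 3-6 near t2, users 3 and 7 near t3.
image_id, c_i = bind_terminal_to_user("network card address 1",
                                      [1, 2, 3, 4, 3, 4, 5, 6, 3, 7])
```

With this data, user image 3 co-occurs most often (3 of 10 sightings, ci = 30%) and would be bound to the network card address.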
Step S103, the original behavior track is adjusted according to the target positioning information set, and the target behavior track of the target user is obtained.
Specifically, the server adjusts the position information in the original behavior track according to the N pieces of target positioning information in the target positioning information set. The adjustment principle is as follows: if the position information in the original behavior track differs from the target positioning information, the target positioning information prevails, that is, the target positioning information replaces the position information in the original behavior track, correcting the original behavior track; if the original behavior track contains no position information corresponding to a piece of target positioning information, that target positioning information is added to the original behavior track, making the position information contained in the original behavior track denser.
The server may use the adjusted original behavior track as the target behavior track of the target user (e.g. the target behavior track 20d in the corresponding embodiment of figs. 2a-2c).
Optionally, the server acquires a face image of the target user from the entrance imaging device, acquires a network card address (Media Access Control Address, MAC) of the target terminal carried by the target user, combines the face image of the target user, the network card address of the target terminal and the determined target behavior trace into a processing result, and outputs the processing result.
Subsequently, the user account of the target user on the target terminal can be determined from the face image (or the network card address), and precise marketing can be performed based on that user account, for example recommending commodities, service data, music, and the like to the user account.
Referring to fig. 5, a hardware architecture diagram for track data processing provided by an embodiment of the present application: a gun-shaped (bullet) camera is installed at the entry position of the behavior occurrence scene. Once a target user enters the gate of the behavior occurrence scene, the gun-shaped camera acquires a body image and a face image of the target user, establishes a mapping relationship between the body image and the face image, and assigns a body id and a face id to the target user. A spherical (dome) camera is installed inside the behavior occurrence scene; it photographs body images of the target user, and connecting the target user's positions at different moments in series yields the original behavior track of the target user.
A wifi probe device is deployed near the spherical camera. The positioning coordinates of the target user's mobile phone are acquired using triangulation or fingerprint positioning, a mapping relationship is set between the network card address of the target user's mobile phone and the target user's body id and face id, and the positioning coordinates of the mobile phone assist in adjusting the original behavior track. Because each body id corresponds to a plurality of user images, setting the mapping relationship between the network card address of the target user's mobile phone and the target user's body id is equivalent to setting the mapping relationship between the target terminal and the plurality of target user images of the target user.
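The triangulation mentioned above is not spelled out at the code level in the patent. The following is a minimal sketch under common assumptions: a log-distance path-loss model turns signal strength into distance (the tx_power and path-loss-exponent constants are illustrative), and closed-form trilateration combines three probe positions and distances into coordinates:

```python
def rssi_to_distance(rssi, tx_power=-40.0, path_loss_exp=2.5):
    """Assumed log-distance path-loss model: d = 10 ** ((tx_power - rssi) / (10 * n)),
    where tx_power is the RSSI measured at 1 m and n is the path-loss exponent."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Closed-form trilateration from three probe positions and distances.
    Subtracting pairs of circle equations gives two linear equations in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (a * e - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y
```

Trilateration needs at least three probes with known positions; fingerprint positioning, also mentioned above, is the alternative when that is not available.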
As can be seen from the above, because the positioning information is more accurate than the position information in the original behavior track determined by the pedestrian re-recognition processing, the plurality of pieces of positioning information can correct the original behavior track, and the corrected target behavior track is more accurate than the original behavior track before correction. Furthermore, the application determines the user's positioning information by wifi positioning or Bluetooth positioning; the low cost of wifi and Bluetooth helps broaden the range of scenarios in which the application can be deployed. Outputting the user's face image, the terminal's network card address, and the adjusted behavior track together also makes it more convenient to recommend service data to the user subsequently.
Fig. 6 is a schematic flow chart of a track data processing method according to an embodiment of the present application, where the track data processing method includes the following steps:
step S201, obtaining an original behavior track of a target user; the original behavior track is obtained by carrying out pedestrian re-identification processing on the target user on the user image acquired in the behavior occurrence scene within a behavior time period, wherein the behavior time period comprises N behavior moments, and N is a positive integer.
Step S202, acquiring a target positioning information set of the target user in the behavior occurrence scene; the target positioning information set comprises N pieces of target positioning information, and one piece of target positioning information represents the position information of the target user at one action moment.
The specific process of step S201 to step S202 may refer to step S101 to step S102 in the corresponding embodiment of fig. 3, which is not described herein.
Step S203, selecting a behavior track to be adjusted matching the target positioning information set from the first behavior track and the second behavior track.
Specifically, if the original behavior track contains only one behavior track, it is first judged whether the target positioning information set matches that behavior track. Adjustment is performed only when they match; if they do not match, the process can stop. In that case, either the error of the original behavior track is too large, or the target positioning information is not the positioning information of the target user.
Similarly, if the original behavior track includes a plurality of behavior tracks (taking two as an example, called the first behavior track and the second behavior track), the behavior track matching the target positioning information set is selected from the first behavior track and the second behavior track and taken as the behavior track to be adjusted, and the server adjusts the behavior track to be adjusted based on the target positioning information set.
It should be noted that the original behavior track may include a plurality of behavior tracks because, during the pedestrian re-recognition processing, a user image that is not of the target user may be determined as a target user image, so the target user appears at several positions at the same time, that is, several behavior tracks appear.
The following describes how to determine whether the first behavior trace and the second behavior trace match the target positioning information set:
the server acquires position information (called first reference position information) of the target user at N behavior moments from the first behavior track, and combines the N first reference position information into a first reference position information set. The server acquires the position information (called second reference position information) of the target user at N behavior moments from the second behavior track, and combines the N second reference position information into a second reference position information set. A degree of overlap between the first set of reference position information and the set of target positioning information (referred to as a first degree of overlap) is determined, and a degree of overlap between the second set of reference position information and the set of target positioning information (referred to as a second degree of overlap) is determined.
If the first contact ratio is larger than the contact ratio threshold value and the second contact ratio is smaller than the contact ratio threshold value, determining that the first behavior track is a behavior track to be adjusted, which is matched with the target positioning information set.
If the first contact ratio is smaller than the contact ratio threshold value and the second contact ratio is larger than the contact ratio threshold value, determining that the second behavior track is a behavior track to be adjusted, which is matched with the target positioning information set.
The specific process of acquiring the reference position information from the first behavior track is as follows:
The first behavior track includes a plurality of pieces of position information (called original position information; the original position information may correspond to the first position information above), and each piece of original position information has a corresponding generation time of a target user image (called an original generation time). For one of the N behavior moments: if the original generation times include that behavior moment, the original position information whose original generation time equals the behavior moment is acquired from the first behavior track and taken as the first reference position information of the target user at that behavior moment; if the original generation times do not include the behavior moment, the first reference position information of the target user at that behavior moment is set to empty.
For the remaining N-1 behavior moments, N-1 first reference position information can be determined in the same manner, and the N first reference position information can be combined into a first reference position information set.
The manner of acquiring the second reference position information set from the second behavior trace is the same, and will not be described herein.
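As a sketch of the extraction just described (assuming a behavior track is stored as (generation time, position) pairs; the names are illustrative), obtaining a reference position information set reduces to one lookup per behavior moment, with empty represented as None:

```python
def reference_positions(track, behavior_moments):
    """track: list of (generation_time, position) pairs from a behavior track.
    behavior_moments: the N behavior moments of the target positioning
    information set. A behavior moment with no matching generation time
    yields None (the "empty" reference position described above)."""
    by_time = dict(track)
    return [by_time.get(t) for t in behavior_moments]
```

Applied to the first behavior track of the example that follows, this yields (3, 3), (4, 4), empty.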
For example, assume that the first behavior track includes 4 pieces of position information: (1, 1), (2, 2), (3, 3), (4, 4), with generation times t1, t2, t3 and t4 respectively; the second behavior track includes 4 pieces of position information: (1, 1), (2, 2), (8, 3), (5, 2), with generation times t1, t2, t3 and t4 respectively; and the target positioning information set includes 3 pieces of target positioning information: (3, 2), (4, 2), (7, 7), with behavior moments t3, t4 and t5 respectively. The server obtains the first reference position information set from the first behavior track: (3, 3), (4, 4), empty; and the second reference position information set from the second behavior track: (8, 3), (5, 2), empty. The first degree of coincidence is then determined from the first reference position information set and the target positioning information set, and the second degree of coincidence from the second reference position information set and the target positioning information set; in this example the first reference position information lies closer to the target positioning information, so the first degree of coincidence is the greater of the two.
If the first reference position information (or the second reference position information) is empty, it is given a predetermined degree of coincidence.
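Assuming the degree of coincidence is computed as the fraction of behavior moments whose reference position lies within a distance threshold of the corresponding target positioning information (an assumption; an empty position is assumed here to contribute nothing), the comparison can be sketched as:

```python
import math

def coincidence_degree(ref_positions, target_positions, dist_threshold=2.0):
    """Fraction of behavior moments at which the reference position lies
    within dist_threshold of the target positioning information. A None
    (empty) reference position never counts as a hit."""
    hits = 0
    for ref, tgt in zip(ref_positions, target_positions):
        if ref is not None and math.dist(ref, tgt) <= dist_threshold:
            hits += 1
    return hits / len(target_positions)
```

Under this assumption with a threshold of 2, the first reference position information set of the example above scores 2/3 and the second scores 1/3, so the first behavior track would be selected as the behavior track to be adjusted.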
Step S204, the behavior track to be adjusted is adjusted according to the target positioning information set, and the target behavior track is obtained.
Specifically, the behavior trace to be adjusted includes a plurality of pieces of position information (referred to as position information to be adjusted), and each piece of position information to be adjusted has a generation time (referred to as generation time to be adjusted) of the target user image corresponding to the position information to be adjusted.
The server sets polling priorities for the N pieces of target positioning information in the target positioning information set: the smaller the behavior moment, the higher the polling priority. According to the polling priority, the server determines the target positioning information for the current poll (called the target positioning information to be processed) from the N pieces, adjusts the plurality of pieces of position information to be adjusted in the behavior track to be adjusted according to it, and takes the adjusted position information as the new position information to be adjusted; it then again determines the target positioning information to be processed for the current poll according to the polling priority, adjusts the new position information to be adjusted, and loops continuously. When every piece of target positioning information has been determined as target positioning information to be processed, polling stops, and the adjusted plurality of pieces of position information to be adjusted are combined into the target behavior track.
In short, the plurality of pieces of position information to be adjusted in the behavior track to be adjusted are sequentially adjusted according to each piece of target positioning information, and after the adjustment is finished, the adjusted plurality of pieces of position information to be adjusted are used as the target behavior track.
The specific process of adjusting the plurality of pieces of position information to be adjusted in the behavior track to be adjusted according to the target positioning information to be processed comprises the following steps:
The server acquires the behavior moment of the target positioning information to be processed (called the behavior moment to be processed). If the plurality of generation moments to be adjusted corresponding to the behavior track to be adjusted include the behavior moment to be processed, the generation moment to be adjusted equal to the behavior moment to be processed is taken as the target generation moment to be adjusted, and the position information to be adjusted at that target generation moment is replaced by the target positioning information to be processed; if the generation moments to be adjusted do not include the behavior moment to be processed, the target positioning information to be processed is added to the plurality of pieces of position information to be adjusted.
For example, the behavior trace to be adjusted includes 4 pieces of position information to be adjusted, which are respectively: (1, 1), (2, 2), (3, 3), (4, 4), and the to-be-adjusted generation timings of the 4 pieces of to-be-adjusted position information are 08:00,08:10,08:15,08:20, respectively. The target positioning information set comprises 2 pieces of target positioning information, which are respectively: (5, 5), (6, 6), and the behavioral moments of the 2 pieces of target positioning information are 08:12,08:20, respectively.
Firstly, for the target positioning information whose behavior moment is 08:12: since the 4 generation moments to be adjusted do not contain 08:12, the target positioning information (5, 5) is added to the behavior track to be adjusted, giving (1, 1), (2, 2), (5, 5), (3, 3), (4, 4); the behavior track after the addition includes 5 pieces of position information.
For the target positioning information whose behavior moment is 08:20: since the 4 generation moments to be adjusted contain 08:20, the target positioning information (6, 6) replaces the position information to be adjusted whose generation moment equals 08:20, that is, (6, 6) replaces (4, 4), and the target behavior track after replacement is: (1, 1), (2, 2), (5, 5), (3, 3), (6, 6).
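The replace-or-insert polling of step S204 can be condensed as follows (a sketch; it assumes positions are keyed by time strings that sort chronologically, and names are illustrative):

```python
def adjust_track(to_adjust, targets):
    """to_adjust: list of (generation_time, position) pairs of the behavior
    track to be adjusted. targets: list of (behavior_moment, position) pairs
    of target positioning information. Targets are polled in order of
    increasing behavior moment (highest polling priority first); a matching
    generation time is replaced, otherwise the target is inserted in time
    order."""
    track = dict(to_adjust)
    for moment, pos in sorted(targets):  # smaller behavior moment = higher priority
        track[moment] = pos              # replace if present, add otherwise
    return [pos for _, pos in sorted(track.items())]
```

On the worked example above, this reproduces the target behavior track (1, 1), (2, 2), (5, 5), (3, 3), (6, 6).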
Step S205, analyzing the behavior characteristics of the target user according to the target behavior track, and outputting the recommended service object of the target user according to the behavior characteristics.
Specifically, the server may determine a behavior feature of the target user according to the target behavior track and output a recommended service object based on the behavior feature, where the recommended service object may be a commodity, a book, music, or the like.
For example, when the behavior scene is a mall, the server can analyze which counters the target user likes to stay on based on the target behavior track, and can recommend the commodities sold by the counters to the target user in a targeted manner.
Referring to figs. 7a-7b, schematic diagrams of adjusting a behavior track provided by an embodiment of the present application: as shown in fig. 7a, it is determined based on the pedestrian re-recognition processing that a plurality of behavior tracks exist for the same user (i.e. behavior track 1 and behavior track 2 in fig. 7a). Based on the 5 pieces of target positioning information in the target positioning information set, behavior track 1 can be determined, from behavior track 1 and behavior track 2, to be the one matching the target positioning information set, as shown in fig. 7b, so behavior track 2 can be excluded.
Referring to figs. 8a-8b, schematic diagrams of adjusting a behavior track provided by an embodiment of the present application: as shown in fig. 8a, it is determined based on the pedestrian re-recognition processing that a plurality of behavior tracks exist for the same user (i.e. behavior track 1 and behavior track 2 in fig. 8a). Based on the 5 pieces of target positioning information in the target positioning information set, it can be determined that neither behavior track 1 nor behavior track 2 matches the target positioning information set, and as shown in fig. 8b, both can be excluded. In this case, either both behavior tracks determined by the pedestrian re-recognition processing are erroneous, or the binding between the network card address (or terminal) and the user images is erroneous.
Figs. 9a-9b are schematic diagrams of adjusting a behavior track provided in an embodiment of the present application. As shown in fig. 9a, the behavior tracks determined for a user by the pedestrian re-recognition processing are sparse (that is, they contain few pieces of position information); the user's behavior tracks include behavior track 1 and behavior track 2. The target positioning information set includes 6 pieces of target positioning information: target positioning information 1 and 2 match behavior track 1, and target positioning information 5 and 6 match behavior track 2. Target positioning information 3 and 4 are combined into a behavior track of the user; as shown in fig. 9b, this supplements a new behavior track on the basis of behavior track 1 and behavior track 2.
Figs. 10a-10b are schematic diagrams of adjusting a behavior track provided in an embodiment of the present application. As shown in fig. 10a, it is determined based on the pedestrian re-recognition processing that the user has 1 behavior track (i.e. behavior track 1 in fig. 10a). Based on the 5 pieces of target positioning information in the target positioning information set, it can be determined that behavior track 1 does not match the target positioning information set, and as shown in fig. 10b, behavior track 1 can be excluded. In this case, either behavior track 1 determined by the pedestrian re-recognition processing is erroneous, or the binding between the network card address (or terminal) and the user image is erroneous.
Figs. 7a-10b above describe 4 cases. At the algorithm level, the four cases can be described as follows:
at time Ti, the server acquires the target positioning information (X, Y) of the terminal and acquires the position coordinates (Xi, Yi) corresponding to the user image bound to the terminal at time Ti:
for case 1 corresponding to fig. 7 a-7 b, behavior trace 2 that does not match the target location information is excluded, and behavior trace 1 that matches the target location information is retained.
For case 2 corresponding to fig. 8 a-8 b, behavior trace 1 and behavior trace 2 that do not match the target position information are excluded.
For case 3 corresponding to fig. 9 a-9 b, the behavior trace is supplemented, so that the number of position coordinates included in the supplemented behavior trace is greater.
For case 4 corresponding to fig. 10 a-10 b, the behavior trace 1 that does not match the target position information is excluded.
As can be seen from the above, because the positioning information is more accurate than the position information in the original behavior track determined by the pedestrian re-recognition processing, the plurality of pieces of positioning information can correct the original behavior track, and the corrected target behavior track is more accurate than the original behavior track before correction. Further, distinguishing behavior features can be extracted from the accurate target behavior track, so the output recommended service object better meets the user's needs, improving the accuracy of the recommended service object.
Further, please refer to fig. 11, which is a schematic diagram illustrating a track data processing apparatus according to an embodiment of the present application. As shown in fig. 11, the trajectory data processing device 1 may be applied to the servers in the corresponding embodiments of fig. 3-10 b described above. The track data processing means may be a computer program (comprising program code) running in the computer device, for example the track data processing means is an application software; the device can be used for executing corresponding steps in the method provided by the embodiment of the application.
The trajectory data processing device 1 may include: a first acquisition module 11, a second acquisition module 12 and an adjustment module 13.
A first obtaining module 11, configured to obtain an original behavior trace of a target user; the original behavior track is obtained by carrying out pedestrian re-identification processing on the target user on the user image acquired in the behavior occurrence scene within a behavior time period, wherein the behavior time period comprises N behavior moments, and N is a positive integer;
a second obtaining module 12, configured to obtain a target positioning information set of the target user in the behavior occurrence scene; the target positioning information set comprises N pieces of target positioning information, and one piece of target positioning information represents the position information of the target user at one action moment;
And the adjusting module 13 is used for adjusting the original behavior track according to the target positioning information set to obtain a target behavior track of the target user.
The specific functional implementation manner of the first acquiring module 11, the second acquiring module 12, and the adjusting module 13 may refer to step S101 to step S103 in the corresponding embodiment of fig. 3, which are not described herein.
Referring to fig. 11, the first acquisition module 11 may include: a first acquisition unit 111 and a first selection unit 112.
A first acquisition unit 111 for acquiring a set of user images from a plurality of imaging devices; the plurality of imaging devices are installed at different positions in the behavior occurrence scene, and the user images in the user image set are acquired at different positions in the behavior occurrence scene by the imaging devices in the behavior time period;
a first selecting unit 112, configured to select a plurality of target user images of the target user from the user image set;
the first obtaining unit 111 is further configured to obtain first position information of an imaging device corresponding to a target user image in the behavior occurrence scene, obtain a generation time of the target user image, and combine first position information corresponding to each of the plurality of target user images into the original behavior trace according to the generation time of each of the target user images.
The plurality of imaging devices includes an ingress imaging device mounted at an ingress location of the behavioral occurrence scene;
the first selecting unit 112 is specifically configured to acquire a user image of the target user from the portal imaging device, take the user image acquired from the portal imaging device as an image to be retrieved, retrieve a plurality of user images matched with the image to be retrieved in the user image set, and take all the retrieved user images as target user images.
The specific process of the first obtaining unit 111 and the first selecting unit 112 may refer to step S101 in the corresponding embodiment of fig. 3, which is not described herein.
Referring to fig. 11, the second acquisition module 12 may include: the second acquisition unit 121 and the determination unit 122.
A second obtaining unit 121, configured to obtain a target signal strength of a target terminal carried by the target user at a behavior moment;
a determining unit 122, configured to determine target positioning information of the target user according to the target signal strength;
the second obtaining unit 121 is further configured to combine N pieces of object positioning information into the object positioning information set.
The target signal intensity is the signal intensity of a wifi signal, and the target signal intensity is obtained from a wifi probe device;
The determining unit 122 is specifically configured to determine a distance between the target terminal and the wifi probe device according to the target signal strength, obtain second location information of the wifi probe device in the behavior occurrence scene, and determine the target positioning information according to the second location information and the distance.
The target signal strength is a signal strength of a bluetooth signal, the target signal strength being obtained from a bluetooth probe device;
the determining unit 122 is specifically configured to obtain a plurality of location fingerprints, where each location fingerprint includes a fingerprint signal strength and a fingerprint location information; and determining a target location fingerprint from the plurality of location fingerprints that matches the target signal strength; and determining the target positioning information according to the fingerprint position information in the target position fingerprint, wherein the difference between the fingerprint signal intensity in the target position fingerprint and the target signal intensity is smaller than a signal intensity difference threshold value.
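As a sketch of the fingerprint matching just described (names and values are illustrative; picking the nearest fingerprint is an assumption, since the text only requires the signal strength difference to be below the threshold):

```python
def fingerprint_locate(target_strength, fingerprints, strength_diff_threshold=5.0):
    """fingerprints: list of (fingerprint_signal_strength, fingerprint_position)
    pairs. Returns the position of the fingerprint whose signal strength is
    closest to target_strength, provided the difference is below the signal
    strength difference threshold; otherwise None."""
    best_strength, best_position = min(
        fingerprints, key=lambda fp: abs(fp[0] - target_strength))
    if abs(best_strength - target_strength) < strength_diff_threshold:
        return best_position
    return None
```

A target strength with no sufficiently close fingerprint yields no target positioning information, which is why a dense fingerprint survey of the behavior occurrence scene matters.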
The specific process of the second obtaining unit 121 and the determining unit 122 may refer to step S102 in the corresponding embodiment of fig. 3, which is not described herein.
The plurality of target user images and the target terminal have a mapping relationship;
the trajectory data processing device 1 may include: a first acquisition module 11, a second acquisition module 12, and an adjustment module 13; may further include: a judgment module 14.
A judging module 14 for dividing the user image set into a plurality of unit image sets; one unit image set corresponds to one user, the plurality of target user images belong to a target unit image set, and the target unit image set is one unit image set in the plurality of unit image sets; and extracting the user image with the generation moment equal to the action moment from the unit image set, and taking the extracted user image as an auxiliary user image; the distance difference between the first position information corresponding to the auxiliary user image and the target position information corresponding to the behavior moment is smaller than a distance difference threshold; and counting the number of auxiliary images of the auxiliary user images contained in each unit image set, and if the number of auxiliary images corresponding to the target unit image set is the maximum number of the plurality of auxiliary images corresponding to all unit image sets, determining that the plurality of target user images and the target terminal have a mapping relation.
The specific process of the determining module 14 may refer to step S102 in the corresponding embodiment of fig. 3, which is not described herein.
Referring to fig. 11, the original behavior trace includes a first behavior trace and a second behavior trace;
the adjustment module 13 may include: the second selecting unit 131 and the adjusting unit 132.
A second selecting unit 131, configured to select a behavior trace to be adjusted, which is matched with the target positioning information set, from the first behavior trace and the second behavior trace;
the adjusting unit 132 is configured to adjust the behavior trace to be adjusted according to the target positioning information set, so as to obtain the target behavior trace.
The specific process of the second selecting unit 131 and the adjusting unit 132 may refer to step S203-step S204 in the corresponding embodiment of fig. 6, which is not described herein.
Referring to fig. 11, the second selecting unit 131 may include: a first acquisition subunit 1311 and a second acquisition subunit 1312.
A first obtaining subunit 1311, configured to obtain a first set of reference position information of the target user at the N behavioral moments from the first behavioral track;
a second obtaining subunit 1312, configured to obtain a second set of reference position information of the target user at the N behavior moments from the second behavior trace, determine a first degree of coincidence between the first set of reference position information and the target positioning information set, and determine a second degree of coincidence between the second set of reference position information and the target positioning information set, and if the first degree of coincidence is greater than a threshold of degree of coincidence and the second degree of coincidence is less than the threshold of degree of coincidence, determine the first behavior trace as the behavior trace to be adjusted; and if the first contact ratio is smaller than the contact ratio threshold value and the second contact ratio is larger than the contact ratio threshold value, determining the second behavior track as the behavior track to be adjusted.
The first behavior track comprises a plurality of original position information, and the first reference position information set comprises N pieces of first reference position information;
the first obtaining subunit 1311 is specifically configured to obtain the original generation time corresponding to each piece of original position information. If the plurality of original generation times include a behavior time, the original position information corresponding to the original generation time that is the same as the behavior time is obtained from the first behavior trace, and the obtained original position information is used as the first reference position information of the target user at that behavior time; if the plurality of original generation times do not include the behavior time, the first reference position information of the target user at that behavior time is set to be empty.
The specific process of the first acquiring subunit 1311 and the second acquiring subunit 1312 may refer to step S203 in the corresponding embodiment of fig. 6, which is not described herein.
Referring to fig. 11, the behavior trace to be adjusted includes a plurality of position information to be adjusted;
the adjustment unit 132 may include: the setting subunit 1321 and the adjusting subunit 1322.
A setting subunit 1321, configured to set a polling priority for each piece of target positioning information, and determine target positioning information to be processed for current polling from the N pieces of target positioning information according to the polling priority;
An adjustment subunit 1322, configured to adjust the plurality of location information to be adjusted according to the target location information to be processed;
the setting subunit 1321 is further configured to stop polling when each piece of target positioning information is determined to be target positioning information to be processed, and combine the adjusted plurality of pieces of position information to be adjusted into the target behavior trace.
The adjustment subunit 1322 is specifically configured to obtain the to-be-adjusted generation time corresponding to each piece of to-be-adjusted position information and the to-be-processed behavior time corresponding to the to-be-processed target positioning information. If the plurality of to-be-adjusted generation times include the to-be-processed behavior time, the to-be-adjusted generation time that is the same as the to-be-processed behavior time is taken as the target to-be-adjusted generation time, and the to-be-adjusted position information at the target to-be-adjusted generation time is replaced with the to-be-processed target positioning information; if the plurality of to-be-adjusted generation times do not include the to-be-processed behavior time, the to-be-processed target positioning information is added to the plurality of pieces of to-be-adjusted position information.
The specific process of the setting subunit 1321 and the adjusting subunit 1322 may refer to step S204 in the corresponding embodiment of fig. 6, which is not described herein.
Referring to fig. 11, the trajectory data processing device 1 may include: a first acquisition module 11, a second acquisition module 12, and an adjustment module 13; may further include: a third acquisition module 15.
The third obtaining module 15 is configured to obtain a face image of the target user, obtain the network card address of the target terminal carried by the target user, combine the face image, the network card address and the target behavior trace into a processing result, and output the processing result.
The specific process of the third obtaining module 15 may refer to step S103 in the corresponding embodiment of fig. 3, which is not described herein.
Referring to fig. 11, the trajectory data processing device 1 may include: a first acquisition module 11, a second acquisition module 12, and an adjustment module 13; may further include: a recommendation module 16.
The recommendation module 16 is configured to analyze the behavior characteristics of the target user according to the target behavior track, and to output the recommended service object of the target user according to the behavior characteristics.
The specific process of the recommendation module 16 may refer to step S205 in the corresponding embodiment of fig. 6, which is not described herein.
Further, please refer to fig. 12, which is a schematic diagram of a computer device according to an embodiment of the present invention. The server in the embodiments corresponding to fig. 3-10b may be a computer device 1000. As shown in fig. 12, the computer device 1000 may include: a user interface 1002, a processor 1004, an encoder 1006, and a memory 1008. The signal receiver 1016 is used to receive or transmit data via the cellular interface 1010, the WIFI interface 1012, and so on. The encoder 1006 encodes received data into a computer-processable data format. The memory 1008 stores a computer program, and the processor 1004 is arranged to execute the computer program to perform the steps in any of the method embodiments described above. The memory 1008 may include volatile memory (e.g., dynamic random access memory, DRAM) and may also include non-volatile memory (e.g., one-time programmable read-only memory, OTPROM). In some examples, the memory 1008 may further include memory located remotely from the processor 1004, which may be connected to the computer device 1000 via a network. The user interface 1002 may include a keyboard 1018 and a display 1020.
In the computer device 1000 shown in fig. 12, the processor 1004 may be configured to invoke the computer program stored in the memory 1008 to implement:
acquiring an original behavior track of a target user; the original behavior track is obtained by carrying out pedestrian re-identification processing on the target user on the user image acquired in the behavior occurrence scene within a behavior time period, wherein the behavior time period comprises N behavior moments, and N is a positive integer;
acquiring a target positioning information set of the target user in the behavior occurrence scene; the target positioning information set comprises N pieces of target positioning information, and one piece of target positioning information represents the position information of the target user at one action moment;
and adjusting the original behavior track according to the target positioning information set to obtain a target behavior track of the target user.
In one embodiment, the processor 1004, when executing the acquisition of the original behavior trace of the target user, specifically performs the following steps:
acquiring a user image set from a plurality of imaging devices; the plurality of imaging devices are installed at different positions in the behavior occurrence scene, and the user images in the user image set are acquired at different positions in the behavior occurrence scene by the imaging devices in the behavior time period;
Selecting a plurality of target user images of the target user from the user image set;
acquiring first position information of imaging equipment corresponding to a target user image in the behavior occurrence scene, and acquiring generation time of the target user image;
and according to the generation time of each target user image, combining the first position information corresponding to each of the target user images into the original behavior track.
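By way of a non-limiting illustration, the combining step above can be sketched in Python. The record structure and field names below are assumptions introduced for the example only and do not appear in the embodiment:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    generation_time: float               # when the target user image was captured
    first_position: Tuple[float, float]  # (x, y) of the imaging device in the scene

def build_original_trace(detections: List[Detection]) -> List[Tuple[float, Tuple[float, float]]]:
    """Sort the matched detections by generation time and chain the device
    positions into an original behavior track (time, position) sequence."""
    ordered = sorted(detections, key=lambda d: d.generation_time)
    return [(d.generation_time, d.first_position) for d in ordered]

trace = build_original_trace([
    Detection(10.0, (0.0, 0.0)),
    Detection(5.0, (1.0, 2.0)),
])
# the earlier detection comes first in the original behavior track
```

The key design point is only the ordering by generation time; any serialization of the resulting (time, position) pairs would serve as the "original behavior track".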
In one embodiment, the plurality of imaging devices includes an ingress imaging device mounted at an ingress location of the behavioral occurrence scene;
the processor 1004, when executing the selection of a plurality of target user images associated with the target user from the set of user images, specifically executes the following steps:
acquiring a user image of the target user from the entrance imaging device, and taking the user image acquired from the entrance imaging device as an image to be retrieved;
and searching a plurality of user images matched with the image to be searched in the user image set, and taking the searched plurality of user images as target user images.
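The retrieval of user images matching the image to be retrieved can be sketched with feature-vector similarity, as is common in pedestrian re-identification. The feature extractor, the cosine metric, and the 0.8 threshold are assumptions for the sketch, not details taken from the embodiment:

```python
import math
from typing import Dict, List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_target_images(query_feat: List[float],
                           image_feats: Dict[str, List[float]],
                           threshold: float = 0.8) -> List[str]:
    """Return ids of user images whose re-identification features are
    sufficiently similar to the image to be retrieved (assumed threshold)."""
    return [img_id for img_id, feat in image_feats.items()
            if cosine_similarity(query_feat, feat) >= threshold]

matches = retrieve_target_images([1.0, 0.0], {"a": [1.0, 0.0], "b": [0.0, 1.0]})
```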
In one embodiment, the processor 1004, when executing the acquisition of the target positioning information set of the target user in the behavior occurrence scene, specifically performs the following steps:
Acquiring the target signal intensity of a target terminal carried by the target user at the action moment;
determining target positioning information of the target user according to the target signal intensity;
and combining N pieces of target positioning information into the target positioning information set.
In one embodiment, the plurality of target user images and the target terminal have a mapping relationship;
the processor 1004, when executing the determination of whether the plurality of target user images and the target terminal have a mapping relationship, specifically executes the following steps:
dividing the user image set into a plurality of unit image sets; one unit image set corresponds to one user, the plurality of target user images belong to a target unit image set, and the target unit image set is one unit image set in the plurality of unit image sets;
extracting user images with the generation time equal to the behavior time from the unit image set, and taking the extracted user images as auxiliary user images; the distance difference between the first position information corresponding to the auxiliary user image and the target positioning information corresponding to the behavior moment is smaller than a distance difference threshold;
counting the number of auxiliary images of the auxiliary user images contained in each unit image set;
And if the auxiliary image quantity corresponding to the target unit image set is the maximum quantity of the auxiliary image quantities corresponding to all the unit image sets, determining that the target user images and the target terminal have a mapping relation.
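The mapping-relationship check above (count, per unit image set, the images whose generation time hits a behavior moment and whose device position lies close to the terminal's positioning, then compare counts) can be sketched as follows. The 5.0 m distance threshold and the data layout are assumptions for the example:

```python
from typing import Dict, List, Tuple

Position = Tuple[float, float]

def count_auxiliary(images: List[Tuple[float, Position]],
                    targets: Dict[float, Position],
                    dist_threshold: float) -> int:
    """Count images whose generation time equals a behavior moment and whose
    device position is within dist_threshold of the target positioning."""
    n = 0
    for gen_time, pos in images:
        if gen_time in targets:
            tx, ty = targets[gen_time]
            if ((pos[0] - tx) ** 2 + (pos[1] - ty) ** 2) ** 0.5 < dist_threshold:
                n += 1
    return n

def has_mapping(unit_sets: Dict[str, List[Tuple[float, Position]]],
                targets: Dict[float, Position],
                target_unit: str,
                dist_threshold: float = 5.0) -> bool:
    """The mapping holds when the target unit image set has the maximum
    auxiliary image count among all unit image sets."""
    counts = {uid: count_auxiliary(imgs, targets, dist_threshold)
              for uid, imgs in unit_sets.items()}
    return counts[target_unit] == max(counts.values())

mapped = has_mapping(
    {"u1": [(1.0, (0.0, 0.0)), (2.0, (1.0, 1.0))],
     "u2": [(1.0, (100.0, 100.0))]},
    {1.0: (0.0, 0.0), 2.0: (1.0, 1.0)},
    "u1")
```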
In one embodiment, the target signal strength is a signal strength of a wifi signal, the target signal strength being obtained from a wifi probe device;
the processor 1004, when executing the determination of the target positioning information of the target user according to the target signal strength, specifically executes the following steps:
determining the distance between the target terminal and the wifi probe equipment according to the target signal intensity;
and acquiring second position information of the wifi probe equipment in the behavior occurrence scene, and determining the target positioning information according to the second position information and the distance.
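The embodiment does not fix a propagation model for converting the wifi signal strength into a distance; a common choice, shown here purely as an assumed illustration, is the log-distance path loss model with an assumed reference transmit power and path loss exponent:

```python
def rssi_to_distance(rssi_dbm: float,
                     tx_power_dbm: float = -40.0,
                     path_loss_exponent: float = 2.0) -> float:
    """Estimate the terminal-to-probe distance (meters) from RSSI using the
    log-distance path loss model: d = 10 ** ((tx_power - rssi) / (10 * n)).
    tx_power_dbm is the assumed RSSI at 1 m; n is the assumed loss exponent."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

d = rssi_to_distance(-60.0)  # with tx_power -40 dBm and n = 2 this is 10 m
```

Given this distance and the second position information of the wifi probe device, the target positioning information can then be resolved, for example by intersecting distance circles from several probes.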
In one embodiment, the target signal strength is a signal strength of a bluetooth signal, the target signal strength being obtained from a bluetooth probe device;
the processor 1004, when executing the determination of the target positioning information of the target user according to the target signal strength, specifically executes the following steps:
Acquiring a plurality of position fingerprints, wherein each position fingerprint comprises a fingerprint signal intensity and fingerprint position information;
determining a target location fingerprint from the plurality of location fingerprints that matches the target signal strength; the difference between the fingerprint signal intensity in the target position fingerprint and the target signal intensity is smaller than a signal intensity difference threshold;
and determining the target positioning information according to the fingerprint position information in the target position fingerprint.
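The bluetooth fingerprint lookup above can be sketched as a nearest-neighbor match over stored (signal strength, position) fingerprints. The 5 dB threshold and the single-nearest-neighbor choice are assumptions for the sketch:

```python
from typing import List, Optional, Tuple

Fingerprint = Tuple[float, Tuple[float, float]]  # (fingerprint signal strength, fingerprint position)

def locate_by_fingerprint(target_strength: float,
                          fingerprints: List[Fingerprint],
                          strength_diff_threshold: float = 5.0) -> Optional[Tuple[float, float]]:
    """Return the fingerprint position whose signal strength is closest to the
    target signal strength, provided the difference is below the threshold."""
    best = min(fingerprints, key=lambda fp: abs(fp[0] - target_strength))
    if abs(best[0] - target_strength) < strength_diff_threshold:
        return best[1]
    return None  # no fingerprint matches closely enough

pos = locate_by_fingerprint(-68.0, [(-50.0, (0.0, 0.0)), (-70.0, (3.0, 4.0))])
```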
In one embodiment, the original behavior trace includes a first behavior trace and a second behavior trace;
the processor 1004, when executing the adjustment of the original behavior track according to the target positioning information set to obtain the target behavior track of the target user, specifically executes the following steps:
selecting a behavior track to be adjusted, which is matched with the target positioning information set, from the first behavior track and the second behavior track;
and adjusting the behavior track to be adjusted according to the target positioning information set to obtain the target behavior track.
In one embodiment, the processor 1004, when executing the selection of the behavior trace to be adjusted matching the target positioning information set from the first behavior trace and the second behavior trace, specifically executes the following steps:
Acquiring a first reference position information set of the target user at the N behavior moments from the first behavior track, and acquiring a second reference position information set of the target user at the N behavior moments from the second behavior track;
determining a first coincidence degree between the first reference position information set and the target positioning information set, and determining a second coincidence degree between the second reference position information set and the target positioning information set;
if the first coincidence degree is greater than a coincidence degree threshold and the second coincidence degree is less than the coincidence degree threshold, determining the first behavior track as the behavior track to be adjusted;
and if the first coincidence degree is less than the coincidence degree threshold and the second coincidence degree is greater than the coincidence degree threshold, determining the second behavior track as the behavior track to be adjusted.
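The track-selection step above can be sketched as follows. Defining the degree of coincidence as the fraction of behavior moments at which the reference position lies within a distance threshold of the target positioning, and the 2.0 m and 0.5 thresholds, are assumptions for the example:

```python
from typing import Dict, Optional, Tuple

Position = Tuple[float, float]

def coincidence_degree(reference: Dict[float, Optional[Position]],
                       targets: Dict[float, Position],
                       dist_threshold: float = 2.0) -> float:
    """Fraction of behavior moments at which the reference position is
    within dist_threshold of the target positioning information."""
    hits = 0
    for moment, target in targets.items():
        ref = reference.get(moment)
        if ref is not None:
            d = ((ref[0] - target[0]) ** 2 + (ref[1] - target[1]) ** 2) ** 0.5
            if d <= dist_threshold:
                hits += 1
    return hits / len(targets)

def select_trace_to_adjust(first_ref, second_ref, targets,
                           threshold: float = 0.5) -> Optional[str]:
    """Pick the track whose coincidence degree exceeds the threshold."""
    c1 = coincidence_degree(first_ref, targets)
    c2 = coincidence_degree(second_ref, targets)
    if c1 > threshold and c2 < threshold:
        return "first"
    if c1 < threshold and c2 > threshold:
        return "second"
    return None  # ambiguous: neither or both tracks match

choice = select_trace_to_adjust(
    {1.0: (0.0, 0.0), 2.0: (1.0, 1.0)},      # first reference set
    {1.0: (10.0, 10.0), 2.0: (10.0, 10.0)},  # second reference set
    {1.0: (0.0, 0.0), 2.0: (1.0, 1.0)})      # target positioning set
```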
In one embodiment, the first behavior track includes a plurality of original position information, and the first reference position information set includes N pieces of first reference position information;
the processor 1004, when executing the first reference position information set of the target user at the N behavior moments from the first behavior track, specifically executes the following steps:
Acquiring an original generation moment corresponding to each original position information;
if the plurality of original generation moments comprise the action moment, acquiring original position information corresponding to the same original generation moment as the action moment from the first action track, and taking the acquired original position information as first reference position information of the target user at the action moment;
and if the plurality of original generation moments do not comprise the action moment, setting the first reference position information of the target user at the action moment to be empty.
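A minimal sketch of the reference-position extraction above, with `None` standing in for an empty reference position; the dictionary representation of the track is an assumption for the example:

```python
from typing import Dict, List, Optional, Tuple

Position = Tuple[float, float]

def reference_positions(trace: Dict[float, Position],
                        behavior_moments: List[float]) -> List[Optional[Position]]:
    """For each behavior moment, take the original position whose generation
    time equals that moment; otherwise the reference position is left empty."""
    return [trace.get(moment) for moment in behavior_moments]

refs = reference_positions({1.0: (0.0, 0.0), 3.0: (2.0, 2.0)}, [1.0, 2.0, 3.0])
# moment 2.0 has no matching original generation time, so its entry is empty
```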
In one embodiment, the behavior trace to be adjusted includes a plurality of position information to be adjusted;
when the processor 1004 executes to adjust the behavior trace to be adjusted according to the target positioning information set to obtain the target behavior trace, the following steps are specifically executed:
setting polling priority for each piece of target positioning information, and determining target positioning information to be processed for current polling from the N pieces of target positioning information according to the polling priority;
adjusting the plurality of pieces of position information to be adjusted according to the target positioning information to be processed;
and stopping polling when each piece of target positioning information is determined to be the target positioning information to be processed, and combining the adjusted plurality of pieces of position information to be adjusted into the target behavior track.
In one embodiment, the processor 1004, when executing the adjustment of the plurality of position information to be adjusted according to the target positioning information to be processed, specifically executes the following steps:
acquiring a to-be-adjusted generation moment corresponding to each piece of to-be-adjusted position information and acquiring a to-be-processed behavior moment corresponding to the to-be-processed target positioning information;
if the plurality of to-be-adjusted generating moments comprise the to-be-processed behavior moment, taking the to-be-adjusted generating moment which is the same as the to-be-processed behavior moment as a target to-be-adjusted generating moment, and replacing to-be-adjusted position information of the target to-be-adjusted generating moment with the to-be-processed target positioning information;
and if the plurality of generating moments to be adjusted do not comprise the behavior moment to be processed, adding the target positioning information to be processed to the plurality of position information to be adjusted.
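The replace-or-add polling adjustment above can be sketched as follows. Using ascending behavior moment as the polling priority, and a dictionary keyed by generation time, are assumptions introduced for the example:

```python
from typing import Dict, Tuple

Position = Tuple[float, float]

def adjust_trace(to_adjust: Dict[float, Position],
                 target_info: Dict[float, Position]) -> Dict[float, Position]:
    """Poll each piece of target positioning information (here in ascending
    order of behavior moment, an assumed polling priority). If a to-be-adjusted
    generation time equals the to-be-processed behavior moment, the position is
    replaced; otherwise the target positioning information is added."""
    adjusted = dict(to_adjust)
    for moment in sorted(target_info):          # assumed polling priority
        adjusted[moment] = target_info[moment]  # replace if present, add if not
    return dict(sorted(adjusted.items()))       # target behavior track, in time order

trace = adjust_trace({1.0: (0.0, 0.0), 2.0: (5.0, 5.0)},
                     {2.0: (1.0, 1.0), 3.0: (2.0, 2.0)})
# the position at moment 2.0 is replaced; moment 3.0 is added
```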
In one embodiment, the processor 1004 also performs the steps of:
acquiring a face image of the target user;
acquiring a network card address of a target terminal carried by the target user;
and combining the face image, the network card address and the target behavior track into a processing result, and outputting the processing result.
In one embodiment, the processor 1004 also performs the steps of:
analyzing the behavior characteristics of the target user according to the target behavior track, and outputting the recommended service object of the target user according to the behavior characteristics.
It should be understood that the computer device 1000 described in the embodiment of the present invention may perform the description of the track data processing method in the embodiment corresponding to fig. 3 to 10b, and may also perform the description of the track data processing apparatus 1 in the embodiment corresponding to fig. 11, which is not repeated herein. In addition, the description of the beneficial effects of the same method is omitted.
Furthermore, it should be noted here that the embodiment of the present invention further provides a computer storage medium, in which the aforementioned computer program executed by the track data processing apparatus 1 is stored. The computer program includes program instructions which, when executed by a processor, can perform the track data processing method described in the embodiments corresponding to fig. 3 to 10b, and the description will therefore not be repeated here. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the embodiments of the computer storage medium according to the present invention, please refer to the description of the method embodiments of the present invention. As an example, the program instructions may be deployed on one computer device, executed on multiple computer devices at one site, or distributed across multiple sites interconnected by a communication network; the multiple computer devices distributed across multiple sites and interconnected by the communication network may constitute a blockchain system.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program, which may be stored on a computer-readable storage medium and which, when executed, may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing disclosure is illustrative of the present invention and is not to be construed as limiting the scope of the invention, which is defined by the appended claims.

Claims (15)

1. A track data processing method, comprising:
acquiring an original behavior track of a target user; the original behavior track is obtained by performing pedestrian re-identification processing on the target user on the user image acquired in the behavior occurrence scene within a behavior time period, wherein the behavior time period comprises N behavior moments; the original behavior track comprises position information corresponding to each acquired user image, wherein N is a positive integer;
Acquiring a target positioning information set of the target user in the behavior occurrence scene; the target positioning information set comprises N pieces of target positioning information, and one piece of target positioning information represents the position information of the target user at one action moment;
the original behavior track is adjusted according to the target positioning information set, and a target behavior track of the target user is obtained; the adjustment comprises the adjustment of the position information in the original behavior track, and the adjustment principle comprises the following steps: if the position information in the original behavior track is different from the target positioning information, the target positioning information is adopted to replace the position information in the original behavior track; and if the position information in the original behavior track does not contain the target positioning information, adding the target positioning information to the original behavior track.
2. The method of claim 1, wherein the obtaining the original behavior trace of the target user comprises:
acquiring a user image set from a plurality of imaging devices; the plurality of imaging devices are installed at different positions in the behavior occurrence scene, and the user images in the user image set are acquired at different positions in the behavior occurrence scene by the imaging devices in the behavior time period;
Selecting a plurality of target user images of the target user from the user image set;
acquiring first position information of imaging equipment corresponding to a target user image in the behavior occurrence scene, and acquiring generation time of the target user image;
and according to the generation time of each target user image, combining the first position information corresponding to each of the target user images into the original behavior track.
3. The method of claim 2, wherein the plurality of imaging devices includes an ingress imaging device mounted to an ingress location of the behavioral occurrence scene;
the selecting a plurality of target user images associated with the target user from the set of user images includes:
acquiring a user image of the target user from the entrance imaging device, and taking the user image acquired from the entrance imaging device as an image to be retrieved;
and searching a plurality of user images matched with the image to be searched in the user image set, and taking the searched plurality of user images as target user images.
4. The method of claim 2, wherein the obtaining the set of target location information for the target user in the behavioral occurrence scenario comprises:
Acquiring the target signal intensity of a target terminal carried by the target user at the action moment;
determining target positioning information of the target user according to the target signal intensity;
and combining N pieces of target positioning information into the target positioning information set.
5. The method of claim 4, wherein the plurality of target user images and the target terminal have a mapping relationship, and determining whether the plurality of target user images and the target terminal have a mapping relationship comprises:
dividing the user image set into a plurality of unit image sets; one unit image set corresponds to one user, the plurality of target user images belong to a target unit image set, and the target unit image set is one unit image set in the plurality of unit image sets;
extracting user images with the generation time equal to the behavior time from the unit image set, and taking the extracted user images as auxiliary user images; the distance difference between the first position information corresponding to the auxiliary user image and the target positioning information corresponding to the behavior moment is smaller than a distance difference threshold;
counting the number of auxiliary images of the auxiliary user images contained in each unit image set;
And if the auxiliary image quantity corresponding to the target unit image set is the maximum quantity of the auxiliary image quantities corresponding to all the unit image sets, determining that the target user images and the target terminal have a mapping relation.
6. The method of claim 4, wherein the target signal strength is a signal strength of a wifi signal, the target signal strength being obtained from a wifi probe device;
the determining the target positioning information of the target user according to the target signal strength includes:
determining the distance between the target terminal and the wifi probe equipment according to the target signal intensity;
and acquiring second position information of the wifi probe equipment in the behavior occurrence scene, and determining the target positioning information according to the second position information and the distance.
7. The method of claim 4, wherein the target signal strength is a signal strength of a bluetooth signal, the target signal strength being obtained from a bluetooth probe device;
the determining the target positioning information of the target user according to the target signal strength includes:
Acquiring a plurality of position fingerprints, wherein each position fingerprint comprises a fingerprint signal intensity and fingerprint position information;
determining a target location fingerprint from the plurality of location fingerprints that matches the target signal strength; the difference between the fingerprint signal intensity in the target position fingerprint and the target signal intensity is smaller than a signal intensity difference threshold;
and determining the target positioning information according to the fingerprint position information in the target position fingerprint.
8. The method of claim 1, wherein the original behavioral trajectory comprises a first behavioral trajectory and a second behavioral trajectory;
the step of adjusting the original behavior track according to the target positioning information set to obtain a target behavior track of the target user comprises the following steps:
selecting a behavior track to be adjusted, which is matched with the target positioning information set, from the first behavior track and the second behavior track;
and adjusting the behavior track to be adjusted according to the target positioning information set to obtain the target behavior track.
9. The method of claim 8, wherein the selecting a behavior trace to be adjusted from the first behavior trace and the second behavior trace that matches the set of target location information comprises:
Acquiring a first reference position information set of the target user at the N behavior moments from the first behavior track, and acquiring a second reference position information set of the target user at the N behavior moments from the second behavior track;
determining a first coincidence degree between the first reference position information set and the target positioning information set, and determining a second coincidence degree between the second reference position information set and the target positioning information set;
if the first coincidence degree is greater than a coincidence degree threshold and the second coincidence degree is less than the coincidence degree threshold, determining the first behavior track as the behavior track to be adjusted;
and if the first coincidence degree is less than the coincidence degree threshold and the second coincidence degree is greater than the coincidence degree threshold, determining the second behavior track as the behavior track to be adjusted.
10. The method of claim 9, wherein the first behavior track includes a plurality of original position information, and the first reference position information set includes N pieces of first reference position information;
the obtaining the first reference position information set of the target user at the N behavior moments from the first behavior track includes:
Acquiring an original generation moment corresponding to each original position information;
if the plurality of original generation moments comprise the action moment, acquiring original position information corresponding to the same original generation moment as the action moment from the first action track, and taking the acquired original position information as first reference position information of the target user at the action moment;
and if the plurality of original generation moments do not comprise the action moment, setting the first reference position information of the target user at the action moment to be empty.
11. The method of claim 8, wherein the behavior trace to be adjusted comprises a plurality of position information to be adjusted;
the step of adjusting the behavior track to be adjusted according to the target positioning information set to obtain the target behavior track comprises the following steps:
setting polling priority for each piece of target positioning information, and determining target positioning information to be processed for current polling from the N pieces of target positioning information according to the polling priority;
adjusting the plurality of pieces of position information to be adjusted according to the target positioning information to be processed;
and stopping polling when each piece of target positioning information is determined to be the target positioning information to be processed, and combining the adjusted plurality of pieces of position information to be adjusted into the target behavior track.
12. The method of claim 11, wherein said adjusting the plurality of position information to be adjusted according to the object positioning information to be processed comprises:
acquiring a to-be-adjusted generation moment corresponding to each piece of to-be-adjusted position information and acquiring a to-be-processed behavior moment corresponding to the to-be-processed target positioning information;
if the plurality of to-be-adjusted generating moments comprise the to-be-processed behavior moment, taking the to-be-adjusted generating moment which is the same as the to-be-processed behavior moment as a target to-be-adjusted generating moment, and replacing to-be-adjusted position information of the target to-be-adjusted generating moment with the to-be-processed target positioning information;
and if the plurality of generating moments to be adjusted do not comprise the behavior moment to be processed, adding the target positioning information to be processed to the plurality of position information to be adjusted.
13. A track data processing apparatus, comprising:
the first acquisition module is used for acquiring the original behavior track of the target user; the original behavior track is obtained by performing pedestrian re-identification processing on the target user on the user image acquired in the behavior occurrence scene within a behavior time period, wherein the behavior time period comprises N behavior moments; the original behavior track comprises position information corresponding to each acquired user image, wherein N is a positive integer;
The second acquisition module is used for acquiring a target positioning information set of the target user in the behavior occurrence scene; the target positioning information set comprises N pieces of target positioning information, and one piece of target positioning information represents the position information of the target user at one action moment;
the adjustment module is used for adjusting the original behavior track according to the target positioning information set to obtain a target behavior track of the target user; the adjustment comprises the adjustment of the position information in the original behavior track, and the adjustment principle comprises the following steps: if the position information in the original behavior track is different from the target positioning information, the target positioning information is adopted to replace the position information in the original behavior track; and if the position information in the original behavior track does not contain the target positioning information, adding the target positioning information to the original behavior track.
14. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1-12.
15. A computer storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions which, when executed by a processor, perform the method of any of claims 1-12.
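The adjustment rule recited in the claims above can be sketched as follows. This is an illustrative reading only, not the patented implementation: the dict representation (behavior moment mapped to a position tuple) and the function name `adjust_track` are assumptions introduced here.

```python
def adjust_track(original_track, target_positions):
    """Merge a target positioning information set into an original behavior track.

    Follows the claimed rule: a target position whose behavior moment already
    appears in the track replaces the track's position at that moment; a target
    position at a moment absent from the track is added to the track.
    Both arguments are dicts mapping behavior moment -> (x, y) position.
    """
    adjusted = dict(original_track)  # copy so the original track is untouched
    for moment, target_pos in target_positions.items():
        if moment in adjusted:
            if adjusted[moment] != target_pos:
                adjusted[moment] = target_pos  # replace differing position info
        else:
            adjusted[moment] = target_pos      # moment missing from track: add it
    return adjusted

original = {1: (0.0, 0.0), 2: (1.0, 1.0)}
targets = {2: (1.2, 0.9), 3: (2.0, 2.1)}
print(adjust_track(original, targets))
```

In this sketch the target positioning set is treated as authoritative, matching the claims' principle that positioning information always overrides the re-identification-derived track position at the same behavior moment.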
CN202010448982.6A 2020-05-25 2020-05-25 Track data processing method, track data processing device, computer equipment and storage medium Active CN111639968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010448982.6A CN111639968B (en) 2020-05-25 2020-05-25 Track data processing method, track data processing device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010448982.6A CN111639968B (en) 2020-05-25 2020-05-25 Track data processing method, track data processing device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111639968A CN111639968A (en) 2020-09-08
CN111639968B true CN111639968B (en) 2023-11-03

Family

ID=72332952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010448982.6A Active CN111639968B (en) 2020-05-25 2020-05-25 Track data processing method, track data processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111639968B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112533031B (en) * 2020-11-23 2022-10-04 北京爱笔科技有限公司 Track video recommendation method and device, computer equipment and storage medium
CN112560702B (en) * 2020-12-17 2024-08-20 北京赢识科技有限公司 User interest portrait generation method, device, electronic equipment and medium
CN112749638A (en) * 2020-12-28 2021-05-04 深兰人工智能(深圳)有限公司 Error screening method for visual recognition track and visual recognition method for sales counter
CN112735030B (en) * 2020-12-28 2022-08-19 深兰人工智能(深圳)有限公司 Visual identification method and device for sales counter, electronic equipment and readable storage medium
CN112859125B (en) * 2021-01-07 2022-03-11 腾讯科技(深圳)有限公司 Entrance and exit position detection method, navigation method, device, equipment and storage medium
CN112887911B (en) * 2021-01-19 2022-03-15 湖南大学 Indoor positioning method based on WIFI and vision fusion
CN117830958B (en) * 2024-03-04 2024-06-11 罗普特科技集团股份有限公司 Passenger clearance and weather detection management method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704824A (en) * 2017-09-30 2018-02-16 北京正安维视科技股份有限公司 Pedestrian based on space constraint recognition methods and equipment again
CN109102531A (en) * 2018-08-21 2018-12-28 北京深瞐科技有限公司 A kind of target trajectory method for tracing and device
CN109684896A (en) * 2018-12-29 2019-04-26 Tcl通力电子(惠州)有限公司 Infrared positioning method, terminal and computer readable storage medium
CN109743541A (en) * 2018-12-15 2019-05-10 深圳壹账通智能科技有限公司 Intelligent control method, device, computer equipment and storage medium
CN110210276A (en) * 2018-05-15 2019-09-06 腾讯科技(深圳)有限公司 A kind of motion track acquisition methods and its equipment, storage medium, terminal
CN111046752A (en) * 2019-11-26 2020-04-21 上海兴容信息技术有限公司 Indoor positioning method and device, computer equipment and storage medium
CN111091057A (en) * 2019-11-15 2020-05-01 腾讯科技(深圳)有限公司 Information processing method and device and computer readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704824A (en) * 2017-09-30 2018-02-16 北京正安维视科技股份有限公司 Pedestrian based on space constraint recognition methods and equipment again
CN110210276A (en) * 2018-05-15 2019-09-06 腾讯科技(深圳)有限公司 A kind of motion track acquisition methods and its equipment, storage medium, terminal
CN109102531A (en) * 2018-08-21 2018-12-28 北京深瞐科技有限公司 A kind of target trajectory method for tracing and device
CN109743541A (en) * 2018-12-15 2019-05-10 深圳壹账通智能科技有限公司 Intelligent control method, device, computer equipment and storage medium
CN109684896A (en) * 2018-12-29 2019-04-26 Tcl通力电子(惠州)有限公司 Infrared positioning method, terminal and computer readable storage medium
CN111091057A (en) * 2019-11-15 2020-05-01 腾讯科技(深圳)有限公司 Information processing method and device and computer readable storage medium
CN111046752A (en) * 2019-11-26 2020-04-21 上海兴容信息技术有限公司 Indoor positioning method and device, computer equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Visual abnormal behavior detection based on trajectory sparse reconstruction analysis; Ce Li et al.; Neurocomputing; pp. 94-100 *
Research on an indoor positioning method based on behavior and track features; Li Baolin; Ma Dongqun; Li Du; Jiang Yan; Computer Applications and Software (No. 08); full text *
Indoor positioning system based on multi-information fusion; Niu Jianwei; Qi Zhiping; Lv Weifeng; Li Huiyong; Sheng Hao; Chinese Journal on Internet of Things (No. 01); full text *
Research on multi-view pedestrian re-identification algorithms and data collection; Jiang Haiqiang; Computer Programming Skills & Maintenance (No. 01); full text *

Also Published As

Publication number Publication date
CN111639968A (en) 2020-09-08

Similar Documents

Publication Publication Date Title
CN111639968B (en) Track data processing method, track data processing device, computer equipment and storage medium
US11670058B2 (en) Visual display systems and method for manipulating images of a real scene using augmented reality
CN111476306A (en) Object detection method, device, equipment and storage medium based on artificial intelligence
CN112069414A (en) Recommendation model training method and device, computer equipment and storage medium
CN113228064A (en) Distributed training for personalized machine learning models
CN110298240B (en) Automobile user identification method, device, system and storage medium
CN113515942A (en) Text processing method and device, computer equipment and storage medium
CN103530649A (en) Visual searching method applicable mobile terminal
CN112163428A (en) Semantic tag acquisition method and device, node equipment and storage medium
JP2021515321A (en) Media processing methods, related equipment and computer programs
CN114881711B (en) Method for carrying out exception analysis based on request behaviors and electronic equipment
CN111126358B (en) Face detection method, device, storage medium and equipment
CN110097004B (en) Facial expression recognition method and device
CN113343069B (en) User information processing method, device, medium and electronic equipment
CN113128526B (en) Image recognition method and device, electronic equipment and computer-readable storage medium
CN114281936A (en) Classification method and device, computer equipment and storage medium
CN117953581A (en) Method and device for identifying actions, electronic equipment and readable storage medium
KR102533512B1 (en) Personal information object detection method and device
CN116823884A (en) Multi-target tracking method, system, computer equipment and storage medium
US20190311184A1 (en) High Accuracy and Volume Facial Recognition on Mobile Platforms
CN116188887A (en) Attribute recognition pre-training model generation method and attribute recognition model generation method
CN114827702B (en) Video pushing method, video playing method, device, equipment and medium
CN111291640B (en) Method and apparatus for recognizing gait
CN113628272A (en) Indoor positioning method and device, electronic equipment and storage medium
CN108960014B (en) Image processing method, device and system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40028562

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant