CN115147934B - Behavior analysis method, behavior analysis device, behavior analysis equipment and computer readable storage medium - Google Patents


Info

Publication number: CN115147934B
Application number: CN202211067658.5A
Authority: CN (China)
Prior art keywords: entity, space, target, target entity, entities
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN115147934A
Inventors: 孟伟灿, 陈小卫, 时信华
Current and original assignee: Zhongke Star Map Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Zhongke Star Map Co., Ltd.
Priority to CN202211067658.5A
Publication of application CN115147934A; application granted; publication of grant CN115147934B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30241 Trajectory

Abstract

Embodiments of the present disclosure provide a behavior analysis method, apparatus, device, and computer-readable storage medium, applied to the field of communication technology. The method includes: detecting that a target entity exhibits a preset behavior; determining the spatial grid code of the target entity; retrieving the spatio-temporal data of the target entity according to the spatial grid code; and obtaining a timestamped spatial movement trajectory of the target entity from its spatio-temporal data. In this way, a more comprehensive, three-dimensional spatial movement trajectory is obtained instead of a two-dimensional one, which in turn enables more accurate subsequent behavior analysis.

Description

Behavior analysis method, behavior analysis device, behavior analysis equipment and computer readable storage medium
Technical Field
Embodiments of the present disclosure relate generally to the field of communications, and more particularly, to a behavior analysis method, apparatus, device, and computer-readable storage medium.
Background
When a user exhibits a specified behavior (for example, a suspicious or dangerous behavior), the user's behavior trajectory is often tracked so that further operations, such as behavior analysis, can be performed. Conventionally, the tracked trajectory is two-dimensional, because traditional spatio-temporal data determine a user's position with longitude and latitude alone. The resulting trajectory is therefore inaccurate, and the behavior analysis built upon it is inaccurate as well.
In addition, spatio-temporal data are usually associated with entity identifiers, but the same entity is often named differently across systems, which makes unification and interaction between systems difficult and leaves the spatio-temporal data fragmented.
Disclosure of Invention
According to an embodiment of the present disclosure, a behavior analysis scheme is provided.
In a first aspect of the disclosure, a behavior analysis method is provided. The method comprises the following steps: detecting that a target entity has a preset behavior;
determining a spatial grid code of the target entity;
searching the space-time data of the target entity according to the space grid code;
and acquiring the space movement track of the target entity carrying the time stamp according to the space-time data of the target entity.
The above-described aspects and any possible implementations further provide an implementation, and the method further includes:
acquiring a target shooting video associated with the space movement track;
and generating a behavior analysis result aiming at the target entity according to the preset behavior, the target shooting video and the space movement track.
The above aspect and any possible implementation manner further provide an implementation manner, where generating a behavior analysis result for the target entity according to the preset behavior, the target captured video, and the spatial movement trajectory includes:
identifying the behavior of the target entity in the target shooting video to obtain a behavior identification result and the credibility of the behavior identification result;
and generating a behavior analysis result aiming at the target entity according to the preset behavior, the behavior recognition result, the credibility of the behavior recognition result and the space movement track.
The above-mentioned aspect and any possible implementation manner further provide an implementation manner, where the acquiring a target captured video associated with the spatial movement trajectory includes:
calling up a historical captured video associated with the spatial movement trajectory of the target entity, and taking the historical captured video as the target captured video; or
Determining shooting equipment matched with the space movement track of the target entity;
and sending a shooting instruction to the matched shooting equipment to control the matched shooting equipment to shoot the target entity to obtain a target shooting video.
The above aspect and any possible implementation manner further provide an implementation manner, where the determining the spatial grid code of the target entity includes:
identifying the target entity to obtain an entity identification result;
determining the IDcode of the target entity according to the entity identification result;
and obtaining the spatial grid code of the target entity according to the indexes established for the entities and the IDcode of the target entity.
The above-described aspects and any possible implementations further provide an implementation, and the method further includes:
allocating IDcodes for each entity;
acquiring the spatiotemporal data of each entity;
obtaining space grid codes corresponding to the entities according to the space-time data of the entities;
and establishing indexes for the entities according to the IDcodes of the entities and the spatial grid codes corresponding to the entities.
The above-mentioned aspect and any possible implementation manner further provide an implementation manner, where obtaining the spatial grid code corresponding to each entity according to the spatio-temporal data of each entity includes:
determining the level of the space grid code corresponding to each entity according to a preset rule;
determining the space grid code corresponding to each entity according to the hierarchy of the space grid code corresponding to each entity and the space-time data of each entity;
obtaining spatiotemporal data of the entities, comprising:
after the spatiotemporal data of different entities are obtained, the same entity in the different spatiotemporal data is fused to obtain the spatiotemporal data fused with the entities.
In a second aspect of the present disclosure, a behavior analysis device is provided. The device includes:
the detection module is used for detecting that a target entity has a preset behavior;
a determining module for determining a spatial grid code of the target entity;
the searching module is used for searching the space-time data of the target entity according to the space grid code;
and the acquisition module is used for acquiring the space movement track of the target entity carrying the time stamp according to the space-time data of the target entity.
In a third aspect of the disclosure, an electronic device is provided. The electronic device includes: a memory having a computer program stored thereon and a processor implementing the method as described above when executing the program.
In a fourth aspect of the present disclosure, a computer readable storage medium is provided, having stored thereon a computer program, which when executed by a processor, implements a method as in accordance with the first aspect of the present disclosure.
By the behavior analysis method provided by the embodiment of the disclosure, a more comprehensive and more three-dimensional space movement track rather than a two-dimensional track can be obtained, so that behavior analysis can be performed more accurately afterwards.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an exemplary operating environment in which embodiments of the present disclosure can be implemented;
FIG. 2 shows a flow diagram of a behavior analysis method according to an embodiment of the present disclosure;
FIG. 3 shows a block diagram of a behavior analysis device according to an embodiment of the present disclosure;
FIG. 4 illustrates a block diagram of an exemplary electronic device capable of implementing embodiments of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" herein generally indicates an "or" relationship between the objects before and after it.
FIG. 1 illustrates a schematic diagram of an exemplary operating environment 100 in which embodiments of the present disclosure can be implemented. Included in the runtime environment 100 are a client 101, a network 102, and a server 103.
It should be understood that the number of clients 101, networks 102, and servers 103 in fig. 1 is merely illustrative. There may be any number of clients 101, as desired for an implementation.
FIG. 2 shows a flow diagram of a behavior analysis method 200 according to an embodiment of the present disclosure. As shown in fig. 2, the method includes the following steps:
step 210, detecting that a target entity has a preset behavior;
the predetermined behavior may be a suspicious dangerous behavior such as a behavior of hitting a person, a behavior of taking a dangerous device, etc. Of course, it may also be determined by the characteristics of the target entity, such as specifying a preset behavior according to the work of the target entity, for example: if the target entity is engaged in driving work, the preset behaviors can be sudden braking, sudden acceleration and the like; if the target entity is a student, the preset action may be to make and receive calls in a classroom, etc.
Step 220, determining the spatial grid code of the target entity;
step 230, searching the spatio-temporal data of the target entity according to the spatial grid code;
the space Grid Code (Beidou Grid Code, BGC for short), also called Beidou Grid position Code, sometimes called Beidou navigation Grid Code, is a global area position identification Code. The difference between the spatial grid code and the longitude and latitude code is that the longitude and latitude code represents the position of a point by a pair of coordinates, and the spatial grid code represents the position of an area by a shaping number. The earth partitioning model of the space grid code adopts a GeoSOT earth partitioning model, the earth partitioning model is divided into billions of large to global spaces and centimeter-level mesh bodies from the space of more than 6 kilometers of space to the periphery of the earth and from the space to the center of the earth, the earth partitioning model is composed of 32-level partition body elements, and when the height is zero, the model is the mesh on the surface of the earth. Each grid and the network body are filled in the whole space, and the programming of the space grid code is that under the 32-level grid division system, the first level is 1: the image frame is divided into 100 ten thousand, a 9-level grid section and grid coding are formed under the image frame, the coding result is the string of codes, the first level is N32G,1: the map with 100 ten thousand frames is coded by a grid, and the 15-digit number represents a grid of a square one meter on the earth. The 19 digits represent a 1.5 cm grid that is objectively present on the earth, which is the coordinates of a region's location.
The spatio-temporal data are multi-source heterogeneous spatio-temporal data, including global data such as vector, imagery, terrain, and oblique-photography data acquired by satellite, aviation, and similar means; street-view and surveillance data acquired by unmanned aerial vehicles, fixed cameras, and the like; and location data sent by the client 101 in fig. 1. For example, if the client 101 is a navigation software client, it can send its navigation data, including its user ID and its location at various points in time, to the server 103 via the network 102.
In addition, the multi-source heterogeneous spatio-temporal data are stored in corresponding databases or storage centers according to preset storage rules.
It should also be noted that, in the preceding storage stage, each piece of spatio-temporal data is assigned a corresponding spatial grid code according to the longitude, latitude, and height data it contains, and is then stored accordingly.
Step 240, obtaining a spatial movement track carrying a timestamp of the target entity according to the spatio-temporal data of the target entity, where the spatial movement track includes a height of the target entity in the space.
Once the target entity exhibits the preset behavior, its spatial grid code can be determined, and the corresponding three-dimensional spatio-temporal data can then be quickly retrieved based on that code, so that the spatial movement trajectory, that is, the three-dimensional movement trajectory of the target entity, can be obtained accurately.
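Steps 210 through 240 can be sketched end to end with toy in-memory stores. Everything here, the index layout, the record tuples, and the example IDcode and grid-code values, is illustrative, not taken from the patent.

```python
# Toy sketch of steps 220-240: IDcode -> grid code -> spatio-temporal
# records -> timestamped 3-D trajectory (timestamp, lon, lat, height).

# index: entity IDcode -> spatial grid code (hypothetical values)
grid_index = {"MA.vehicle/car.001": "N32G-0123"}

# spatio-temporal store keyed by grid code
st_store = {
    "N32G-0123": [
        (1700000300, 116.40, 39.91, 45.0),
        (1700000000, 116.39, 39.90, 44.0),
    ],
}

def spatial_trajectory(idcode: str) -> list:
    """Resolve the grid code, fetch records, return a time-ordered track."""
    code = grid_index[idcode]           # step 220: determine grid code
    records = st_store.get(code, [])    # step 230: search spatio-temporal data
    return sorted(records)              # step 240: timestamp-ordered 3-D track

track = spatial_trajectory("MA.vehicle/car.001")
```

Note that the returned tuples carry a height component, which is precisely what makes the trajectory three-dimensional rather than a flat longitude/latitude track.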
In one embodiment, the method further comprises:
acquiring a target shooting video associated with the space movement track;
and generating a behavior analysis result aiming at the target entity according to the preset behavior, the target shooting video and the space movement track.
By acquiring the target captured video associated with the spatial movement trajectory, a behavior analysis result for the target entity can be produced from three items of information together: the preset behavior, the target captured video, and the spatial movement trajectory. Because the spatial movement trajectory is three-dimensional, the videos associated with it are more numerous and more comprehensive, so the resulting behavior analysis is both more comprehensive and more accurate.
In one embodiment, the generating a behavior analysis result for the target entity according to the preset behavior, the target shooting video and the spatial movement track includes:
identifying the behavior of the target entity in the target shooting video to obtain a behavior identification result and the credibility of the behavior identification result;
and generating a behavior analysis result aiming at the target entity according to the preset behavior, the behavior recognition result, the credibility of the behavior recognition result and the space movement track.
Once the target entity is locked onto in the target captured video, its behavior in the video can be further identified to obtain a behavior recognition result and the credibility of that result; these four items of information can then be combined to generate a more accurate and more comprehensive behavior analysis result.
In addition, behavior recognition can be performed with convolutional neural network models trained on a large number of behaviors, producing a behavior recognition result and its credibility. Alternatively, an image segmentation frame can first be added to the target captured video to set an initial image recognition region; the incomplete edges of the target entity appearing on the border of that region are then located, the initial region is expanded based on those edges to obtain a final image recognition region, and the behavior analysis of the target entity is completed on the final region.
If the credibility of a behavior recognition result is low, that result can be discarded.
For example, the preset behavior of the target entity may be a driver's sudden braking. Suppose that, after the target captured video corresponding to the driver's spatial movement trajectory over a certain period is analyzed, the recognition results are 2 sudden-braking behaviors, 1 sudden-acceleration behavior, and 1 running-a-red-light behavior, each with credibility above 80%, so none of them needs to be discarded. From the preset behavior on the spatial movement trajectory and these 4 recognition results, a behavior analysis result that the driver is driving dangerously can then be generated.
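The credibility filter and the combination step in the example above can be sketched as follows. The 80% threshold comes from the example; the decision rule and the labels are stand-ins chosen for illustration.

```python
# Sketch: keep recognition results above a credibility threshold,
# then judge the entity from the surviving behaviors.

def filter_results(results, threshold=0.8):
    """results: list of (behavior_label, credibility) pairs."""
    return [(label, conf) for label, conf in results if conf > threshold]

def analyze(preset_behavior, results, threshold=0.8):
    kept = filter_results(results, threshold)
    # toy decision rule (illustrative only): the preset behavior plus
    # several confirmed risky behaviors on one trajectory -> dangerous driving
    if preset_behavior == "sudden braking" and len(kept) >= 3:
        return "dangerous driving"
    return "no conclusion"

driver_results = [
    ("sudden braking", 0.92), ("sudden braking", 0.88),
    ("sudden acceleration", 0.85), ("running red light", 0.83),
]
```

With all four credibilities above 0.8, nothing is discarded and the combined judgment matches the example in the text.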
In one embodiment, the acquiring a target shooting video associated with the spatial movement trajectory includes:
calling a historical shooting video associated with the space movement track of the target entity, and taking the historical shooting video as the target shooting video; or
Determining a shooting device matched with the space movement track of the target entity;
and sending a shooting instruction to the matched shooting equipment to control the matched shooting equipment to shoot the target entity to obtain a target shooting video.
The target captured video may be obtained by calling up historical footage associated with the spatial movement trajectory. Alternatively, since the spatial movement trajectory is updated in real time, matching capture devices can be selected in real time along the trajectory and instructed to shoot the target entity, yielding a target captured video tracked in real time. This improves the flexibility of video acquisition.
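The two acquisition paths can be sketched as a single dispatch function. The archive, the camera list, and the grid-based coverage test are all invented stand-ins; the patent does not specify how a device "matches" a trajectory.

```python
# Sketch: reuse archived footage tied to the trajectory, or dispatch
# cameras whose (assumed) grid-cell coverage intersects the live trajectory.

history = {"track-1": "archive/track-1.mp4"}       # hypothetical archive
cameras = [
    {"id": "cam-7", "grid": "N32G-0123"},
    {"id": "cam-9", "grid": "N32G-0456"},
]

def target_video(track_id, track_grids, live=False):
    if not live and track_id in history:
        # path 1: call up the associated historical captured video
        return ("historical", history[track_id])
    # path 2: match capture devices to the trajectory and shoot in real time
    matched = [c["id"] for c in cameras if c["grid"] in track_grids]
    return ("live", matched)
```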
In one embodiment, the determining the spatial grid code of the target entity includes:
identifying the target entity to obtain an entity identification result;
The target entity can be identified from a photograph or video containing it, yielding one or more entity recognition results; an entity recognition result may be an identifier of the target entity, such as a name, an identity card number, or an account number.
Determining the IDcode of the target entity according to the entity identification result (and the corresponding relation between the entity identification result and the IDcode);
of course, the IDcode of the present disclosure may be replaced with MA coding.
Because different recognition results for the same target entity map to the same IDcode, the problem that different systems name the same entity differently, making unification and interaction between systems difficult and fragmenting the spatio-temporal data, can be avoided.
And obtaining the spatial grid code of the target entity according to the indexes established for the entities and the IDcode of the target entity.
After determining the IDcode of the target entity, the spatial grid code of the target entity can be accurately and uniquely determined in combination with the index established for each entity in advance.
In one embodiment, the method further comprises:
allocating IDcodes for each entity;
the IDcode is an international two-dimension code object identification system, analysis of different standard coding systems and different standard coding systems is realized, and the coding structure is a tree-shaped structure and is divided into three parts: unit root, object category and customization. The first part is a unit root, the second part is used for object classification, and the third part is a unit which is self-defined according to the application requirement; furthermore, each part is separated by a "/" symbol, and nodes inside each part are separated by a ". Multidot.n" symbol; in addition, when identifying an object, the identifier is a character string composed of nodes sequentially combined on all paths from the root to the leaf.
Acquiring the spatiotemporal data of each entity;
obtaining space grid codes corresponding to the entities according to the space-time data of the entities;
and establishing indexes for the entities according to the IDcodes of the entities and the space grid codes corresponding to the entities.
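The three-part, tree-structured identifier described above (parts separated by "/", nodes within a part separated by ".") can be sketched as a build/parse pair. The concrete node values below are invented purely for illustration.

```python
# Sketch of the IDcode string layout: unit root / object category / custom
# part, with "." separating the nodes inside each part.

def build_idcode(root_nodes, category_nodes, custom_nodes):
    parts = (root_nodes, category_nodes, custom_nodes)
    return "/".join(".".join(p) for p in parts)

def parse_idcode(idcode):
    """Inverse of build_idcode: split into the three parts' node lists."""
    return [part.split(".") for part in idcode.split("/")]

code = build_idcode(["MA", "156"], ["vehicle", "car"], ["plate", "A12345"])
```

Round-tripping through `parse_idcode` recovers the node lists, which is what lets heterogeneous systems agree on one identifier while still reading out its parts.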
By allocating an IDcode to each entity, every entity, whether a motor vehicle, a non-motor vehicle, or a pedestrian, receives a unified, unique identifier. The uniqueness of the IDcode also makes information integration and sharing across heterogeneous data stores and information platforms more convenient, and avoids the difficulty of unifying systems and the fragmentation of spatio-temporal data caused by systems naming the same entity differently. The spatial grid code of each entity can be obtained accurately from its spatio-temporal data, and an index can then be built for each entity from its IDcode and its spatial grid codes, so that heterogeneous spatio-temporal data can be associated more effectively.
In addition, after the index is established, the user can search through the index, for example, the index input by the user can be a spatial grid code index or an IDcode index.
If the user inputs a spatial grid code as the index, the IDcodes of the entities that have appeared in that BeiDou grid, together with the corresponding fused spatio-temporal data, can be queried.
If the user inputs an IDcode as the index, the spatial grid codes corresponding to that IDcode and the corresponding fused spatio-temporal data can be queried, and the motion trajectory of that entity can then be obtained from the fused spatio-temporal data.
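The two query directions above can be sketched with a pair of in-memory maps. The structures and values are purely illustrative; a real system would back them with the databases mentioned earlier.

```python
# Sketch of the two-way index: grid code -> IDcodes seen in that grid,
# and IDcode -> grid codes (from which a motion track is read out).

grid_to_ids = {"N32G-0123": ["MA.vehicle/car.001"]}
id_to_grids = {"MA.vehicle/car.001": ["N32G-0123", "N32G-0124"]}
fused = {  # fused spatio-temporal records: (timestamp, lon, lat, height)
    ("MA.vehicle/car.001", "N32G-0123"): [(1700000000, 116.39, 39.90, 44.0)],
    ("MA.vehicle/car.001", "N32G-0124"): [(1700000300, 116.40, 39.91, 45.0)],
}

def query_by_grid(grid_code):
    """Grid-code index: entities seen in the grid, with their fused data."""
    return [(i, fused[(i, grid_code)]) for i in grid_to_ids.get(grid_code, [])]

def query_by_idcode(idcode):
    """IDcode index: the entity's grid codes plus its time-ordered track."""
    grids = id_to_grids.get(idcode, [])
    records = [r for g in grids for r in fused[(idcode, g)]]
    return grids, sorted(records)
```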
In one embodiment, the obtaining the spatial grid code corresponding to each entity according to the spatio-temporal data of each entity includes:
determining the levels of the space grid codes corresponding to the entities according to a preset rule;
determining the space grid code corresponding to each entity according to the hierarchy of the space grid code corresponding to each entity and the space-time data of each entity;
For example, the level of the corresponding spatial grid code is determined according to each entity's size, moving speed, and so on; the spatial grid code of each entity is then determined according to that level.
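One plausible form of such a preset rule is sketched below: finer grid levels for small, slow entities and coarser ones for large or fast ones. The thresholds and the specific level numbers are invented; the text only states that size, speed, and similar attributes feed the rule.

```python
# Hypothetical level-selection rule (all thresholds illustrative).

def grid_level(size_m: float, speed_mps: float) -> int:
    if size_m < 2 and speed_mps < 10:   # e.g. pedestrians
        return 21                        # finer, roughly metre-scale cells
    if size_m < 10:                      # e.g. cars
        return 18
    return 15                            # ships, aircraft: coarser cells
```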
Obtaining spatiotemporal data of the entities, comprising:
after the spatiotemporal data of different entities are obtained, the same entity in the different spatiotemporal data is fused to obtain the spatiotemporal data fused with the entities.
Because a large amount of spatio-temporal data accumulates from different sources, records belonging to the same entity can be fused according to entity identity, yielding fused spatio-temporal data per entity and integrating each entity's spatio-temporal data.
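The fusion step can be sketched as grouping by entity identifier, de-duplicating, and time-ordering. The record layout (IDcode, timestamp, lon, lat, height) is an illustrative stand-in for the heterogeneous sources named earlier (satellites, cameras, navigation clients).

```python
# Sketch: merge per-entity records from multiple sources into one
# de-duplicated, timestamp-ordered set per entity.

def fuse(records):
    """records: iterable of (idcode, timestamp, lon, lat, height)."""
    grouped = {}
    for idcode, *obs in records:
        grouped.setdefault(idcode, set()).add(tuple(obs))  # drop duplicates
    return {i: sorted(obs) for i, obs in grouped.items()}  # time-order each

raw = [
    ("id-1", 1700000300, 116.40, 39.91, 45.0),
    ("id-1", 1700000000, 116.39, 39.90, 44.0),
    ("id-1", 1700000000, 116.39, 39.90, 44.0),  # duplicate from a 2nd source
    ("id-2", 1700000100, 116.50, 39.95, 30.0),
]
merged = fuse(raw)
```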
It should be noted that for simplicity of description, the above-mentioned method embodiments are described as a series of acts, but those skilled in the art should understand that the present disclosure is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present disclosure. Further, those skilled in the art will appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules are not necessarily required for the disclosure.
The above is a description of embodiments of the method, and the embodiments of the apparatus are further described below.
Fig. 3 shows a block diagram of a behavior analysis device 300 according to an embodiment of the present disclosure. As shown in fig. 3, the behavior analysis device 300 includes:
the detection module 310 is configured to detect that a preset behavior occurs in a target entity;
a determining module 320 for determining a spatial grid code of the target entity;
a searching module 330, configured to search for the spatiotemporal data of the target entity according to the spatial grid code;
an obtaining module 340, configured to obtain, according to the spatio-temporal data of the target entity, a spatial movement trajectory of the target entity that carries a timestamp.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Fig. 4 illustrates a block diagram of an exemplary electronic device 400 capable of implementing embodiments of the present disclosure. As shown in fig. 4, the device includes a central processing unit (CPU) 401, which can perform various appropriate actions and processes according to computer program instructions stored in a read-only memory (ROM) 402 or loaded from a storage unit 408 into a random access memory (RAM) 403. Various programs and data necessary for the operation of the electronic device 400 can also be stored in the RAM 403. The CPU 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
A number of components in the electronic device 400 are connected to the I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408 such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the electronic device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Processing unit 401 performs various methods and processes described above, such as method 200. For example, in some embodiments, the method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by CPU 401, one or more steps of method 200 described above may be performed. Alternatively, in other embodiments, the CPU 401 may be configured to perform the method 200 in any other suitable manner (e.g., by way of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (5)

1. A method of behavioral analysis, comprising:
detecting that a target entity has a preset behavior;
determining a spatial grid code of the target entity, comprising: identifying the target entity to obtain an entity identification result; determining the IDcode of the target entity according to the entity identification result; obtaining a spatial grid code of the target entity according to the indexes established for the entities and the IDcode of the target entity;
searching the space-time data of the target entity according to the spatial grid code;
acquiring a space movement track of the target entity, carrying timestamps, according to the space-time data of the target entity;
acquiring a target shooting video associated with the space movement track;
identifying the behavior of the target entity in the target shooting video to obtain a behavior identification result and the credibility of the behavior identification result;
generating a behavior analysis result for the target entity according to the preset behavior, the behavior recognition result, the credibility of the behavior recognition result, and the space movement track; wherein the method further comprises:
allocating an IDcode to each entity; acquiring spatiotemporal data of each entity; obtaining spatial grid codes corresponding to the entities according to the spatiotemporal data of the entities; and establishing indexes for the entities according to the IDcodes of the entities and the spatial grid codes corresponding to the entities;
wherein obtaining the spatial grid codes corresponding to the entities according to the spatiotemporal data of the entities comprises: determining a level of the spatial grid code corresponding to each entity according to a preset rule; and determining the spatial grid code corresponding to each entity according to the level of that spatial grid code and the spatiotemporal data of that entity;
wherein acquiring the spatiotemporal data of the entities comprises: after spatiotemporal data of different entities are acquired, fusing records of the same entity across the different spatiotemporal data to obtain fused spatiotemporal data of the entities.
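The indexing scheme recited in claim 1 — assign each entity an IDcode, grid-code its spatio-temporal observations at a chosen level, and index by IDcode so a timestamped movement track can be searched — can be sketched as follows. This is an illustrative reading only, not the patented implementation: the quadtree-style encoding is a simplified stand-in for a GeoSOT-like subdivision, and the class and record shapes are assumptions.

```python
from collections import defaultdict

def grid_code(lon: float, lat: float, level: int) -> str:
    """Quadtree-style spatial grid code: at each level the cell containing
    (lon, lat) is split into four children; the code is the digit path.
    A simplified stand-in for a GeoSOT-like subdivision (assumption)."""
    x = min(int((lon + 180.0) / 360.0 * (1 << level)), (1 << level) - 1)
    y = min(int((lat + 90.0) / 180.0 * (1 << level)), (1 << level) - 1)
    return "".join(
        str(((x >> i) & 1) | (((y >> i) & 1) << 1))
        for i in range(level - 1, -1, -1)
    )

class EntityIndex:
    """Index from each entity's IDcode to its grid-coded observations, so the
    space-time data of a target entity can be searched by IDcode and its
    timestamped space movement track recovered."""

    def __init__(self, level: int = 16):
        self.level = level                    # preset rule: one fixed level
        self.by_id = defaultdict(list)        # IDcode -> [(grid_code, timestamp)]

    def add_observation(self, idcode: str, lon: float, lat: float, ts: float) -> None:
        self.by_id[idcode].append((grid_code(lon, lat, self.level), ts))

    def trajectory(self, idcode: str):
        """Time-ordered space movement track as (grid code, timestamp) pairs."""
        return sorted(self.by_id[idcode], key=lambda rec: rec[1])
```

Once built, a lookup reduces to `EntityIndex.trajectory(idcode)`; the grid cells of the returned track are what a video-retrieval step could then match against.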
2. The method of claim 1, wherein the acquiring of the target shooting video associated with the space movement track comprises:
retrieving a historical shooting video associated with the space movement track of the target entity, and taking the historical shooting video as the target shooting video; or
determining shooting equipment matched with the space movement track of the target entity;
and sending a shooting instruction to the matched shooting equipment to control the matched shooting equipment to shoot the target entity to obtain a target shooting video.
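The two alternatives in claim 2 — reuse archived footage covering the track, or dispatch a camera matched to the track — might look like the sketch below. The record layouts, the `cells` coverage field, and the overlap heuristic are all assumptions introduced for illustration.

```python
def acquire_target_video(track, historical_videos, cameras):
    """Return ("historical", video) when archived footage covers the track;
    otherwise dispatch the camera whose coverage overlaps the most track cells.
    `track` is [(grid_code, timestamp)]; each video/camera carries a `cells` set."""
    track_cells = {cell for cell, _ in track}
    for video in historical_videos:
        if track_cells & video["cells"]:          # archived footage matches the track
            return ("historical", video)
    best = max(cameras, key=lambda cam: len(track_cells & cam["cells"]), default=None)
    if best is not None and track_cells & best["cells"]:
        best["shoot"] = True                      # stand-in for a shooting instruction
        return ("live", best)
    return (None, None)
```

The helper prefers historical footage, as in the claim, and only falls back to live capture when a camera's coverage actually intersects the track.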
3. A behavior analysis device, comprising:
the detection module is used for detecting that a target entity has a preset behavior;
a determining module, configured to determine a spatial grid code of the target entity, including: identifying the target entity to obtain an entity identification result; determining the IDcode of the target entity according to the entity identification result; obtaining a spatial grid code of the target entity according to the indexes established for the entities and the IDcode of the target entity;
the searching module is used for searching the space-time data of the target entity according to the space grid code;
the acquisition module is used for acquiring a space movement track of the target entity, carrying timestamps, according to the space-time data of the target entity;
acquiring a target shooting video associated with the space movement track;
identifying the behavior of the target entity in the target shooting video to obtain a behavior identification result and the credibility of the behavior identification result;
generating a behavior analysis result for the target entity according to the preset behavior, the behavior recognition result, the credibility of the behavior recognition result, and the space movement track; and allocating an IDcode to each entity; acquiring the space-time data of each entity; obtaining spatial grid codes corresponding to the entities according to the space-time data of the entities; and establishing indexes for the entities according to the IDcodes of the entities and the spatial grid codes corresponding to the entities;
the obtaining the spatial grid code corresponding to each entity according to the spatio-temporal data of each entity comprises: determining the levels of the space grid codes corresponding to the entities according to a preset rule; determining the space grid code corresponding to each entity according to the hierarchy of the space grid code corresponding to each entity and the space-time data of each entity;
wherein acquiring the spatiotemporal data of the entities comprises: after spatiotemporal data of different entities are acquired, fusing records of the same entity across the different spatiotemporal data to obtain fused spatiotemporal data of the entities.
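The fusion step recited at the end of claim 3 — merging records of the same entity found in different spatio-temporal data sources — could be sketched as below. The `(idcode, lon, lat, timestamp)` record shape and the exact-duplicate rule are assumptions, not details from the patent.

```python
def fuse_spatiotemporal(sources):
    """Merge observations of the same entity, keyed by IDcode, from several
    spatio-temporal data sources; order each entity's records in time and
    drop exact duplicates contributed by more than one source."""
    fused = {}
    for source in sources:
        for idcode, lon, lat, ts in source:
            fused.setdefault(idcode, []).append((lon, lat, ts))
    for idcode, recs in fused.items():
        recs.sort(key=lambda r: r[2])         # chronological order by timestamp
        deduped = []
        for rec in recs:
            if not deduped or deduped[-1] != rec:
                deduped.append(rec)
        fused[idcode] = deduped
    return fused
```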
4. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the program, implements the method of any one of claims 1-2.
5. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 2.
CN202211067658.5A 2022-09-01 2022-09-01 Behavior analysis method, behavior analysis device, behavior analysis equipment and computer readable storage medium Active CN115147934B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211067658.5A CN115147934B (en) 2022-09-01 2022-09-01 Behavior analysis method, behavior analysis device, behavior analysis equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211067658.5A CN115147934B (en) 2022-09-01 2022-09-01 Behavior analysis method, behavior analysis device, behavior analysis equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN115147934A CN115147934A (en) 2022-10-04
CN115147934B true CN115147934B (en) 2022-12-23

Family

ID=83415890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211067658.5A Active CN115147934B (en) 2022-09-01 2022-09-01 Behavior analysis method, behavior analysis device, behavior analysis equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115147934B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7974461B2 (en) * 2005-02-11 2011-07-05 Deltasphere, Inc. Method and apparatus for displaying a calculated geometric entity within one or more 3D rangefinder data sets
US20150134379A1 (en) * 2013-11-14 2015-05-14 International Business Machines Corporation Singularity of Presence
CN109484934B (en) * 2017-09-11 2022-06-28 奥的斯电梯公司 Tracking of maintenance trajectories for elevator systems
CN112685407A (en) * 2020-12-22 2021-04-20 北京旋极伏羲科技有限公司 Spatial data indexing method based on GeoSOT global subdivision grid code
CN113115229A (en) * 2021-02-24 2021-07-13 福建德正智能有限公司 Personnel trajectory tracking method and system based on Beidou grid code
CN113821539A (en) * 2021-09-07 2021-12-21 丰图科技(深圳)有限公司 Region query method and device, electronic equipment and readable storage medium
CN113946575B (en) * 2021-09-13 2022-10-14 中国电子科技集团公司第十五研究所 Space-time trajectory data processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115147934A (en) 2022-10-04

Similar Documents

Publication Publication Date Title
US10417816B2 (en) System and method for digital environment reconstruction
US11313684B2 (en) Collaborative navigation and mapping
CN108921200B (en) Method, apparatus, device and medium for classifying driving scene data
CN111797187B (en) Map data updating method and device, electronic equipment and storage medium
Lynen et al. Get out of my lab: Large-scale, real-time visual-inertial localization.
US10983217B2 (en) Method and system for semantic label generation using sparse 3D data
CN112419494B (en) Obstacle detection and marking method and device for automatic driving and storage medium
CN109270545B (en) Positioning true value verification method, device, equipment and storage medium
Ahmad et al. CarMap: Fast 3D feature map updates for automobiles
US20210097103A1 (en) Method and system for automatically collecting and updating information about point of interest in real space
CN111638528B (en) Positioning method, positioning device, electronic equipment and storage medium
WO2018041475A1 (en) Driver assistance system for determining a position of a vehicle
CN111829532B (en) Aircraft repositioning system and method
KR102308456B1 (en) Tree species detection system based on LiDAR and RGB camera and Detection method of the same
US11724721B2 (en) Method and apparatus for detecting pedestrian
CN110674711A (en) Method and system for calibrating dynamic target of urban monitoring video
KR20220001274A (en) 3D map change area update system and method
CN116469079A (en) Automatic driving BEV task learning method and related device
CN112257668A (en) Main and auxiliary road judging method and device, electronic equipment and storage medium
CN115147934B (en) Behavior analysis method, behavior analysis device, behavior analysis equipment and computer readable storage medium
CN113496163B (en) Obstacle recognition method and device
CN115082690B (en) Target recognition method, target recognition model training method and device
CN116259043A (en) Automatic driving 3D target detection method and related device
CN114674328B (en) Map generation method, map generation device, electronic device, storage medium, and vehicle
CN110781730A (en) Intelligent driving sensing method and sensing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant