CN109547748B - Object foot point determining method and device and storage medium - Google Patents

Object foot point determining method and device and storage medium

Info

Publication number
CN109547748B
Authority
CN
China
Prior art keywords
activity
target
information
queue
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811494254.8A
Other languages
Chinese (zh)
Other versions
CN109547748A (en)
Inventor
汪宁宁
黄佳旺
姜莎
晋兆龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Keda Technology Co Ltd
Original Assignee
Suzhou Keda Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Keda Technology Co Ltd filed Critical Suzhou Keda Technology Co Ltd
Priority to CN201811494254.8A priority Critical patent/CN109547748B/en
Publication of CN109547748A publication Critical patent/CN109547748A/en
Application granted granted Critical
Publication of CN109547748B publication Critical patent/CN109547748B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The application relates to a method, a device and a storage medium for determining an object foothold, belonging to the technical field of computers. The method comprises the following steps: acquiring activity information of a target object collected by a first camera, wherein the activity information comprises a unique identifier (such as a media access control address) of a device used by the target object, and the activity time and activity place of the target object, and the first camera has the function of detecting the unique identifier of the device; determining, according to the activity information, the activity duration of each unique device identifier at each activity place; and determining the foothold of the target object according to the activity durations and the corresponding activity places. This solves the problem that the foothold of a target object cannot be determined when the target object intentionally avoids the cameras: because the camera can detect the unique identifier of the device used by the target object, the foothold can be determined from the times and places at which that identifier is detected, which reduces the difficulty of solving a case.

Description

Object foot point determining method and device and storage medium
Technical Field
The application relates to a method and a device for determining an object foothold and a storage medium, belonging to the technical field of computers.
Background
With the development and modernization of society, criminal incidents occur from time to time. When a crime is committed in a city, the suspect tends to go into hiding afterwards and often moves to another place to continue offending, so suspects are highly mobile.
At present, a method for determining the foothold of a suspect includes: acquiring characteristic images of the suspect with a plurality of cameras; and determining the foothold of the suspect according to the positions of the one or more cameras that acquired the characteristic images.
However, if the suspect deliberately avoids the surveillance cameras, does not appear in the monitoring picture, or the light in the monitoring picture is too dark, the cameras cannot capture a characteristic image of the suspect; the foothold of the suspect then cannot be determined, and the difficulty of solving the case increases.
Disclosure of Invention
The application provides a method, a device and a storage medium for determining an object foothold, which can solve the problem that the foothold of a target object cannot be determined when the target object intentionally avoids the cameras. The application provides the following technical solutions:
in a first aspect, a method for determining an object foothold is provided, where the method includes:
acquiring activity information of a target object acquired by a first camera; wherein the activity information comprises a unique identifier of a device used by the target object, an activity time and an activity place of the target object; the first camera has the function of detecting the unique identifier of the equipment;
determining the activity duration of the unique identifier of each device in each activity place according to the activity information;
and determining the foothold of the target object according to the activity duration and the corresponding activity place.
Optionally, the determining, according to the activity information, an activity duration of the unique identifier of each device at each activity place includes:
for pieces of target activity information having the same unique device identifier, sorting the target activity information in a first queue in order of activity time from earliest to latest;
for the nth piece of target activity information in the first queue, storing the nth piece of target activity information into a second queue, where n is a positive integer;
sequentially determining whether the activity place in the mth piece of target activity information in the first queue is the same as the activity place in the nth piece of target activity information, where m takes, in turn, each positive integer larger than n;
when the activity place in the mth piece of target activity information is the same as the activity place in the nth piece of target activity information, storing the mth piece of target activity information into the second queue;
and when the activity place in the mth piece of target activity information is different from the activity place in the nth piece of target activity information and there are at least two pieces of target activity information in the second queue, obtaining the time difference between the activity times of the first and last pieces of target activity information in the second queue as the activity duration.
Optionally, the method further comprises:
when the activity place in the mth piece of target activity information is different from the activity place in the nth piece of target activity information and the second queue contains only one piece of target activity information, emptying the second queue and storing the mth piece of target activity information into the emptied second queue.
Optionally, after the obtaining of the time difference between the activity times of the first and last pieces of target activity information in the second queue as the activity duration, the method further includes:
when the first queue has not been fully stored into the second queue, emptying the second queue and executing again the step of storing the nth piece of target activity information into the second queue, with n taking the value of m.
Optionally, the determining a foothold of the target object according to the activity duration and the corresponding activity location includes:
acquiring, from the determined activity durations, target activity durations that reach a preset foothold duration threshold;
when the number of target activity durations is not zero, acquiring the number of stays at the activity place corresponding to each target activity duration;
and determining an activity place whose number of stays reaches a preset count threshold as the foothold of the target object.
Optionally, when the number of target activity durations is not zero, the method further comprises:
acquiring, from the determined activity durations, target activity durations that reach a preset foothold duration threshold;
and when the number of target activity durations is not zero, determining the activity place corresponding to the maximum target activity duration as the foothold of the target object.
Optionally, before the obtaining the activity information of the target object acquired by the first camera, the method further includes:
acquiring a plurality of pieces of sample data collected by a second camera, wherein at least one piece of target sample data in the plurality of pieces of sample data comprises an object image and the unique identifier of a device acquired when the sample data was collected;
determining the at least one piece of target sample data of the target object according to the object image;
and determining a unique device identifier whose number of occurrences in the at least one piece of target sample data is greater than a preset number as the unique identifier of the device used by the target object.
Optionally, after the obtaining of the activity information of the target object acquired by the first camera, the method further includes:
and for target activity information having the same unique device identifier, drawing the action track of the target object on a map in order of activity time from earliest to latest.
In a second aspect, an apparatus for determining an object foothold is provided, the apparatus comprising:
an information acquisition module, configured to acquire activity information of a target object collected by a first camera, wherein the activity information comprises a unique identifier (such as a media access control address) of a device used by the target object, and an activity time and an activity place of the target object, and the first camera has the function of detecting the unique identifier of the device;
a duration determining module, configured to determine, according to the activity information, an activity duration of the unique identifier of each device at each activity place;
and a foothold determining module, configured to determine the foothold of the target object according to the activity duration and the corresponding activity place.
In a third aspect, an apparatus for determining an object foothold is provided, the apparatus comprising a processor and a memory; the memory has stored therein a program that is loaded and executed by the processor to implement the object foothold determination method of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, in which a program is stored, the program being loaded and executed by a processor to implement the object foothold determination method of the first aspect.
The beneficial effects of the present application lie in the following: activity information of a target object collected by a first camera is acquired, wherein the activity information comprises a unique identifier (such as a media access control address) of a device used by the target object, and the activity time and activity place of the target object, and the first camera has the function of detecting the unique identifier of the device; the activity duration of each unique device identifier at each activity place is determined according to the activity information; and the foothold of the target object is determined according to the activity durations and the corresponding activity places. This solves the problem that the foothold of a target object cannot be determined when the target object intentionally avoids the cameras: because the camera can detect the unique identifier of the device used by the target object, the foothold can be determined from the times and places at which that identifier is detected, which reduces the difficulty of solving a case.
In addition, compared with the traditional approach of manually collating case information and analyzing the behavior pattern of the target object to find the foothold, the method provided by the application saves the time needed to determine the foothold of the target object and avoids the information omissions of manual analysis.
The foregoing description is only an overview of the technical solutions of the present application. In order to make the technical solutions of the present application clearer and to enable implementation according to the contents of the description, preferred embodiments of the present application are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of an object foothold determination system according to an embodiment of the present disclosure;
fig. 2 is a flowchart of an object foothold determination method according to an embodiment of the present application;
fig. 3 is a flowchart of an object foothold determination method according to another embodiment of the present application;
FIG. 4 is a block diagram of an apparatus for determining a landing point of an object according to an embodiment of the present application;
fig. 5 is a block diagram of an object foothold determination apparatus according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below with reference to the accompanying drawings and examples. The following examples are intended to illustrate the present application but not to limit its scope.
Fig. 1 is a schematic structural diagram of an object foothold determination system according to an embodiment of the present application, and as shown in fig. 1, the system at least includes: a plurality of cameras 110 and a server 120.
The plurality of cameras 110 are distributed at different geographic locations. The camera 110 is configured to collect a monitoring picture in real time and send the monitoring picture, the camera identifier of the camera 110 and the collection time to the server 120. In the present application, the camera 110 has the function of detecting the unique identifier of nearby devices, for example of detecting a Media Access Control (MAC) address or an International Mobile Equipment Identity (IMEI) (fig. 1 takes the MAC address as the example of a unique device identifier). The MAC address uniquely identifies a network card in a network; if a device has one or more network cards, each network card has its own unique MAC address. The IMEI identifies an individual mobile phone or other mobile communication device in a mobile phone network and is the equivalent of an identity card for the phone.
Optionally, a detector is installed in the camera 110 and is used to automatically scan the unique identifiers of nearby devices, for example MAC addresses within 100 meters (m) or 50 m; this embodiment does not limit the range over which the camera 110 scans for unique device identifiers.
The camera identifier is used to uniquely identify the corresponding camera 110 in the server 120, and the camera identifier may be a device number, a code, and the like of the camera 110, and this embodiment does not limit the representation manner of the camera identifier.
Optionally, the camera 110 is communicatively coupled to the server 120 by wire or wirelessly.
The server 120 may be a standalone server host; alternatively, a server cluster including a plurality of server hosts may be used, and the configuration of the server 120 is not limited in this embodiment. Optionally, the server 120 is configured to determine a landing point of the target object according to the data collected by the camera 110.
The target object may be a person or a vehicle, and the present embodiment does not limit the type of the target object.
Optionally, in the present application, a second camera among the multiple cameras 110 is configured to collect a plurality of pieces of sample data and transmit them to the server 120. Accordingly, the server 120 is configured to receive the plurality of pieces of sample data, determine at least one piece of target sample data of the target object according to the object image, and determine a unique device identifier whose number of occurrences in the at least one piece of target sample data is greater than a preset number as the unique identifier of the device used by the target object.
At least one piece of target sample data among the plurality of pieces of sample data comprises an object image of the target object and the unique identifier of a device acquired when the sample data was collected.
Of course, the server 120 may also obtain the unique identifier of the device of the target object in other ways, for example through transmission by another device or direct input by a user; this embodiment does not limit the way in which the server 120 determines the unique identifier of the device of the target object.
Then, a first camera among the multiple cameras 110 is used to collect the shooting time of the monitoring picture, the camera identifier and the unique identifiers of nearby devices, and to transmit the collected data to the server 120. Correspondingly, the server 120 is configured to obtain the activity information of the target object from the data sent by the first camera, determine, according to the activity information, the activity duration of each unique device identifier at each activity place, and determine the foothold of the target object according to the activity durations and the corresponding activity places.
The activity information comprises a unique identifier (such as a media access control address) of a device used by the target object, and the activity time and activity place of the target object; the first camera has the function of detecting the unique identifier of the device.
Optionally, the first cameras and the second cameras may be entirely the same, partly the same, or entirely different; and there may be at least one first camera and at least one second camera.
It should be added that, in the present application, each target object may have a unique identifier of one or more devices.
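To make the data exchanged in such a system concrete, the following is a minimal Python sketch of the two records involved: the report a camera uploads and the activity-information record the server keeps for a target object. The class and field names are illustrative assumptions, not terms from the patent.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class CameraReport:
    """One upload from a camera: the capture time of the monitoring picture,
    the camera identifier, and the unique device identifiers (e.g. MAC
    addresses) scanned nearby at that moment."""
    camera_id: str
    capture_time: datetime
    nearby_device_ids: List[str] = field(default_factory=list)


@dataclass
class ActivityInfo:
    """One piece of activity information kept by the server for a target object."""
    device_id: str          # unique identifier of the device used by the target object
    activity_time: datetime
    activity_place: str     # geographic location looked up from the camera identifier
```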
Fig. 2 is a flowchart of an object foothold determination method according to an embodiment of the present application. This embodiment is described with the method applied to the object foothold determination system shown in fig. 1, taking the server 120 in the system as the execution subject of each step. The method at least comprises the following steps:
step 201, acquiring activity information of a target object acquired by a first camera; the activity information comprises a unique identifier of equipment used by the target object, activity time and activity place of the target object; the first camera has the function of detecting the unique identification of the device.
Optionally, the first camera sends the shooting time of the monitoring picture, its camera identifier and the collected unique identifiers of nearby devices to the server in real time or at intervals. The server detects whether the device identifiers sent by the first camera include the unique identifier of a device of the target object, and determines the data that include such an identifier as the activity information of the target object; the activity place in the activity information is determined according to the camera identifier, and the shooting time is determined as the activity time in the activity information.
The server determining the activity place in the activity information according to the camera identifier includes: searching a pre-stored correspondence between camera identifiers and geographic locations for the geographic location corresponding to the camera identifier of the first camera, to obtain the activity place. The camera identifier is used to uniquely identify the corresponding camera in the server and may be the device number or code of the camera. The correspondence between camera identifiers and geographic locations is obtained when the first camera is deployed.
Of course, the server may also obtain the activity place in other ways, for example by performing image analysis on the monitoring picture sent by the first camera with an image recognition algorithm; this embodiment does not limit the way in which the server acquires the activity place.
Alternatively, the server may select the activity information in the target area and/or the target time from the acquired activity information of the target object.
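As a concrete illustration of step 201, the following sketch shows one way the server could turn camera reports into activity-information records for the target object, assuming the data classes sketched earlier and a pre-stored camera-identifier-to-location table; all function and parameter names are illustrative, not taken from the patent.

```python
from typing import Dict, Iterable, List, Set


def extract_activity_info(
    reports: Iterable[CameraReport],
    target_device_ids: Set[str],
    camera_locations: Dict[str, str],
) -> List[ActivityInfo]:
    """Keep only the reports that contain a unique device identifier of the
    target object, and turn each hit into an ActivityInfo record."""
    activity: List[ActivityInfo] = []
    for report in reports:
        place = camera_locations.get(report.camera_id)
        if place is None:
            continue  # camera identifier not in the pre-stored correspondence
        for device_id in report.nearby_device_ids:
            if device_id in target_device_ids:
                activity.append(ActivityInfo(device_id, report.capture_time, place))
    return activity
```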
Illustratively, the activity information of the target object acquired by the server is shown in Table one below, where the target object has the unique identifiers of two devices.
Table one:
(Table one is provided as an image in the original publication and is not reproduced here; its columns are the serial number, activity time, activity place and unique device identifier of each piece of activity information.)
step 202, determining the activity duration of the unique identifier of each device at each activity place according to the activity information.
Because the camera may fail to capture an object image of the target object, the activity duration of the target object at each activity place can instead be determined indirectly, by determining the activity duration of the unique identifier of the device at each activity place.
In one example, for pieces of target activity information having the same unique device identifier, the server sorts them in a first queue in order of activity time from earliest to latest; for the nth piece of target activity information in the first queue, the server stores it into a second queue; and the server then determines, in turn, whether the activity place in the mth piece of target activity information in the first queue is the same as the activity place in the nth piece.
When the activity place in the mth piece of target activity information is the same as the activity place in the nth piece, the step of storing the nth piece of target activity information into the second queue is executed again, with n taking the value of m.
When the activity place in the mth piece of target activity information is different from the activity place in the nth piece, and there are at least two pieces of target activity information in the second queue, the time difference between the activity times of the first and last pieces of target activity information in the second queue is obtained as the activity duration.
When the activity place in the mth piece of target activity information is different from the activity place in the nth piece and the second queue contains only one piece of target activity information, the second queue is emptied and the mth piece of target activity information is stored into the emptied second queue.
When the first queue has not been fully stored into the second queue, the second queue is emptied and the step of storing the nth piece of target activity information into the second queue is executed again, with n taking the value of m.
Here, n is a positive integer, and m takes, in turn, each positive integer larger than n.
Optionally, the server deletes each piece of target activity information from the first queue after storing it into the second queue. In that case, m and n both index the pieces of target activity information in the first queue before deletion.
Taking the activity information shown in Table one as an example, the pieces of target activity information having the same unique device identifier 00e0.fe01.2345 are the pieces of activity information numbered 1, 2, 4 and 5. Pieces 1, 2, 4 and 5 are sorted in the first queue in order of activity time from earliest to latest, yielding Table two below. For the 1st piece of target activity information in the first queue, the 1st piece is stored into the second queue. It is then determined whether the activity place in the 2nd piece of target activity information in the first queue is the same as the activity place in the 1st piece; according to Table two it is, so the 2nd piece is stored into the second queue. Next, it is determined whether the activity place in the 3rd piece is the same as the activity place in the 1st piece; according to Table two it is, so the 3rd piece is stored into the second queue. Then it is determined whether the activity place in the 4th piece is the same as the activity place in the 1st piece; according to Table two it is not, so the time difference between the activity times of the first and last pieces of target activity information in the second queue shown in Table three is taken as the activity duration, which is 40 minutes and 44 seconds.
Then, because the 4th piece of target activity information in Table two has not been stored into the second queue, the second queue shown in Table three is emptied and the 4th piece is stored into it, so that the server can compare its activity place with that of subsequent target activity information.
Table two:
(Table two is provided as an image in the original publication and is not reproduced here; it contains pieces 1, 2, 4 and 5 of Table one sorted by activity time, with the first three sharing the same activity place and the fourth at a different place.)
table three:
Serial number | Activity time       | Activity place       | Unique device identifier
1             | 2018/11/1 15:05:15  | xx road, xx district | 00e0.fe01.2345
2             | 2018/11/1 15:30:20  | xx road, xx district | 00e0.fe01.2345
4             | 2018/11/1 15:45:59  | xx road, xx district | 00e0.fe01.2345
Of course, the server may also determine the activity duration of the unique identifier of each device at each activity place in other ways based on the activity information. For example: for pieces of target activity information having the same unique device identifier and the same activity place, the largest time difference among pieces whose time span falls within a preset time range is determined as the activity duration. The preset time range may be 12 hours, 1 hour, and so on; this embodiment does not limit its value.
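For readers who prefer code, the following sketch is one minimal interpretation of the two-queue procedure described above for a single unique device identifier (it also closes the final run of records, which the description leaves implicit); the function and variable names are illustrative.

```python
from typing import List, Tuple


def activity_durations(records: List[ActivityInfo]) -> List[Tuple[str, float]]:
    """Return (activity_place, duration_in_seconds) for every run of consecutive
    records, sorted by time, that share the same activity place."""
    first_queue = sorted(records, key=lambda r: r.activity_time)  # earliest first
    durations: List[Tuple[str, float]] = []
    second_queue: List[ActivityInfo] = []

    for record in first_queue:
        if not second_queue or record.activity_place == second_queue[0].activity_place:
            second_queue.append(record)   # same place: keep accumulating the run
            continue
        if len(second_queue) >= 2:        # place changed: close a run of two or more
            delta = second_queue[-1].activity_time - second_queue[0].activity_time
            durations.append((second_queue[0].activity_place, delta.total_seconds()))
        second_queue = [record]           # empty the second queue, start a new run

    if len(second_queue) >= 2:            # close the final run as well
        delta = second_queue[-1].activity_time - second_queue[0].activity_time
        durations.append((second_queue[0].activity_place, delta.total_seconds()))
    return durations
```

Applied to the three records of Table three (15:05:15 to 15:45:59 at the same activity place), this yields one activity duration of 40 minutes and 44 seconds, matching the example above.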
Step 203, determining the foothold of the target object according to the activity durations and the corresponding activity places.
In one example, the server acquires, from the determined activity durations, the target activity durations that reach a preset foothold duration threshold; when the number of target activity durations is not zero, it acquires the number of stays at the activity place corresponding to each target activity duration; and it determines an activity place whose number of stays reaches a preset count threshold as the foothold of the target object.
The foothold duration threshold may be 12 hours, 8 hours, and so on, and its value is not limited in this embodiment. The count threshold may be 3, 5, and so on, and its value is not limited in this embodiment either.
In another example, the target activity durations that reach the preset foothold duration threshold are acquired from the determined activity durations, and when the number of target activity durations is not zero, the activity place corresponding to the maximum target activity duration is determined as the foothold of the target object.
Of course, the server may also determine the foothold of the target object from the activity durations and the corresponding activity places in other ways, which is not limited in this embodiment.
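The two selection strategies of step 203 can be sketched as follows, with the foothold duration threshold and the stay-count threshold left as parameters (the 12-hour and 3-stay values mentioned above are only examples); this is an assumed reading of the step, not the patent's literal implementation.

```python
from collections import Counter
from typing import List, Optional, Tuple


def foothold_by_stay_count(
    durations: List[Tuple[str, float]],
    duration_threshold_s: float,
    stay_count_threshold: int,
) -> List[str]:
    """First strategy: keep the durations that reach the threshold, count the
    qualifying stays per place, and return the places whose count reaches the
    stay-count threshold."""
    qualifying = [place for place, seconds in durations if seconds >= duration_threshold_s]
    stay_counts = Counter(qualifying)
    return [place for place, count in stay_counts.items() if count >= stay_count_threshold]


def foothold_by_longest_stay(
    durations: List[Tuple[str, float]],
    duration_threshold_s: float,
) -> Optional[str]:
    """Second strategy: among the durations that reach the threshold, return
    the place of the single longest stay (or None if none qualifies)."""
    qualifying = [(place, seconds) for place, seconds in durations if seconds >= duration_threshold_s]
    if not qualifying:
        return None
    return max(qualifying, key=lambda item: item[1])[0]
```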
In summary, in the object foothold determination method provided by this embodiment, activity information of a target object collected by a first camera is acquired, wherein the activity information comprises a unique identifier (such as a media access control address) of a device used by the target object, and the activity time and activity place of the target object, and the first camera has the function of detecting the unique identifier of the device; the activity duration of each unique device identifier at each activity place is determined according to the activity information; and the foothold of the target object is determined according to the activity durations and the corresponding activity places. This solves the problem that the foothold of a target object cannot be determined when the target object intentionally avoids the cameras: because the camera can detect the unique identifier of the device used by the target object, the foothold can be determined from the times and places at which that identifier is detected, which reduces the difficulty of solving a case.
In addition, compared with the traditional approach of manually collating case information and analyzing the behavior pattern of the target object to find the foothold, the method provided by this embodiment saves the time needed to determine the foothold of the target object and avoids the information omissions of manual analysis.
Optionally, based on the above embodiments, after step 201, for target activity information having the same unique device identifier, the server may also draw the action track of the target object on a map in order of activity time from earliest to latest. This makes it convenient for investigators to predict where the target object may be in the future from its action track, and so improves the speed at which the target object can be apprehended.
Optionally, based on the foregoing embodiments, before step 201, the server needs to obtain the unique identifier of the device of the target object.
In one example, the server obtaining the unique identifier of the device of the target object includes: acquiring a plurality of pieces of sample data collected by a second camera; determining at least one piece of target sample data of the target object according to the object image; and determining a unique device identifier whose number of occurrences in the at least one piece of target sample data is greater than a preset number as the unique identifier of the device used by the target object.
At least one piece of target sample data among the plurality of pieces of sample data comprises an object image and the unique identifier of a device acquired when the sample data was collected.
Optionally, there is at least one second camera, and the second cameras may be entirely the same as the first cameras, partly the same, or entirely different.
In another example, the server obtains the unique identifier of the device of the target object as sent by a user.
Of course, the server may also obtain the unique identifier of the device of the target object in other ways; this embodiment does not limit how the server obtains it.
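As an illustration of the first example above, the following sketch derives the unique device identifiers of the target object from sample data: every sample whose object image matches the target object contributes the identifiers scanned with it, and identifiers that occur more than a preset number of times are kept. The image-matching callable is assumed to exist and is not specified here; all names are illustrative.

```python
from collections import Counter
from typing import Callable, Iterable, List, Set, Tuple

# Each sample pairs an object image with the device identifiers scanned when
# the image was captured; `is_target` stands in for whatever image comparison
# the server uses and is only an assumed helper.
Sample = Tuple[object, List[str]]  # (object_image, scanned_device_ids)


def target_device_ids(
    samples: Iterable[Sample],
    is_target: Callable[[object], bool],
    min_occurrences: int,
) -> Set[str]:
    """Count device identifiers over the samples whose image matches the target
    object, and keep those seen more than `min_occurrences` times."""
    counts: Counter = Counter()
    for image, device_ids in samples:
        if is_target(image):
            counts.update(set(device_ids))  # count each identifier once per sample
    return {device_id for device_id, n in counts.items() if n > min_occurrences}
```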
Schematically, to make the object foothold determination method provided by the present application easier to understand, an example is given below in which the unique device identifier is a MAC address. Referring to fig. 3, a flowchart of an object foothold determination method is provided, and the method at least includes the following steps:
step 301, selecting target activity information with the same MAC address from the activity information of the target object according to the target area and the target time.
Step 302, sorting the selected target activity information in a first queue according to the sequence of the activity time from front to back.
Step 303, reading the nth entry activity marking information from the first queue, and storing the nth entry activity marking information into the second queue. n is a positive integer.
Step 304, determining whether all the target activity information in the first queue is stored in the second queue; if yes, ending the process; if not, go to step 305.
Step 305, reading the (n + 1) th item mark activity information, and comparing whether the activity place in the (n) th item mark activity information is the same as the activity place in the (n + 1) th item mark activity information; if yes, go to step 306; if not, go to step 307;
step 306, store the n +1 th entry activity marking information into the second queue, where n is n +1, and execute step 305 again.
Step 307, detecting whether the number of the target activity information in the second queue is greater than 1; if yes, go to step 308; if not, go to step 310.
Step 308, detecting whether the time difference of the activity time in the first and last target activity information in the second queue is greater than or equal to a preset foot-drop point time length threshold value; if yes, go to step 309; if not, go to step 310.
Step 309, inputting the activity place indicated by the target activity information in the second queue into a result queue; and selecting the landing point of the target object from the result queue, and ending the process.
Step 310, the second queue is emptied and step 306 is performed again.
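The flow of fig. 3 can also be read as a single pass that combines the duration computation with the foothold duration threshold, pushing qualifying activity places into a result queue. The sketch below follows that reading; it reuses the ActivityInfo record sketched earlier, collects every qualifying place rather than stopping at the first, and leaves the final selection from the result queue open, as the description above does.

```python
from typing import List


def result_queue_from_activity(
    records: List[ActivityInfo],
    foothold_duration_threshold_s: float,
) -> List[str]:
    """Walk the sorted target activity information once (steps 302-310) and
    collect the activity places whose consecutive stay reaches the threshold."""
    first_queue = sorted(records, key=lambda r: r.activity_time)  # step 302
    result_queue: List[str] = []
    second_queue: List[ActivityInfo] = []

    for record in first_queue:
        if second_queue and record.activity_place != second_queue[0].activity_place:
            stay = (second_queue[-1].activity_time
                    - second_queue[0].activity_time).total_seconds()
            if len(second_queue) > 1 and stay >= foothold_duration_threshold_s:
                result_queue.append(second_queue[0].activity_place)  # step 309
            second_queue = []                                        # step 310
        second_queue.append(record)                                  # steps 303 / 306
    return result_queue
```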
Fig. 4 is a block diagram of an object foothold determination apparatus according to an embodiment of the present application. This embodiment is described with the apparatus applied to the server 120 in the object foothold determination system shown in fig. 1. The apparatus at least comprises the following modules: an information acquisition module 410, a duration determining module 420 and a foothold determining module 430.
The information acquisition module 410 is configured to acquire activity information of a target object collected by a first camera, wherein the activity information comprises a unique identifier (such as a media access control address) of a device used by the target object, and an activity time and an activity place of the target object, and the first camera has the function of detecting the unique identifier of the device;
the duration determining module 420 is configured to determine, according to the activity information, an activity duration of the unique identifier of each device at each activity place;
and the foothold determining module 430 is configured to determine the foothold of the target object according to the activity duration and the corresponding activity place.
For relevant details reference is made to the above-described method embodiments.
It should be noted that the object foothold determination apparatus provided by the above embodiments is described only with the division of functional modules given above as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the object foothold determination apparatus and the object foothold determination method provided by the above embodiments belong to the same concept; the specific implementation process is detailed in the method embodiments and is not repeated here.
Fig. 5 is a block diagram of an object foothold determination apparatus according to an embodiment of the present application, which may be the server 120 in the object foothold determination system shown in fig. 1. The apparatus comprises at least a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, such as a 4-core processor or a 5-core processor. The processor 501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array) and a PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in the standby state.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the object foothold determination methods provided by method embodiments herein.
In some embodiments, the device for determining the object foothold may further include: a peripheral interface and at least one peripheral. The processor 501, memory 502 and peripheral interfaces may be connected by buses or signal lines. Each peripheral may be connected to the peripheral interface via a bus, signal line, or circuit board. Illustratively, peripheral devices include, but are not limited to: radio frequency circuit, touch display screen, audio circuit, power supply, etc.
Of course, the object foothold determination apparatus may also include fewer or more components, which is not limited in this embodiment.
Optionally, the present application further provides a computer-readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the method for determining an object footfall point according to the above method embodiment.
Optionally, the present application further provides a computer product, which includes a computer-readable storage medium, where a program is stored in the computer-readable storage medium, and the program is loaded and executed by a processor to implement the method for determining an object footfall point according to the foregoing method embodiment.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features is described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

1. A method for determining an object foothold, the method comprising:
acquiring activity information of a target object acquired by a first camera; wherein the activity information comprises a unique identifier of a device used by the target object, an activity time and an activity place of the target object; the first camera has the function of detecting the unique identifier of the equipment;
determining the activity duration of the unique identifier of each device in each activity place according to the activity information;
determining a foothold of the target object according to the activity duration and the corresponding activity place;
wherein the determining, according to the activity information, the activity duration of the unique identifier of each device at each activity place includes:
for pieces of target activity information having the same unique device identifier, sorting the target activity information in a first queue in order of activity time from earliest to latest;
for the nth piece of target activity information in the first queue, storing the nth piece of target activity information into a second queue, where n is a positive integer;
sequentially determining whether the activity place in the mth piece of target activity information in the first queue is the same as the activity place in the nth piece of target activity information, where m takes, in turn, each positive integer larger than n;
when the activity place in the mth piece of target activity information is the same as the activity place in the nth piece of target activity information, storing the mth piece of target activity information into the second queue;
when the activity place in the mth piece of target activity information is different from the activity place in the nth piece of target activity information and there are at least two pieces of target activity information in the second queue, obtaining the time difference between the activity times of the first and last pieces of target activity information in the second queue as the activity duration;
before the obtaining of the activity information of the target object collected by the first camera, the method further includes:
acquiring a plurality of pieces of sample data collected by a second camera, wherein at least one piece of target sample data in the plurality of pieces of sample data comprises an object image and the unique identifier of a device acquired when the sample data was collected;
determining the at least one piece of target sample data of the target object according to the object image;
and determining a unique device identifier whose number of occurrences in the at least one piece of target sample data is greater than a preset number as the unique identifier of the device used by the target object.
2. The method of claim 1, further comprising:
when the activity place in the mth piece of target activity information is different from the activity place in the nth piece of target activity information and the second queue contains only one piece of target activity information, emptying the second queue and storing the mth piece of target activity information into the emptied second queue.
3. The method according to claim 1, wherein after the obtaining of the time difference between the activity times of the first and last pieces of target activity information in the second queue as the activity duration, the method further comprises:
when the first queue has not been fully stored into the second queue, emptying the second queue and executing again the step of storing the nth piece of target activity information into the second queue, with n taking the value of m.
4. The method according to any one of claims 1 to 3, wherein the determining the foothold of the target object according to the activity duration and the corresponding activity place comprises:
acquiring, from the determined activity durations, target activity durations that reach a preset foothold duration threshold;
when the number of target activity durations is not zero, acquiring the number of stays at the activity place corresponding to each target activity duration;
and determining an activity place whose number of stays reaches a preset count threshold as the foothold of the target object.
5. The method according to any one of claims 1 to 3, wherein when the number of target activity durations is not zero, the method further comprises:
acquiring, from the determined activity durations, target activity durations that reach a preset foothold duration threshold;
and when the number of target activity durations is not zero, determining the activity place corresponding to the maximum target activity duration as the foothold of the target object.
6. The method according to any one of claims 1 to 3, wherein after acquiring the activity information of the target object acquired by the first camera, the method further comprises:
and for target activity information having the same unique device identifier, drawing the action track of the target object on a map in order of activity time from earliest to latest.
7. An apparatus for determining an object foothold, the apparatus comprising:
the information acquisition module is used for acquiring the activity information of the target object acquired by the first camera; wherein the activity information comprises a unique identifier of a device used by the target object, an activity time and an activity place of the target object; the first camera has the function of detecting the unique identifier of the equipment;
the duration determining module is used for determining the activity duration of the unique identifier of each device in each activity place according to the activity information;
the foothold determining module is used for determining the foothold of the target object according to the activity duration and the corresponding activity place;
the duration determining module is configured to:
for target activity information having the same unique device identifier, sort the target activity information in a first queue in order of activity time from earliest to latest;
for the nth piece of target activity information in the first queue, store the nth piece of target activity information into a second queue, where n is a positive integer;
sequentially determine whether the activity place in the mth piece of target activity information in the first queue is the same as the activity place in the nth piece of target activity information, where m takes, in turn, each positive integer larger than n;
when the activity place in the mth piece of target activity information is the same as the activity place in the nth piece of target activity information, store the mth piece of target activity information into the second queue;
when the activity place in the mth piece of target activity information is different from the activity place in the nth piece of target activity information and there are at least two pieces of target activity information in the second queue, obtain the time difference between the activity times of the first and last pieces of target activity information in the second queue as the activity duration;
and the apparatus further comprises:
a module for acquiring a plurality of pieces of sample data collected by a second camera, wherein at least one piece of target sample data in the plurality of pieces of sample data comprises an object image and the unique identifier of a device acquired when the sample data was collected;
a module for determining the at least one piece of target sample data of the target object according to the object image;
and a module for determining a unique device identifier whose number of occurrences in the at least one piece of target sample data is greater than a preset number as the unique identifier of the device used by the target object.
8. An apparatus for determining an object foothold, the apparatus comprising a processor and a memory; the memory has stored therein a program that is loaded and executed by the processor to implement the object foothold determination method of any one of claims 1 to 6.
9. A computer-readable storage medium, wherein the storage medium has stored therein a program which, when loaded and executed by a processor, implements the object foothold determination method of any one of claims 1 to 6.
CN201811494254.8A 2018-12-07 2018-12-07 Object foot point determining method and device and storage medium Active CN109547748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811494254.8A CN109547748B (en) 2018-12-07 2018-12-07 Object foot point determining method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811494254.8A CN109547748B (en) 2018-12-07 2018-12-07 Object foot point determining method and device and storage medium

Publications (2)

Publication Number Publication Date
CN109547748A CN109547748A (en) 2019-03-29
CN109547748B true CN109547748B (en) 2021-06-11

Family

ID=65853098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811494254.8A Active CN109547748B (en) 2018-12-07 2018-12-07 Object foot point determining method and device and storage medium

Country Status (1)

Country Link
CN (1) CN109547748B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705480B (en) * 2019-09-30 2022-12-02 重庆紫光华山智安科技有限公司 Target object stop point positioning method and related device
CN111079033A (en) * 2019-11-29 2020-04-28 武汉烽火众智数字技术有限责任公司 Personnel positioning analysis method based on intelligent community data
CN111372196A (en) * 2020-02-20 2020-07-03 杭州海康威视系统技术有限公司 Data processing method and device, electronic equipment and machine-readable storage medium
CN111985452B (en) * 2020-09-04 2024-01-02 山东合天智汇信息技术有限公司 Automatic generation method and system for personnel movement track and foot drop point
CN113779171A (en) * 2021-09-26 2021-12-10 浙江大华技术股份有限公司 Method and device for determining object foot placement point, storage medium and electronic device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296724B (en) * 2015-05-12 2020-04-03 杭州海康威视数字技术股份有限公司 Method and system for determining track information of target person and processing server
CN106534798A (en) * 2016-12-06 2017-03-22 武汉烽火众智数字技术有限责任公司 Integrated multidimensional data application system for security monitoring and method thereof
CN108540747B (en) * 2017-03-01 2021-03-23 中国电信股份有限公司 Video monitoring method, device and system
CN108897777B (en) * 2018-06-01 2022-06-17 深圳市商汤科技有限公司 Target object tracking method and device, electronic equipment and storage medium
CN108875835B (en) * 2018-06-26 2021-06-22 北京旷视科技有限公司 Object foot-landing point determination method and device, electronic equipment and computer readable medium

Also Published As

Publication number Publication date
CN109547748A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN109547748B (en) Object foot point determining method and device and storage medium
CN109858371B (en) Face recognition method and device
CN110363076B (en) Personnel information association method and device and terminal equipment
CN109753928B (en) Method and device for identifying illegal buildings
CN109559336B (en) Object tracking method, device and storage medium
US20160098636A1 (en) Data processing apparatus, data processing method, and recording medium that stores computer program
US9947105B2 (en) Information processing apparatus, recording medium, and information processing method
CN111127508B (en) Target tracking method and device based on video
CN109656973B (en) Target object association analysis method and device
CN110263680B (en) Image processing method, device and system and storage medium
US9854208B2 (en) System and method for detecting an object of interest
CN108563651B (en) Multi-video target searching method, device and equipment
CN112100461B (en) Questionnaire data processing method, device, server and medium based on data analysis
CN110781711A (en) Target object identification method and device, electronic equipment and storage medium
CN108399782A (en) Method, apparatus, system, equipment and the storage medium of outdoor reversed guide-car
CN111368619A (en) Method, device and equipment for detecting suspicious people
CN111898581A (en) Animal detection method, device, electronic equipment and readable storage medium
CN108229289B (en) Target retrieval method and device and electronic equipment
EP2495696A1 (en) Management server, population information calculation management server, zero population distribution area management method, and population information calculation method
CN112116556A (en) Passenger flow volume statistical method and device and computer equipment
US20150347953A1 (en) Kpi specification apparatus and kpi specification method
CN108629310B (en) Engineering management supervision method and device
CN110505438B (en) Queuing data acquisition method and camera
CN110647595B (en) Method, device, equipment and medium for determining newly-added interest points
CN111680175B (en) Face database construction method, computer equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant