CN110191424B - Specific suspect track generation method and apparatus - Google Patents

Specific suspect track generation method and apparatus

Info

Publication number
CN110191424B
CN110191424B
Authority
CN
China
Prior art keywords
suspect
cameras
moving path
camera
target person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910407230.2A
Other languages
Chinese (zh)
Other versions
CN110191424A (en)
Inventor
冯亮
胡卫东
胡晗
吴明浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Digital Mining Technology Co ltd
Original Assignee
Wuhan Digital Mining Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Digital Mining Technology Co ltd filed Critical Wuhan Digital Mining Technology Co ltd
Priority to CN201910407230.2A priority Critical patent/CN110191424B/en
Publication of CN110191424A publication Critical patent/CN110191424A/en
Application granted granted Critical
Publication of CN110191424B publication Critical patent/CN110191424B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/021Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/029Location-based management or tracking services

Abstract

The invention relates to the technical field of public safety, and in particular to a track generation method and device for a specific suspect. The method includes the following steps: determining, by combining the electronic fence and the cameras at the incident location, target persons who appeared at the incident location but did not carry a mobile phone; tracking each target person with a plurality of cameras around the incident location to acquire each target person's moving path after the incident; grouping the cameras on each moving path according to their shooting characteristics to obtain a preferred group and a secondary group of cameras for each target person; and querying the portrait information on each moving path in priority order, from the preferred group to the secondary group, until the identity or the action track of the suspect is determined. The invention can screen out suspects who did not carry a mobile phone at the incident location, so that dedicated investigation and analysis can be performed on them, and by grouping the cameras on each moving path in advance it improves the efficiency and accuracy of case study and judgment.

Description

Specific suspect track generation method and apparatus
[ technical field ]
The invention relates to the technical field of public safety, in particular to a track generation method and device for a specific suspect.
[ background of the invention ]
With the rapid development of Internet of Things technology, various sensing devices such as face cameras, video cameras, electronic fences and WIFI sniffers are already deployed in real environments, recording sensing data such as personnel activity tracks, vehicle driving tracks and mobile phone positioning information. These records accumulate into huge volumes of data, so that in actual public security work, screening suspected targets and searching for clues becomes extremely laborious and complex. Manually mining clues for a case from such massive data is no different from looking for a needle in a haystack: it consumes much time and police manpower, the results are often unsatisfactory, and the best opportunity for solving the case may be missed. How to find valuable clues for case study and judgment in these data, quickly locate a suspected target and improve investigation efficiency has therefore become an urgent problem.
Because most people carry a mobile phone when going out, the police can, during case study and judgment, track the suspect's mobile phone information through electronic fence devices to determine the suspect's action track after the incident, and lock the suspect's identity after screening suspected targets. However, some suspects have a degree of anti-reconnaissance awareness and deliberately do not carry a mobile phone when committing a crime in order to avoid being tracked. Such suspects cannot be tracked through mobile phone information, which makes rapid and effective investigation difficult and directly affects the efficiency and accuracy of the police's case study and judgment.
In view of the above, it is an urgent problem in the art to overcome the above-mentioned drawbacks of the prior art.
[ summary of the invention ]
The technical problems to be solved by the invention are as follows:
when the police track suspects through mobile phone information, some suspects with anti-reconnaissance awareness deliberately do not carry a mobile phone while committing a crime in order to avoid being tracked, making them difficult to investigate quickly and affecting the efficiency and accuracy of the police's case study and judgment.
The invention achieves the above purpose by the following technical scheme:
in a first aspect, the present invention provides a method for generating a trajectory of a specific suspect, including:
determining, by combining the electronic fence equipment and the camera at the case location, one or more target persons who appeared at the case location but did not carry a mobile phone at the case time;
combining the image characteristics of each target person, performing target tracking on each target person by using a plurality of cameras around the case location, and thereby obtaining the moving path of each target person within a preset time period after the incident;
grouping the cameras on each moving path according to the shooting characteristics of the cameras, so as to obtain a preferred group of cameras and a secondary group of cameras corresponding to each target person;
and querying the portrait information acquired by the corresponding cameras on each moving path in priority order, from the preferred group to the secondary group, until the identity of the suspect is determined or the action track of the suspect is determined from the plurality of moving paths.
Preferably, determining, by combining the electronic fence equipment and the camera at the case location, one or more target persons who appeared at the case location but did not carry a mobile phone at the case time specifically includes:
determining, through the electronic fence equipment at the case location, movement data corresponding to the IMSI numbers of a plurality of mobile phones appearing at the case location at the case time;
determining, through the camera equipment at the case location, movement data corresponding to a plurality of persons appearing at the case location at the case time;
matching the movement data of the IMSI numbers of the mobile phones against the movement data of the persons, and screening out one or more target persons who appeared at the case location but did not carry a mobile phone at the case time;
wherein the movement data comprises a movement direction and/or a movement speed at the incident location.
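As an illustration of the matching described above, the following minimal Python sketch screens out persons seen by the camera whose movement at the incident site matches no IMSI captured by the electronic fence; the Track structure, is_match and find_phoneless_targets helpers and the thresholds are hypothetical, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class Track:
    obj_id: str        # IMSI number or camera person ID
    direction: float   # movement direction at the incident site, in degrees
    speed: float       # movement speed at the incident site, in m/s

def is_match(person: Track, phone: Track,
             max_dir_diff: float = 20.0, max_speed_diff: float = 0.5) -> bool:
    """A person and a phone are taken to belong together if their movement
    direction and speed at the incident site roughly agree."""
    diff = abs(person.direction - phone.direction) % 360.0
    diff = min(diff, 360.0 - diff)
    return diff <= max_dir_diff and abs(person.speed - phone.speed) <= max_speed_diff

def find_phoneless_targets(persons: list[Track], phones: list[Track]) -> list[Track]:
    """Persons seen by the camera that match none of the IMSIs captured by the
    electronic fence, i.e. candidates who carried no mobile phone."""
    return [p for p in persons if not any(is_match(p, ph) for ph in phones)]
```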
Preferably, the shooting characteristics of the camera include: one or more of a photographing direction, a photographing range, a photographing resolution, a photographing height, and a photographing angle of the camera.
Preferably, the grouping the plurality of cameras on each moving path according to the shooting characteristics of the cameras, so as to obtain a preferred group camera and a secondary group camera corresponding to each target person, specifically includes:
acquiring the shooting direction of each camera on each moving path, and matching the shooting direction with the moving path where each camera is located;
for the cameras on each moving path, cameras whose shooting direction faces the front of the person are classified into the preferred group, and cameras whose shooting direction does not face the front of the person are classified into the secondary group.
Preferably, when the plurality of cameras on each moving path are grouped, the grouping further includes an alternative group of cameras; compared with the preferred group and the secondary group, the alternative group consists of cameras with special installation positions and concealed shooting ranges, and it has the lowest query priority among the three groups.
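The grouping rule above could be sketched as follows; the angle convention, the 'concealed' flag and the group_cameras helper are illustrative assumptions rather than the patented implementation.

```python
def group_cameras(cameras: list[dict], path_direction: float,
                  frontal_tolerance: float = 45.0):
    """cameras: dicts with 'id', 'shoot_direction' (degrees) and an optional
    'concealed' flag marking specially installed, hidden cameras."""
    preferred, secondary, alternative = [], [], []
    for cam in cameras:
        if cam.get("concealed"):
            alternative.append(cam)  # lowest query priority
            continue
        # a camera pointing against the walking direction sees the person's front
        diff = abs((cam["shoot_direction"] - (path_direction + 180.0)) % 360.0)
        diff = min(diff, 360.0 - diff)
        (preferred if diff <= frontal_tolerance else secondary).append(cam)
    return preferred, secondary, alternative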
Preferably, the querying portrait information collected by the corresponding cameras on each moving path according to the priority order from the preferred group to the secondary group until the identity of the suspect is determined or the action track of the suspect is determined from a plurality of moving paths specifically includes:
for each moving path, preferentially inquiring the portrait information acquired by the corresponding preferred group of cameras to acquire the face characteristics of the target person on the corresponding moving path;
if the face features cannot be acquired through the preferred group of cameras, continue querying the portrait information acquired by the corresponding secondary group of cameras;
if the face features cannot be obtained through the secondary selection group of cameras, the corresponding target person is preliminarily listed as a suspect, and the corresponding moving path is preliminarily determined as a moving track of the suspect after the case happens.
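A minimal sketch of this priority-ordered query, assuming a hypothetical query_faces helper that returns whatever face features a camera group captured along the path:

```python
def query_path(preferred, secondary, query_faces):
    """query_faces(group) is assumed to return the face features (possibly empty)
    captured by that camera group along the moving path."""
    faces = query_faces(preferred)            # preferred group first
    if not faces:
        faces = query_faces(secondary)        # then the secondary group
    if faces:
        return {"status": "faces_acquired", "faces": faces}
    # neither group yielded a face: provisionally treat the person as the suspect
    return {"status": "provisional_suspect", "faces": []}
```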
Preferably, for any moving path, after the corresponding target person has been preliminarily listed as a suspect because the face features could not be obtained through the secondary group of cameras, the method further includes:
acquiring vehicle information corresponding to the suspect through the cameras on the corresponding moving path, determining the identity of the corresponding suspect according to the vehicle information, and matching the identity against persons with prior criminal records in the police information base; if the matching succeeds, the identity of the corresponding suspect is verified.
Preferably, for any moving path, if the face features are acquired through the preferred group or the secondary group of cameras, the face features are matched against persons with prior criminal records in the police information base; if the matching succeeds, the corresponding person with a prior criminal record is listed as the suspect, and the identity of the suspect is finally determined.
Preferably, for any moving path, the method further comprises:
acquiring the historical movement tracks of the target person on the corresponding moving path from the historical data collected by the cameras;
determining the relevance between the current moving path of the corresponding target person and the historical movement track; and if the relevance does not meet the preset requirement, the corresponding target person is listed as a suspect, and the identity of the suspect is determined through the face features.
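One possible way to quantify this relevance, assuming a path is represented as the ordered list of camera IDs it passes, is sketched below; the representation and threshold are illustrative assumptions.

```python
def path_relevance(current_path: list[str], history: list[list[str]]) -> float:
    """Fraction of cameras on the current path that also appear on any of the
    person's historical tracks; a low value flags unusual movement."""
    if not current_path:
        return 0.0
    seen = set().union(*map(set, history)) if history else set()
    return sum(cam in seen for cam in current_path) / len(current_path)

# Usage: if path_relevance(path, history) < 0.3, the person could be listed as a suspect.
```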
In a second aspect, the present invention provides a track generation apparatus for a specific suspect, including at least one processor and a memory connected through a data bus, wherein the memory stores instructions executable by the at least one processor which, when executed by the processor, perform the track generation method for a specific suspect according to the first aspect.
The invention has the beneficial effects that:
according to the track generation method for a specific suspect, by combining the electronic fence equipment and the camera equipment at the incident site, suspects who may have anti-reconnaissance awareness and do not carry a mobile phone can be screened out, so that dedicated path analysis and identity investigation can be performed on them; meanwhile, the cameras on each moving path are grouped in advance and a query priority order is set for each group, so that portrait information can be queried in priority order during case study and judgment, improving the efficiency and accuracy of case study and judgment.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a flowchart of a method for analyzing perception data for determining a suspect according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a suspect's action track and the sensing devices along the way on a police map according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a time-space studying and judging area constructed on a police map according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating determining a movement trajectory after a suspect is sent on an police map according to an embodiment of the present invention;
fig. 5 is another flowchart illustrating determining a trajectory of a suspected person after the suspect has occurred on an police map according to an embodiment of the present invention;
fig. 6 is a flowchart of determining the identity of a suspect through perceptual data cross-collision comparison according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a sensing data list obtained by cross-collision comparison according to an embodiment of the present invention;
fig. 8 is a flowchart of establishing a "person-vehicle-IMSI" correspondence relationship through sensing data according to an embodiment of the present invention;
FIG. 9 is a flowchart of a method for determining a track of a suspect according to an embodiment of the present invention;
fig. 10 is a schematic diagram of the moving paths of a plurality of target persons appearing at the case location after the incident according to an embodiment of the present invention;
fig. 11 is a flowchart of a grouping method for cameras on various moving paths according to an embodiment of the present invention;
fig. 12 is a flowchart of a method for generating a track of a specific suspect according to an embodiment of the present invention;
fig. 13 is a flowchart of a method for determining target persons who appeared at the case location but did not carry a mobile phone according to an embodiment of the present invention;
fig. 14 is an architecture diagram of a trajectory generation device for a specific suspect according to an embodiment of the present invention.
[ detailed description of the embodiments ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, the terms "inside", "outside", "longitudinal", "lateral", "upper", "lower", "top", "bottom", "left", "right", "front", "rear", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention but do not require that the present invention must be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In the embodiments of the present invention, the symbol "/" indicates the meaning of having both functions, and the symbol "a and/or B" indicates that the combination between the preceding and following objects connected by the symbol includes three cases of "a", "B", "a and B".
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other. The invention will be described in detail below with reference to the figures and examples.
Example 1:
the embodiment of the invention first provides a perception data analysis method for determining a suspect, which can help police personnel quickly determine the identity of a suspect after an incident and facilitates tracking and arresting the suspect. As shown in fig. 1, the perception data analysis method according to the embodiment of the present invention specifically includes:
step 201, determining the action track of the suspected person after the incident on the police map by comprehensively studying and judging the sensing data acquired by the sensing devices around the incident place.
The invention carries out tracking analysis based on the police map, and point location information of various sensing devices in the related range can be displayed on the police map. The sensing equipment comprises a camera and/or an electronic fence, the camera comprises one or more of a portrait camera, a video camera and a vehicle camera, and accordingly the sensing data comprises one or more of portrait data, vehicle data and electronic fence data.
Corresponding sensing devices are usually installed on all streets around a case place, and each sensing device acquires sensing data in a corresponding sensing range in real time and stores the sensing data in a sensing data system, so that massive view structured data (acquired by various cameras) and electronic fence data are stored in the sensing data system. After the case is sent, the action route of the suspect can be obtained by comprehensively studying and judging mass data in a preset time period after the case sending time point in the sensing data system, and then the action track of the suspect can be marked and depicted by using a track marking function on an police map, as shown by an arrow line in fig. 2. After the action track of the suspect is determined, the method has important significance for follow-up tracking and capturing of the suspect. The preset time period can be set according to the research and judgment requirements of the police, for example, if the police needs to acquire the activity condition of the suspect within 2 hours after the incident, the preset time period is set to be 2 hours.
Step 202, according to the action track of the suspect, a plurality of sensing devices along the way are matched and marked on the police map, and the time-space range condition required by case study and judgment is calculated.
A plurality of sensing devices are usually installed along the movement track of a suspect, and after the movement track is determined, a related platform (such as a cloud server where each sensing device is located) can automatically acquire and match various sensing devices along the movement track, so that a plurality of sensing devices required by case study and judgment, point location information, sensing range, sensing direction and the like of each sensing device are acquired. Alternatively, the police officer can search related front-end sensing devices, such as electronic fences, in the police map according to the keywords along the action track, filter out a sensing device list conforming to the keywords, and then quickly locate the sensing devices on the police map by selecting the devices in the list.
Taking fig. 2 as an example, A, B, C, D, E five sensing devices (i.e. solid small triangles in the figure) are matched in sequence on the action track corresponding to the arrow. Therefore, the spatial range of case study and judgment can be determined according to the status information, the sensing range and the like of each sensing device. Based on the action mode of the suspect, the time required by the action of the suspect in the corresponding space range can be calculated, and accordingly, the time range of case study and judgment can be determined. According to the space range and the time range obtained by studying and judging, the space-time range condition required by case studying and judging can be calculated.
Step 203, based on the time-space range condition and the sensing devices, determining a plurality of time-space studying and judging areas on the police map along the moving track frame, and setting nodes of each time-space studying and judging area; wherein each space-time studying and judging area comprises at least one sensing device.
A corresponding study and judgment tool box is usually arranged on the police map, and when a time and space study and judgment area is framed, a plurality of time and space study and judgment areas and corresponding sensing equipment can be framed on the police map along the action track by directly utilizing a plurality of study and judgment area selection modes such as 'circle selection', 'quadrilateral frame selection' and 'polygonal frame selection', and a quadrilateral frame selection is taken as an example as shown in fig. 3; of course, the police personnel can also directly perform manual frame selection according to the sensing range of the sensing device, and the method is not limited here. Fig. 3 illustrates an example of each area including a sensing device, but not limiting the invention; in practice, two or more sensing devices may be included in a region, for example, at least two sensing devices that are relatively close to each other or at least two sensing devices on the same street may be framed into the same space-time study region.
After each time-space studying and judging area and the corresponding sensing equipment are determined, more node settings are needed to be carried out on each time-space studying and judging area so as to carry out sensing data statistics on the sensing equipment in each area in the following process, such as studying and judging time range of each area, sensing data (such as mobile phone number attribution and the like) needed to be selected by studying and judging, associated case numbers and the like; meanwhile, in order to distinguish and identify the police map conveniently, the node colors of the time-space studying and judging areas can be set to be different.
And step 204, determining perception data information corresponding to the suspect by performing cross collision comparison on perception data acquired in each time-space studying and judging area, and further determining the identity of the suspect.
The corresponding persons can be judged as suspicious objects by performing corresponding cross collision comparison among the same sensing data on portrait data, electronic fence data and/or vehicle data and the like in a plurality of space-time studying and judging areas, for example, performing cross comparison to calculate the sensing data which commonly appear in the areas. This is because, in general, in certain times (crime and escape times), certain routes and certain spaces, there are not particularly many objects captured by the perceiving device at the same time. Therefore, the suspect obtained through data collision is accurate, and the identity of the suspect can be determined through analyzing the perception data obtained through collision; and the method can also be matched by combining with the existing information base of the police, and can be used for further screening and then positioning the suspect.
The identity of the suspect is determined through the sensing data obtained through collision, specifically, the identity of the suspect can be determined through analyzing and matching portrait data acquired by a camera to obtain identity card information of the suspect; or the corresponding vehicle information, such as license plate number, vehicle color, vehicle model and the like, is obtained by analyzing the vehicle data acquired by the camera, and the identity of the suspect is determined according to the vehicle information matching; corresponding mobile phone information, such as the IMSI number of the mobile phone, the attribution of the mobile phone and the like, can be obtained through analysis of electronic fence data acquired by the electronic fence, and the identity of the suspect can be determined according to the mobile phone information matching. Of course, the identity of the suspect can also be determined after comprehensive analysis by combining two or three kinds of perception data.
In the perception data analysis method for determining a suspect provided by the embodiment of the present invention, a plurality of space-time comparison study and judgment nodes are constructed by flexibly framing a plurality of space-time study and judgment areas on a police map based on the sensing devices along the suspect's action track, and the suspect is determined by cross-collision comparison of the perception data of the study and judgment areas, which improves the efficiency and accuracy of case study and judgment, helps police personnel quickly and accurately screen suspected targets, and determines the identity of the suspect.
As can be seen from the above, the sensing device includes the camera and/or the electronic fence, so the action track of the suspect after the incident can be obtained from the sensing data (such as portrait data and vehicle data) collected by the cameras around the incident location, or from the sensing data (electronic fence data) collected by the electronic fences around the incident location. When the action track is obtained from the cameras, determining the action track of the suspect after the incident on the police map by comprehensively studying and judging the perception data collected by the perception devices around the incident location (i.e., step 201) may refer to fig. 4 and specifically includes:
step 2011, the image characteristics of the suspect are determined by combining the information collected by the camera at the time and the place of the incident and/or the information provided by the victim and/or the witness.
Thanks to the widespread deployment of monitoring cameras, a camera, which may be a portrait camera or a video camera, is usually installed at or near the case location. The image characteristics of the suspect can be quickly determined by reviewing the camera footage from the case time and/or the suspect information provided by the victim and/or witnesses. The image features mainly refer to the physical features of the suspect, such as height, body type, hair style and clothing; in addition, if the suspect travelled by riding in or driving a vehicle, the image features may further include vehicle features of the suspect, such as license plate number, vehicle model and vehicle color.
Step 2012, inquiring the multiple view structured data in the perception data system according to the image characteristics of the suspect, and further exploring the action track of the suspect after the incident.
Combining the image characteristics of the suspect, the fast-query function of the sensing data system is used to search, with millisecond-level response time, for the action track of the suspect after the case time from the massive view structured data (i.e., data collected by various cameras) collected within the preset time period after the case time point. The operation may be as follows: first, police personnel input the necessary query conditions, such as the image features of the suspect, the case time, the case location and the query time range (i.e., the preset time period, for example within 3 hours after the case time), into the sensing data system; the related platform then queries automatically according to the input conditions; after the query is completed, the police personnel perform a secondary manual study and judgment on the query results and finally determine the action track of the suspect. Alternatively, the related platform can directly perform target tracking on the suspect with the corresponding image characteristics based on the view structured data in the sensing data system, and thereby determine the action track of the suspect after the incident.
And step 2013, marking the traced action track of the suspect after the incident on a police map. Specifically, the track marking function in the study and judgment toolbox can be used to mark and depict the action track of the suspect on the police map, as shown by the arrow lines in figs. 2 and 3.
Furthermore, in a property crime such as mobile phone robbery, the suspect cannot dispose of the stolen phone within a short time after the robbery and must still be carrying the victim's mobile phone, so the corresponding mobile phone information and phone movement can be captured by the electronic fences around the case location, and the action track of the suspect after the incident can also be determined through the electronic fences. In this case, determining the action track of the suspect after the incident on the police map by comprehensively studying and judging the sensing data collected by the sensing devices around the incident location (i.e., step 201) may refer to fig. 5 and specifically includes:
in step 2011', the IMSI number corresponding to the mobile phone of the victim is obtained according to the mobile phone information provided by the victim, or the electronic fence acquisition information of the case time and the case place.
Specifically, the IMSI number of the corresponding mobile phone can be obtained through conversion from mobile phone information provided by the victim, such as the mobile phone number, the phone model and the user identity bound to the number. Alternatively, if an electronic fence happens to be deployed at the case location, the mobile phone data within its sensing range are recorded at the case time, so the IMSI number of the victim's mobile phone can be acquired directly through the electronic fence.
Step 2012', querying a plurality of electronic fence data in the sensing data system according to the IMSI number of the mobile phone, and thereby tracing the action track of the suspect after the incident.
Combining the IMSI number of the victim's mobile phone, the fast-query function of the sensing data system is used to trace, with millisecond-level response time, the movement of the corresponding IMSI number after the case time from the massive electronic fence data collected within the preset time period after the case time point, so as to quickly determine the action track of the suspect after the incident. The operation may be as follows: first, police personnel input the necessary query conditions, such as the IMSI number of the victim's mobile phone, the case time, the case location and the query time range (i.e., the preset time period, for example within 3 hours after the case time), into the sensing data system; the related platform then queries automatically according to the input conditions; after the query is completed, the police personnel perform a secondary manual study and judgment on the query results and finally determine the action track of the suspect. Alternatively, the related platform can directly perform target tracking on the mobile phone with the corresponding IMSI number based on the electronic fence data in the sensing data system, and thereby determine the action track of the suspect after the incident.
And step 2013', marking the traced action track of the suspect after the incident on a police map. Specifically, the track marking function in the study and judgment toolbox can be used to mark and depict the action track of the suspect on the police map, as shown by the arrow lines in figs. 2 and 3.
In step 202, the time-space range required for case study and judgment may be determined as follows. After matching the sensing devices along the suspect's action track and determining the point location information and sensing range of each device, the area from the case location or the first sensing device on the track (i.e., sensing device A) to the last sensing device on the track (i.e., sensing device E) can be used as the spatial range for case study and judgment. After the suspect's travel mode is determined, the time t required for the suspect to traverse the corresponding spatial range, i.e., to travel from the case location or sensing device A to sensing device E, can be calculated from the travel speed of the corresponding mode (such as walking, bicycle, electric bicycle or driving), and the time period [T, T+t] after the case time point T can be used as the time range for case study and judgment. In addition, considering that determining the time range from the travel mode may introduce some error, the time range may also be determined as follows: after the suspect's action track is determined through comprehensive study and judgment of the sensing data in step 201, the time point T' at which the suspect passes the last sensing device on the track (i.e., sensing device E) is queried, and the period from the case time point T to the time the suspect passes sensing device E (i.e., [T, T']) can be used as the time range for case study and judgment.
Further, in step 203, for each individual time-space studying and judging area, the size of the area selected by the frame on the police map can be used as the studying and judging space range of the corresponding area, and the size of the area selected by the frame can be determined according to the total sensing range of each sensing device in the area. For each time-space studying and judging area, the corresponding method for determining the studying and judging time range specifically comprises the following steps:
and estimating the time of the suspect passing through each sensing device along the path of the action track based on the action mode of the suspect and the time of leaving the case place, and further determining the judgment time range of each time-space judgment area by combining the division of the time-space judgment areas. Referring to fig. 3, taking an example that each time and space research and judgment area includes one sensing device, after determining the action mode of the suspect, the time point T1 when the suspect passes through the sensing device a can be calculated according to the driving speed of the corresponding action mode, the distance between the case-sending location and the sensing device a, and the time point when the suspect leaves the case-sending location, and a certain up-down fluctuation range Δ T is given on the basis of the corresponding time point, so that T1 ± Δ T can be used as the research and judgment time range of the corresponding time and space research and judgment area, and Δ T can be flexibly selected according to the research and judgment requirement, which is not limited herein. Similarly, the time points when the suspect passes through the sensing device B, C, D, E are calculated by the same method, and the judgment time range of the corresponding time-space judgment area is obtained.
In another alternative, to reduce the error caused by the mobile estimation, for each time-space judge region, the corresponding judge time range may be determined as follows: after the action track of the suspect is obtained through comprehensive study and judgment of the perception data, the time of the suspect passing through each perception device along the action track is determined, and then the study and judgment time range of each time-space study and judgment area is determined by combining the division of the time-space study and judgment area. Referring to fig. 3, still taking an example that each time-space studying and judging area includes a sensing device, in step 201, after determining the action trajectory of the suspect through comprehensive studying and judging on the sensing data, the time points when the suspect passes through the sensing device A, B, C, D, E are respectively queried, and a certain up-and-down fluctuation range (i.e., the corresponding time point ± Δ t) is given on the basis of the corresponding time point, which is taken as the studying and judging time range of the corresponding time-space studying and judging area.
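For illustration, the per-area time window could be estimated roughly as follows; the average speeds and the default fluctuation range are assumed values, not figures from the patent.

```python
from datetime import datetime, timedelta

# Assumed average travel speeds (m/s) for each travel mode.
SPEEDS_M_PER_S = {"walking": 1.4, "bicycle": 4.0, "e-bike": 7.0, "driving": 11.0}

def study_window(leave_time: datetime, distance_m: float, mode: str,
                 delta: timedelta = timedelta(minutes=5)) -> tuple[datetime, datetime]:
    """Return the (earliest, latest) time the suspect is expected to pass the
    sensing device of an area, i.e. the area's study and judgment time range."""
    t_pass = leave_time + timedelta(seconds=distance_m / SPEEDS_M_PER_S[mode])
    return t_pass - delta, t_pass + delta
```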
With reference to fig. 6, in the actual research and judgment process, the perception data acquired in each time-space research and judgment area is subjected to cross collision comparison to determine perception data information corresponding to a suspect, and further determine the identity of the suspect (i.e., step 204), which specifically includes the following steps:
step 2041, according to the research and judgment thought of the police, a matching comparison relationship is established between each sensing device and each time-space research and judgment area along the action track, and a research and judgment model is formed.
On the police map, a matching comparison relationship can be established, in a drag-and-drop manner, among all sensing devices and all space-time study and judgment areas along the action track, so that the study and judgment thinking of the police personnel is converted into a flow chart that both human and machine can follow, i.e., a study and judgment model. The matching comparison relationship is established mainly for performing cross-collision comparison between the sensing data, and it may be intersection, union, exclusion, and the like. For example, for a space-time study and judgment area containing at least two sensing devices, a union relationship is established between those devices, so that the sensing data collected by each device in the corresponding area can be merged and used as the sensing data of that area; an intersection relationship is established among all space-time study and judgment areas along the whole action track, so that intersection processing can be performed on the sensing data acquired in all areas; and for sensing devices with faults, whose data are less reliable, the corresponding sensing data can be excluded.
Further, after the study and judgment model is constructed, the relevant platform can also automatically judge the legality of the study and judgment model, for example, judge whether the matching comparison relationship between each sensing device and each time-space study and judgment area is correctly and reasonably constructed, whether the node setting of each time-space study and judgment area is reasonable, whether the model can be smoothly executed according to the current setting, and the like. Only if the judgment is legal, the judging model can be continuously executed; and if the judgment result is illegal, processing according to the found problems, and further rebuilding the model.
Step 2042, executing the studying and judging model, and performing cross collision comparison on the perception data in each time-space studying and judging area to obtain a suspect list containing one or more suspects.
According to the matching comparison relationships, after cross-collision comparison of the same kind of sensing data across the space-time study and judgment areas, the sensing data that appear in all areas can be obtained, and the corresponding persons can be judged as suspected objects. The commonly appearing sensing data can be displayed as a list, with each entry corresponding to one suspected object. Taking the electronic fence sensing data as an example, the electronic fence data that appear in all areas are obtained after the cross comparison, and the list display of these data may refer to fig. 7, which includes one or more of: the mobile phone IMSI number, the home location of the mobile phone number, the number of times the corresponding number was captured in the corresponding space-time study and judgment area, the recent activity frequency of the number (for example, the number of active days in the last 30 days), and the activity time and position of the number in the corresponding area.
For convenience of list display, the capture count can be the minimum count after the intersection processing, and the number of active days in the last 30 days can be the maximum after the intersection processing. During screening, the actual situation can be taken into account: for example, considering that some cases are committed by itinerant offenders from other areas, suspects whose mobile phone numbers belong to other regions are more likely to be involved and can be examined first; an object with a low activity frequency within the monitoring range in the last 30 days may have come to the area temporarily to commit the crime and normally does not move around the case location, so it can also be examined first; and so on.
Taking fig. 7 as an example, after cross collision comparison, 4 pieces of common sensing data are obtained, and the identities of 4 suspect objects can be respectively determined according to the mobile phone information provided by the electronic fence data corresponding to the 4 suspect objects, so as to obtain a suspect list including 4 suspect persons. When portrait data and/or vehicle data are adopted, the perception data which appear together after cross collision comparison can be displayed in a list according to the mode of figure 7, and then the identity of the suspect object is determined according to portrait information and/or vehicle information.
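The cross-collision comparison described above (union of the sensing data within each area, intersection across the areas along the track) could be sketched as follows; the data layout is an illustrative assumption.

```python
def cross_collision(regions: list[list[set[str]]]) -> set[str]:
    """regions: one entry per space-time study and judgment area, each entry being
    the list of IMSI sets captured by the sensing devices inside that area."""
    per_region = [set().union(*device_sets) for device_sets in regions]  # union inside each area
    return set.intersection(*per_region) if per_region else set()        # intersection across areas
```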
And step 2043, screening and examining each suspect in the suspect list by combining the case characteristics and the prior criminal records of each suspect in the police information base, and finally determining the identity of the suspect.
For suspects with prior criminal records, the police information base has corresponding records, and compared with persons without such records, persons with prior criminal records are generally more likely to offend. Therefore, after the suspect list is obtained, each suspect can be further screened and examined by combining the case characteristics and the police information base, so that the final suspect can be located more accurately. For example, if the current case is a theft and a matching query shows that one person in the suspect list has a prior record of theft, that person can be preliminarily determined to be the suspect of the case. This screening method greatly narrows the scope of examination for police personnel, significantly shortens the examination time, greatly improves examination efficiency and improves the accuracy of the results.
Further, in consideration of the influence of the position of each sensing device, the capturing time interval, the moving speed of the captured object, the capturing surrounding environment, and the like in the actual monitoring process, the capturing rate is not 100%, and the capturing may be missed. For example, in 5 space-time study regions of the study and judgment model in the embodiment of the present invention, it is assumed that only 4 space-time study regions capture the trajectory information of the suspected object X, and if the sensing data of the 5 space-time study regions are still taken for cross-collision comparison, the trajectory of the suspected object X is not in a certain space-time study region due to capture omission of the sensing device, so that the comparison collision condition set by the study and judgment model cannot be completely satisfied. Therefore, the suspect X does not appear in the list of the suspects of the comparison collision, and the suspects are missed to cause inaccurate judgment.
In order to solve the problem of the capture omission, a secondary study and judgment can be carried out by using a 'missing point analysis study and judgment mode'. The method specifically comprises the following steps: before executing the studying and judging model, comprehensively analyzing the perception data in each time-space studying and judging area, and further studying and judging the capture omission condition of each suspected target; if there is a missing captured space-time judging area (that is, the missing captured by the sensing equipment in the corresponding area), then in the missing captured space-time judging area, the sensing data of the corresponding suspected targets are supplemented completely, and it is assumed that the suspected targets also appear in the missing captured sensing equipment, so that the integrity of the collision data can be compensated, the collision result is corrected, and the result of the misjudgment caused by the missing captured by the sensing equipment is corrected more accurately.
In the study and judgment process without missing-point analysis, after cross-collision comparison of the sensing data, the objects actually obtained are those that appear in all space-time study and judgment areas, and they are listed as suspects. In the missing-point analysis process, after cross-collision comparison of the sensing data, the objects actually obtained are those that appear in at least a predetermined proportion of the space-time study and judgment areas; in plain terms, objects that appear in most of the areas are listed as suspects. The predetermined proportion is typically greater than 50%, for example 80%. In this way, if an object does not appear in all space-time study and judgment areas but appears in 80% or more of them, the object may simply have been missed by the sensing devices in a small number of areas, and it can still be listed as a suspect.
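A sketch of this lenient, missing-point variant, assuming the same per-area IMSI sets as in the previous sketch:

```python
def lenient_collision(per_region: list[set[str]], ratio: float = 0.8) -> set[str]:
    """Keep IMSIs seen in at least `ratio` of the study and judgment areas."""
    all_ids = set().union(*per_region) if per_region else set()
    need = ratio * len(per_region)
    return {imsi for imsi in all_ids
            if sum(imsi in region for region in per_region) >= need}
```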
In summary, in the perception data analysis method for determining a suspect according to the embodiments of the present invention, the action track of the suspect can be determined through comprehensive analysis and judgment of the perception data; a plurality of space-time study and judgment areas are then framed on a police map based on the sensing devices along the action track, constructing a plurality of space-time comparison study and judgment nodes. In addition, the suspects are further screened and examined by combining the case characteristics and the police information base, so that the final suspect can be located more accurately among persons with prior criminal records, which greatly narrows the scope of examination for police personnel and significantly shortens the examination time. Meanwhile, to address missed captures by the sensing devices, the "missing-point analysis study and judgment mode" can be adopted to compensate for the completeness of the collision data, correct the collision result, and avoid study and judgment errors.
Example 2:
in the above embodiments, it is mentioned that the identity of the suspect may be determined through sensing data information, that is, by analyzing any one, any two or all three of the portrait data (collected by cameras), the vehicle data (collected by cameras) and the electronic fence data (collected by electronic fences). Generally speaking, it is easier to determine the identity of a suspect when all three items of sensing data are available; determining the identity from any two items may already be somewhat difficult, and when only one item of sensing data is obtained, determining the identity of the suspect is relatively hard.
In actual public security work, many suspects have some anti-reconnaissance capability and may use various methods during a crime to avoid being discovered or identified by the public security organs. For example, some suspects who commit crimes with a motor vehicle deliberately block their face with the sun visor above the driver's seat, so that portrait data cannot be effectively acquired; some suspects do not carry a mobile phone during the crime, so that electronic fence data cannot be effectively acquired; and so on. Therefore, it is not always possible to acquire all three kinds of sensing data, and sometimes only one or two kinds can be obtained. In actual study and judgment, it should therefore be possible to lock the identity of the suspect quickly and effectively from any one kind of sensing data, without depending on all three.
To meet the above requirement, before the actual case study and judgment, the method further includes: performing multi-dimensional space-time cross fusion on the massive sensing data accumulated in the sensing data system before the case occurs, and establishing the correspondence among "person-vehicle-IMSI"; wherein the sensing data include portrait data, vehicle data and electronic fence data. Referring specifically to fig. 8, this includes:
step 101, constructing a plurality of data sets by using the sensing data collected by various sensing devices before the case, the point location data of the sensing devices, and the static data already held in the police information base.
First, constructing a face feature and portrait track data set: during normal times, when no case is active, data from a plurality of portrait cameras are collected in advance, imported into the sensing data system, and clustered using a portrait recognition algorithm to generate a structured data set of face features and portrait tracks.
Second, constructing a vehicle feature and vehicle track data set: during normal times, data from a plurality of vehicle cameras are collected in advance, imported into the sensing data system, and clustered using a vehicle recognition algorithm to generate a structured data set of vehicle features and vehicle tracks.
Third, constructing an IMSI and IMSI track data set: during normal times, data from a plurality of electronic fences are collected in advance and imported into the sensing data system to generate a structured electronic fence data set, i.e., a structured data set of IMSIs and IMSI tracks.
Fourth, constructing a sensing device point location and point location correspondence data set: during normal times, the point locations and point location characteristics of the sensing devices are determined in advance, and the correspondences between the point locations of the sensing devices are calculated with a GIS algorithm based on the position information of each device, forming a sensing device point location and point location correspondence data set.
Fifth, constructing a static relationship data set: based on the static data already held in the police information base, such as household registration data of persons, vehicle registration data and vehicle violation records, a static relationship data set of objects such as persons and vehicles is constructed using database algorithm logic.
And 102, respectively constructing a 'person-vehicle' relationship, a 'vehicle-IMSI' relationship and a 'person-IMSI' relationship by combining the information in the plurality of data sets.
First, constructing the "person-vehicle" relationship: motor vehicle registration or violation records generally contain real-name information about the person, such as ID number, name and telephone number, as well as vehicle information such as license plate number, plate color, vehicle brand and model. Therefore, the direct relationship between a person and a vehicle can be quickly constructed from the vehicle registration or violation records. In real life, however, the vehicle is not necessarily driven by its owner; it may be driven by the owner's relatives or friends. In this case the front-end sensing device can capture three shots (the vehicle, the driver and the front passenger) to obtain the driver's portrait, and with portrait recognition and comparison technology, the information of the person actually driving the vehicle, such as the ID number, can be found, thereby associating the person with the vehicle. Through these methods and technologies, the person-vehicle relationship can be mined.
Second, constructing the "vehicle-IMSI" relationship: specifically, vehicle data captured at a vehicle checkpoint (i.e., where a vehicle camera is installed) can be cross-compared with the mobile phone IMSI numbers captured by nearby electronic fence devices to compute the potential relationship between a vehicle and an IMSI number. In practice, electronic fence devices are usually deployed near vehicle checkpoints; when the checkpoint captures a passing vehicle, the electronic fence also works at the same time and captures the mobile phone IMSIs within a certain distance (which can be set according to the needs of the manufacturer and the service). Since the massive data accumulated over a long time by the vehicle checkpoints and the electronic fences are stored in the sensing data system, it can be inferred whether a vehicle and an IMSI are associated through indicators such as the number of co-occurrences, the number of co-occurring points and the number of co-occurring days. A co-occurrence specifically means being captured, within a certain time difference (for example, within 30 seconds), by a vehicle checkpoint and an electronic fence device that are close to each other. For example, if a vehicle and a mobile phone IMSI number co-occur 30 times in total, across a number of points and on more than 5 days (i.e., the corresponding information is captured almost simultaneously), a strong relationship between the vehicle and the IMSI can be inferred. Accordingly, a relationship between the vehicle and the IMSI can be established.
Thirdly, a 'person-IMSI' relationship is constructed: specifically, the portrait data captured at a portrait checkpoint (i.e., where a portrait camera is arranged) can be cross-compared with the mobile phone IMSI numbers captured by nearby electronic fence devices, so that the potential relationship between a person and a mobile phone IMSI can be calculated. In actual work, electronic fence devices are usually deployed where the portrait checkpoint is deployed; when the checkpoint captures the face information of passers-by, the electronic fence works at the same time to capture the IMSI information of mobile phones within a certain distance (which can be set according to the needs of manufacturers and services). Because mass data accumulated over a long period is stored in the perception data system, whether an association exists between a person and an IMSI can be deduced from indexes such as the number of co-travel events, the number of co-travel sites and the number of co-travel days. Co-travel specifically means that, within a certain time difference (for example, within 30 seconds), the portrait checkpoint and the nearby electronic fence device both capture the corresponding information. For example, if a person and a mobile phone IMSI have been captured together 30 times over more than 5 days across several sites (i.e., the corresponding information was captured almost simultaneously), it can be inferred that a strong relationship exists between the person and the mobile IMSI. Accordingly, a relationship between the person and the IMSI can be established.
Step 103, establishing a corresponding relationship between 'person-vehicle-IMSI' according to the 'person-vehicle' relationship, the 'vehicle-IMSI' relationship and the 'person-IMSI' relationship.
Based on the three basic relationships constructed in step 102, a basic data set is built for integrating the overall relationships, and the correspondence among the three objects, 'person-vehicle-IMSI', can be further opened up. Furthermore, the obtained 'person-vehicle-IMSI' correspondence can be integrated with the static relationships in the static relationship data set, so that the relationships between objects are consolidated. For example, the potential relationship between a vehicle owner and the actual driver of the vehicle can be calculated from the sensing device data, but if a household registration relationship also exists between the owner and the actual driver, these relationships can be superposed to form a multiple relationship, further enriching and consolidating the correspondence between person, vehicle and IMSI.
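A minimal sketch of how the three pairwise relations might be joined into 'person-vehicle-IMSI' triples is shown below; the pair sets are assumed to contain identifier strings produced by the steps above, and the support count is only an illustrative way of recording how many of the three links confirm a triple.

```python
def link_person_vehicle_imsi(person_vehicle, vehicle_imsi, person_imsi):
    """Join the pairwise relations into (person, vehicle, IMSI, support) triples.

    person_vehicle, vehicle_imsi and person_imsi are sets of identifier pairs;
    support is 2 when only the two joined links exist and 3 when the direct
    person-IMSI link confirms the triple as well.
    """
    imsis_by_vehicle = {}
    for v, i in vehicle_imsi:
        imsis_by_vehicle.setdefault(v, set()).add(i)

    triples = []
    for p, v in person_vehicle:
        for i in imsis_by_vehicle.get(v, ()):
            support = 2 + ((p, i) in person_imsi)
            triples.append((p, v, i, support))
    return triples
```

Static relations from the police information base (for example, household registration) can then be attached to the same identifiers to form the multiple relationships mentioned above.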
In summary, in the embodiment of the present invention, the mass portrait data, vehicle data and electronic fence data accumulated over the years before a crime are fully utilized, and the 'person-vehicle-IMSI' correspondence is established based on rules and information discovered during ordinary, crime-free periods, to serve as a basis for study and judgment when a case occurs. Therefore, during actual study and judgment, as long as one item of perception data of the suspect is obtained, the other perception data can be mined from the 'person-vehicle-IMSI' correspondence, the identity of the suspect can be determined after comprehensive analysis, and the requirements of case study and judgment are met.
Example 3:
in embodiment 1, the action trajectory of the suspect after the incident is determined by comprehensively studying and judging the sensing data collected by the sensing devices around the case location (i.e., step 201). When the judgment is made from the perception data acquired by the cameras along the way, the image features of the suspect need to be determined first (refer specifically to step 2011). If the image features of the suspect are difficult to determine directly, for example because no camera is arranged at the scene, or the camera footage is not clear and no witness provides corresponding information, or the suspect cannot be directly locked on because of crowding, then the action track of the suspect cannot be obtained directly through the cameras.
Considering that most people carry mobile phones when going out, and the moving track of a mobile phone can be tracked and detected by electronic fence devices, in the embodiment of the invention, when the image features of the suspect are difficult to obtain directly, a plurality of possible moving paths can be found with the help of the electronic fence devices, and the moving track of the suspect is then determined through comprehensive analysis combined with the cameras along each moving path, or the identity of the suspect is directly determined in the process. Meanwhile, the number of cameras along these paths is usually large, and calling them one by one for inquiry would increase the workload of the police; therefore, the cameras are grouped in advance according to their shooting characteristics and a browsing priority order is set, so as to improve the inquiry efficiency.
As shown in fig. 9, an embodiment of the present invention provides a method for determining a track of action of a suspect, which specifically includes:
step 301, determining, by the electronic fence device, a plurality of IMSI numbers of the mobile phones that appear at the location of the case at the time of the case, and respectively obtaining a moving path of each IMSI number of the mobile phone within a preset time period after the case.
At the time of a case, a number of people usually appear at the case location, and most of them carry mobile phones, so as long as electronic fence devices are arranged at and around the case location, the mobile phone IMSI (international mobile subscriber identity) number of each person can be detected, and the moving path of each detected IMSI number after the case can be tracked, that is, the moving path of each person who appeared at the case location. The specific operation is as follows: first, the IMSI numbers of the mobile phones corresponding to the target people appearing at the case location at the case time are determined by querying the electronic fence data acquired by the electronic fence devices at the case location. Then, target tracking is performed on these mobile phone IMSI numbers using the electronic fence devices around the case location, and the moving path of each mobile phone IMSI number within the preset time period after the case is obtained respectively.
For example, referring to fig. 10, at a case location at the time of the case, 3 IMSI numbers of the mobile phone are found after detection by the electronic fence device, and correspond to three target people a, b, and c, respectively; further, after target tracking is performed on the IMSI numbers of the 3 mobile phones through a plurality of electronic fence devices around the venue, corresponding three moving paths a, b, and c are obtained and respectively shown in the figure. The preset time period can be set according to the research and judgment requirements of the police, for example, if the police needs to acquire the activity condition of the suspect within 2 hours after the incident, the preset time period is set to be 2 hours.
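A minimal sketch of step 301 under assumed data structures is given below: fence records are plain objects with imsi, time and fence_id fields, the scene fences and the 2-hour horizon mirror the example above, and none of the names come from an actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FenceRecord:
    imsi: str
    time: datetime
    fence_id: str

def imsis_at_scene(records, scene_fences, t_case, window=timedelta(minutes=5)):
    """IMSI numbers captured by the fences at the case location around the case time."""
    return {r.imsi for r in records
            if r.fence_id in scene_fences and abs(r.time - t_case) <= window}

def moving_path(records, imsi, t_case, horizon=timedelta(hours=2)):
    """Time-ordered fence hits of one IMSI within the preset period after the case."""
    hits = [r for r in records
            if r.imsi == imsi and t_case <= r.time <= t_case + horizon]
    return sorted(hits, key=lambda r: r.time)

# Usage: one path per detected IMSI, as in the example with targets a, b and c.
# paths = {i: moving_path(records, i, t_case) for i in imsis_at_scene(records, scene, t_case)}
```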
And step 302, grouping the plurality of cameras on each moving path respectively according to the shooting characteristics of the cameras, so as to obtain a preferred group camera and a secondary group camera corresponding to the IMSI number of each mobile phone.
A plurality of cameras are usually arranged on the moving path corresponding to each target person, and after a moving path is determined, the relevant platform can automatically acquire and match the cameras along that path; by querying these cameras for specific portrait information, the information of the suspect can be further checked. Before the cameras are called for the specific portrait information query, the cameras on each moving path are grouped in advance in order to save the study and judgment time of the police. The grouping basis is the shooting characteristics of the cameras, which include one or more of the shooting direction, shooting range, shooting resolution, shooting height and shooting angle of the camera; grouping can be performed according to one feature or several features. The embodiment of the present invention mainly takes division into two groups, i.e., the preferred group and the secondary group, as an example. After grouping is completed, the overall image quality of the preferred group should in theory be better than that of the secondary group, and accordingly the probability of acquiring valid information is also higher, so the preferred group should be queried first.
Step 303, inquiring portrait information acquired by corresponding cameras on each moving path according to the priority sequence from the preferred group to the secondary group until the identity of the suspect is determined or the action track of the suspect is determined from a plurality of moving paths.
In the above steps, the moving paths of a plurality of target persons are obtained based on the IMSI numbers of their mobile phones, but not every target person is the suspect, so further screening and checking through the cameras on each moving path is required; through this screening, the suspect can be preliminarily determined from the multiple target persons and the corresponding action track determined as well, which is of great significance for subsequently tracking and arresting the suspect. Because the probability that the preferred group of cameras yields valid information is greater than that of the secondary group, the preferred group is queried first; if valid information is obtained, the secondary group does not need to be queried; the secondary group is queried only when no valid information is obtained. In this way, not every camera on every path needs to be called, which greatly saves the study and judgment time of the police.
In the method provided by the embodiment of the invention, the plurality of cameras on the moving path of the suspect are grouped in advance, for example, the cameras are divided into the preferred group and the secondary group, and the priority order of inquiring the cameras of each group is set, so that the portrait information can be inquired according to the priority order when the case is researched and judged, thereby greatly improving the efficiency and the accuracy of case research and judgment, and helping police personnel to quickly and accurately determine the identity of the suspect or the action track of the suspect.
In step 302, it is assumed that grouping is performed only according to the shooting direction of the cameras. The consideration is that, judging from the shooting direction of a camera, if it can capture the front of the target person, the probability of acquiring valid facial features is higher, so the shooting direction is an important grouping basis. In this case, the grouping of the multiple cameras on each moving path according to the shooting characteristics of the cameras to obtain the preferred group and secondary group of cameras corresponding to each mobile phone IMSI number (i.e., step 302) may specifically refer to fig. 11, and includes:
and step 3021, acquiring the shooting direction of each camera on each moving path, and matching the shooting direction with the moving path where each camera is located.
Referring to fig. 10, taking the moving path c of the target person c as an example, the multiple cameras arranged on the moving path are shown as small dots (including solid dots and hollow dots) in the figure. After the shooting direction of each camera on the path is determined, it is matched against the moving path c; after matching, it can be determined whether each camera shoots the front, the side or the back of the target person c while the target person c travels forward normally. For example, if the target person c is traveling east on the moving path c, a camera whose shooting direction is toward the west, or deviates only slightly from it, can in theory capture the front of the target person c, while cameras pointing in other directions in theory cannot capture the front and may only capture the side or the back.
And step 3022, for each camera on each moving path, classifying the cameras with the shooting directions facing the front of the person into a preferred group, and classifying the cameras with the shooting directions facing the non-front of the person into a secondary group.
Continuing to refer to fig. 10, taking the moving path c of the target person c as an example, the solid dots in the figure represent the cameras facing the front of the person, which are classified into the preferred group; the hollow dots represent cameras facing the non-front of the person, which are classified into the secondary group, and the query priority of the preferred group is higher than that of the secondary group. Here the cameras are divided into two groups according to shooting direction; naturally, in other alternative embodiments, the cameras may be divided into more groups according to more criteria. For example, assuming the cameras on each path are divided into four groups based on shooting direction and resolution, the following settings can be made: the first group of cameras has high resolution and a shooting direction facing the front of the target person; the second group has low resolution but a shooting direction facing the front of the target person; the third group has high resolution but a shooting direction facing the non-front of the target person; the fourth group has low resolution and a shooting direction facing the non-front of the target person. In theory, the image quality of the four groups decreases in sequence, the probability of obtaining valid information decreases in sequence, and therefore the query priority decreases in sequence. The distinction between high and low resolution may specifically take a preset value as the boundary: above the preset value is high resolution and below it is low resolution, which is not limited here.
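The following sketch illustrates the frontal/non-frontal split of steps 3021-3022: each camera is matched against the travel heading of its nearest path segment, and a camera whose shooting direction roughly opposes that heading is treated as facing the person's front. The 45-degree tolerance, the coordinate convention and the input tuples are assumptions made for the example.

```python
import math

def heading_deg(p_from, p_to):
    """Heading of travel between two (x, y) points, in degrees in [0, 360)."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def group_cameras(cameras, path_points, frontal_tolerance=45.0):
    """Split cameras into a preferred (frontal) group and a secondary group.

    cameras: list of (camera_id, shooting_direction_deg, segment_index), where
    segment_index points at the path segment the camera overlooks.
    """
    preferred, secondary = [], []
    for cam_id, cam_dir, seg in cameras:
        travel = heading_deg(path_points[seg], path_points[seg + 1])
        # Angular distance between the camera direction and the direction
        # opposite to travel (a frontal camera looks back at the walker).
        diff = abs((cam_dir - (travel + 180.0) + 180.0) % 360.0 - 180.0)
        (preferred if diff <= frontal_tolerance else secondary).append(cam_id)
    return preferred, secondary
```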
In another optional scheme, considering that the shooting range also has an important influence on whether valid information can be acquired, when the multiple cameras on each moving path are grouped, the grouping may further include an alternative group of cameras in addition to the preferred group and the secondary group. Compared with the preferred group and the secondary group, the alternative group cameras have special installation positions and concealed shooting ranges, and have the lowest query priority among the three groups. For example, the preferred group and secondary group cameras are usually arranged at obvious positions such as along streets and can more easily capture the passing target person, while an alternative group camera may be mounted on a wall beside the street or be shielded by obstacles such as trees, so its shooting range is hidden, it is unlikely to capture a target person travelling along the road, the probability of obtaining valid information is low, and its query priority is the lowest. However, in some special cases it may still capture the target person, for example when the target person happens to climb a fence or hide in a grove; in such cases it may acquire information even more valuable than that of the preferred group and secondary group cameras, so it still has certain reference significance.
Further, the step of querying the portrait information collected by the corresponding cameras on each moving path according to the priority order from the preferred group to the secondary group, until the identity of the suspect is determined or the action track of the suspect is determined from the multiple moving paths (i.e., step 303), specifically includes: for each moving path, preferentially querying the portrait information acquired by the corresponding preferred group of cameras so as to acquire the facial features of the target person on that path. If the facial features cannot be acquired through the preferred group, continue to query the portrait information acquired by the corresponding secondary group of cameras; if the facial features still cannot be obtained through the secondary group, the corresponding target person is preliminarily listed as a suspect, and the corresponding moving path is preliminarily determined as the moving track of the suspect after the case.
Referring to fig. 10, taking the moving path c of the target person c as an example, if the target person c travels normally along the moving path, the valid frontal facial features can in theory be collected by the preferred group of cameras (i.e., the solid dots), and the secondary group does not need to be called. However, if the target person's face is blocked, or the head is lowered or turned sideways, or the shooting distance is long, the preferred group may fail to acquire the facial features, and the secondary group of cameras is then called for further confirmation. In theory, the secondary group cameras (i.e., the hollow dots) cannot capture the frontal face, but if the target person c turns sideways, turns back or the like while moving, the front face may just be captured by this group, so it still has certain reference value. If valid facial features cannot be obtained even through the secondary group of cameras, the target person may be deliberately covering the face or avoiding the cameras, and can be considered to carry a certain suspicion of crime; the target person can therefore be preliminarily listed as a suspect, and the corresponding moving path preliminarily determined as the moving track of the suspect after the case.
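A compact sketch of this priority-ordered query is given below; query_faces is a hypothetical lookup that returns facial features captured by the listed cameras or None, and the returned flags mirror the preliminary decision described above rather than any real interface.

```python
def review_path(preferred, secondary, query_faces):
    """Query the preferred group first, then the secondary group; if neither
    yields facial features, preliminarily flag this path's target person."""
    faces = query_faces(preferred)
    if faces is None:
        faces = query_faces(secondary)
    if faces is not None:
        return {"faces": faces, "preliminary_suspect": False}
    # No usable face from either group: list the person as a suspect and keep
    # this moving path as the presumed escape track.
    return {"faces": None, "preliminary_suspect": True}
```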
For any moving path, after the corresponding target person is preliminarily listed as a suspect because the facial features cannot be obtained through the secondary group of cameras, the method further comprises: determining the identity of the corresponding suspect according to the mobile phone IMSI number on the corresponding moving path, and matching it against persons with prior criminal records in the police information base; if the matching succeeds, the identity of the corresponding suspect is verified. Combining the above analysis, if the facial features cannot be obtained through any group of cameras, the target person on the corresponding moving path carries a certain suspicion of crime; although the facial features cannot be obtained at this point, the mobile phone IMSI number has been determined, so the identity of the corresponding suspect can be determined from the IMSI number. The matching against persons with prior criminal records in the police information base is done because a person with a prior record is more likely to commit a crime than a person without one, so this group of people is checked first. For the method of determining identity from a mobile phone IMSI number, refer to embodiment 2: based on the 'person-vehicle-IMSI' correspondence, the other perception data of the suspect can be determined as long as the IMSI number is known, and the identity can thus be determined.
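As a sketch, resolving the identity from the IMSI and checking it against persons with prior records could look like the following; the triple set is the output of the join shown for embodiment 2 above, and prior_record_ids is an assumed set of identity numbers from the police information base.

```python
def identify_by_imsi(imsi, person_vehicle_imsi_triples, prior_record_ids):
    """Candidate identities linked to an IMSI, prior-record persons first."""
    candidates = {p for p, _v, i, _support in person_vehicle_imsi_triples if i == imsi}
    with_record = sorted(c for c in candidates if c in prior_record_ids)
    without_record = sorted(candidates - set(with_record))
    return with_record + without_record
```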
Further, for any moving path, if valid facial features are obtained through the preferred group or secondary group of cameras, the facial features can be matched directly against the facial features of persons with prior criminal records in the police information base; if the matching succeeds, the corresponding person with a prior record is listed as the suspect, and the identity of the suspect is finally determined. Alternatively, the identity of the corresponding target person is determined from the acquired facial features and then matched against persons with prior criminal records in the police information base; if the matching succeeds, the corresponding target person is listed as the suspect and the identity of the suspect is finally determined. This is also because persons with prior criminal records are more likely to commit crimes than persons without, so this group of people is checked first.
In the above scheme, for any moving path, when determining whether the corresponding target person is likely to be a suspect, the method may further include: acquiring historical motion tracks of target people on the corresponding moving path through historical data acquired by electronic fence equipment or a camera; determining the relevance between the current moving path of the corresponding target person and the historical movement track; and if the relevance does not meet the preset requirement, the corresponding target person is listed as a suspect, and the identity of the suspect is determined through the face characteristics or the IMSI number of the mobile phone.
The perception data system stores the massive perception data accumulated over a long period on each moving path, and records the historical motion tracks of pedestrians appearing on each moving path; these historical tracks can be obtained by target tracking based on facial features or based on mobile phone IMSI numbers. For any moving path, if the facial features of the target person can be obtained, a number of historical motion tracks corresponding to the target person can be matched in the perception data system based on the facial features; if the facial features cannot be obtained but the mobile phone IMSI number can, the historical motion tracks corresponding to the target person can be matched in the perception data system based on the IMSI number. After the historical motion tracks are obtained, the correlation between the current moving path of the target person and the historical tracks can be studied, that is, whether the target person has frequently moved along the current moving path in the past. If the number of past movements along the current path exceeds a certain number, the correlation between the current moving path and the historical tracks can be considered to meet the preset requirement; in that case, since the target person has regularly moved along the current path in the past, it can be inferred that this is a habitual route, the possibility that it is an escape route after a crime is small, and the corresponding target person can be preliminarily excluded as a suspect. Conversely, if the target person has never or hardly ever moved along the current path, the correlation does not meet the preset requirement, and it can be inferred that this is not a habitual route and is likely a route chosen on the spot to escape after the crime; the corresponding target person can therefore be preliminarily listed as a suspect, and the identity of the suspect determined through the acquired facial features or the mobile phone IMSI number.
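A minimal sketch of this correlation check, assuming each track is recorded as an ordered list of site identifiers and using an illustrative threshold for "exceeds a certain number of times":

```python
def path_is_habitual(current_path, historical_tracks, min_matches=3):
    """True when enough historical tracks pass through the current path's
    sites in the same order (i.e., the route looks like a habitual one)."""
    def contains_in_order(track, path):
        it = iter(track)
        return all(site in it for site in path)   # ordered subsequence test

    matches = sum(contains_in_order(t, current_path) for t in historical_tracks)
    return matches >= min_matches

# A target person whose current path is not habitual would be preliminarily
# listed as a suspect, as described above.
```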
In summary, in the perception data analysis method provided by the embodiment of the present invention, when the sensing devices at the case location cannot effectively lock on the suspect, the electronic fence devices around the case location are used to track and analyse the moving paths of all target persons after the case, and the cameras on these moving paths are then used to further determine the suspect and the suspect's moving track. Before the analysis with the cameras, the cameras on each path are grouped in advance according to their shooting characteristics, so that the police can query information in priority order during study and judgment, that is, the cameras with the best image quality are called and checked first. This targeted approach greatly improves the efficiency and accuracy of case study and judgment and helps the police quickly and accurately determine the identity of the suspect or the suspect's action track.
Example 4:
in the above embodiment 3, considering that most people carry their mobile phones when going out, the police can perform target tracking, through the electronic fence devices, on each mobile phone that appears at the case location at the case time, and thereby determine the moving path of each target person. However, some suspects may have a certain anti-reconnaissance awareness and deliberately not carry a mobile phone when committing the crime in order to avoid police tracking, so the suspect cannot be located through mobile phone information. In view of this situation, an embodiment of the present invention provides a method for generating the track of a specific suspect, as shown in fig. 12, which specifically includes:
Step 401, determining one or more target persons who appeared at the case location at the case time but do not carry a mobile phone, by combining the electronic fence device and the camera at the case location.
At the time of a case, a number of people usually appear at the case location, and most of them carry mobile phones, so as long as electronic fence devices are arranged at the case location, the mobile phone information of the people who carry phones can be detected. The camera at the case location can detect the image information of each person at the location; if a person's image information is detected but no corresponding mobile phone information exists, it can be determined that this person does not carry a mobile phone, and the person can then be determined as a target person for case study and judgment.
And step 402, combining the image characteristics of each target person, performing target tracking on each target person by using a plurality of cameras around the case location, and further respectively acquiring the moving path of each target person in a preset time period after the case.
Here, the image features of a target person mainly refer to physical features that can be observed in the image, such as height, body type, hair style and dress, and are mainly determined by the camera device at the case location. After the image features are determined, the cameras around the case location can perform target tracking on each target person according to the corresponding image features, so as to obtain the moving path of each target person within the preset time period after the case.
For example, referring to fig. 10, in a case place at the case time, after the electronic fence device and the camera device are combined and analyzed, three persons are found not to carry the mobile phone, and are respectively listed as target persons a, b, and c; further, after the three target persons are subjected to target tracking through a plurality of cameras around the case, corresponding three moving paths a, b and c are obtained and are respectively shown in the figure. The preset time period can be set according to the research and judgment requirements of the police, for example, if the police needs to acquire the activity condition of the suspect within 2 hours after the incident, the preset time period is set to be 2 hours.
And step 403, grouping the multiple cameras on each moving path according to the shooting characteristics of the cameras, so as to obtain a preferred group camera and a secondary group camera corresponding to each target person.
A plurality of cameras are usually arranged on the moving path corresponding to each target person, and after a moving path is determined, the relevant platform can automatically acquire and match the cameras along that path; by querying these cameras for specific portrait information, the information of the suspect can be further checked. Before the cameras are called for the specific portrait information query, the cameras on each moving path are grouped in advance in order to save the study and judgment time of the police. The grouping basis is the shooting characteristics of the cameras, which include one or more of the shooting direction, shooting range, shooting resolution, shooting height and shooting angle of the camera; grouping can be performed according to one feature or several features. The embodiment of the present invention mainly takes division into two groups, i.e., the preferred group and the secondary group, as an example. After grouping is completed, the overall image quality of the preferred group should in theory be better than that of the secondary group, and accordingly the probability of acquiring valid information is also higher, so the preferred group should be queried first.
And step 404, inquiring the portrait information acquired by the corresponding cameras on each moving path according to the priority sequence from the preferred group to the secondary group until the identity of the suspect is determined or the action track of the suspect is determined from a plurality of moving paths.
In the above steps, a plurality of moving paths are obtained based on the target persons who do not carry mobile phones, but not every such target person is the suspect, so further screening and checking through the cameras on each moving path is required; through this screening, the suspect can be determined from the multiple target persons and the corresponding action track determined as well, which is of great significance for subsequently tracking and arresting the suspect. Because the probability that the preferred group of cameras yields valid information is greater than that of the secondary group, the preferred group is queried first; if valid information is obtained, the secondary group does not need to be queried; the secondary group is queried only when no valid information is obtained. In this way, not every camera on every path needs to be called, which greatly saves the study and judgment time of the police.
In the track generation method for a specific suspect provided by the embodiment of the invention, by combining the electronic fence device and the camera device at the case location, suspects who may have anti-reconnaissance awareness and do not carry mobile phones can be screened out, and dedicated path analysis and identity investigation can then be carried out for these suspects; meanwhile, the cameras on each moving path are grouped in advance and the query priority order of each group is set, so that portrait information can be queried in priority order during case study and judgment, which improves the efficiency and accuracy of case study and judgment.
In step 401, the target persons who did not carry a mobile phone at the case location can be found by studying and matching the movement data of the mobile phone IMSI numbers and the movement data of the persons. Referring to fig. 13, determining one or more target persons who appeared at the case location at the case time but did not carry a mobile phone (i.e., step 401) specifically includes:
Step 4011, determining, through the electronic fence device at the case location, the movement data corresponding to the IMSI numbers of the mobile phones that appeared at the case location at the case time.
Through the electronic fence equipment, the IMSI number of the mobile phone within the sensing range can be detected, and mobile data corresponding to the IMSI number of each mobile phone can be tracked; wherein the movement data comprises a movement direction and/or a movement speed at the incident location.
Step 4012, determining, by the camera device at the location of the case, movement data corresponding to a plurality of people who have appeared at the location of the case at the time of the case.
Through the camera equipment, the figure images within the perception range can be detected, and the moving data corresponding to each figure image is tracked; wherein the movement data comprises a movement direction and/or a movement speed at the incident location.
Step 4013, matching the mobile data of the IMSI numbers of the mobile phones with the mobile data of the people, and screening out one or more target people who have appeared in the case place but do not carry the mobile phones at the time of the case.
After the movement data acquired by the two kinds of sensing devices are matched, for a person carrying a mobile phone, the movement data of the person image can be successfully matched with the movement data of the corresponding mobile phone IMSI number; for a person who does not carry a mobile phone, only the movement data of the person image exists and cannot be matched with the movement data of any mobile phone IMSI number. According to this rule, the people who appeared at the case location at the case time but did not carry a mobile phone can be screened out as the target persons for subsequent study.
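The matching of step 4013 could be sketched as follows, with each observation summarised as (identifier, movement direction in degrees, speed); the similarity thresholds are illustrative assumptions, not values from the patent.

```python
def persons_without_phones(person_moves, imsi_moves,
                           max_dir_diff=30.0, max_speed_diff=0.5):
    """Screen out people at the scene whose movement matches no captured IMSI."""
    def dir_diff(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)

    targets = []
    for pid, p_dir, p_speed in person_moves:
        carries_phone = any(
            dir_diff(p_dir, i_dir) <= max_dir_diff
            and abs(p_speed - i_speed) <= max_speed_diff
            for _imsi, i_dir, i_speed in imsi_moves)
        if not carries_phone:
            targets.append(pid)       # image seen, but no matching IMSI
    return targets
```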
In step 402, it is assumed that grouping is performed only according to the shooting direction of the cameras. The consideration is that, judging from the shooting direction of a camera, if it can capture the front of the target person, the probability of acquiring valid facial features is higher, so the shooting direction is an important grouping basis. In this case, the grouping of the multiple cameras on each moving path according to the shooting characteristics of the cameras to obtain the preferred group and secondary group of cameras corresponding to each target person (i.e., step 402) may specifically refer to the relevant description in embodiment 3 and fig. 11, and includes the following steps:
and step 3021, acquiring the shooting direction of each camera on each moving path, and matching the shooting direction with the moving path where each camera is located.
Referring to fig. 10, taking the moving path c of the target person c as an example, the multiple cameras arranged on the moving path are shown as small dots (including solid dots and hollow dots) in the figure. After the shooting direction of each camera on the path is determined, it is matched against the moving path c; after matching, it can be determined whether each camera shoots the front, the side or the back of the target person c while the target person c travels forward normally. For example, if the target person c is traveling east on the moving path c, a camera whose shooting direction is toward the west, or deviates only slightly from it, can in theory capture the front of the target person c, while cameras pointing in other directions in theory cannot capture the front and may only capture the side or the back.
And step 3022, for each camera on each moving path, classifying the cameras with the shooting directions facing the front of the person into a preferred group, and classifying the cameras with the shooting directions facing the non-front of the person into a secondary group.
Continuing to refer to fig. 10, taking the moving path c of the target person c as an example, the solid dots in the figure represent the cameras facing the front of the person, which are classified into the preferred group; the hollow dots represent cameras facing the non-front of the person, which are classified into the secondary group, and the query priority of the preferred group is higher than that of the secondary group. Here the cameras are divided into two groups according to shooting direction; naturally, in other alternative embodiments, the cameras may be divided into more groups according to more criteria. For example, assuming the cameras on each path are divided into four groups based on shooting direction and resolution, the following settings can be made: the first group of cameras has high resolution and a shooting direction facing the front of the target person; the second group has low resolution but a shooting direction facing the front of the target person; the third group has high resolution but a shooting direction facing the non-front of the target person; the fourth group has low resolution and a shooting direction facing the non-front of the target person. In theory, the image quality of the four groups decreases in sequence, the probability of obtaining valid information decreases in sequence, and therefore the query priority decreases in sequence. The distinction between high and low resolution may specifically take a preset value as the boundary: above the preset value is high resolution and below it is low resolution, which is not limited here.
In another optional scheme, considering that the shooting range also has an important influence on whether valid information can be acquired, when the multiple cameras on each moving path are grouped, the grouping may further include an alternative group of cameras in addition to the preferred group and the secondary group. Compared with the preferred group and the secondary group, the alternative group cameras have special installation positions and concealed shooting ranges, and have the lowest query priority among the three groups. For example, the preferred group and secondary group cameras are usually arranged at obvious positions such as along streets and can more easily capture the passing target person, while an alternative group camera may be mounted on a wall beside the street or be shielded by obstacles such as trees, so its shooting range is hidden, it is unlikely to capture a target person travelling along the road, the probability of obtaining valid information is low, and its query priority is the lowest. However, in some special cases it may still capture the target person, for example when the target person happens to climb a fence or hide in a grove; in such cases it may acquire information even more valuable than that of the preferred group and secondary group cameras, so it still has certain reference significance.
Further, the step of querying the portrait information collected by the corresponding cameras on each moving path according to the priority order from the preferred group to the secondary group, until the identity of the suspect is determined or the action track of the suspect is determined from the multiple moving paths (i.e., step 404), specifically includes: for each moving path, preferentially querying the portrait information acquired by the corresponding preferred group of cameras so as to acquire the facial features of the target person on that path. If the facial features cannot be acquired through the preferred group, continue to query the portrait information acquired by the corresponding secondary group of cameras; if the facial features still cannot be obtained through the secondary group, the corresponding target person is preliminarily listed as a suspect, and the corresponding moving path is preliminarily determined as the moving track of the suspect after the case.
Referring to fig. 10, taking the moving path c of the target person c as an example, if the target person c travels normally along the moving path, the valid frontal facial features can in theory be collected by the preferred group of cameras (i.e., the solid dots), and the secondary group does not need to be called. However, if the target person's face is blocked, or the head is lowered or turned sideways, or the shooting distance is long, the preferred group may fail to acquire the facial features, and the secondary group of cameras is then called for further confirmation. In theory, the secondary group cameras (i.e., the hollow dots) cannot capture the frontal face, but if the target person c turns sideways, turns back or the like while moving, the front face may just be captured by this group, so it still has certain reference value. If valid facial features cannot be obtained even through the secondary group of cameras, the target person may be deliberately covering the face or avoiding the cameras, and, given that the target person also does not carry a mobile phone, can be considered to carry a certain suspicion of crime; the target person can therefore be preliminarily listed as a suspect, and the corresponding moving path preliminarily determined as the moving track of the suspect after the case.
For any moving path, after the corresponding target person is preliminarily listed as a suspect because the facial features cannot be obtained through the secondary group of cameras, the method further comprises: acquiring the vehicle information corresponding to the suspect through the cameras on the corresponding moving path, determining the identity of the corresponding suspect from the vehicle information, and matching it against persons with prior criminal records in the police information base; if the matching succeeds, the identity of the corresponding suspect is verified. Combining the above analysis, for any moving path, if the facial features cannot be obtained through any group of cameras, the target person on the corresponding moving path can be considered to carry a certain suspicion of crime; at this point neither the facial features nor the mobile phone information (no phone is carried) can be obtained, so the identity of the corresponding suspect can instead be determined by studying the vehicle information. Of course, this applies to the case where the suspect travels by vehicle: the vehicle information corresponding to the suspect, such as the license plate number, vehicle color and vehicle model, can be obtained through the camera devices on the corresponding moving path (specifically, the vehicle cameras at the vehicle checkpoints), and the identity of the target person determined from it. For the method of determining identity from vehicle information, refer to embodiment 2: based on the 'person-vehicle-IMSI' correspondence, the other information of the suspect can be determined as long as the vehicle information is known, and the identity can thus be determined. The matching against persons with prior criminal records in the police information base is done because a person with a prior record is more likely to commit a crime than a person without one, so this group of people is checked first.
Further, for any moving path, if valid facial features are obtained through the preferred group or secondary group of cameras, the facial features can be matched directly against the facial features of persons with prior criminal records in the police information base; if the matching succeeds, the corresponding person with a prior record is listed as the suspect, and the identity of the suspect is finally determined. Alternatively, the identity of the corresponding target person is determined from the acquired facial features and then matched against persons with prior criminal records in the police information base; if the matching succeeds, the corresponding target person is listed as the suspect and the identity of the suspect is finally determined. This is also because persons with prior criminal records are more likely to commit crimes than persons without, so this group of people is checked first.
In the above scheme, for any moving path, when determining whether the corresponding target person is likely to be a suspect, the method may further include: acquiring historical motion tracks of target figures on corresponding moving paths through historical data acquired by a camera; determining the relevance between the current moving path of the corresponding target person and the historical movement track; and if the relevance does not meet the preset requirement, the corresponding target person is listed as a suspect, and the identity of the suspect is determined through the face features.
The perception data system stores the massive perception data accumulated over a long period on each moving path, and records the historical motion tracks of people appearing on each moving path; these historical tracks can be obtained by target tracking based on facial features or based on mobile phone IMSI numbers. For any moving path, if the facial features of the target person can be obtained, a number of historical motion tracks corresponding to the target person can be matched in the perception data system based on the facial features. After the historical motion tracks are obtained, the correlation between the current moving path of the target person and the historical tracks can be studied, that is, whether the target person has frequently moved along the current moving path in the past. If the number of past movements along the current path exceeds a certain number, the correlation between the current moving path and the historical tracks can be considered to meet the preset requirement; in that case, since the target person has regularly moved along the current path in the past, it can be inferred that this is a habitual route, the possibility that it is an escape route after a crime is small, and the corresponding target person can be preliminarily excluded as a suspect. Conversely, if the target person has never or hardly ever moved along the current path, the correlation does not meet the preset requirement, and it can be inferred that this is not a habitual route and is likely a route chosen on the spot to escape after the crime; the corresponding target person can therefore be preliminarily listed as a suspect, and the identity of the suspect determined through the acquired facial features.
The perception data analysis method provided by this embodiment of the invention is a supplement to the method of embodiment 3: embodiment 3 mainly analyses and examines people carrying mobile phones, while embodiment 4 mainly examines people not carrying mobile phones; together, the two cover all possible suspects.
In summary, in the embodiment of the present invention, when the sensing devices at the case location cannot effectively lock on the suspect and the suspect deliberately does not carry a mobile phone in order to avoid police tracking, the electronic fence device and the camera device at the case location are combined to screen out the suspects who may have anti-reconnaissance awareness and do not carry mobile phones, their moving paths are obtained, and the suspect and the suspect's moving track are then determined with the help of the camera devices on these moving paths. Before the analysis with the cameras, the cameras on each path are grouped in advance according to their shooting characteristics, so that the police can query information in priority order during study and judgment, that is, the cameras with the best image quality are called and checked first. This targeted approach greatly improves the efficiency and accuracy of case study and judgment and helps the police quickly and accurately determine the identity of the suspect or the suspect's action track.
Example 5:
on the basis of the methods provided in embodiments 1-4, the present invention further provides an apparatus for implementing those methods. As shown in fig. 14, which is a schematic diagram of the apparatus architecture, the apparatus according to an embodiment of the present invention includes one or more processors 21 and a memory 22. In fig. 14, one processor 21 is taken as an example.
The processor 21 and the memory 22 may be connected by a bus or other means, and fig. 14 illustrates the connection by a bus as an example.
The memory 22, which is a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the track generation method for a specific suspect in embodiment 4. The processor 21 executes various functional applications and data processing, i.e., implements the methods of embodiments 1 to 4, by executing nonvolatile software programs, instructions, and modules stored in the memory 22.
The memory 22 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 22 may optionally include memory located remotely from the processor 21, and these remote memories may be connected to the processor 21 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 22 and, when executed by the one or more processors 21, perform the methods of embodiments 1-4 described above, e.g., perform the steps shown in the figures described above.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the embodiments may be implemented by associated hardware as instructed by a program, which may be stored on a computer-readable storage medium, which may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, or the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (7)

1. A method for generating a trajectory of a specific suspect, comprising:
determining mobile data corresponding to IMSI numbers of a plurality of mobile phones appearing at a case location at the case time by an electronic fence device at the case location; determining movement data corresponding to a plurality of people appearing at the case location at the case time through a camera device at the case location; matching the mobile data of the IMSI numbers of the mobile phones with the movement data of the people by combining the electronic fence device and the camera of the case location, and screening out one or more target people who appeared at the case location at the case time but do not carry a mobile phone; wherein the movement data comprises a movement direction and/or a movement speed at the case location;
combining the image characteristics of each target person, carrying out target tracking on each target person by using a plurality of cameras around the case location, and further respectively obtaining the moving path of each target person in a preset time period after the case;
according to the shooting characteristics of the cameras, grouping the cameras on each moving path respectively to obtain a preferred group camera and a secondary group camera corresponding to each target person, specifically, obtaining the shooting direction of each camera on each moving path and matching with the moving path where each camera is located; for each camera on each moving path, classifying the cameras with the shooting direction facing to the front of the person into a preferred group, and classifying the cameras with the shooting direction facing to the non-front of the person into a secondary group;
respectively inquiring the portrait information acquired by the corresponding cameras on each moving path according to the priority sequence from the preferred group to the secondary group until the identity of the suspect is determined or the action track of the suspect is determined from a plurality of moving paths, specifically, for each moving path, preferentially inquiring the portrait information acquired by the corresponding preferred group cameras to acquire the face features of the target person on the corresponding moving path; if the face features cannot be acquired through the preferred group of cameras, continuously inquiring the face information acquired by the corresponding secondary group of cameras; if the face features cannot be obtained through the secondary group of cameras, the corresponding target person is preliminarily listed as a suspect, and the corresponding moving path is preliminarily determined as a moving track of the suspect after the case happens.
2. The method as claimed in claim 1, wherein the capturing feature of the camera comprises: one or more of a photographing direction, a photographing range, a photographing resolution, a photographing height, and a photographing angle of the camera.
3. The method for generating the trajectory of the specific suspect according to claim 1, wherein when grouping the plurality of cameras on each moving path, the grouping further comprises an alternative group of cameras; and compared with the preferred group of cameras and the secondary group of cameras, the shooting range of the alternative group of cameras is hidden, and the priority sequence inquired in the three groups of cameras is the lowest.
4. The method according to claim 1, wherein, for any moving path, after the corresponding target person is preliminarily listed as a suspect because the facial features cannot be obtained through the secondary group of cameras, the method further comprises:
acquiring vehicle information corresponding to the suspect from the cameras on the corresponding moving path, determining the identity of the suspect from the vehicle information, and matching that identity against persons with prior criminal records in a police information database; if the matching succeeds, the identity of the suspect is verified.
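As a non-limiting sketch of claim 4 only: one possible way to verify an identity from vehicle information captured on the moving path. The helpers `read_plate`, `lookup_owner`, and `prior_record_db` are hypothetical stand-ins for plate recognition, a vehicle-registration lookup, and the police information database.

```python
from typing import Iterable, Optional, Callable, Set

def verify_by_vehicle(path_cameras: Iterable,
                      read_plate: Callable[[str], Optional[str]],
                      lookup_owner: Callable[[str], Optional[str]],
                      prior_record_db: Set[str]) -> Optional[str]:
    """Derive a candidate identity from vehicle information seen on the suspect's
    moving path and check it against persons with prior criminal records."""
    for cam in path_cameras:
        plate = read_plate(cam.camera_id)        # e.g. licence-plate recognition result
        if plate is None:
            continue
        owner_id = lookup_owner(plate)           # registered owner of the vehicle
        if owner_id is not None and owner_id in prior_record_db:
            return owner_id                      # identity of the suspect verified
    return None                                  # no vehicle-based confirmation
```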
5. The method according to claim 1, wherein, for any moving path, if the facial features are obtained through the preferred group of cameras or the secondary group of cameras, the facial features are matched against persons with prior criminal records in a police information database; if the matching succeeds, the matched person with a prior criminal record is listed as the suspect, and the identity of the suspect is thereby finally determined.
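A non-limiting sketch of the matching step in claim 5. The claim does not specify how facial features are compared; cosine similarity over feature vectors and the 0.6 threshold are assumptions, and `prior_record_faces` is a hypothetical mapping from person identity to a reference face vector.

```python
import numpy as np
from typing import Dict, Optional

def match_face(face_vec: np.ndarray,
               prior_record_faces: Dict[str, np.ndarray],
               threshold: float = 0.6) -> Optional[str]:
    """Compare a face feature vector against the faces of persons with prior
    criminal records; return the best match above the threshold, if any."""
    best_id, best_sim = None, threshold
    for person_id, ref in prior_record_faces.items():
        sim = float(np.dot(face_vec, ref) /
                    (np.linalg.norm(face_vec) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id    # None if no match; otherwise the suspect's identity
```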
6. The method according to claim 1, wherein, for any moving path, the method further comprises:
acquiring the historical movement trajectories of the target person on the corresponding moving path from historical data collected by the cameras;
determining the correlation between the current moving path of the corresponding target person and the historical movement trajectories; and, if the correlation does not meet a preset requirement, listing the corresponding target person as a suspect and determining the identity of the suspect through the facial features.
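A non-limiting sketch of claim 6. The claim does not define the correlation measure or the preset requirement; the overlap-of-cameras fraction and the 0.3 threshold below are assumptions, and paths are represented by `Camera` objects as in the earlier sketch.

```python
from typing import Iterable, List

def path_correlation(current_path: Iterable, historical_paths: List[Iterable]) -> float:
    """One possible (assumed) correlation measure: the largest fraction of cameras
    on the current moving path that also occur in any historical trajectory."""
    current = {cam.camera_id for cam in current_path}
    if not current or not historical_paths:
        return 0.0
    return max(len(current & {c.camera_id for c in hist}) / len(current)
               for hist in historical_paths)

def is_suspect_by_history(current_path, historical_paths,
                          min_correlation: float = 0.3) -> bool:
    """If the current path deviates too strongly from past behaviour,
    the target person is listed as a suspect (threshold is an assumption)."""
    return path_correlation(current_path, historical_paths) < min_correlation
```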
7. An apparatus for generating a trajectory of a specific suspect, comprising at least one processor and a memory connected through a data bus, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the processor, perform the method for generating a trajectory of a specific suspect according to any one of claims 1 to 6.
CN201910407230.2A 2019-05-16 2019-05-16 Specific suspect track generation method and apparatus Active CN110191424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910407230.2A CN110191424B (en) 2019-05-16 2019-05-16 Specific suspect track generation method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910407230.2A CN110191424B (en) 2019-05-16 2019-05-16 Specific suspect track generation method and apparatus

Publications (2)

Publication Number Publication Date
CN110191424A CN110191424A (en) 2019-08-30
CN110191424B true CN110191424B (en) 2021-06-15

Family

ID=67716573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910407230.2A Active CN110191424B (en) 2019-05-16 2019-05-16 Specific suspect track generation method and apparatus

Country Status (1)

Country Link
CN (1) CN110191424B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111163288A (en) * 2019-09-23 2020-05-15 济南市公安局 Intelligent mobile police system and police office method
CN111367955B (en) * 2019-10-09 2024-03-26 杭州海康威视系统技术有限公司 Target object identification method, target object identification device, electronic equipment and storage medium
CN110677588A (en) * 2019-10-14 2020-01-10 浙江大华技术股份有限公司 Picture acquisition method and device, storage medium and electronic device
CN110851646B (en) * 2019-11-18 2020-11-24 嵊州市万睿科技有限公司 Working efficiency statistical method for intelligent park
CN111382189A (en) * 2019-12-20 2020-07-07 厦门市美亚柏科信息股份有限公司 Heterogeneous data collision analysis method, terminal device and storage medium
CN111143602B (en) * 2019-12-24 2023-05-02 云粒智慧科技有限公司 Case clue association method, system, electronic equipment and storage medium
JP6935545B1 (en) * 2020-06-18 2021-09-15 三菱電機ビルテクノサービス株式会社 Person tracking support device and person tracking support system
CN112258761A (en) * 2020-10-21 2021-01-22 杭州电子科技大学 Intelligent community platform based on probe electronic fence system
CN112291531B (en) * 2020-11-12 2021-12-03 珠海大横琴科技发展有限公司 Monitoring point processing method and device
CN112738468B (en) * 2020-12-25 2022-07-08 四川众望安全环保技术咨询有限公司 Safety early warning method for intelligent park
CN113536083B (en) * 2021-05-31 2023-11-24 中国人民公安大学 Target person track collision analysis method based on event space-time coordinates
CN113329343A (en) * 2021-06-02 2021-08-31 杨成 Sniffing data analysis method based on target WiFi and Bluetooth characteristic ID
CN114093014A (en) * 2022-01-20 2022-02-25 深圳前海中电慧安科技有限公司 Graph code correlation strength calculation method, device, equipment and storage medium
CN114399537B (en) * 2022-03-23 2022-07-01 东莞先知大数据有限公司 Vehicle tracking method and system for target personnel

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8254633B1 (en) * 2009-04-21 2012-08-28 Videomining Corporation Method and system for finding correspondence between face camera views and behavior camera views
CN203136061U (en) * 2013-03-11 2013-08-14 武汉中软通科技有限公司 Monitoring system for area and personnel at urban hot spot
WO2014197497A2 (en) * 2013-06-03 2014-12-11 The Morey Corporation Geospatial asset tracking systems, methods and apparatus for acquiring, manipulating and presenting telematic metadata
US10127754B2 (en) * 2014-04-25 2018-11-13 Vivint, Inc. Identification-based barrier techniques
CN204046693U (en) * 2014-06-17 2014-12-24 王传琪 A kind of public security fence
CN104331929B (en) * 2014-10-29 2018-02-02 深圳先进技术研究院 Scene of a crime restoring method based on video map and augmented reality
CN106096577B (en) * 2016-06-24 2019-05-31 安徽工业大学 A kind of target tracking method in camera distribution map
CN108540751A (en) * 2017-03-01 2018-09-14 中国电信股份有限公司 Monitoring method, apparatus and system based on video and electronic device identification
CN109714565A (en) * 2017-10-26 2019-05-03 北京航天长峰科技工业集团有限公司 A kind of video frequency tracking application method based on video structural technology
CN107909033A (en) * 2017-11-15 2018-04-13 西安交通大学 Suspect's fast track method based on monitor video
CN109033440A (en) * 2018-08-15 2018-12-18 武汉烽火众智数字技术有限责任公司 A kind of video investigation multidimensional trajectory analysis method
CN109618286B (en) * 2018-10-24 2020-12-01 广州烽火众智数字技术有限公司 Real-time monitoring system and method
CN109657025B (en) * 2018-12-17 2020-09-11 武汉星视源科技有限公司 On-site investigation information collection system and on-site investigation management system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Event detection and target tracking based on co-operative multi-camera system";I-Cheng Chang; Jiun-Wei Yu; Jia-Hong Yang;《 2009 Digest of Technical Papers International Conference on Consumer Electronics》;20090529;全文 *

Also Published As

Publication number Publication date
CN110191424A (en) 2019-08-30

Similar Documents

Publication Publication Date Title
CN110191424B (en) Specific suspect track generation method and apparatus
CN110008298B (en) Parking multidimensional information perception application system and method
US10930151B2 (en) Roadside parking management method, device, and system based on multiple cameras
CN104200671B (en) A kind of virtual bayonet socket management method based on large data platform and system
EP2980767B1 (en) Video search and playback interface for vehicle monitor
CN103294775B (en) Police service cloud image recognition vehicle administrating system based on geographic space-time constraint
CN110175217A (en) It is a kind of for determining the perception data analysis method and device of suspect
CN110213723A (en) A kind of method and apparatus quickly determining suspect according to track
CN109615572B (en) Personnel intimacy degree analysis method and system based on big data
CN109033440A (en) A kind of video investigation multidimensional trajectory analysis method
CN108417047A (en) A kind of vehicle location method for tracing and its system
CN106529401A (en) Vehicle anti-tracking method, vehicle anti-tracking device and vehicle anti-tracking system
US8798318B2 (en) System and method for video episode viewing and mining
CN112218243B (en) Method, device, equipment and storage medium for associating massive man-vehicle data
US20230073717A1 (en) Systems And Methods For Electronic Surveillance
CN107862072B (en) Method for analyzing vehicle urban-entering fake plate crime based on big data technology
CN101430827B (en) Taxi wireless video monitoring system and method based on GPS
KR20160109761A (en) Method and System for Recognition/Tracking Construction Equipment and Workers Using Construction-Site-Customized Image Processing
CN114067270B (en) Vehicle tracking method and device, computer equipment and storage medium
WO2016201804A1 (en) Object positioning method and device
CN113870551B (en) Road side monitoring system capable of identifying dangerous and non-dangerous driving behaviors
EP3244344A1 (en) Ground object tracking system
KR101686851B1 (en) Integrated control system using cctv camera
CN111753587A (en) Method and device for detecting falling to ground
CN112633163A (en) Detection method for realizing illegal operation vehicle detection based on machine learning algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 430000, No.3, 3rd floor, Chuangye building, science and Technology Park, Wuhan University, Donghu Development Zone, Wuhan City, Hubei Province
Applicant after: Wuhan Digital Mining Technology Co.,Ltd.
Address before: 430000, No.3, 3rd floor, Chuangye building, science and Technology Park, Wuhan University, Donghu Development Zone, Wuhan City, Hubei Province
Applicant before: Wuhan Number Mine Science and Technology Co.,Ltd.
GR01 Patent grant