CN114491156A - Method for intelligently pushing approximate object based on video image


Info

Publication number
CN114491156A
CN114491156A
Authority
CN
China
Prior art keywords: persons, data, similar, personnel, scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111297767.1A
Other languages
Chinese (zh)
Inventor
龚波
苏学武
水军
林剑明
谢丽
何晓伟
刘怀春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Xindehui Information Technology Co ltd
Original Assignee
Zhuhai Xindehui Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Xindehui Information Technology Co ltd filed Critical Zhuhai Xindehui Information Technology Co ltd
Priority to CN202111297767.1A
Publication of CN114491156A
Pending legal-status Critical Current

Classifications

    • G06F16/7837: Retrieval of video data using metadata automatically derived from the content, using objects detected or recognised in the video content
    • G06F16/784: Retrieval of video data using metadata automatically derived from the content, the detected or recognised objects being people
    • G06F16/7844: Retrieval of video data using original textual content or text extracted from visual content or transcripts of audio data
    • G06F16/7867: Retrieval of video data using manually generated information, e.g. tags, keywords, comments, title and artist information
    • G06N20/00: Machine learning
    • G06Q10/06393: Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G06Q50/265: Government or public services; personal security, identity or safety
    • G06F2216/03: Data mining (indexing scheme for additional aspects of information retrieval)

Abstract

The invention discloses a method for intelligently pushing an approximate object based on a video image. Similar information on a number of related persons is found through a portrait comparison algorithm; a statistical method based on the TF-IDF algorithm then identifies, from the trajectories and background data of persons in the general population, several specific characteristic factors that correlate strongly with a specific scene. The information on the similar persons is intelligently weighted, scored, and comprehensively measured to obtain a comprehensive score for each similar related person returned by the portrait comparison; the scores are sorted from high to low, potential approximate objects with a high degree of approximation are identified, and these objects are actively pushed to the responsible persons. The invention makes full use of an existing big-data resource system: through algorithms and means such as data mining and machine learning, it constructs an intelligent portrait mining and analysis model based on historical scenes. By automatically learning the patterns of related persons in historical scenes, it intelligently analyzes and infers the approximate object of a new scene and pushes it accurately, providing an intelligent detection application scene for workers.

Description

Method for intelligently pushing approximate object based on video image
Technical Field
The invention relates to the technical field of behavior study and judgment, in particular to a method for intelligently pushing an approximate object based on a video image.
Background
With the rapid development of computer-vision face recognition technology, a technique widely applied in video-based detection is to acquire portraits of relevant persons at the scene and lock down a target's identity through face comparison. Workers must investigate the whereabouts of relevant persons through footage captured by surveillance video, but because scene portraits are easily affected by factors such as insufficient resolution, unfavorable angles, and occlusion, the true target generally scores low in the face comparison, and the correct target object often appears at a low rank in the comparison results. A great deal of effort is therefore spent manually studying and screening the comparison results one by one, which undoubtedly reduces the efficiency of video detection and consumes a great deal of working time.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method for intelligently pushing an approximate object based on a video image, so as to solve the problems in the background art: to push the approximate object of a new scene accurately, provide an intelligent detection application scene for workers, and comprehensively improve detection and judgment capability and level.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows.
In the method for intelligently pushing an approximate object based on a video image, similar information on a number of related persons is found through a portrait comparison algorithm; a statistical method based on the TF-IDF algorithm identifies, from the trajectories and background data of persons in the general population, several specific characteristic factors that correlate strongly with a specific scene; the activity trajectories, frequented places, social security information, and related recorded information of the similar persons are intelligently weighted, scored, and comprehensively measured to obtain a comprehensive score for each similar related person returned by the portrait comparison; the scores are sorted from high to low, potential approximate objects with a high degree of approximation are identified, and these objects are actively pushed to the responsible persons.
Further optimizing the technical scheme, the method specifically comprises the following steps:
S1, performing data aggregation and data preprocessing;
S2, constructing models, comprising a specific-association-feature discovery model and a comprehensive person-approximation measurement model;
S3, identifying factors strongly correlated with traditional related scenes, and comprehensively measuring the persons returned by the batch portrait comparison;
S4, manually studying and verifying the top-ranked similar persons one by one, and finally determining the identities of the related persons.
Further optimizing the technical scheme, step S1 comprises the following steps:
S11, establishing data on persons active in the last year;
S12, establishing data on the traditional related population;
S13, establishing data on the general related population;
S14, establishing background information on the traditional related population.
In step S2, the specific-association-feature discovery model is constructed based on the TF-IDF algorithm from statistics, with the formula:
TF-IDF(feature class) = (number of persons with the feature class in the traditional related population / total size of the traditional related population) × log(total size of the general population / (number of persons with the feature class in the general population + 1)).
In step S2, the score-based comprehensive measurement model for persons' similar behavior characteristics is constructed, with the formula:
Comprehensive score = (Σ over features 1 to N of weight × feature score) × face similarity / 100.
Further optimizing the technical scheme, the comprehensive person-approximation measurement model comprises the following contents:
defining the feature items;
defining the score of each feature item, and setting the scores the feature item takes under different measurement indices;
defining the weight carried by each feature item's score;
defining the face similarity.
Further optimizing the technical scheme, step S3 comprises the following steps:
S31, substituting into the specific-association-feature discovery model for computation, and identifying factors strongly correlated with traditional related scenes;
S32, acquiring the data of the similar persons to be measured;
S33, measuring the similar persons in batches: substituting each person's ID number into the comprehensive person-approximation measurement model, computing against the data set formed in step S1, and measuring the person's related features item by item.
Further optimizing the technical scheme, step S32 comprises the following steps:
S321, acquiring, from multiple sources such as the various portrait comparison systems, the portrait data compared against a specific portrait;
S322, merging and de-duplicating the portrait data acquired from the multiple sources to form a master list of similar persons' identity information, thereby obtaining the identity information of the similar persons.
Further optimizing the technical scheme, step S4 comprises the following steps:
S41, data verification: manually verifying the related background and trajectory data of the similar persons;
S42, photo verification: using the similar persons' household-registration photos to search the face trajectories of their daily activities, extracting the related photos, and manually checking whether the appearance features in the photos match the scene persons;
S43, finally determining the related persons of the scene.
Owing to the adoption of the above technical scheme, the technical progress achieved by the invention is as follows.
The invention finds similar information on a number of related persons through a portrait comparison algorithm; it then identifies, through a statistical method based on the TF-IDF algorithm, several specific characteristic factors that correlate strongly with a specific scene from the trajectories and background data of persons in the general population; it then applies the comprehensive person-approximation measurement model constructed in the invention to intelligently weight, score, and comprehensively measure the activity trajectories, frequented places, social security information, and existing recorded information of the similar persons; finally it obtains comprehensive scores for the similar related persons returned by the portrait comparison, sorts them from high to low, identifies potential approximate objects with a high degree of approximation, and actively pushes them to the responsible persons. Moreover, the assignment rules change flexibly with the scene characteristics, so potential approximate objects can be analyzed automatically for a new scene.
The invention is mainly applied to automatically analyzing, comparing, associating, and otherwise processing information such as the portraits, backgrounds, existing records, trajectories, and frequented places of related persons. It establishes a flexible, configurable scoring mechanism to measure the degree of association between a candidate list of similar persons and a given scene, improving the intelligent portrait recognition processing used during scene detection and judgment: suspected persons returned by the portrait comparison receive multi-dimensional weighted scores, and a candidate list of approximate objects with a high degree of approximation is recommended.
Based on new technologies such as cloud computing and big data, the invention makes full use of the existing big-data resource system. Through algorithms and means such as data mining and machine learning, and based on data such as historical scenes, personnel backgrounds, and personnel behavior trajectories, it constructs an intelligent measurement and analysis model built on the portrait comparison results. By learning the activity patterns of related persons in historical scenes, their activity before and after a scene occurs, specific existing recorded information, and personnel background information, it intelligently analyzes and infers the approximate object of a new scene and pushes it accurately. This provides an intelligent detection application scene for workers, uses technological means to raise the working efficiency of grassroots workers, and can further improve detection and judgment capability and level.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the figures and specific examples.
Video surveillance construction is now steadily improving overall. Portrait information is extremely important in the scene detection and judgment process, and hidden information on further similar related persons, such as basic information, internet-access records, accommodation records, train records, social security information, flight records, and communication records, is mined for scene expansion.
The invention addresses the shortcomings of current pure portrait comparison technology caused by the comparison algorithm, picture definition, shooting angle, ambient light, and the like: it combines the pure portrait comparison result with the persons' related specific characteristic information in a comprehensive computation to measure the degree of person approximation.
The method finds similar information on a number of related persons through a portrait comparison algorithm; identifies, through a statistical method based on the TF-IDF algorithm, several specific characteristic factors that correlate strongly with a specific scene from the trajectories and background data of persons in the general population; applies the comprehensive person-approximation measurement model constructed in the method to intelligently weight, score, and comprehensively measure the activity trajectories, frequented places, social security information, and related recorded information of the similar persons; and finally obtains comprehensive scores for the similar related persons returned by the portrait comparison, sorts them from high to low, identifies potential approximate objects with a high degree of approximation, and actively pushes them to the responsible persons. Moreover, the assignment rules change flexibly with the scene characteristics, so potential approximate objects can be analyzed automatically for a new scene.
The invention uses a TF-IDF-based statistical method and the comprehensive person-approximation measurement model, combined with data such as persons' existing record information, residence-permit information, internet cafe records, travel records, and social security information, to analyze the background and trajectory characteristics of persons with existing records. It thereby constructs a multi-dimensional evaluation and scoring system aimed at the traditional related population; the system performs joint computation and weighted scoring on the related similar persons, calculates a final score for each, and sorts them from high to low.
The top-ranked persons are then the candidate approximate objects, related to the scene, with a high degree of approximation.
Referring to fig. 1, the present invention specifically includes the following steps:
and S1, gathering and preprocessing data, and respectively forming activity personnel data, related group data, general related group data and related group background data after gathering and preprocessing various data related to personnel.
Step S1 includes the following steps:
and S11, establishing the data of the active personnel in the last year. The method comprises the steps of cleaning and table combining recorded track data of the tourism, the internet cafe, the temporary residence, the railway, the passenger transport and the existing record according to fields such as certificate numbers, names, track time, track point names, track addresses, track longitude and latitude, track belonged areas, track belonged scenes and the like to obtain personnel with activity tracks in the last year and track data thereof.
And S12, establishing traditional related group data. And screening out traditional related personnel registration information through related personnel registration information, and performing intersection with the trace personnel data combination table in the last year to obtain traditional related personnel in the last year and trace data thereof.
And S13, establishing general related group data. The existing record composition and native formation information of the trace personnel in the last year is obtained through the intersection and self-operation of the related personnel registration information, the trace personnel data close table in the last year and the social security participation and security information table.
And S14, establishing background information of the traditional related groups. The existing record composition and native composition information of the traditional related personnel who act in the last year are calculated through the collection and self-operation of related personnel registration information (screening out the traditional related personnel registration information), the trace personnel data in the last year and the social security participation information table.
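As an illustration of the intersection operations in steps S12 to S14, the sketch below keeps only the trajectory records of registered traditional related persons. This is a minimal Python sketch; the record fields, function name, and sample values are assumptions, not from the patent.

```python
def related_tracks(registration_ids, track_records):
    """Keep only last-year trajectory records belonging to registered
    traditional related persons (the intersection described in step S12)."""
    ids = set(registration_ids)  # screened-out traditional related persons
    return [rec for rec in track_records if rec["id_number"] in ids]

# Hypothetical data: two registered related persons, three trajectory records
registered = ["id_0001", "id_0002"]
tracks = [
    {"id_number": "id_0001", "track_time": "2021-06-01", "place": "hotel A"},
    {"id_number": "id_9999", "track_time": "2021-06-02", "place": "cafe B"},
    {"id_number": "id_0002", "track_time": "2021-06-03", "place": "station C"},
]
related = related_tracks(registered, tracks)  # two records remain
```

In practice this intersection would be a database join on the ID-number field across the registration and trajectory tables.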
S2, constructing models, comprising the specific-association-feature discovery model and the comprehensive person-approximation measurement model.
In step S2, the specific-association-feature discovery model is constructed based on the TF-IDF algorithm from statistics, with the formula:
TF-IDF(feature class) = (number of persons with the feature class in the traditional related population / total size of the traditional related population) × log(total size of the general population / (number of persons with the feature class in the general population + 1)).
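The formula above can be sketched in code. This is a minimal Python sketch under the assumption that the counts are person counts per population; the function name and example numbers are illustrative, not from the patent.

```python
import math

def feature_tf_idf(count_in_related, related_total, count_in_general, general_total):
    """TF-IDF relevance of a feature class, following the patent's formula:
    (count in traditional related population / size of that population)
    * log(size of general population / (count in general population + 1))."""
    tf = count_in_related / related_total
    idf = math.log(general_total / (count_in_general + 1))
    return tf * idf

# A feature common among related persons but rare in general scores higher
high = feature_tf_idf(80, 100, 50, 100_000)
low = feature_tf_idf(80, 100, 5_000, 100_000)
```

The "+ 1" in the denominator smooths away division by zero when a feature never occurs in the general population, the same role it plays in standard TF-IDF formulations.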
In step S2, the score-based comprehensive measurement model for persons' similar behavior characteristics is constructed, with the formula:
Comprehensive score = (Σ over features 1 to N of weight × feature score) × face similarity / 100.
The comprehensive person-approximation measurement model comprises the following contents:
1) Defining the feature items (i.e., scoring items): when a person's related information satisfies a specified condition, it counts toward a scoring item.
2) Defining the score of each feature item (scoring item), and setting the scores the feature item takes under the different measurement indices (time-difference index, duration index, distance index, and count).
3) Defining the weight carried by each feature item's (scoring item's) score.
4) Defining the face similarity, i.e., obtaining a person's face similarity from the existing portrait comparison system.
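The four model contents above can be sketched as a configuration structure. This is a hedged Python sketch; the feature names, index buckets, scores, and weights are illustrative assumptions, not values from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureItem:
    """One scoring item of the person-approximation model: a name, the score
    it takes under each measurement-index bucket, and its weight."""
    name: str
    scores: dict = field(default_factory=dict)  # index bucket -> score
    weight: float = 0.0

# Illustrative configuration; all buckets, scores, and weights are assumptions
features = [
    FeatureItem("has_existing_record", {"yes": 10, "no": 0}, 0.30),
    FeatureItem("record_scene_category", {"category_hit": 10, "miss": 0}, 0.20),
    FeatureItem("booking_time_diff_days", {0: 10, 1: 6, 2: 3}, 0.25),
    FeatureItem("stay_distance_km", {0.0: 10, 0.1: 7, 0.2: 4}, 0.25),
]
```

Keeping the items, scores, and weights in data rather than code is what lets the assignment rules "change flexibly with the scene characteristics", as the description emphasizes.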
S3, identifying factors strongly correlated with traditional related scenes, and comprehensively measuring the persons returned by the batch portrait comparison.
Step S3 comprises the following steps:
S31, using the models of step S2 to identify and discover the specific associated feature items and to set their corresponding scores and weights: substituting into the specific-association-feature discovery model for computation, and identifying the factors strongly correlated with traditional related scenes.
1) From all field items of the data set formed in step S1, select several data fields often used for classification statistics, including: existing record, specific category, event location, event area, event period, native place, domicile area, current area, gender, event type, and the like.
2) Substituting the fields from step 1) and the data set formed in step S1 into the "specific-association-feature discovery model" shows that the existing-record field carries the largest weight; the factors strongly correlated with traditional related scenes are therefore: a person's existing records, activity places, activity periods, and native places.
3) Continuing to focus on the specific content of the existing records, data relating to existing records are screened from the data set formed in step S1, and the various scene categories involved in the existing records are extracted from them.
4) Substituting the scene categories from step 3) and the data set formed in step S1 into the "specific-association-feature discovery model" shows that two scene categories carry the largest weights; these two categories are therefore further factors strongly correlated with traditional related scenes.
5) Focusing on the specific content of the event period, the event trajectory times (taking railway booking records as the example here) are extracted from the data set formed in step S1 and compared with the scene occurrence time, and the time differences are bucketed from small to large (0 days, 1 day, 2 days, and so on). Substituting the time differences and the data set formed in step S1 into the "specific-association-feature discovery model" shows that the closer a booking time is to the scene occurrence time, the larger its weight, i.e., the more likely it is to be related.
6) Focusing on the specific content of the event location, the event trajectory locations (taking travel accommodation records as the example here) are extracted from the data set formed in step S1 and compared with the scene occurrence location, and the distance differences are bucketed from near to far (0 km, 0.1 km, 0.2 km, and so on). Substituting the distance differences and the data set formed in step S1 into the "specific-association-feature discovery model" shows that the closer an accommodation place is to the scene occurrence location, the larger its weight, i.e., the more likely it is to be related.
7) Continuing to focus on the specific content of activity places, the list of places persons frequent (taking internet cafe records as the example here) is extracted from the data set formed in step S1. Substituting the places and the data set formed in step S1 into the "specific-association-feature discovery model" yields the top 5 place names with the largest weights, i.e., the likely gathering places before a scene occurs.
8) Continuing to focus on the specific content of native places, all native-place information is extracted from the data set formed in step S1. Substituting it into the "specific-association-feature discovery model" shows that non-local native places, and certain native places in particular, carry larger weights, indicating that those areas are key areas of interest.
9) The factors obtained in steps 1) to 8) above are defined as the feature items of the "comprehensive person-approximation measurement model" (including: the person has an existing record; the scene category of the person's existing record belongs to a certain category; the person's activity time and place are close to the scene occurrence time and place; the person stays in key security places; the person's native place lies in a key area), and the corresponding scores and weights are set.
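The time-difference and distance-difference classification used in steps 5) and 6) can be sketched as a simple bucketing helper. This is a Python sketch; the bucket edges are the illustrative values from the description (days and kilometres), and the function name is an assumption.

```python
def bucket(value, edges):
    """Classify a time difference (days) or distance difference (km) into the
    smallest bucket edge it does not exceed; None if beyond the largest edge."""
    for edge in edges:
        if value <= edge:
            return edge
    return None

day_bucket = bucket(1.4, [0, 1, 2])        # falls into the 2-day bucket
km_bucket = bucket(0.05, [0.0, 0.1, 0.2])  # falls into the 0.1 km bucket
```

Each bucket then acts as one "feature class" fed into the discovery model, so that smaller differences can be observed to carry larger weights.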
S32, acquiring the data of the similar persons to be measured.
S321, acquiring, from multiple sources such as the various portrait comparison systems, the portrait data compared against a specific portrait.
S322, merging and de-duplicating the portrait data acquired from the multiple sources to form a master list of similar persons' identity information, thereby obtaining each person's identity information (name, ID number, and portrait similarity).
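The merge and de-duplication of step S322 can be sketched as follows. This is a minimal Python sketch; the record fields and the choice to keep the highest similarity per person are assumptions, not specified by the patent.

```python
def merge_candidates(*sources):
    """Merge portrait-comparison hits from multiple systems, de-duplicating by
    ID number and keeping the highest similarity seen for each person (S322)."""
    best = {}
    for source in sources:
        for person in source:  # each: {"name", "id_number", "similarity"}
            pid = person["id_number"]
            if pid not in best or person["similarity"] > best[pid]["similarity"]:
                best[pid] = person
    return list(best.values())

# Hypothetical hits from two comparison systems
system_a = [{"name": "A", "id_number": "1", "similarity": 82.0}]
system_b = [{"name": "A", "id_number": "1", "similarity": 90.0},
            {"name": "B", "id_number": "2", "similarity": 75.0}]
candidates = merge_candidates(system_a, system_b)  # two unique persons
```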
S33, measuring the similar persons in batches: each person's ID number is substituted into the comprehensive person-approximation measurement model, the computation is run against the data set formed in step S1, and the person's related features are measured item by item to obtain the related scores. The specific measurement steps are as follows:
1) If the person has an existing record, points are added.
2) If the person's existing record belongs to a certain scene category, points are added.
3) Points are added according to the difference between the booking time in the person's railway booking trajectory and the scene occurrence time (the smaller the time difference, the larger the score awarded).
4) Points are added according to the distance between the person's hotel stay location and the scene occurrence location (the closer the distance, the larger the score awarded).
5) If the person stayed in the top 10 key security places before and after the scene occurred, points are added; the more stays and the longer their duration, the larger the score.
6) If the person's native place is non-local and belongs to a high-risk area, points are added.
7) Finally, the scores are weighted and summed, multiplied by the portrait similarity, and divided by 100, giving the person's final score.
8) Steps 1) to 7) are repeated until all similar persons have been measured and scored, after which the top-scoring persons (e.g., the top 5) are pushed.
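Steps 7) and 8) above can be sketched as follows. This is a Python sketch; the feature scores, weights, similarity value, and person names are illustrative assumptions, not values from the patent.

```python
def final_score(feature_scores, weights, face_similarity):
    """Step 7): weight each feature score, sum, then multiply by the portrait
    similarity and divide by 100."""
    weighted_sum = sum(s * w for s, w in zip(feature_scores, weights))
    return weighted_sum * face_similarity / 100

# Hypothetical person: three feature scores, their weights, 85% similarity
score = final_score([10, 6, 8], [0.4, 0.3, 0.3], 85.0)

# Step 8): score every similar person, sort descending, push the top few
people_scores = {"person_1": 7.2, "person_2": 9.1, "person_3": 4.5}
top = sorted(people_scores, key=people_scores.get, reverse=True)[:2]
```

Because the weighted sum is multiplied by the face similarity, a candidate ranked low by the raw portrait comparison can still surface near the top when the behavioral features align strongly with the scene, which is the stated aim of the method.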
S4, manually studying and verifying the top-ranked similar persons one by one, and finally determining the identities of the related persons. In this embodiment, the top 5 similar persons are checked manually one by one: the similar persons are sorted by score from high to low, the first few high-scoring persons are selected as a list, and the list is pushed to the related workers, who study and judge them and finally confirm the identities of the related persons.
Step S4 comprises the following steps:
S41, data verification: manually verifying the related background and trajectory data of the similar persons.
S42, photo verification: using the similar persons' household-registration photos to search the face trajectories of their daily activities, extracting the related photos, and manually checking whether the appearance features in the photos match the scene persons.
S43, finally determining the related persons of the scene.
The invention is based on preliminary portrait comparison result data, identity information of similar persons and relevant track and background data are compared and are used as input parameters of an assigned system model after data preprocessing, then the model is used for carrying out multi-dimensional comprehensive measurement and calculation on the similar persons, and finally comprehensive similarity values of the similar persons are obtained, so that a high-similarity person list is pushed out, and secondary research and judgment work is carried out on the staff.
The method combines the portrait-comparison technology with the scoring model so that each plays a greater role, reducing the staff's manual judgment work, improving study-and-judgment efficiency, and greatly improving the pushing precision for similar persons.
The invention is based on new technologies such as cloud computing and big data, and makes full use of the existing big-data resource system. Through data mining, machine learning, and similar means, it constructs an intelligent calculation and analysis model on top of the portrait-comparison results, using data such as historical scenes, personnel backgrounds, and personnel behavior trajectories. By learning the activity patterns of related persons in historical scenes, their activity before and after a scene occurs, specific recorded information, and personnel background information, the model performs intelligent analysis and inference and accurately pushes the approximate objects of a new scene. This provides an intelligent detection application for the staff, uses technical means to improve the working efficiency of front-line staff, and can further raise their detection and study capabilities.

Claims (9)

1. The method for intelligently pushing an approximate object based on a video image is characterized in that: similar information of a plurality of related persons is found through a portrait-comparison algorithm; a plurality of specific characteristic factors highly associated with a specific scene are identified from the personnel trajectory and background data of the general population by a statistical method based on the TF-IDF algorithm; intelligent weighting and score assignment are applied in a comprehensive calculation over the activity trajectories, frequent stay places, social-security information, and related recorded information of the similar persons, yielding comprehensive scores for the portrait-compared similar related persons, which are sorted from high to low; potential high-approximation objects are thereby identified and actively pushed to the responsible persons.
2. The method for intelligently pushing the approximate object based on the video image according to claim 1, specifically comprising the following steps:
S1, carrying out data aggregation and data preprocessing;
S2, constructing models, including a specific associated-feature discovery model and a comprehensive personnel-approximation calculation model;
S3, identifying the factors strongly related to traditional relevant scenes, and comprehensively calculating scores for the persons in the batch portrait comparison;
S4, manually studying and verifying the pushed similar persons one by one, and finally determining the identities of the related persons.
3. The method for pushing approximate objects intelligently based on video images as claimed in claim 2, wherein said step S1 comprises the following steps:
S11, establishing data on personnel active in the last year;
S12, establishing data on the traditional related groups;
S13, establishing data on the general related groups;
and S14, establishing background information of the traditional related groups.
4. The method for intelligently pushing approximate objects based on video images as claimed in claim 2, wherein in step S2, the discovery model of specific associated features is constructed based on the TF-IDF algorithm from statistics, with the formula:
TF-IDF(a feature category) = (number of persons with such features in the traditional related group / total size of the traditional related group) × log(total size of the general population / (number of persons with such features in the general population + 1)).
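A minimal sketch of the claim-4 formula, assuming the counts are taken over the data sets built in step S1 (the function and parameter names are illustrative, not from the patent):

```python
import math


def tf_idf(feature_count_related, total_related, feature_count_general, total_general):
    """TF-IDF weight of a feature category, following the claim-4 formula.

    tf  = share of the traditional related group exhibiting the feature
    idf = log of the general population size over (feature count in the
          general population + 1); the +1 avoids division by zero
    """
    tf = feature_count_related / total_related
    idf = math.log(total_general / (feature_count_general + 1))
    return tf * idf
```

For example, a feature held by 50 of 100 traditional related persons but only 9 of 1000 general persons scores 0.5 × log(100) ≈ 2.30, marking it as strongly scene-associated.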
5. The method for intelligently pushing approximate objects based on video images as claimed in claim 2, wherein in step S2, a score-based comprehensive calculation model for similar personnel behavior features is constructed, with the formula:
Comprehensive score = Σ(weighted scores of features 1 to N) × face similarity / 100.
6. The method for intelligently pushing the approximate object based on the video image as claimed in claim 5, wherein the content of the comprehensive personnel-approximation calculation model is as follows:
defining a characteristic item;
defining the score of each characteristic item and setting the corresponding scores of the characteristic items under different measurement indexes;
defining the weight occupied by the score of each characteristic item;
and defining the similarity of the human face.
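The four definitions above (feature items, per-index scores, weights, face similarity) might be collected into a configuration like the following hypothetical sketch; the feature names, index levels, scores, and weights are invented for illustration and do not appear in the patent:

```python
# Hypothetical model configuration: each feature item has per-index scores
# and a weight; the actual items and values would be chosen by the operator.
MODEL_CONFIG = {
    "activity_track_overlap": {"scores": {"high": 90, "medium": 60, "low": 30}, "weight": 0.4},
    "frequent_stay_place":    {"scores": {"match": 80, "partial": 40, "none": 0}, "weight": 0.3},
    "social_security_record": {"scores": {"match": 70, "none": 0},               "weight": 0.3},
}


def score_person(index_levels, face_similarity, config=MODEL_CONFIG):
    """Apply the claim-5 formula: weighted feature scores, scaled by
    the face similarity (0-100) from the portrait comparison."""
    total = sum(
        item["scores"][index_levels[name]] * item["weight"]
        for name, item in config.items()
    )
    return total * face_similarity / 100
```

A person with a high track overlap, a matching stay place, no social-security match, and face similarity 100 would score 90×0.4 + 80×0.3 + 0×0.3 = 60.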
7. The method for intelligently pushing approximate objects based on video images as claimed in claim 2, wherein the step S3 comprises the following steps:
S31, running the specific associated-feature discovery model to identify the factors strongly related to traditional relevant scenes;
S32, acquiring the data of the similar persons to be calculated;
S33, calculating the similar persons in batches: the identity numbers of all persons are substituted into the comprehensive personnel-approximation calculation model, which operates on the data set formed in step S1 and evaluates the related features of each person item by item.
8. The method for pushing approximate objects intelligently based on video images as claimed in claim 7, wherein said step S32 includes the following steps:
S321, acquiring, from multiple portrait-comparison systems, the portrait data compared against a specific portrait;
and S322, merging and de-duplicating the portrait data acquired from the multiple sources to form a master list of identity information of the similar persons, thereby obtaining the identity information of the similar persons.
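The merge-and-deduplicate step of S321–S322 might look like the following sketch, assuming each source yields (identity number, similarity) pairs; keeping the highest similarity for a duplicate is a design choice not specified by the patent:

```python
def merge_candidates(*sources):
    """Merge portrait-comparison hits from several systems and
    de-duplicate by identity number, keeping the highest similarity
    seen for each person."""
    best = {}
    for source in sources:
        for person_id, similarity in source:
            if person_id not in best or similarity > best[person_id]:
                best[person_id] = similarity
    return best
```

For example, merging `[("a", 90), ("b", 80)]` from one system with `[("a", 95)]` from another yields one entry per identity, with person "a" retained at similarity 95.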
9. The method for pushing approximate objects intelligently based on video images as claimed in claim 2, wherein said step S4 comprises the following steps:
S41, data verification: the relevant background and trajectory data of the similar persons are verified manually;
S42, photo verification: retrieving the face-capture tracks of the similar persons' daily activities through their household-registration photos, extracting the related photos, and manually verifying whether the appearance features in the photos match the scene person;
and S43, finally determining the related persons of the scene.
CN202111297767.1A 2021-11-04 2021-11-04 Method for intelligently pushing approximate object based on video image Pending CN114491156A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111297767.1A CN114491156A (en) 2021-11-04 2021-11-04 Method for intelligently pushing approximate object based on video image


Publications (1)

Publication Number Publication Date
CN114491156A true CN114491156A (en) 2022-05-13

Family

ID=81492843


Country Status (1)

Country Link
CN (1) CN114491156A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115687460A (en) * 2023-01-04 2023-02-03 北京码牛科技股份有限公司 Method and system for mining associated object of key crowd by using trajectory data


Similar Documents

Publication Publication Date Title
CN105389562B (en) A kind of double optimization method of the monitor video pedestrian weight recognition result of space-time restriction
CN103324677B (en) Hierarchical fast image global positioning system (GPS) position estimation method
CN106469181B (en) User behavior pattern analysis method and device
CN107145862A (en) A kind of multiple features matching multi-object tracking method based on Hough forest
CN103473786A (en) Gray level image segmentation method based on multi-objective fuzzy clustering
CN111126122B (en) Face recognition algorithm evaluation method and device
CN108764943B (en) Suspicious user monitoring and analyzing method based on fund transaction network
CN112464843A (en) Accurate passenger flow statistical system, method and device based on human face human shape
CN115309998B (en) Employment recommendation method and system based on big data
CN111209446A (en) Method and device for presenting personnel retrieval information and electronic equipment
CN101950448B (en) Detection method and system for masquerade and peep behaviors before ATM (Automatic Teller Machine)
CN111382948A (en) Method and device for quantitatively evaluating enterprise development potential
CN111291596A (en) Early warning method and device based on face recognition
CN112016618A (en) Measurement method for generalization capability of image semantic segmentation model
CN114491156A (en) Method for intelligently pushing approximate object based on video image
CN103971100A (en) Video-based camouflage and peeping behavior detection method for automated teller machine
CN114707685A (en) Event prediction method and device based on big data modeling analysis
CN111753642B (en) Method and device for determining key frame
Maglietta et al. The promise of machine learning in the Risso’s dolphin Grampus griseus photo-identification
Zhang et al. A Multiple Instance Learning and Relevance Feedback Framework for Retrieving Abnormal Incidents in Surveillance Videos.
CN115687460B (en) Method and system for mining associated object of key crowd by using trajectory data
CN111415081A (en) Enterprise data processing method and device
CN108460630B (en) Method and device for carrying out classification analysis based on user data
Valldor et al. Firearm detection in social media images
CN114999644A (en) Building personnel epidemic situation prevention and control visual management system and management method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination