CN117635402B - Intelligent streaming system, intelligent streaming method, intelligent streaming computer device and intelligent streaming storage medium - Google Patents

Intelligent streaming system, intelligent streaming method, intelligent streaming computer device and intelligent streaming storage medium

Info

Publication number
CN117635402B
CN117635402B (application CN202410105994.7A)
Authority
CN
China
Prior art keywords
information
text
database
track
close
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410105994.7A
Other languages
Chinese (zh)
Other versions
CN117635402A (en)
Inventor
李盈
郎轶群
邱韬
李雯丽
王俞涵
谭真
肖卫东
唐彬
张宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202410105994.7A priority Critical patent/CN117635402B/en
Publication of CN117635402A publication Critical patent/CN117635402A/en
Application granted granted Critical
Publication of CN117635402B publication Critical patent/CN117635402B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The present application relates to the field of computer technologies, and in particular, to an intelligent streaming system, method, computer device, and storage medium. In the system, a parameter acquisition unit acquires the name, the identity card number and the self-describing text of a person to be investigated; the auxiliary information extraction unit extracts attribute information from the multi-source information database according to the name and the identity card number, and extracts auxiliary information from the multi-source information database according to the attribute information; the pedestrian re-identification unit extracts travel track information from the video database and obtains a track text from the travel track information and the auxiliary information; the data retrieval unit outputs corresponding retrieval sentences according to the self-describing text and the track text respectively, and outputs a first close-contact person list and a second close-contact person list according to the retrieval sentences; and the pedestrian re-identification unit extracts the surrounding crowd information of the person to be investigated from the video database to obtain a third close-contact person list. By adopting the system, the retrieval precision and the retrieval efficiency for close-contact personnel can be improved.

Description

Intelligent streaming system, intelligent streaming method, intelligent streaming computer device and intelligent streaming storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an intelligent streaming system, method, computer device, and storage medium.
Background
Traditional flow regulation (epidemiological investigation) methods rely on manual collection, arrangement and analysis of data, and are time-consuming and error-prone. During an epidemic, the infectious-disease situation changes rapidly: a flow regulator must process a large amount of flow regulation information, screen close contacts and secondary close contacts, and write flow regulation reports; the report for each confirmed patient takes about 6 hours, the process involves a high degree of manual participation, and it consumes a large amount of financial and material resources.
Existing infectious-disease flow regulation systems generally locate personal movement tracks through mobile-phone base stations; staff then search the travel-code and place-code databases against the confirmed patient's manually reported itinerary, trace other users who passed through the epidemic area, and repeat these steps to screen people one by one. Such systems have the following drawbacks: (1) the position relationships between people indoors, in buildings, in subways and the like cannot be calculated accurately; (2) an infected individual (confirmed patient) may be unable to accurately recall the other people they passed by or stayed near; (3) the system cannot verify whether the confirmed patient's description conceals or omits information; (4) the flow regulator can only determine close contacts by contacting the confirmed patient to learn the situation and then contacting each suspected person, which is time-consuming, labor-intensive and inefficient; (5) after the itinerary is obtained, the flow regulator must manually search the databases for every position and place, so the manual workload is high and efficiency is low, and as the workload grows, errors, omissions and delays become unavoidable; (6) manual retrieval and analysis of multi-source information is extremely difficult, and mobilizing, analyzing and aggregating it consumes a great deal of manpower, time, and financial and material resources.
Infectious-disease prevention and control systems developed by commercial companies can also screen close contacts, but they are not widely applied: they are commercial products whose internal screening principles are not disclosed, and they are constrained by factors such as region, prevention-and-control requirements and available data resources, so many of them run into problems in actual deployment and application. The biggest problem is that different deployment scenarios provide different data resources and different rights to access information, so many system functions cannot be realized; this also makes deployment difficult, because each deployment must be specially adjusted to its scenario, which greatly increases the difficulty. Secondly, because prevention-and-control departments differ in their requirements and staff capabilities, they cannot all use the same system, or even systems from the same company, and barriers to data interaction and information circulation between different systems create information barriers between departments, greatly reducing the efficiency of infectious-disease prevention and control.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an intelligent streaming system, method, computer device and storage medium.
An intelligent flow regulating system comprises a parameter acquisition unit, a data retrieval unit, an auxiliary information extraction unit, a pedestrian re-identification unit and a data center; the data center comprises a multi-source information database corresponding to the activity records of the personnel to be surveyed on each activity platform and a video database corresponding to the monitoring data of the personnel to be surveyed in each activity area;
The parameter acquisition unit acquires the name, the identity card number and the self-description text of the personnel to be investigated, sends the name and the identity card number to the auxiliary information extraction unit, and sends the self-description text to the data retrieval unit;
the auxiliary information extraction unit comprises an attribute extraction module and an auxiliary information extraction module, wherein the attribute extraction module extracts attribute information from the multi-source information database according to the name and the identity card number of a person to be investigated, and the auxiliary information extraction module extracts auxiliary information from the multi-source information database according to the attribute information and sends the auxiliary information to the pedestrian re-identification unit; the auxiliary information comprises activity position information of the personnel to be investigated in each time period;
The pedestrian re-identification unit extracts travel track information of the personnel to be investigated from the video database, obtains track text according to the travel track information and the auxiliary information, and sends the track text to the data retrieval unit;
The data retrieval unit comprises an SQL analysis module and a retrieved database, wherein the SQL analysis module receives the self-describing text and the track text respectively and outputs a corresponding first retrieval sentence and second retrieval sentence, and the retrieved database performs first-layer close-contact person screening and second-layer close-contact person screening according to the input first and second retrieval sentences and outputs the first-layer and second-layer close-contact person screening results; the retrieved database comprises a travel code database and a place code database;
the pedestrian re-identification unit is further used for extracting surrounding crowd images of people to be investigated from the video database, and calling face data in the face database to confirm surrounding crowd information corresponding to the surrounding crowd images, so that a third-layer close-contact person screening result is obtained.
In one embodiment, the person to be investigated comprises a diagnosed patient and the close contacts of the diagnosed patient; the close contacts are obtained from the first-layer, second-layer and third-layer close-contact screening results corresponding to the diagnosed patient; when the person to be investigated is a close contact of the diagnosed patient, the system outputs the diagnosed patient's secondary close contacts.
In one embodiment, the system further comprises an output unit; the output unit is connected to the pedestrian re-identification unit and the data retrieval unit respectively, and is used for automatically generating a flow regulation report and marking risk areas according to the track text of the diagnosed patient, and for updating the health codes of the diagnosed patient's close contacts and secondary close contacts.
In one embodiment, the data center is further configured to store the self-describing text and the track text of each person to be surveyed.
In one embodiment, the SQL parsing module receiving the self-describing text and the track text and outputting the corresponding first search sentence and second search sentence includes: preprocessing the input text, and performing NL2SQL processing on the preprocessed input text to obtain the corresponding search sentence; the input text includes the self-describing text and the track text; the preprocessing comprises the following steps: creating a hash table from the historical self-describing texts and historical track texts stored in the data center; converting the input text into a corresponding hash value with a hash algorithm, using a hash function to produce a fixed-length hash value; and performing data deduplication by comparing this hash value with the hash values stored in the hash table.
In one embodiment, extracting travel track information of a person under investigation from the video database includes: acquiring image information of the person to be investigated and video frames containing pedestrian targets from the video database; preprocessing the video frames containing pedestrian targets, performing pedestrian re-identification on the preprocessed frames according to the image information to obtain the position information of the person to be investigated, and generating travel track information from the position information and the timestamp of the current video frame. The step of acquiring images containing pedestrian targets comprises: detecting pedestrian targets in the monitoring data and clipping the video segments that contain them by recording timestamps; and down-sampling each video segment to obtain a plurality of video frames containing pedestrian targets.
In one embodiment, preprocessing a video frame containing a pedestrian target includes: and carrying out alignment operation on continuous video frames containing pedestrian targets, fusing the aligned video frames, and carrying out image reconstruction on the fused video frames by adopting an interpolation method to obtain preprocessed video frames.
An intelligent streaming method, the method comprising:
acquiring the name, the identification card number and the self-describing text of a person to be investigated;
extracting attribute information from the multi-source information database according to the name and the identity card number of the personnel to be investigated, and extracting auxiliary information from the multi-source information database according to the attribute information; the auxiliary information comprises activity position information of the personnel to be investigated in each time period;
Extracting travel track information of a person to be investigated from the video database, and obtaining a track text according to the travel track information and the auxiliary information;
SQL processing is respectively carried out on the self-describing text and the track text, a corresponding first search sentence and a corresponding second search sentence are output, a first layer of close-contact person screening and a second layer of close-contact person screening are carried out according to the first search sentence and the second search sentence, and a first layer of close-contact person screening result and a second layer of close-contact person screening result are output; the retrieved database comprises a travel code database and a place code database;
And extracting surrounding crowd images of people to be surveyed from the video database, and calling face data in the face database to confirm surrounding crowd information corresponding to the surrounding crowd images so as to obtain a third-layer close-contact person screening result.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring the name, the identification card number and the self-describing text of a person to be investigated;
extracting attribute information from the multi-source information database according to the name and the identity card number of the personnel to be investigated, and extracting auxiliary information from the multi-source information database according to the attribute information; the auxiliary information comprises activity position information of the personnel to be investigated in each time period;
Extracting travel track information of a person to be investigated from the video database, and obtaining a track text according to the travel track information and the auxiliary information;
SQL processing is respectively carried out on the self-describing text and the track text, a corresponding first search sentence and a corresponding second search sentence are output, a first layer of close-contact person screening and a second layer of close-contact person screening are carried out according to the first search sentence and the second search sentence, and a first layer of close-contact person screening result and a second layer of close-contact person screening result are output; the retrieved database comprises a travel code database and a place code database;
And extracting surrounding crowd images of people to be surveyed from the video database, and calling face data in the face database to confirm surrounding crowd information corresponding to the surrounding crowd images so as to obtain a third-layer close-contact person screening result.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring the name, the identification card number and the self-describing text of a person to be investigated;
extracting attribute information from the multi-source information database according to the name and the identity card number of the personnel to be investigated, and extracting auxiliary information from the multi-source information database according to the attribute information; the auxiliary information comprises activity position information of the personnel to be investigated in each time period;
Extracting travel track information of a person to be investigated from the video database, and obtaining a track text according to the travel track information and the auxiliary information;
SQL processing is respectively carried out on the self-describing text and the track text, a corresponding first search sentence and a corresponding second search sentence are output, a first layer of close-contact person screening and a second layer of close-contact person screening are carried out according to the first search sentence and the second search sentence, and a first layer of close-contact person screening result and a second layer of close-contact person screening result are output; the retrieved database comprises a travel code database and a place code database;
And extracting surrounding crowd images of people to be surveyed from the video database, and calling face data in the face database to confirm surrounding crowd information corresponding to the surrounding crowd images so as to obtain a third-layer close-contact person screening result.
According to the intelligent streaming system, method, computer device and storage medium, close contacts are screened quickly and accurately from the information collected about the person to be investigated, which reduces the workload of infectious-disease flow regulation staff: tasks such as generating the confirmed patient's flow regulation report, screening close contacts, marking risk areas, updating health codes and publishing information can be completed while contacting only a small number of related persons. This greatly improves efficiency, accelerates information circulation and controls the spread of the disease to the greatest extent. Three layers of screening are realized by the data retrieval unit, the auxiliary information extraction unit and the pedestrian re-identification unit: the first layer screens according to the input self-describing text; the second layer screens with the more complete track text generated from the travel track and the auxiliary information; the third layer detects surrounding people with object detection; the close contacts of the person to be investigated are determined from the results of the three screenings, which improves screening precision and enables precise epidemic prevention. The embodiments of the invention can improve both the retrieval precision and the retrieval efficiency for close contacts.
Drawings
FIG. 1 is a schematic diagram of the composition and connection of an intelligent streaming system in one embodiment;
FIG. 2 is a schematic diagram of the architecture of an intelligent streaming system according to an embodiment;
FIG. 3 is a schematic diagram of three-layer screening in one embodiment;
FIG. 4 is a schematic diagram of a pedestrian re-identification algorithm in one embodiment;
FIG. 5 is a schematic diagram of a pedestrian re-identification detection process in one embodiment;
FIG. 6 is a flow diagram of domain migration in one embodiment;
FIG. 7 is a flow chart of an intelligent streaming method according to an embodiment;
fig. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in FIG. 1, there is provided an intelligent streaming system comprising:
The system comprises a parameter acquisition unit, a data retrieval unit, an auxiliary information extraction unit, a pedestrian re-identification unit and a data center; the data center comprises a multi-source information database corresponding to the activity records of the personnel to be surveyed on each activity platform and a video database corresponding to the monitoring data of the personnel to be surveyed in each activity area;
the parameter acquisition unit acquires the name, the identity card number and the self-description text of the personnel to be investigated, sends the name and the identity card number to the auxiliary information extraction unit, and sends the self-description text to the data retrieval unit;
the auxiliary information extraction unit comprises an attribute extraction module and an auxiliary information extraction module, wherein the attribute extraction module extracts attribute information from the multi-source information database according to the name and the identity card number of a person to be investigated, and the auxiliary information extraction module extracts auxiliary information from the multi-source information database according to the attribute information and sends the auxiliary information to the pedestrian re-identification unit; the auxiliary information comprises activity position information of the personnel to be investigated in each time period;
the pedestrian re-identification unit extracts travel track information of the personnel to be investigated from the video database, obtains track text according to the travel track information and the auxiliary information, and sends the track text to the data retrieval unit;
the data retrieval unit comprises an SQL analysis module and a retrieved database, wherein the SQL analysis module receives the self-describing text and the track text respectively and outputs a corresponding first retrieval sentence and second retrieval sentence, and the retrieved database performs first-layer close-contact person screening and second-layer close-contact person screening according to the input first and second retrieval sentences and outputs the first-layer and second-layer close-contact person screening results; the retrieved database comprises a travel code database and a place code database;
the pedestrian re-identification unit is also used for extracting surrounding crowd images of the person to be investigated from the video database and calling face data in the face database to confirm the surrounding crowd information corresponding to those images, thereby obtaining the third-layer close-contact person screening result.
In the intelligent flow regulating system, close contacts are screened quickly and accurately by acquiring the relevant information of the person to be investigated, which reduces the workload of infectious-disease flow regulation staff: tasks such as generating the confirmed patient's flow regulation report, screening close contacts, marking risk areas, updating health codes and publishing information can be completed while contacting only a small number of related persons. This greatly improves efficiency, accelerates information circulation and controls the spread of the disease to the greatest extent. Three layers of screening are realized by the data retrieval unit, the auxiliary information extraction unit and the pedestrian re-identification unit: the first layer screens according to the input self-describing text; the second layer screens with the more complete track text generated from the travel track and the auxiliary information; the third layer detects surrounding people with object detection; the close contacts of the person to be investigated are determined from the results of the three screenings, which improves screening precision and enables precise epidemic prevention. The embodiments of the invention can improve both the retrieval precision and the retrieval efficiency for close contacts.
The modules in the intelligent streaming system may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, the personnel to be investigated include the diagnosed patient and the close contacts of the diagnosed patient; the close contacts are obtained from the first-layer, second-layer and third-layer close-contact screening results corresponding to the diagnosed patient; when the person to be investigated is a close contact of the diagnosed patient, the system outputs the diagnosed patient's secondary close contacts. In this embodiment, the intelligent streaming system can quickly and accurately screen close contacts and secondary close contacts for infectious-disease flow regulation staff, and can also automatically generate flow regulation reports, mark risk areas and so on, saving financial and material resources, reducing manual involvement, and improving work efficiency and screening quality.
To maximize working efficiency and accelerate information exchange between departments, the invention constructs a unified system suitable for different departments and for superior and subordinate units. The system can adjust and deploy itself intelligently according to the specific application scenario and the available data resources and authorities, avoiding the one-by-one customized deployment that existing systems require.
In one embodiment, the system further comprises an output unit; the output unit is connected to the pedestrian re-identification unit and the data retrieval unit respectively, and is used for automatically generating a flow regulation report and marking risk areas according to the track text of the diagnosed patient, and for updating the health codes of the diagnosed patient's close contacts and secondary close contacts. The connection between the output unit and the other units is shown in fig. 1; the output unit in fig. 1 corresponds to the output module in fig. 2, and in the schematic diagram of the intelligent streaming system architecture shown in fig. 2 the dashed frame marks optional extension items. In this embodiment, the system retrieves the attribute information and image information of the diagnosed patient from the relevant databases according to the entered patient information, and uses the patient's social-platform information to call up accommodation information, train-ticket information, flight information and the like to assist in determining the track. There is also an interface to the banking system so that, where permissions allow, the location can be determined from consumption records to assist in completing the track.
Further, when the secondary close contacts are to be determined, the information of the close contacts is used as the input.
Furthermore, intelligent streaming is only one application of the system; by modifying modules according to the different data involved, the system can also serve related departments that need to generate information about searched persons. Specifically, the data center can replace and ingest different data at any time according to the usage scenario and authority, and the system can enable or drop designated modules for a specific scene. At deployment time only the modules actually used are selected according to the actual situation; the system intelligently combines the selected modules and builds and deploys them automatically, which greatly reduces the difficulty of deployment. Because modules are selected autonomously, the system can be deployed in a targeted way in different departments and scenarios, and all deployments can share information within their corresponding authorities; this greatly reduces the time consumed by cross-department data transfer in infectious-disease prevention and control, maximizes the efficiency of transmitting and publishing prevention-and-control information, and provides great convenience for prevention and control.
In one embodiment, as shown in FIG. 3, a three-layer screening schematic is provided. The following information is entered into the system: the confirmed patient's information (name, identification number) and the patient's self-described itinerary text. First, NL2SQL extracts keywords from the text and, combined with the schema of the retrieved databases, generates database search sentences; the travel-code and place-code databases are searched for related close-contact information to obtain the first close-contact list. Second, the personal track is completed: because the confirmed patient's self-description may contain omissions or false reports, a complete track chain of the confirmed patient is built from surveillance video based on pedestrian re-identification; this step mobilizes public surveillance video data from related institutions as well as private surveillance video from designated places and merchants, and auxiliary information such as consumption records and accommodation information can help determine the track. A track text is generated from the resulting track chain, the first step is repeated, and close contacts are searched again to obtain the second close-contact list. Third, according to the time points at which the confirmed patient appears in the video, the surrounding people are identified using the object-detection and face-detection techniques in pedestrian re-identification, combined with the data in the face database, to obtain the third close-contact list. After these three rounds of detection the system can identify and determine close contacts to the maximum extent. A flow regulation report is automatically generated from the confirmed patient's track and risk areas are marked.
In one embodiment, the data center is further configured to store the self-describing text and track text of each person to be surveyed. The data center contains data such as key patient information, self-describing texts, merchant records and surveillance video; the system uses these data to determine close contacts, and by deploying the data center to store them it can also help related departments that need to screen mobile persons to do so quickly and accurately.
In one embodiment, the SQL parsing module receiving the self-describing text and the track text respectively and outputting the corresponding first search statement and second search statement comprises: preprocessing the input text and performing NL2SQL processing on the preprocessed input text to obtain the corresponding search statement; the input text includes the self-describing text and the track text; the preprocessing comprises: creating a hash table from the historical self-describing texts and historical track texts stored in the data center; converting the input text into a corresponding hash value with a hash algorithm, using a hash function to produce a fixed-length hash value; and deduplicating the data by comparing this hash value with the hash values stored in the hash table.
Specifically, when the self-describing texts or track texts of a large number of people to be investigated are processed, data deduplication and missing-value handling ensure the quality and consistency of the data and reduce the chance of generating inaccurate queries. When processing a large number of confirmed patients' itinerary texts or track texts, duplicate records or information may exist, so deduplication is necessary. The specific steps are as follows (a short code sketch follows the steps):
S1. Create a hash table storing the mapping between all historical itinerary data and their hash values.
S2. Traverse the new text records one by one and convert each record into a unique hash value with a hash algorithm; the MD5 hash function converts an input of arbitrary length into a fixed-length (128-bit) output.
S3. Compare the computed hash value with the hash values already in the table: if the same hash value is found, duplicate data exists and the newly added text is not processed; if there is no duplicate hash value, the record is added to the hash table.
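The steps S1-S3 can be condensed into a short sketch. The following Python snippet is purely illustrative (the function names and seed texts are not from the patent) and assumes the historical itinerary texts are available as plain strings:

```python
import hashlib

def text_fingerprint(text: str) -> str:
    """Map a self-description or track text to a fixed-length 128-bit MD5 digest (S2)."""
    normalised = " ".join(text.split())  # collapse whitespace so trivially different copies collide
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

def deduplicate(new_texts, history_hashes):
    """Keep only texts whose fingerprints are not yet in the hash table (S3);
    history_hashes is the table built from the data center's historical texts (S1)."""
    fresh = []
    for text in new_texts:
        digest = text_fingerprint(text)
        if digest not in history_hashes:  # unseen record: remember it and keep it
            history_hashes.add(digest)
            fresh.append(text)
    return fresh

# Usage: seed the table from historical texts, then filter a new batch.
history = {text_fingerprint(t) for t in ["historical itinerary A", "historical itinerary B"]}
print(deduplicate(["historical itinerary A", "new track text"], history))  # only the new text survives
```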
When converting the patient's self-description into formatted database data, significant information may be missing, for example in the text "My name is Wang Di; I was confirmed positive on December 7; I traveled to A on business on April 21 and have been active in B until now." The main gaps in such a record are missing personal information and missing visited places. If personal information is missing, the record is returned to the acquisition end and the patient is asked to fill it in again; if visited-place information is missing, it is completed using the video flow-tracing data.
In this embodiment, NL2SQL (natural-language-to-SQL parsing) is adopted to address the low efficiency of manual retrieval. NL2SQL converts a user's natural-language sentences into executable SQL statements; it is of great significance for improving the way users interact with databases, reduces the cost of that interaction, lets ordinary users query databases directly, and saves users a great deal of time and effort. The NL2SQL technique is used together with the database system and related management software; in particular, NL2SQL based on the bridge-filling technique converts natural language into the database retrieval language. The bridge-filling SQL parsing model is a fusion of template-filling and sequence-generation techniques; since this model is prior art, it is not described further here.
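As a rough illustration of the kind of output the NL2SQL step produces (not the bridge-filling model itself, which is prior art and not reproduced here), the sketch below fills a fixed SQL template from a normalized track text; the table and column names (place_code_records, person_id, scan_time) are hypothetical:

```python
import re

# Hypothetical schema (not from the patent): place_code_records(person_id, place, scan_time)
SQL_TEMPLATE = (
    "SELECT DISTINCT person_id FROM place_code_records "
    "WHERE place = '{place}' AND scan_time BETWEEN '{start}' AND '{end}';"
)

# Crude stand-in for the learned NL2SQL model: pull (place, start, end) triples
# out of an already-normalised track text and fill the retrieval template.
SEGMENT = re.compile(r"(?P<place>[A-Za-z][\w ]*?) from (?P<start>[\d\- :]+?) to (?P<end>[\d\- :]+)")

def track_text_to_sql(track_text: str):
    return [SQL_TEMPLATE.format(**m.groupdict()) for m in SEGMENT.finditer(track_text)]

track = ("SuperMart from 2024-01-12 10:00 to 2024-01-12 10:40; "
         "Metro Line 2 from 2024-01-12 11:00 to 2024-01-12 11:25")
for statement in track_text_to_sql(track):
    print(statement)
```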
In one embodiment, extracting the trip trajectory information of a person under investigation from a video database comprises: acquiring image information of the person to be investigated and video frames containing pedestrian targets from the video database; preprocessing the video frames containing pedestrian targets, performing pedestrian re-identification on the preprocessed frames according to the image information to obtain the position information of the person to be investigated, and generating travel track information from the position information and the timestamp of the current video frame. The step of acquiring images containing pedestrian targets comprises: detecting pedestrian targets in the monitoring data and clipping the video segments that contain them by recording timestamps; and down-sampling each video segment to obtain a plurality of video frames containing pedestrian targets.
Specifically, the invention analyzes the surveillance video frame by frame with an OpenCV-based object-detection framework, detects pedestrian targets in each image using a sliding-window or region-based method, and finally clips the video segments containing pedestrians by recording timestamps. Depending on the specific requirements, filtering conditions such as the number of pedestrians can be set to decide which video clips to extract.
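A minimal sketch of this frame-by-frame scan is shown below; it uses OpenCV's built-in HOG people detector only to keep the example self-contained (the patent's detectors are YOLO, Faster R-CNN and Mask R-CNN), and the sampling stride is an assumption:

```python
import cv2

def extract_pedestrian_frames(video_path: str, sample_stride: int = 10):
    """Scan surveillance footage frame by frame, record the timestamp of each frame
    that contains a pedestrian, and keep every sample_stride-th such frame (down-sampling)."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(video_path)
    kept, hits = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        timestamp_ms = cap.get(cv2.CAP_PROP_POS_MSEC)  # timestamp of the current frame
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes) > 0:                  # frame contains at least one pedestrian
            if hits % sample_stride == 0:   # down-sample the pedestrian segment
                kept.append((timestamp_ms, frame, boxes))
            hits += 1
    cap.release()
    return kept
```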
The invention realizes different recognition functions through different object-detection algorithms: the YOLO algorithm provides fast detection, while the more accurate Faster R-CNN and Mask R-CNN provide more precise recognition for users; these algorithms generally use more complex network structures and multi-stage detection pipelines to obtain higher accuracy. Using the three algorithms together balances accuracy against speed and achieves real-time video processing.
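For the fast-detection branch, a hedged sketch of pedestrian detection with a publicly available YOLOv5 model loaded through torch.hub might look as follows; the model variant, confidence threshold and file name are illustrative only:

```python
import torch

# Load a small YOLOv5 model from the public Ultralytics hub (illustrative; the
# system could equally plug in Faster R-CNN or Mask R-CNN for higher accuracy).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_pedestrians(image_path: str, conf: float = 0.4):
    """Return (x1, y1, x2, y2, confidence) boxes for the COCO 'person' class only."""
    results = model(image_path)
    det = results.xyxy[0]                                   # columns: x1, y1, x2, y2, conf, class
    person = det[(det[:, 5] == 0) & (det[:, 4] >= conf)]    # COCO class 0 == person
    return person[:, :5].tolist()

print(detect_pedestrians("frame_000451.jpg"))
```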
In this embodiment, the pedestrian re-identification technique based on multi-source auxiliary information extends traditional pedestrian re-identification with an auxiliary information extraction module. This module extracts the patient's auxiliary information using knowledge-extraction techniques (relation extraction, supervised learning, etc.) and uses auxiliary information such as payment and ticket-purchase records to complete the patient's track chain, achieving a better screening effect. Concretely, more accurate position information is obtained from the auxiliary information extraction module using the user's multi-source information; public and private video data are combined; using face detection, a complete and accurate track chain is generated from the points where the user appears in the video and the multi-source position information of those points; the user's track text is generated automatically and returned to the input module of the intelligent streaming system.
As shown in the flow chart of the pedestrian re-identification algorithm based on multi-source auxiliary information in fig. 4, the algorithm uses YOLOv5 as the object-detection backbone and PyTorch as the deep-learning framework, and adopts deep-learning feature extraction: the pedestrian images in the input test set are processed by a deep convolutional neural network (DCNN) to extract feature vectors. Considering its relatively simple structure and its strong performance in comparable systems, ResNet-50 is chosen as the backbone of the pedestrian image feature extractor. After feature extraction, a metric-learning-based re-identification method is adopted: the network learns the distance between two pictures so that, under the loss function, the distance between pictures of the same pedestrian (positive sample pairs) is as small as possible and the distance between pictures of different pedestrians (negative sample pairs) is as large as possible. The effectiveness of the algorithm is verified on the Market-1501 dataset. For a confirmed patient whose trajectory has not yet been determined, the trajectory is generated in the pedestrian re-identification part by the following steps (a feature-extraction sketch is given after the steps).
S1. The rough range of activity is obtained from the patient's own account, and the subdivided activity area is determined from multi-source information such as mobile-phone positioning, consumption records and place codes.
S2. The surveillance video obtainable for the activity area is processed frame by frame with the object-detection technique and the results are imported into the gallery folder.
S3. Detection boxes are generated for the patient's photographs with the object-detection technique and placed into the query folder.
S4. The places and moments at which the patient appears in the area are read from the displayed retrieval results to generate the travel track.
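The feature-extraction and matching stage described above can be sketched as follows; this uses an off-the-shelf ImageNet-pretrained ResNet-50 as a stand-in for the metric-learning model the invention actually trains (torchvision 0.13+ API), so the similarity scores are only illustrative:

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# ResNet-50 with the classification head removed yields a 2048-d embedding per crop.
# The invention fine-tunes such a backbone with metric learning; this untrained
# stand-in only illustrates the query/gallery matching mechanics.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 128)),  # typical person-crop aspect ratio
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(image_path: str) -> torch.Tensor:
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(x), dim=1)  # L2-normalised feature vector

def match_score(query_image: str, gallery_image: str) -> float:
    """Cosine similarity between a query crop and a gallery detection; after metric-learning
    training, positive pairs (same pedestrian) score high and negative pairs score low."""
    return float(embed(query_image) @ embed(gallery_image).T)
```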
Further, the detection stage uses non-maximum suppression (NMS); the invention uses DIoU as the NMS criterion, which minimizes the normalized distance between the two boxes and improves pedestrian detection in overlapping areas. The formula is:

DIoU = IoU - ρ²(b, b^gt) / c²

where b^gt is the ground-truth box, b is the prediction box, ρ(·) is the Euclidean distance between the center points of b and b^gt, and c is the diagonal length of the smallest box that can cover both boxes.
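A small, self-contained implementation of the DIoU criterion defined by the formula above (box coordinates assumed to be in (x1, y1, x2, y2) form) could look like this:

```python
def diou(box_a, box_b):
    """DIoU between two boxes: IoU minus the squared centre distance normalised
    by the squared diagonal of the smallest box enclosing both boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection over union
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter

    # Squared distance between the two box centres
    d2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    # Squared diagonal of the smallest box covering both boxes
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + (max(ay2, by2) - min(ay1, by1)) ** 2

    return inter / (union + 1e-9) - d2 / (c2 + 1e-9)

# In DIoU-NMS, a candidate box is suppressed when diou(candidate, best_box) exceeds the threshold.
print(diou((0, 0, 10, 10), (5, 5, 15, 15)))
```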
The pedestrian re-identification technique based on multi-source auxiliary information can capture the position of a confirmed patient indoors, in buildings, in subways and so on; when the confirmed patient's description conceals or omits information, it can restore the complete track to the greatest extent and generate a reliable flow regulation report; it can automatically generate the confirmed patient's track and track text, reducing a large amount of manual work; and it mobilizes the user's multi-source data (card-swipe records, surveillance, accommodation information, etc.) to assist in generating the track, making reasonable use of multi-source information to generate the track text.
In one embodiment, preprocessing a video frame containing a pedestrian target includes: aligning consecutive video frames containing pedestrian targets, fusing the aligned frames, and reconstructing the fused frame by interpolation to obtain the preprocessed video frame. In this embodiment, resolution is improved by multi-frame fusion, which exploits the correlation and redundancy between consecutive frames of a video and generates a higher-resolution image by aligning and fusing the frames. Specifically, consecutive frames are first aligned to eliminate the inter-frame offset caused by pedestrian motion; the alignment is implemented with SIFT feature-point matching. The aligned frames are then fused: a simple averaging fusion performs pixel-level averaging over the frames, which on the one hand reduces noise and artifacts and improves the signal-to-noise ratio, and on the other hand keeps processing fast because the algorithm is simple. After fusion, the fused image is reconstructed to the target high resolution by interpolation. In addition, hot-spot video clips of particular interest are further processed manually, including cropping and scaling of face images, and manual brightness adjustment, contrast enhancement, histogram equalization and the like, to enhance the visual quality of the face images more finely and improve the accuracy of close-contact recognition in the later steps.
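The alignment, fusion and interpolation pipeline described in this embodiment can be sketched with OpenCV as below; the SIFT matching parameters, the Lowe ratio and the upscale factor are assumptions, not values specified in the patent:

```python
import cv2
import numpy as np

def fuse_frames(frames, scale: int = 2):
    """Align consecutive frames to the first one with SIFT + homography, average them
    to suppress noise, then upscale the fused image by cubic interpolation."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher()
    ref = frames[0]
    kp_ref, des_ref = sift.detectAndCompute(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY), None)

    aligned = [ref.astype(np.float32)]
    for frame in frames[1:]:
        kp, des = sift.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
        matches = matcher.knnMatch(des, des_ref, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
        src = np.float32([kp[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        warped = cv2.warpPerspective(frame, H, (ref.shape[1], ref.shape[0]))
        aligned.append(warped.astype(np.float32))

    fused = np.mean(aligned, axis=0).astype(np.uint8)  # pixel-level averaging fusion
    h, w = fused.shape[:2]
    return cv2.resize(fused, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
```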
In one embodiment, as shown in the pedestrian re-identification detection process of FIG. 5, the invention trains different models on different datasets for different areas to improve the accuracy of searching for a single person. Before the system is extended to a new area, a dedicated dataset should be built from the surveillance ("sky-eye") cameras of that area for training the model. The dataset comprises training sets (train and val) and test sets (query and gallery): under the train folder there is one folder per identity class; val holds one picture selected from the training pictures for each class; query holds, for each camera, one picture per class selected from the test pictures; the remaining test pictures go into gallery. One class corresponds to one person, and the classes in the training set do not intersect those in the test set. The photograph naming convention follows the example 0002_c1s1_000451_03: 0002 means the picture belongs to the second class (one class holds photographs of the same person), c1 means the photograph comes from the first camera, s1 means it comes from the first non-continuous video segment of that camera (no such distinction is made for the self-made dataset), 000451 means it is the 451st frame of c1s1, and 03 means it is the 3rd detection box of that frame obtained with the YOLOv5 algorithm (manually annotated boxes are marked 00). Original video data are collected from the sky-eye surveillance and imported frame by frame as photographs, bounding boxes are generated by pedestrian detection, a portion of cross-camera data is extracted, named and classified according to the above rules as training data, the data are labeled, the model of the corresponding area is trained with the labeled training set, and the trained model is then put into use.
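For illustration, the naming convention can be decoded with a small parser; the regular expression below simply mirrors the 0002_c1s1_000451_03 example and is not part of the patented system:

```python
import re

NAME_RE = re.compile(r"(?P<pid>\d{4})_c(?P<cam>\d+)s(?P<seq>\d+)_(?P<frame>\d{6})_(?P<box>\d{2})")

def parse_sample(filename: str) -> dict:
    """Decode a Market-1501-style sample name: person id, camera, video segment,
    frame index, and detection-box index (00 means manually annotated)."""
    m = NAME_RE.match(filename)
    return {key: int(value) for key, value in m.groupdict().items()}

print(parse_sample("0002_c1s1_000451_03"))
# -> {'pid': 2, 'cam': 1, 'seq': 1, 'frame': 451, 'box': 3}
```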
In one embodiment, as shown in the domain-migration flow chart of FIG. 6, when the system is migrated to a different department, if the sky-eye data of the searched person's activity area differ greatly from the currently accessed data (for example, different monitoring devices yield pictures of very different quality), the new portion must be re-partitioned into a training set, the model retrained, and the result added to the original intelligent streaming system; usage is otherwise the same as for the original system.
In one embodiment, the system provides functions such as viewing personnel status, viewing personnel movement information, dividing and viewing risk areas, querying places and persons by key information, screening and displaying close contacts, entering self-describing text, and adding and deleting personnel information. On the main interface, clicking "view flow regulation" next to a person's name displays the person's sex, age, identification number, home address, self-describing information, historical track and so on; clicking "view close contacts" next to a positive person's information displays the close contacts; high-, medium- and low-risk areas are distinguished by markers. On the new-entry interface, the identification number, positive status, sex, age, mobile-phone number, home address, self-describing information and so on can be entered.
In one embodiment, as shown in fig. 7, an intelligent streaming method is provided, which includes the following steps:
Step 702, obtaining the name, the identification card number and the self-describing text of the personnel to be investigated.
And step 704, extracting attribute information from the multi-source information database according to the name and the identification card number of the personnel to be investigated, and extracting auxiliary information from the multi-source information database according to the attribute information.
The auxiliary information includes activity position information of the person to be investigated in each time period.
And step 706, extracting the travel track information of the personnel to be investigated from the video database, and obtaining the track text according to the travel track information and the auxiliary information.
Step 708, performing SQL processing on the self-describing text and the track text respectively, outputting a corresponding first search sentence and a corresponding second search sentence, performing first-layer close-contact person screening and second-layer close-contact person screening according to the first search sentence and the second search sentence, and outputting a first-layer close-contact person screening result and a second-layer close-contact person screening result.
The retrieved databases include a travel code database and a location code database.
And 710, extracting surrounding crowd images of people to be surveyed from the video database, and calling face data in the face database to confirm surrounding crowd information corresponding to the surrounding crowd images so as to obtain a third-layer close-contact person screening result.
It should be understood that, although the steps in the flowchart of fig. 7 are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 7 may include multiple sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, or the order in which the sub-steps or stages are performed is not necessarily sequential, but may be performed in rotation or alternatively with at least a portion of the sub-steps or stages of other steps or steps. For specific limitations of the intelligent streaming method, reference is made to the above limitation of the intelligent streaming system, and no further description is given here.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a smart streaming method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 8 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment a computer device is provided comprising a memory storing a computer program and a processor implementing the steps of the method of the above embodiments when the computer program is executed.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method of the above embodiments.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous link (SYNCHLINK) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of the application should be assessed as that of the appended claims.

Claims (9)

1. An intelligent flow regulating system, characterized by comprising a parameter acquisition unit, a data retrieval unit, an auxiliary information extraction unit, a pedestrian re-identification unit and a data center; the data center comprises a multi-source information database corresponding to the activity records of persons to be investigated on each activity platform, and a video database corresponding to the monitoring data of the persons to be investigated in each activity area;
The parameter acquisition unit acquires the name, the identity card number and the self-describing text of a person to be investigated, sends the name and the identity card number to the auxiliary information extraction unit, and sends the self-describing text to the data retrieval unit;
the auxiliary information extraction unit comprises an attribute extraction module and an auxiliary information extraction module, wherein the attribute extraction module extracts attribute information from the multi-source information database according to the name and the identity card number of a person to be investigated, and the auxiliary information extraction module extracts auxiliary information from the multi-source information database according to the attribute information and sends the auxiliary information to the pedestrian re-identification unit; the auxiliary information comprises activity position information of the personnel to be investigated in each time period;
The pedestrian re-identification unit extracts travel track information of the personnel to be investigated from the video database, obtains track text according to the travel track information and the auxiliary information, and sends the track text to the data retrieval unit;
The data retrieval unit comprises an SQL analysis module and a searched database; the SQL analysis module is used for receiving the self-describing text and the track text respectively and outputting a corresponding first retrieval sentence and second retrieval sentence, and the searched database is used for performing first-layer close-contact person screening and second-layer close-contact person screening according to the input first retrieval sentence and second retrieval sentence and outputting a first-layer close-contact person screening result and a second-layer close-contact person screening result; the searched database comprises a travel code database and a place code database;
The pedestrian re-identification unit is further used for extracting surrounding crowd images of the person to be investigated from the video database, and calling face data in a face database to confirm the surrounding crowd information corresponding to the surrounding crowd images, so as to obtain a third-layer close-contact person screening result;
The SQL analysis module receiving the self-describing text and the track text respectively and outputting the corresponding first retrieval sentence and second retrieval sentence comprises:
preprocessing an input text, and performing NL2SQL processing on the preprocessed input text to obtain the corresponding retrieval sentence; the input text includes the self-describing text and the track text; the preprocessing comprises:
creating a hash table according to the historical self-describing texts and historical track texts stored in the data center;
converting the input text into a corresponding hash value by using a hash algorithm, and converting the hash value to a fixed length by using a hash function;
performing data deduplication by comparing the hash value with the hash values stored in the hash table;
Obtaining the track text according to the travel track information and the auxiliary information comprises:
searching the relevant databases according to the input information of the person to be investigated to obtain corresponding attribute information and image information; retrieving accommodation information, ticket-purchase information, traffic information, code-scanning information and consumption information by using the social platform information of the person to be investigated; and, where permitted, determining positions through consumption records to assist in completing the track.
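For illustration only (not part of the claim wording), the following minimal Python sketch shows one way the hash-based deduplication and the subsequent NL2SQL step could be realized; all function names, the chosen hash length and the stubbed nl2sql() converter are assumptions rather than the patented implementation.

# Illustrative sketch: deduplicate input texts against a hash table of historical
# texts, then emit a retrieval sentence for each new text. Names are hypothetical.
import hashlib

def fixed_length_hash(text: str, length: int = 16) -> str:
    """Map the input text to a hash value, then truncate it to a fixed length."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()  # hash algorithm
    return digest[:length]                                     # fixed-length hash

def build_hash_table(historical_texts) -> set:
    """Create a hash table from historical self-describing texts and track texts."""
    return {fixed_length_hash(t) for t in historical_texts}

def nl2sql(text: str) -> str:
    """Placeholder NL2SQL step; a real system would use a trained NL2SQL parser."""
    return f"SELECT person_id FROM place_code_db WHERE description LIKE '%{text}%'"

def preprocess_and_parse(input_texts, hash_table: set):
    """Drop texts whose hash is already stored, then output retrieval sentences."""
    retrieval_sentences = []
    for text in input_texts:
        h = fixed_length_hash(text)
        if h in hash_table:        # duplicate of a previously stored text: skip
            continue
        hash_table.add(h)          # keep the hash table up to date
        retrieval_sentences.append(nl2sql(text))
    return retrieval_sentences

history = ["visited mall A on May 1", "took bus 12 on May 2"]
table = build_hash_table(history)
print(preprocess_and_parse(["visited mall A on May 1", "dined at restaurant B"], table))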
2. The system of claim 1, wherein the persons to be investigated comprise a diagnosed patient and contact persons of the diagnosed patient; the close-contact persons are obtained from the first-layer close-contact person screening result, the second-layer close-contact person screening result and the third-layer close-contact person screening result corresponding to the diagnosed patient;
when the person to be investigated is a close-contact person of a diagnosed patient, the system outputs the secondary close-contact persons of the diagnosed patient.
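As a purely illustrative aid (with assumed identifiers, not part of the claim), the sketch below shows how the three screening layers might be merged into a close-contact list, and how secondary close-contact persons could be derived by re-running the screening on a close contact.

# Illustrative sketch: union of the three screening layers, and derivation of
# secondary close-contacts. All identifiers and the screen() callback are hypothetical.
def merge_close_contacts(layer1, layer2, layer3):
    """Union of the first-, second- and third-layer screening results."""
    return sorted(set(layer1) | set(layer2) | set(layer3))

def secondary_contacts(close_contact_id, screen):
    """Contacts of a close contact are the diagnosed patient's secondary close-contacts."""
    return [p for p in screen(close_contact_id) if p != close_contact_id]

layer1 = ["P003", "P007"]          # travel-code / place-code retrieval
layer2 = ["P007", "P010"]          # trajectory-based retrieval
layer3 = ["P010", "P015"]          # surrounding-crowd face matching
print(merge_close_contacts(layer1, layer2, layer3))   # ['P003', 'P007', 'P010', 'P015']

# Re-running the same screening on a close contact yields secondary close-contacts
print(secondary_contacts("P007", lambda pid: ["P007", "P021", "P022"]))  # ['P021', 'P022']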
3. The system of claim 2, further comprising an output unit;
The output unit is respectively connected with the pedestrian re-identification unit and the data retrieval unit, and is used for automatically generating a flow regulation report and marking risk areas according to the track text of the diagnosed patient, and for updating the health codes of the close-contact persons and the secondary close-contact persons according to the close-contact persons and the secondary close-contact persons of the diagnosed patient.
4. The system of claim 1, wherein the data center is further configured to store the self-describing text and track text of each person to be investigated.
5. The system of claim 1, wherein extracting the travel track information of the person to be investigated from the video database comprises:
acquiring image information of the person to be investigated and video frames containing pedestrian targets in the video database;
preprocessing the video frames containing pedestrian targets, performing pedestrian re-identification on the preprocessed video frames according to the image information to obtain position information of the person to be investigated, and generating the travel track information according to the position information and the timestamps corresponding to the current video frames;
the step of acquiring images containing pedestrian targets comprises:
detecting corresponding pedestrian targets in the monitoring data, and intercepting video clips containing the pedestrian targets by recording timestamps;
performing downsampling processing on each video clip to obtain a plurality of video frames containing pedestrian targets.
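The following Python sketch is a simplified, illustrative rendering of this trajectory-extraction flow; the Frame structure, the stubbed detection flag and the reid_score() similarity are assumptions standing in for real pedestrian-detection and person re-identification models.

# Illustrative sketch: intercept frames with pedestrian targets (keeping timestamps),
# downsample, re-identify against the query image features, and build the track.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    timestamp: float             # seconds since the start of the recording
    camera_id: str               # identifies the monitored area
    has_pedestrian: bool         # stand-in for a pedestrian-detection result
    features: Tuple[float, ...]  # stand-in appearance features of the detected person

def intercept_clip(frames: List[Frame]) -> List[Frame]:
    # keep only frames in which a pedestrian target was detected
    return [f for f in frames if f.has_pedestrian]

def downsample(frames: List[Frame], step: int) -> List[Frame]:
    # keep every `step`-th frame of the clip
    return frames[::step]

def reid_score(query: Tuple[float, ...], gallery: Tuple[float, ...]) -> float:
    # stand-in similarity between query image features and a frame's features
    return -sum((a - b) ** 2 for a, b in zip(query, gallery))

def extract_trajectory(frames, query_features, threshold=-0.01, step=1):
    # position information = (timestamp, camera_id) pairs where the person was re-identified
    clip = downsample(intercept_clip(frames), step)
    track = [(f.timestamp, f.camera_id) for f in clip
             if reid_score(query_features, f.features) >= threshold]
    return sorted(track)

frames = [Frame(1.0, "cam-01", True, (0.10, 0.20)),
          Frame(2.0, "cam-01", False, (0.00, 0.00)),
          Frame(3.0, "cam-02", True, (0.11, 0.19))]
print(extract_trajectory(frames, query_features=(0.10, 0.20)))  # [(1.0, 'cam-01'), (3.0, 'cam-02')]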
6. The system of claim 5, wherein preprocessing the video frames containing pedestrian targets comprises:
performing an alignment operation on consecutive video frames containing pedestrian targets, fusing the aligned video frames, and performing image reconstruction on the fused video frames by interpolation to obtain the preprocessed video frames.
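The numpy sketch below is one illustrative interpretation of this preprocessing chain, assuming integer-shift alignment, mean fusion and simple separable linear interpolation; a deployed system would presumably use sub-pixel registration and a stronger reconstruction method, so every function here is an assumption, not the claimed implementation.

# Illustrative sketch: align consecutive frames to a reference, fuse by averaging,
# and reconstruct a higher-resolution frame by linear interpolation along each axis.
import numpy as np

def align(reference: np.ndarray, frame: np.ndarray, max_shift: int = 3) -> np.ndarray:
    """Shift `frame` by the integer offset that best matches `reference`."""
    best, best_err = frame, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(frame, (dy, dx), axis=(0, 1))
            err = np.mean((shifted - reference) ** 2)
            if err < best_err:
                best, best_err = shifted, err
    return best

def fuse(frames) -> np.ndarray:
    """Fuse the aligned frames by averaging."""
    return np.mean(np.stack(frames), axis=0)

def interpolate(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Reconstruct a larger frame by interpolating rows, then columns."""
    h, w = frame.shape
    rows = np.linspace(0, h - 1, h * scale)
    cols = np.linspace(0, w - 1, w * scale)
    tmp = np.array([np.interp(rows, np.arange(h), frame[:, c]) for c in range(w)]).T
    out = np.array([np.interp(cols, np.arange(w), tmp[r, :]) for r in range(h * scale)])
    return out

frames = [np.random.rand(8, 8) for _ in range(3)]
aligned = [frames[0]] + [align(frames[0], f) for f in frames[1:]]
preprocessed = interpolate(fuse(aligned))
print(preprocessed.shape)  # (16, 16)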
7. A smart streaming method implemented in the system according to any of claims 1-6, the method comprising:
acquiring the name, the identification card number and the self-describing text of a person to be investigated;
extracting attribute information from the multi-source information database according to the name and the identity card number of the personnel to be investigated, and extracting auxiliary information from the multi-source information database according to the attribute information; the auxiliary information comprises activity position information of the personnel to be investigated in each time period;
Extracting travel track information of a person to be investigated from the video database, and obtaining a track text according to the travel track information and the auxiliary information;
performing SQL parsing on the self-describing text and the track text respectively, outputting a corresponding first retrieval sentence and second retrieval sentence, performing first-layer close-contact person screening and second-layer close-contact person screening according to the first retrieval sentence and the second retrieval sentence, and outputting a first-layer close-contact person screening result and a second-layer close-contact person screening result; the searched database comprises a travel code database and a place code database;
extracting surrounding crowd images of the person to be investigated from the video database, and calling face data in a face database to confirm the surrounding crowd information corresponding to the surrounding crowd images, so as to obtain a third-layer close-contact person screening result;
Outputting the corresponding first retrieval sentence and second retrieval sentence comprises:
preprocessing an input text, and performing NL2SQL processing on the preprocessed input text to obtain the corresponding retrieval sentence; the input text includes the self-describing text and the track text; the preprocessing comprises:
creating a hash table according to the historical self-describing texts and historical track texts stored in the data center;
converting the input text into a corresponding hash value by using a hash algorithm, and converting the hash value to a fixed length by using a hash function;
performing data deduplication by comparing the hash value with the hash values stored in the hash table;
Obtaining the track text according to the travel track information and the auxiliary information comprises:
searching the relevant databases according to the input information of the person to be investigated to obtain corresponding attribute information and image information; retrieving accommodation information, ticket-purchase information, traffic information, code-scanning information and consumption information by using the social platform information of the person to be investigated; and, where permitted, determining positions through consumption records to assist in completing the track.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of claim 7 when executing the computer program.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of claim 7.
CN202410105994.7A 2024-01-25 2024-01-25 Intelligent streaming system, intelligent streaming method, intelligent streaming computer device and intelligent streaming storage medium Active CN117635402B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410105994.7A CN117635402B (en) 2024-01-25 2024-01-25 Intelligent streaming system, intelligent streaming method, intelligent streaming computer device and intelligent streaming storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410105994.7A CN117635402B (en) 2024-01-25 2024-01-25 Intelligent streaming system, intelligent streaming method, intelligent streaming computer device and intelligent streaming storage medium

Publications (2)

Publication Number Publication Date
CN117635402A CN117635402A (en) 2024-03-01
CN117635402B 2024-05-17

Family

ID=90020288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410105994.7A Active CN117635402B (en) 2024-01-25 2024-01-25 Intelligent streaming system, intelligent streaming method, intelligent streaming computer device and intelligent streaming storage medium

Country Status (1)

Country Link
CN (1) CN117635402B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740266A (en) * 2014-12-10 2016-07-06 国际商业机器公司 Data deduplication method and device
CN108229879A (en) * 2017-12-26 2018-06-29 拉扎斯网络科技(上海)有限公司 A kind of stroke duration predictor method, device and storage medium
CN110213651A (en) * 2019-05-28 2019-09-06 暨南大学 A kind of intelligent merit Computer Aided Analysis System and method based on security protection video
CN113963399A (en) * 2021-09-09 2022-01-21 武汉众智数字技术有限公司 Personnel trajectory retrieval method and device based on multi-algorithm fusion application
CN114420306A (en) * 2022-01-21 2022-04-29 腾讯烟台新工科研究院 Infectious disease close contact person tracking system and method
CN114610744A (en) * 2022-02-28 2022-06-10 三一集团有限公司 Data query method and device and computer readable storage medium
CN115064280A (en) * 2022-06-10 2022-09-16 山东健康医疗大数据有限公司 Data analysis method and system based on acute respiratory infectious disease infected population
CN116578746A (en) * 2023-05-19 2023-08-11 上海哔哩哔哩科技有限公司 Object de-duplication method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113935358A (en) * 2020-06-29 2022-01-14 中兴通讯股份有限公司 Pedestrian tracking method, equipment and storage medium

Also Published As

Publication number Publication date
CN117635402A (en) 2024-03-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant