CN112990254A - Fusion analysis method, system, equipment and medium based on multi-source heterogeneous data - Google Patents


Info

Publication number: CN112990254A
Authority: CN (China)
Prior art keywords: data, image, analysis, fusion, target
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number: CN202011494608.6A
Other languages: Chinese (zh)
Inventors: 曾智颖, 赵明明, 孙亚妮, 李凡平, 石柱国
Current assignee: Beijing Yisa Technology Co ltd; Qingdao Yisa Data Technology Co Ltd (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original assignee: Beijing Yisa Technology Co ltd; Qingdao Yisa Data Technology Co Ltd
Application filed by Beijing Yisa Technology Co ltd and Qingdao Yisa Data Technology Co Ltd
Priority to CN202011494608.6A
Publication of CN112990254A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06F16/784Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7847Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/787Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fusion analysis method based on multi-source heterogeneous data, comprising the following steps: acquiring video stream data, image data, basic personnel and vehicle data, and data collected by an electronic fence and a network fence; analyzing the video stream data, the image data and the basic personnel and vehicle data with a fusion recognition algorithm to obtain analysis results; performing big-data fusion analysis and real-time calculation on the analysis results, the image data, the basic personnel and vehicle data, and the fence data to obtain a real-time calculation result; storing the real-time calculation result; performing off-line calculation on historical data to obtain an off-line calculation result; and performing fusion analysis on the real-time and off-line calculation results to obtain the association relations among the data. The method fuses and analyzes multi-source heterogeneous data to uncover the associations among the data, interconnects all the data, and realizes the value of data fusion.

Description

Fusion analysis method, system, equipment and medium based on multi-source heterogeneous data
Technical Field
The invention relates to the field of software technology, and in particular to a fusion analysis method, system, device and medium based on multi-source heterogeneous data.
Background
At present, units and manufacturers in the public safety industry process and recognize information based on a single type of data. For example, a vehicle information analysis system based on video image data, or a face information analysis system based on face camera data, can only structure and analyze a single data source with a video image algorithm; it cannot cross-verify or corroborate its results with other data. The various types of data thus remain isolated islands, and the application value of data fusion cannot be realized.
Disclosure of Invention
To address the defects of the prior art, embodiments of the invention provide a fusion analysis method, system, device and medium based on multi-source heterogeneous data. They perform fusion analysis on the heterogeneous data to obtain the association relations among the data, interconnect all the data, and realize the value of data fusion.
In a first aspect, an embodiment of the present invention provides a fusion analysis method based on multi-source heterogeneous data, including the following steps:
acquiring video stream data, image data, personnel and vehicle basic data and data collected by an electronic fence and a network fence;
analyzing the video stream data, the image data and the personnel and vehicle basic data by adopting a fusion recognition algorithm to obtain an analysis result;
performing fusion analysis and real-time calculation under big data on the analysis result, the image data, the basic data of personnel and vehicles and the data acquired by the electronic fence and the network fence to obtain a real-time calculation result;
storing the real-time calculation result;
performing off-line calculation on the stored historical data to obtain an off-line calculation result;
and performing fusion analysis on the real-time calculation result and the off-line calculation result to obtain a data association relation.
In a second aspect, an embodiment of the present invention provides a fusion analysis system based on multi-source heterogeneous data, including: a data acquisition module, a fusion analysis module, a real-time calculation module, a big data storage module, an off-line calculation module and an analysis module, wherein
the data acquisition module is used for acquiring video stream data, image data, basic personnel and vehicle data, and data collected by an electronic fence and a network fence;
the fusion analysis module is used for respectively analyzing the video stream data, the image data and the basic personnel and vehicle data with a fusion recognition algorithm to obtain analysis results;
the real-time calculation module is used for performing big-data fusion analysis and real-time calculation on the analysis results, the image data, the basic personnel and vehicle data, and the fence data to obtain a real-time calculation result;
the big data storage module is used for storing the calculation results;
the off-line calculation module is used for performing off-line calculation on historical data to obtain an off-line calculation result;
the analysis module is used for performing fusion analysis on the real-time calculation result and the off-line calculation result to obtain the data association relations.
In a third aspect, an embodiment of the present invention provides an intelligent device including a processor, an input device, an output device and a memory, which are connected to each other. The memory is used to store a computer program comprising program instructions, and the processor is configured to call the program instructions to execute the method described in the foregoing embodiment.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to execute the method described in the above embodiments.
The invention has the following beneficial effects:
The fusion analysis method, system, device and medium based on multi-source heterogeneous data provided by the embodiments of the invention perform fusion analysis on multi-source heterogeneous data to obtain the association relations among the data, interconnect all the data, and realize the value of data fusion. By applying the fusion recognition algorithm to video stream data and image data, all targets in an image or video can be localized and recognized in a single pass; the model does not need to be invoked multiple times, recognition is fast, and the recognition process is fully parallelized, changing the recognition logic of traditional algorithms and outperforming comparable algorithms in the industry. Real-time and off-line calculation are combined with big data analysis algorithms, adapted and optimized for structured data derived from images and video, and backed by a large volume of accumulated data from security scenarios, so analysis is efficient and the results are reliable.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 is a flowchart illustrating a fusion analysis method based on multi-source heterogeneous data according to a first embodiment of the present invention;
fig. 2 is a block diagram illustrating a fusion analysis system based on multi-source heterogeneous data according to a second embodiment of the present invention;
fig. 3 shows a block diagram of an intelligent device according to a third embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
Fig. 1 is a flowchart of a fusion analysis method based on multi-source heterogeneous data according to a first embodiment of the present invention. The method is suitable for a fusion analysis system based on multi-source heterogeneous data and includes the following steps:
and S1, acquiring video stream data, image data, personnel and vehicle basic data and data collected by the electronic fence and the network fence.
Specifically, the system can simultaneously ingest data of multiple dimensions, such as electronic-police snapshot image data, checkpoint (bayonet) camera snapshot data, face camera snapshot data, video camera data, electronic fence data, network fence data, basic personnel information and basic vehicle data. The data collected by the electronic fence and the network fence are structured text data, including the position at which a mobile phone was captured, its IMSI/IMEI numbers and its MAC address. These fence data are cleaned before being ingested into the system.
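The fence-data cleaning step described above can be sketched as follows. This is a minimal illustration only, not part of the patent: the field names (`imsi`, `mac`, `lat`, `lon`, `ts`) and the validation rules are assumptions about what such a feed might contain.

```python
import re
from typing import Optional

# Assumed identifier formats; real fence feeds vary by vendor.
IMSI_RE = re.compile(r"^\d{14,15}$")                          # 14-15 decimal digits
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")  # aa:bb:cc:dd:ee:ff

def clean_fence_record(rec: dict) -> Optional[dict]:
    """Validate and normalize one electronic/network-fence capture record.

    Returns a normalized dict, or None when the record carries no usable
    device identifier and should be dropped before ingestion."""
    imsi = str(rec.get("imsi", "")).strip()
    mac = str(rec.get("mac", "")).strip().lower()
    has_imsi = bool(IMSI_RE.match(imsi))
    has_mac = bool(MAC_RE.match(mac))
    if not (has_imsi or has_mac):
        return None
    return {
        "imsi": imsi if has_imsi else None,
        "mac": mac if has_mac else None,
        "lat": float(rec["lat"]),   # capture position
        "lon": float(rec["lon"]),
        "ts": int(rec["ts"]),       # capture time, Unix seconds
    }
```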
And S2, analyzing the video stream data, the image data and the basic data of the personnel and the vehicles by adopting a fusion recognition algorithm to obtain an analysis result.
Specifically, the specific method for analyzing the video stream data by adopting the fusion recognition algorithm comprises the following steps:
decoding video stream data, tracking a target, and selecting an optimal frame to obtain a video image;
preprocessing a video image to obtain a preprocessed video image;
and performing target positioning, target identification and feature extraction on the preprocessed video image by adopting a deep convolutional neural network to respectively obtain a target frame, a target attribute and a feature value.
The specific method for analyzing the image data by adopting the fusion recognition algorithm comprises the following steps:
preprocessing image data to obtain a preprocessed image;
and performing target positioning, target identification and feature extraction on the preprocessed image by adopting a deep convolutional neural network to respectively obtain a target frame, a target attribute and a feature value.
Wherein the preprocessing comprises deviation correction, noise reduction and image transformation.
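The shared tail of the two parsing paths (preprocess, then a single network pass that localizes, recognizes and extracts features for every target) can be sketched structurally as below. This is only a schematic: the detector is a stub standing in for the deep convolutional neural network, and the type and function names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Target:
    box: Tuple[int, int, int, int]  # target frame: x, y, width, height
    attrs: dict                     # target attributes (class, color, ...)
    feature: List[float]            # feature value (embedding vector)

def preprocess(image):
    # Placeholder for deviation correction, noise reduction and
    # image transformation; returns the image unchanged here.
    return image

def analyze(image, detector: Callable) -> List[Target]:
    """One pass over the preprocessed image yields every target's
    frame, attributes and feature vector simultaneously."""
    return [Target(box=b, attrs=a, feature=f)
            for b, a, f in detector(preprocess(image))]
```

A real `detector` would be the deep CNN's forward pass; any callable returning `(box, attrs, feature)` tuples fits the interface.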
By applying the fusion recognition algorithm to video stream data and image data, all targets appearing in an image or video can be localized and recognized in a single pass; the model does not need to be invoked multiple times. The recognition process is fully parallelized, changing the recognition logic of traditional algorithms and outperforming comparable algorithms in the industry.
And S3, performing fusion analysis and real-time calculation under big data on the analysis result, the image data, the basic data of the personnel and the vehicles and the data acquired by the electronic fence and the network fence to obtain a real-time calculation result.
And S4, storing the real-time calculation result.
And S5, performing off-line calculation on the historical data to obtain an off-line calculation result.
And S6, performing fusion analysis on the real-time calculation result and the off-line calculation result to obtain a data association relation.
The real-time calculation uses Flink, and the off-line calculation uses Spark. The map, filter, group, window, foreach and reduce operations used in real-time and off-line calculation are all operators, i.e., data transformations. Real-time and off-line calculation are combined with big data analysis algorithms, adapted and optimized for structured data derived from images and video, and backed by a large volume of accumulated data from security scenarios, making analysis more efficient and the results more reliable.
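The operator vocabulary above maps onto Flink's and Spark's transformation APIs; the underlying data-transformation idea can be shown with plain Python on a toy event stream (the event shape here is invented for illustration, not taken from the patent):

```python
from functools import reduce

# Toy structured events: (camera_id, plate, speed_kmh)
events = [("c1", "A123", 62), ("c1", "B456", 45), ("c2", "A123", 80)]

# map: project each event to (plate, speed)
speeds = list(map(lambda e: (e[1], e[2]), events))
# filter: keep only speeding observations
fast = list(filter(lambda p: p[1] > 60, speeds))
# group (keyBy): bucket observations by plate
groups = {}
for plate, s in fast:
    groups.setdefault(plate, []).append(s)
# reduce: collapse each bucket to its maximum speed
top = {plate: reduce(max, ss) for plate, ss in groups.items()}
# window: a tumbling count-window of size 2 over the raw stream
windows = [events[i:i + 2] for i in range(0, len(events), 2)]
```

In Flink the same chain would run continuously over an unbounded stream; in Spark it would run as a batch job over stored history.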
For face camera or video camera stream data, the fusion algorithm supports ingestion over mainstream streaming media protocols such as RTSP/RTMP (real-time streaming protocols). The video is decoded, targets are tracked and optimal frames are selected; the fusion recognition algorithm then parses all vehicle, two-wheeled vehicle, tricycle, pedestrian and face sub-targets in the frames, after which the fused big data are calculated in real time and stored.
Checkpoint image data: snapshot pictures are taken from front-end checkpoints; the fusion recognition algorithm parses the vehicles, two-wheeled vehicles, tricycles, pedestrians and face sub-targets in each image, after which the fused big data are calculated in real time and stored.
Vehicle basic data and personnel basic data: the vehicle basic database and the personnel basic database store basic information such as vehicle registration information and personnel identity information, respectively. After these databases are connected to the message platform, features are extracted with the fusion recognition algorithm, and the fused big data are then calculated in real time and stored.
Electronic and network fences: the structured text data include the position of the mobile phone, its IMSI (international mobile subscriber identity) and IMEI (international mobile equipment identity) numbers, and its MAC (media access control) address. After cleaning, the data enter the message system, and the fused big data are then calculated in real time and stored.
Big data fusion analysis and calculation are performed on these data sources to mine various kinds of association data: vehicle-mobile phone relations, vehicle-face relations, human body/two-wheeled vehicle/tricycle identity determination, and face-mobile phone relations.
Vehicle-mobile phone relation: through fused big data analysis, the vehicle trajectory and the mobile phone trajectory are fitted to each other, yielding the mobile phones used by the people driving or riding in the vehicle.
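One simple way to "fit" the two trajectories is to score how often the phone is captured near the vehicle in both space and time. The sketch below is an assumed minimal implementation; the thresholds and point format are illustrative, not specified in the patent:

```python
def cooccurrence(vehicle_track, phone_track, max_dt=30, max_deg=0.001):
    """Fraction of vehicle track points with a phone capture nearby.

    Each point is (ts_seconds, lat, lon); max_deg ~ 100 m at mid-latitudes,
    max_dt in seconds. A score near 1.0 suggests the phone travels with
    the vehicle."""
    if not vehicle_track:
        return 0.0
    hits = 0
    for vt, vlat, vlon in vehicle_track:
        for pt, plat, plon in phone_track:
            if (abs(vt - pt) <= max_dt
                    and abs(vlat - plat) <= max_deg
                    and abs(vlon - plon) <= max_deg):
                hits += 1
                break  # one nearby capture is enough for this point
    return hits / len(vehicle_track)
```

Ranking candidate phones by this score across many sightings would surface the devices that consistently accompany a given vehicle.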
Vehicle-face relation: the fusion recognition algorithm analyzes vehicle-related information together with the driver and front passenger it contains. The corresponding face information can be retrieved from the vehicle information, and comparing the face features against the static face database reveals who drives or rides in the vehicle. Conversely, searching by person reveals which vehicles that person rides in or drives; comparing the vehicles a person drives against the static vehicle database shows whether the person drives his or her own vehicle or one registered to someone else, finally yielding clues about the drivers of a given vehicle.
Human body/two-wheeled vehicle/tricycle identity determination: by searching the blurry human-body features of a specific person, human-body targets that contain a clear face are found; comparing those face features against the static face database then identifies the person. The identities of two-wheeled vehicle and tricycle riders can be determined in the same way.
Face-mobile phone relation: by fitting the spatio-temporal trajectories of faces (pedestrian faces, two-wheeled vehicle faces, tricycle faces, and all targets in images and video) to mobile phone trajectories, the relation between a face and a mobile phone can be obtained; comparing the face features against the static face database then determines identity clues for the person carrying the mobile phone.
Full-dimensional data fusion based on these relations: using face feature search as a bridge, all faces are clustered algorithmically and the identities are associated with the static personnel database. Each cluster contains all previous spatio-temporal behaviors of the person, and mapping these spatio-temporal points onto pedestrians, two-wheeled vehicles, tricycles and vehicles reveals the person's travel mode at each point. Further fusing this with historically associated mobile phones allows a user profile to be built that fully links person, vehicle and mobile phone.
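The face clustering step can be illustrated with a greedy threshold clustering over feature vectors. The cosine threshold and the "first member as representative" rule are simplifying assumptions for illustration, not the patent's algorithm:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster_faces(features, threshold=0.9):
    """Greedily assign each face embedding to the first cluster whose
    representative (its first member) is similar enough; otherwise
    start a new cluster. Each cluster then holds one person's captures."""
    clusters = []
    for f in features:
        for c in clusters:
            if cosine(f, c[0]) >= threshold:
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters
```

Each resulting cluster would then be matched against the static personnel database once, giving every capture in the cluster the same identity.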
Spatio-temporal backtracking based on the relation data: for a criminal incident, the complete target and face features of the surrounding pedestrians, vehicles, two-wheeled vehicles and tricycles can be extracted and simultaneously associated with mobile phone devices. A feature set of specific persons is analyzed and compared against these features to obtain suspect vehicle information and identity information, narrowing the search scope. Meanwhile, combined with real-time checkpoint image and video stream analysis and fusion analysis, targets matching the suspect features trigger alarms, effectively providing clues for solving cases and improving efficiency.
The fusion analysis method based on multi-source heterogeneous data provided by this embodiment performs fusion analysis on multi-source heterogeneous data to obtain the association relations among the data, interconnects all the data, and realizes the value of data fusion. By applying the fusion recognition algorithm to video stream data and image data, all targets appearing in an image or video can be localized and recognized in a single pass without invoking the model multiple times; the recognition process is fully parallelized, changing the recognition logic of traditional algorithms and outperforming comparable algorithms in the industry. Real-time and off-line calculation are combined with big data analysis algorithms, adapted and optimized for structured data derived from images and video, and backed by a large volume of accumulated data from security scenarios, making analysis more efficient and the results more reliable.
In the first embodiment, a fusion analysis method based on multi-source heterogeneous data is provided, and correspondingly, a fusion analysis system based on multi-source heterogeneous data is also provided. Please refer to fig. 2, which is a block diagram illustrating a fusion analysis system based on multi-source heterogeneous data according to a second embodiment of the present invention. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
As shown in fig. 2, a block diagram of a fusion analysis system based on multi-source heterogeneous data according to a second embodiment of the present invention, the system includes: a data acquisition module, a fusion analysis module, a real-time calculation module, a big data storage module, an off-line calculation module and an analysis module. The data acquisition module acquires video stream data, image data, basic personnel and vehicle data, and data collected by an electronic fence and a network fence. The fusion analysis module respectively analyzes the video stream data, the image data and the basic personnel and vehicle data with a fusion recognition algorithm to obtain analysis results. The real-time calculation module performs big-data fusion analysis and real-time calculation on the analysis results, the image data, the basic personnel and vehicle data, and the fence data to obtain a real-time calculation result. The big data storage module stores the calculation results. The off-line calculation module performs off-line calculation on historical data to obtain an off-line calculation result. The analysis module performs fusion analysis on the real-time calculation result and the off-line calculation result to obtain the data association relations.
The fusion analysis module includes a video stream parsing unit and an image parsing unit. The video stream parsing unit decodes the video stream data, tracks targets and selects optimal frames to obtain video images; it preprocesses the video images and then applies a deep convolutional neural network for target localization, target recognition and feature extraction, yielding target frames, target attributes and feature values respectively. The image parsing unit preprocesses the image data and applies the same deep convolutional neural network processing to the preprocessed images. The preprocessing includes deviation correction, noise reduction and image transformation.
The fusion analysis system based on multi-source heterogeneous data performs fusion analysis on multi-source heterogeneous data, interconnects all of the data and realizes the value of data fusion. By applying the fusion recognition algorithm to video stream data and image data, all targets appearing in an image or video can be positioned and recognized in a single pass: no multiple model inputs are needed, target positioning and target recognition are completed at once, and the recognition process runs fully in parallel. This changes the recognition logic of traditional algorithms and is faster than comparable algorithms in the industry. Real-time calculation and offline calculation are combined with big data analysis algorithms, adapted and optimized for structured data derived from images and video, and backed by a large accumulation of data from security scenarios, so analysis is more efficient and the analysis results are more reliable.
As shown in fig. 3, a block diagram of an intelligent device provided in a third embodiment of the present invention is shown. The intelligent device includes a processor, an input device, an output device and a memory, which are connected to each other. The memory is used for storing a computer program comprising program instructions, and the processor is configured to call the program instructions to execute the method described in the above embodiments.
It should be understood that in the embodiments of the present invention, the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The input device may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device may include a display (LCD, etc.), a speaker, etc.
The memory may include both read-only memory and random access memory, and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In a specific implementation, the processor, the input device, and the output device described in the embodiments of the present invention may execute the implementation described in the method embodiments provided in the embodiments of the present invention, and may also execute the implementation described in the system embodiments in the embodiments of the present invention, which is not described herein again.
The invention also provides an embodiment of a computer-readable storage medium, in which a computer program is stored, which computer program comprises program instructions that, when executed by a processor, cause the processor to carry out the method described in the above embodiment.
The computer readable storage medium may be an internal storage unit of the terminal described in the foregoing embodiment, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention and shall be construed as falling within the scope of the claims.

Claims (10)

1. A fusion analysis method based on multi-source heterogeneous data is characterized by comprising the following steps:
acquiring video stream data, image data, personnel and vehicle basic data and data collected by an electronic fence and a network fence;
analyzing the video stream data, the image data and the personnel and vehicle basic data by adopting a fusion recognition algorithm to obtain an analysis result;
performing fusion analysis and real-time calculation under big data on the analysis result, the image data, the basic data of personnel and vehicles and the data acquired by the electronic fence and the network fence to obtain a real-time calculation result;
storing the real-time calculation result;
performing off-line calculation on the historical data to obtain an off-line calculation result;
and performing fusion analysis on the real-time calculation result and the off-line calculation result to obtain a data association relation.
2. The method of claim 1, wherein the specific method for parsing the video stream data by using the fusion recognition algorithm comprises:
decoding video stream data, tracking a target, and selecting an optimal frame to obtain a video image;
preprocessing a video image to obtain a preprocessed video image;
and performing target positioning, target identification and feature extraction on the preprocessed video image by adopting a deep convolutional neural network to respectively obtain a target frame, a target attribute and a feature value.
3. The method of claim 1, wherein the specific method for analyzing the image data by using the fusion recognition algorithm comprises:
preprocessing image data to obtain a preprocessed image;
and performing target positioning, target identification and feature extraction on the preprocessed image by adopting a deep convolutional neural network to respectively obtain a target frame, a target attribute and a feature value.
4. A method according to claim 2 or 3, wherein the pre-processing comprises deskewing, denoising and image transformation.
5. A fusion analysis system based on multi-source heterogeneous data is characterized by comprising: a data acquisition module, a fusion analysis module, a real-time calculation module, a big data storage module, an off-line calculation module and an analysis module, wherein,
the data acquisition module is used for acquiring video stream data, image data, personnel and vehicle basic data and data acquired by an electronic fence and a network fence;
the fusion analysis module is used for respectively analyzing the video stream data, the image data and the personnel and vehicle basic data by adopting a fusion identification algorithm to obtain analysis results;
the real-time computing module is used for performing fusion analysis and real-time computing on the analysis result, the image data, the basic data of the personnel and the vehicles and the data acquired by the electronic fence and the network fence under big data to obtain a real-time computing result;
the big data storage module is used for storing a calculation result;
the off-line calculation module is used for off-line calculation of the historical data to obtain an off-line calculation result;
the analysis module is used for performing fusion analysis on the real-time calculation result and the off-line calculation result to obtain a data association relation.
6. The system of claim 5, wherein the fusion analysis module comprises a video stream analysis unit, which decodes the video stream data, tracks a target and selects an optimal frame to obtain a video image, preprocesses the video image to obtain a preprocessed video image, and performs target positioning, target recognition and feature extraction on the preprocessed video image by using a deep convolutional neural network to obtain a target frame, a target attribute and a feature value respectively.
7. The system of claim 5, wherein the fusion analysis module comprises an image analysis unit, the image analysis unit preprocesses image data to obtain a preprocessed image, and the preprocessed image is subjected to target positioning, target recognition and feature extraction by using a deep convolutional neural network to obtain a target frame, a target attribute and a feature value respectively.
8. The system of claim 6 or 7, wherein the pre-processing includes deskewing, noise reduction, and image transformation.
9. An intelligent device comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, the memory being for storing a computer program, the computer program comprising program instructions, characterized in that the processor is configured to invoke the program instructions to perform the method according to any one of claims 1 to 4.
10. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any of claims 1-4.
CN202011494608.6A 2020-12-17 2020-12-17 Fusion analysis method, system, equipment and medium based on multi-source heterogeneous data Pending CN112990254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011494608.6A CN112990254A (en) 2020-12-17 2020-12-17 Fusion analysis method, system, equipment and medium based on multi-source heterogeneous data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011494608.6A CN112990254A (en) 2020-12-17 2020-12-17 Fusion analysis method, system, equipment and medium based on multi-source heterogeneous data

Publications (1)

Publication Number Publication Date
CN112990254A true CN112990254A (en) 2021-06-18

Family

ID=76345019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011494608.6A Pending CN112990254A (en) 2020-12-17 2020-12-17 Fusion analysis method, system, equipment and medium based on multi-source heterogeneous data

Country Status (1)

Country Link
CN (1) CN112990254A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642658A (en) * 2021-08-19 2021-11-12 大唐环境产业集团股份有限公司 Multi-source heterogeneous data feature extraction method and device for desulfurization system

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103676829A (en) * 2013-09-11 2014-03-26 无锡加视诚智能科技有限公司 An intelligent urban integrated management system based on videos and a method thereof
CN110232564A (en) * 2019-08-02 2019-09-13 南京擎盾信息科技有限公司 A traffic accident legal automatic adjudication method based on multi-modal data
CN110459027A (en) * 2019-08-15 2019-11-15 青岛文达通科技股份有限公司 A community safety protection method and system based on multi-source heterogeneous data fusion
CN110489395A (en) * 2019-07-27 2019-11-22 西南电子技术研究所(中国电子科技集团公司第十研究所) Method for automatically acquiring knowledge from multi-source heterogeneous data
CN111126324A (en) * 2019-12-25 2020-05-08 深圳力维智联技术有限公司 Method, device, product and medium for multi-source heterogeneous data fusion
CN111405475A (en) * 2020-03-12 2020-07-10 罗普特科技集团股份有限公司 Multidimensional sensing data collision fusion analysis method and device
CN111611589A (en) * 2020-05-19 2020-09-01 浙江华途信息安全技术股份有限公司 Data security platform, computer equipment and readable storage medium
CN111787189A (en) * 2020-07-17 2020-10-16 塔盾信息技术(上海)有限公司 Gridding automatic monitoring system for integration of augmented reality and geographic information
CN111859451A (en) * 2020-07-23 2020-10-30 北京尚隐科技有限公司 Processing system of multi-source multi-modal data and method applying same
CN111897875A (en) * 2020-07-31 2020-11-06 平安科技(深圳)有限公司 Fusion processing method and device for urban multi-source heterogeneous data and computer equipment
CN111915887A (en) * 2020-07-10 2020-11-10 广州运星科技有限公司 Integration and processing system and method based on multi-source heterogeneous traffic data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GAO Chuyang: "Design and Implementation of a Multi-View Situation Presentation System Combining Real-Time and Historical Data", China Masters' Theses Full-text Database, Information Science and Technology Series, 15 August 2019 (2019-08-15), pages 138 - 900 *


Similar Documents

Publication Publication Date Title
US11783589B2 (en) License plate detection and recognition system
CN109145742B (en) Pedestrian identification method and system
CN107944427B (en) Dynamic face recognition method and computer readable storage medium
CN112085952B (en) Method and device for monitoring vehicle data, computer equipment and storage medium
CN103699677B (en) A kind of criminal's whereabouts mapping system and method based on face recognition technology
CN101059838A (en) Human face recognition system and recognition method
CN110569720A (en) audio and video intelligent identification processing method based on audio and video processing system
CN105827976A (en) GPU (graphics processing unit)-based video acquisition and processing device and system
US11164028B2 (en) License plate detection system
CN108509912A (en) Multipath network video stream licence plate recognition method and system
CN112651293B (en) Video detection method for road illegal spreading event
CN114677607A (en) Real-time pedestrian counting method and device based on face recognition
CN113299073A (en) Method, device, equipment and storage medium for identifying illegal parking of vehicle
CN112328820A (en) Method, system, terminal and medium for searching vehicle image through face image
CN111062319B (en) Driver call detection method based on active infrared image
CN112990254A (en) Fusion analysis method, system, equipment and medium based on multi-source heterogeneous data
CN105740675A (en) Method and system for identifying and triggering authorization management on the basis of dynamic figure
CN110826449A (en) Non-motor vehicle re-identification target retrieval method based on light convolutional neural network
CN108198433B (en) Parking identification method and device and electronic equipment
KR101337554B1 (en) Apparatus for trace of wanted criminal and missing person using image recognition and method thereof
CN114140719A (en) AI traffic video analysis technology
CN112580531A (en) Method and system for identifying and detecting true and false license plates
CN106780599A A circle recognition method and system based on the Hough transform
Huang et al. PEFNet: Position enhancement faster network for object detection in roadside perception system
CN114283361A (en) Method and apparatus for determining status information, storage medium, and electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266400 No. 77, Lingyan Road, LINGSHANWEI sub district office, Huangdao District, Qingdao City, Shandong Province

Applicant after: Issa Technology Co.,Ltd.

Applicant after: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Address before: 266400 No. 77, Lingyan Road, LINGSHANWEI sub district office, Huangdao District, Qingdao City, Shandong Province

Applicant before: Qingdao Issa Technology Co.,Ltd.

Applicant before: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Address after: 266400 No. 77, Lingyan Road, LINGSHANWEI sub district office, Huangdao District, Qingdao City, Shandong Province

Applicant after: Qingdao Issa Technology Co.,Ltd.

Applicant after: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Address before: 100020 room 108, 1 / F, building 17, yard 6, Jingshun East Street, Chaoyang District, Beijing

Applicant before: BEIJING YISA TECHNOLOGY Co.,Ltd.

Applicant before: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.