CN111198909A - Data processing method and device, electronic equipment and storage medium - Google Patents

Data processing method and device, electronic equipment and storage medium

Info

Publication number: CN111198909A
Application number: CN201811367162.3A
Authority: CN (China)
Prior art keywords: data, type, real-time, objects
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 陈予郎
Current Assignee: Changxin Memory Technologies Inc (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Changxin Memory Technologies Inc

Application filed by Changxin Memory Technologies Inc
Priority to CN201811367162.3A
Publication of CN111198909A

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure relates to a data processing method and device, electronic equipment and a storage medium, in the technical field of big data, wherein the method comprises the following steps: acquiring first type data of all objects in a plurality of activity areas from a plurality of first data sources, and performing extraction processing on the first type data to obtain processed first type data; acquiring second type data of all objects in the plurality of activity areas from a second data source, and performing extraction processing on the second type data to obtain processed second type data; integrating the processed first type data and the processed second type data to obtain statistical data; and screening the statistical data to obtain real-time data of a target object, and sending the real-time data to a terminal for display. The method and the device can obtain the real-time data of the target object more accurately.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of big data technologies, and in particular, to a data processing method, a data processing apparatus, an electronic device, and a computer-readable storage medium.
Background
To ensure safety, more and more area managers need to identify the specific location of each person within their premises.
In the related art, the approximate position of each person is usually estimated from hardware management tools: for example, card-swiping gate machine data is used to confirm a person's real-time position; RFID (Radio Frequency Identification) tags are embedded in identification cards worn by personnel to acquire their real-time positions; video surveillance systems are used to identify the real-time locations of persons; and so forth.
However, in the above methods, the position range obtained from a card-swiping gate machine is too wide to give an accurate position; when a person does not wear the identification card, the person's real position cannot be obtained; and identity recognition cannot be performed in the blind spots of video monitoring, so the real-time position cannot be obtained there. If only one identification mode is used, the obtained personnel data is not comprehensive enough, and the specific position of each person cannot be accurately obtained.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a data processing method and apparatus, an electronic device, and a storage medium, thereby overcoming, at least to some extent, the problem that a real-time location of a target object cannot be accurately identified due to limitations and disadvantages of the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, there is provided a data processing method including: acquiring first type data of all objects in a plurality of activity areas from a plurality of first data sources, and performing extraction processing on the first type data to obtain processed first type data; acquiring second type data of all objects in the plurality of activity areas from a second data source, and performing extraction processing on the second type data to obtain processed second type data; integrating the processed first type data and the processed second type data to obtain statistical data; and screening the statistical data to obtain real-time data of a target object, and sending the real-time data to a terminal for display.
In an exemplary embodiment of the present disclosure, acquiring first type data of all objects in a plurality of activity areas from a plurality of first data sources, and performing extraction processing on the first type data to obtain processed first type data includes: acquiring associated data and an object number from a gate machine database, a meeting database, a wireless network database and a radio frequency identification database, and taking the associated data and the object number as the processed first type data, wherein the first type data is formatted data.
In an exemplary embodiment of the present disclosure, the method further comprises: the plurality of activity areas are divided based on gate machine distribution locations.
In an exemplary embodiment of the present disclosure, obtaining the second type data of all the objects in the plurality of activity areas from the second data source comprises: and acquiring second type data of all the objects in the plurality of activity areas from a monitoring database, wherein the second type data is unformatted data.
In an exemplary embodiment of the disclosure, the second type data includes image data, and performing extraction processing on the second type data to obtain the processed second type data includes: judging whether a moving object exists in the image data; if the moving object exists, carrying out face recognition on the image data of the moving object to determine the object number of the moving object; and after the object number is determined, determining the image features of the image data of the moving object through a deep learning model, and obtaining the processed second type data according to the image features and the object number.
In an exemplary embodiment of the present disclosure, determining whether a moving object exists in the image data includes: obtaining a discrimination index according to the moving pixel discrimination function and the moving object discrimination function; and if the judgment index meets a preset value, determining that the moving object exists.
In an exemplary embodiment of the disclosure, the filtering the statistical data to obtain the real-time data of the target object includes: determining a target activity area, and filtering the statistical data according to the target activity area to obtain the statistical data of all objects contained in the target activity area; and if the trigger operation of the statistical data acting on the target object is received, determining the real-time data of the target object.
In an exemplary embodiment of the disclosure, the filtering the statistical data to obtain the real-time data of the target object includes: screening the statistical data according to preset image characteristics to obtain statistical data of a plurality of candidate objects containing the preset image characteristics; and if receiving a trigger operation acting on one of the candidate objects, taking the candidate object corresponding to the trigger operation as a target object, and determining the real-time data of the target object.
In an exemplary embodiment of the present disclosure, screening the statistical data according to a preset image feature to obtain the statistical data corresponding to a plurality of candidate objects including the preset image feature includes: determining statistical data corresponding to the candidate objects containing the preset image feature in chronological order from earliest to latest.
In an exemplary embodiment of the present disclosure, the real-time data includes position data and path data, and determining the real-time data of the target object includes: determining position data of the target object arranged in a first order according to an arrangement order of the importance degrees of the data sources; and determining path data of the target object arranged in a second order according to an arrangement order of time.
In an exemplary embodiment of the present disclosure, the real-time data includes image data and image features, and sending the real-time data to a terminal for the terminal to display includes: sending the position data and the path data to the terminal so that the terminal displays the position data in the first order and displays the path data in the second order; and sending the image data and the image features of the target object to the terminal for display.
According to an aspect of the present disclosure, there is provided a data processing apparatus including: a first processing module, configured to acquire first type data of all objects in a plurality of activity areas from a plurality of first data sources, and perform extraction processing on the first type data to obtain processed first type data; a second processing module, configured to acquire second type data of all the objects in the plurality of activity areas from a second data source, and perform extraction processing on the second type data to obtain processed second type data; a data integration module, configured to integrate the processed first type data and the processed second type data to obtain statistical data; and a data acquisition module, configured to screen the statistical data to obtain real-time data of a target object and send the real-time data to a terminal for display.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any one of the data processing methods described above via execution of the executable instructions.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a data processing method as described in any one of the above.
In the data processing method, the data processing apparatus, the electronic device, and the computer-readable storage medium provided in the exemplary embodiments of the present disclosure, on the one hand, the statistical data is obtained by extracting and integrating the multiple types of data from the plurality of first data sources and the second data source, so that the statistical data is more comprehensive and accurate, which helps in the auxiliary determination of the target object's position. On the other hand, through the plurality of divided activity areas, the amount of data to be screened is reduced, lowering the difficulty of searching for the target object's position; the real-time data of the target object can thus be obtained quickly, improving position acquisition efficiency, and the real-time data can be obtained accurately rather than as an approximate position, improving position acquisition accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 schematically illustrates a data processing method in an exemplary embodiment of the present disclosure.
Fig. 2 schematically illustrates a first type of data extraction process according to an exemplary embodiment of the disclosure.
Fig. 3 schematically illustrates a second type of data extraction process according to an exemplary embodiment of the disclosure.
Fig. 4 schematically illustrates a schematic diagram of data integration in an exemplary embodiment of the present disclosure.
Fig. 5 schematically illustrates an interface for screening statistical data based on image features in an exemplary embodiment of the present disclosure.
Fig. 6 schematically shows an architecture diagram of data processing in an exemplary embodiment of the present disclosure.
Fig. 7 schematically illustrates a schematic diagram showing real-time data of a target object in an exemplary embodiment of the present disclosure.
Fig. 8 schematically shows a block diagram of a data processing apparatus in an exemplary embodiment of the present disclosure.
Fig. 9 schematically illustrates a block diagram of an electronic device in an exemplary embodiment of the disclosure.
Fig. 10 schematically illustrates a program product in an exemplary embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The embodiment of the present invention first provides a data processing method, which can be applied to application scenarios such as environmental security, fire escape, and security management in companies, schools, public places, or other areas, so as to view the specific real-time positions of personnel and obtain each person's real-time position in time. The data processing method is described in detail below with reference to fig. 1.
In step S110, first type data of all objects in a plurality of activity areas is obtained from a plurality of first data sources, and extraction processing is performed on the first type data to obtain processed first type data.
In the present exemplary embodiment, the data processing method is described by taking execution in a company as an example. A first data source refers to a hardware system capable of approximately obtaining a person's position, and may include, for example and without limitation, databases corresponding to the following data sources: a card-swiping gate machine database, a meeting database, a wireless network database, and a radio frequency identification database. After the first type data is obtained, extraction processing can be performed on it to obtain the processed first type data. Specifically, as shown in fig. 2, the extraction processing of the first type data can be performed by common ETL (Extract-Transform-Load) software and ETL systems. ETL is an important link in constructing a data warehouse: it mainly extracts the required data from a data source, cleans the data, and loads the data into the data warehouse according to a predefined data warehouse model.
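To make the ETL step concrete, the following is a minimal sketch of extracting, cleaning, and loading card-swipe records in Python; the table names, column names, and SQLite/pandas tooling are illustrative assumptions, not the patent's implementation:

```python
import sqlite3
import pandas as pd

# Minimal ETL sketch: extract raw card-swipe records, clean them, and load
# them into a warehouse table. All table/column names are hypothetical.

def etl_gate_records(source_db: str, warehouse_db: str) -> None:
    # Extract: pull raw swipe events from the gate machine database.
    with sqlite3.connect(source_db) as src:
        raw = pd.read_sql_query(
            "SELECT gate_no, swipe_time, direction, object_no FROM raw_swipes", src
        )

    # Transform: drop incomplete rows, normalize the timestamp format,
    # and map direction codes to 'enter'/'leave'.
    raw = raw.dropna(subset=["gate_no", "swipe_time", "object_no"])
    raw["swipe_time"] = pd.to_datetime(raw["swipe_time"])
    raw["direction"] = raw["direction"].map({0: "enter", 1: "leave"})

    # Load: append the cleaned rows to the warehouse table.
    with sqlite3.connect(warehouse_db) as dst:
        raw.to_sql("gate_swipes", dst, if_exists="append", index=False)
```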
The specific process of acquiring the first type data of all the objects in the plurality of activity areas from the plurality of first data sources and obtaining the processed first type data includes: acquiring associated data and an object number from a gate machine database, a meeting database, a wireless network database and a radio frequency identification database, wherein the associated data refers to the data corresponding to each data source, such as gate machine data, meeting data, wireless network data and radio frequency identification data. The following cases are specifically included. Case one: gate machine data and object numbers are obtained from the gate machine database. Referring to diagram A in fig. 2, gate machine data refers to the gate machine number, card swiping time, and direction (whether the object enters or leaves through the gate machine) recorded for each object passing a gate machine. The object number is used to indicate the identity of each object that swipes a card, and may be, for example, a job number in a company, a student number in a school, etc.
It should be noted that, in the process of acquiring gate machine data, a plurality of activity areas can be divided according to the distribution positions of the gate machines. Specifically, a card-swiping gate machine at an area entrance/exit defines a bounded area in which people can move freely, i.e., an activity area. For example, the entire premises may be divided into an area A, an area B, and an area C. Specifically, all floor plans of a building may be acquired first; next, the distribution positions of the card-swiping gate machines are checked and accurately marked on the building floor plans; further, based on the positions of the card-swiping gate machines, every freely movable area entered after swiping a card is divided off as an activity area; then, each activity area is named to obtain a formal name definition; finally, the tables [area number | area name], [gate machine number | gate machine name], and [gate machine number | area number] are defined to associate the obtained data by gate machine number and area number. After the first type data acquired from the gate machine database is subjected to extraction processing, the processed first type data can be represented by a table, which may include elements such as the gate machine number, card swiping time, direction (enter or leave), and object number.
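A minimal sketch of the three association tables and the join that resolves a card swipe to its activity area is shown below; the schema and sample rows are hypothetical:

```python
import sqlite3

# Hypothetical schema for the three association tables described above,
# plus the join that resolves a card swipe to its activity area.
DDL = """
CREATE TABLE area (area_no TEXT PRIMARY KEY, area_name TEXT);
CREATE TABLE gate (gate_no TEXT PRIMARY KEY, gate_name TEXT);
CREATE TABLE gate_area (gate_no TEXT, area_no TEXT);
CREATE TABLE gate_swipes (gate_no TEXT, swipe_time TEXT, object_no TEXT);
"""

QUERY = """
SELECT s.object_no, s.swipe_time, a.area_name
FROM gate_swipes AS s
JOIN gate_area AS ga ON ga.gate_no = s.gate_no
JOIN area AS a ON a.area_no = ga.area_no;
"""

with sqlite3.connect(":memory:") as conn:
    conn.executescript(DDL)
    conn.execute("INSERT INTO area VALUES ('A', 'Zone A')")
    conn.execute("INSERT INTO gate VALUES ('G3', 'North gate 3')")
    conn.execute("INSERT INTO gate_area VALUES ('G3', 'A')")
    conn.execute("INSERT INTO gate_swipes VALUES ('G3', '2018/09/18 08:30:01', 'E00534')")
    print(conn.execute(QUERY).fetchall())  # [('E00534', '2018/09/18 08:30:01', 'Zone A')]
```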
On the basis of the divided activity areas, the current activity area of every object within the management range can be determined, so that the distribution of objects across areas can be observed, the approximate activity space of each object can be known, and the object list of an activity area can be obtained once that area is specified.
Case two: meeting data and object numbers are acquired from the meeting database. Referring to diagram B in fig. 2, the meeting data may include a meeting start time, a meeting end time, a meeting name, etc., and the object number may be the job number of an object participating in a certain meeting. The processed first type data corresponding to the meeting database may also be represented by a table, which may include elements such as meeting start time, meeting end time, meeting name, and object number.
Case three: network data and object numbers are acquired from the wireless network database, which may be a WIFI database. Referring to diagram C in fig. 2, the network data may include connection start time, connection interruption time, WIFI identification name, and the like, and the object number may be the job number of an object connected to a certain WIFI network. The processed first type data corresponding to the wireless network database may also be represented by a table, which may include elements such as connection start time, connection interruption time, WIFI identification name, and object number.
Case four: identification data and object numbers are obtained from the radio frequency identification database, which may be an RFID (Radio Frequency Identification) database. RFID tags may be mobile or fixed; they are used to obtain a real-time location after a subject wears a tag or passes the location where a tag is placed. Referring to diagram D in fig. 2, the identification data may include detection time, RFID identification name, and the like, and the object number may be the job number of the object wearing the RFID tag. The processed first type data corresponding to the RFID database may also be represented by a table, which may include elements such as detection time, RFID identification name, and object number.
Referring to fig. 2, since the first type data is formatted data, when the first type data is obtained, extraction processing can be performed on it by common ETL software and ETL systems to obtain formatted data after ETL, with the formatted data corresponding to each data source represented by a table.
In step S120, second type data of all the objects in the plurality of activity areas is obtained from a second data source, and extraction processing is performed on the second type data to obtain processed second type data.
In this exemplary embodiment, the second data source may be, for example, a monitoring database corresponding to all the video monitors, that is, the cameras. The second type data may be unformatted data and may include image data, position data, and the like.
After the second type data is acquired, extraction processing can be performed on it to obtain the processed second type data. Referring to fig. 3, since the second type data is unformatted data, the extraction processing can be performed through deep learning algorithms, big data processing algorithms, and image processing methods, so that the unformatted second type data is converted into formatted second type data. The specific process may include the following steps. First, it is judged whether a moving object exists in the image data. A moving object is an object whose position changes, and detection can be realized by a moving object detection function. Whether a moving object exists can be determined by a discrimination index, which is expressed by formula (1) in terms of a moving pixel discrimination function and a moving object discrimination function. If the discrimination index satisfies a preset value, a moving object can be considered to exist; if it does not, no moving object is considered to exist. The preset value may be set to 1, or to other values according to actual needs, and is not particularly limited herein.
Equation (1), which appears only as an image in the original, defines the discrimination index α in terms of the moving pixel discrimination function and the moving object discrimination function; wherein α is the discrimination index, τ1 is the moving pixel discrimination function, τ2 is the moving object discrimination function, I(i, j) is the monitor imaging signal source, i is the frame number of the imaging image data, and N is the number of image pixels.

The moving pixel discrimination function τ1 is expressed by equation (2), which also appears only as an image in the original.

The moving object discrimination function τ2 is expressed by equation (3), likewise available only as an image in the original.
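Since equations (1)-(3) survive only as images, the sketch below shows one common frame-differencing reading of them: τ1 marks a pixel as moving when its inter-frame intensity change exceeds a pixel threshold, τ2 fires when the fraction of moving pixels among the N image pixels exceeds an area threshold, and the discrimination index α combines the two. The exact functional forms and thresholds here are assumptions, not the patent's formulas:

```python
import numpy as np

# Hedged sketch of the discrimination functions: the patent's equations (1)-(3)
# are images and not recoverable, so this frame-differencing form is an assumption.

def moving_object_index(prev: np.ndarray, curr: np.ndarray,
                        pixel_thresh: float = 25.0,
                        ratio_thresh: float = 0.02) -> int:
    """Return alpha = 1 if a moving object is judged to exist, else 0."""
    # tau1: per-pixel moving-pixel discrimination (1 where the inter-frame
    # intensity difference exceeds the pixel threshold).
    tau1 = np.abs(curr.astype(float) - prev.astype(float)) > pixel_thresh

    # tau2: moving-object discrimination (1 when the fraction of moving
    # pixels among the N image pixels exceeds the area ratio threshold).
    n_pixels = tau1.size
    tau2 = int(tau1.sum() / n_pixels > ratio_thresh)

    # Discrimination index alpha: compare against the preset value (1 in the
    # text) to decide whether a moving object exists.
    alpha = int(tau1.any()) * tau2
    return alpha
```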
and secondly, if the moving object exists, performing face recognition on the image data of the moving object to determine the object number of the moving object. On the basis of the first step, if a moving object is detected, face recognition may be performed on the detected moving object to determine an object number representing its identity. The image data may be, for example, a face image, a wearing article image, and the like, and specifically, the image data of the moving object may be compared with all objects stored in the database to determine an object number of the moving object. For example, when performing the face recognition function, the recognition data may be a photo on the object identification card (work card or other certificate), and the recognition result may include a video number, a video time, an object number of the moving object, and the like. If the mobile object is determined to be a company employee, the job number of the mobile object is used as an object number; if the mobile object is determined not to be a company employee, the object number is represented by NULL or a numerical value of a fixed number of bits, as long as the mobile object is distinguished from the job number. The result obtained in the second step can be represented by a table, and the table may specifically include elements such as a video recording number, a video recording time, and an object number of the mobile object.
Third, after the object number is determined, the image features of the moving object's image data are determined through a deep learning model, and the processed second type data is obtained from the image features and the object number. This step mainly identifies the articles worn by the moving object. The worn articles may include, for example, hats, bags, clothing, etc., each of which may be designated by a number, for example 001 for a hat, 002 for a bag, and so on. The deep learning algorithm may be, for example, a convolutional neural network, or another machine learning model used for classification, such as a classifier model. The image features may include, for example, type features and color features. Specifically, the image data is input into the convolutional neural network model, and the output is the number of a target article in the image data, thereby determining its category.
In addition, the colors of the target image data can be identified by the deep learning model, with different colors represented by different numbers; the numbers may be alphanumeric, such as A11 for blue, B22 for red, and so on. Specifically, the numbers representing different colors can be used as training labels to train a convolutional neural network model for identifying article colors; the target image data is then input into the trained model to obtain the probability that it belongs to each number, and the color corresponding to the highest-probability number is taken as the color of the target image data. For example, analyzing moving object 1 may yield image features composed of type and color features such as a gray hat, a blue jacket, a blue tie, and a green backpack. The result of the third step can be represented by a table, mainly comprising elements such as the video number, article number, and color number.
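A minimal sketch of such a worn-article classifier is given below in PyTorch; the architecture, input size, and class list are illustrative assumptions rather than the patent's model:

```python
import torch
import torch.nn as nn

# Illustrative worn-article classifier: maps an article crop to a class number
# such as 001 (hat) or 002 (bag). Architecture and classes are assumptions.
CLASSES = ["001_hat", "002_bag", "003_jacket", "004_tie"]

class ArticleClassifier(nn.Module):
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of 3x64x64 article crops; output: class logits.
        return self.head(self.features(x).flatten(1))

model = ArticleClassifier()
crop = torch.randn(1, 3, 64, 64)     # one hypothetical article crop
probs = model(crop).softmax(dim=1)   # per-class probabilities
print(CLASSES[int(probs.argmax())])  # highest-probability article number
```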
In this way, the image features and the object number can be used as the processed second type data, which can be stored in table form, for example with the recording time and object number as the primary key and the article and color as values.
Next, in step S130, the processed first type data and the processed second type data are integrated to obtain statistical data.
In the present exemplary embodiment, the data of all data sources may be integrated through a Data Integration Platform (DIP) to obtain the statistical data. That is, the statistical data includes the position data, image data, and the like of all objects in the plurality of activity areas, and all of it is formatted data.
Referring to fig. 4, the data integration platform may include a Hadoop data platform and a Greenplum distributed database. The Hadoop data platform is used to rapidly store large amounts of data, and the Greenplum distributed database interconnects with Hadoop, realizing a big data analysis platform that meets requirements such as fast storage, relational structure, SQL support, transaction support, fast computation, fast query, and permission control and management. After the processed first type data and the processed second type data are obtained, they can be stored on the data integration platform, which integrates them to obtain the statistical data. By integrating the data, comprehensive and accurate data covering all data sources can be obtained, which helps in the auxiliary judgment of an object's position; and by integrating the data of the first data sources and the second data source, no additional data acquisition system needs to be added, reducing hardware cost.
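The integration step can be pictured as a union of the processed record streams into one statistics table; the following pandas sketch uses hypothetical column names and stands in for the Hadoop/Greenplum platform described above:

```python
import pandas as pd

# Sketch of the integration step: union the processed first type data (gate,
# meeting, WIFI, RFID records) with the processed second type data (video
# records) into one statistics table. Column names are hypothetical.
gate = pd.DataFrame([
    {"object_no": "E00534", "time": "2018/09/12 16:17:16",
     "source": "gate", "area": "Zone A"},
])
video = pd.DataFrame([
    {"object_no": "E00534", "time": "2018/09/12 16:18:32",
     "source": "camera", "area": "Zone A"},
])

stats = pd.concat([gate, video], ignore_index=True)
stats["time"] = pd.to_datetime(stats["time"])
stats = stats.sort_values("time")  # one chronological record stream per object
print(stats)
```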
In step S140, the statistical data is filtered to obtain real-time data of the target object, and the real-time data is sent to a terminal so that the terminal can display the real-time data.
In the present exemplary embodiment, the target object may be any designated observation object in a certain activity area. Screening may be performed by activity area or by image features. First, screening by activity area is described as an example; the specific process includes: determining a target activity area, and filtering the statistical data according to the target activity area to obtain the statistical data of all objects contained in the target activity area; and if a trigger operation acting on the statistical data of the target object is received, determining the real-time data of the target object.
The user can designate the activity area to be filtered as the target activity area by inputting or clicking it on the terminal interface, for example taking area A as the target activity area. Further, the position data of all objects in the target activity area can be obtained from the statistical data; for example, all objects in area A can be listed and the number of people in area A determined, as shown in table 1.
| Activity area | Entry time | Job number | Name | First-level department | Second-level department |
| --- | --- | --- | --- | --- | --- |
| Zone A | 2018/09/18 08:30:01 | E00534 | Zhang San | IT | CIT |
| Zone A | | E00345 | Li Si | | |

TABLE 1
In the terminal interface, a block that can start a filter is attached to each activity area, and the data of all objects is obtained through a query instruction using the card-swiping gate machine data on the data platform; this can be represented as PD (Personal Data, the relevant column information of personnel). The real-time number of people in each activity area is counted from the target activity area field in the PD and presented after each activity area's name, so that the user can easily observe the distribution of people across activity areas. Taking the activity area field information in the PD as a reference, a floor plan of the relevant activity area building is presented on the right side of the interface. All field information in the PD is presented above the floor plans in a table list (e.g., table 1); the list shows about ten rows per page, page numbers are given below it, and the header columns provide a sorting function. A personnel information condition function is provided above the filter, so that a user can input keywords of a target object as fuzzy conditions; specifically, the PD can be filtered using 'LIKE' in an SQL instruction, removing the objects in the PD that do not meet the condition. When the user clicks any activity area to start the filter, the PD is filtered with the designated activity area as the target activity area, removing all persons in the PD who do not meet the condition.
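A minimal sketch of this keyword filter follows, combining the activity area restriction with an SQL LIKE fuzzy condition; the PD table and column names are hypothetical:

```python
import sqlite3

# Sketch of the keyword filter: fuzzy-match the personnel data (PD) with an
# SQL LIKE condition and restrict to the clicked target activity area.
# Table and column names are hypothetical.
def filter_pd(conn: sqlite3.Connection, area: str, keyword: str) -> list[tuple]:
    query = """
        SELECT area, entry_time, job_no, name
        FROM pd
        WHERE area = ? AND name LIKE ?
    """
    # '%keyword%' keeps any row whose name contains the keyword.
    return conn.execute(query, (area, f"%{keyword}%")).fetchall()
```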
Further, a certain object in the target activity area may be taken as the target object, which is determined by a trigger operation acting on its statistical data. The trigger operation may be, for example, a click or double-click operation. For example, if a click operation on the first row of table 1 is received, Zhang San is taken as the target object, and Zhang San's real-time data can be screened out from the statistical data.
Next, screening by image features is described as an example. The specific process includes: screening the statistical data according to preset image features to obtain statistical data of a plurality of candidate objects containing the preset image features; and if a trigger operation acting on one of the candidate objects is received, taking that candidate object as the target object and determining its real-time data. The preset image features may be one or more of the image features obtained by the deep learning model, for example a green backpack and a gray hat. For example, the statistical data is filtered by "green backpack" and "gray hat", obtaining the statistical data of the candidate objects that match or include a green backpack and a gray hat; with such input, the resulting candidates may include Zhang San, Li Si, and so on. It should be noted that, to facilitate viewing the candidate information, the statistical data corresponding to the candidates containing the preset image features may be determined in ascending or descending order of time. For example, if Zhang San's time is earlier than Li Si's, statistical data as shown in table 2 may be formed.
Table 2 (rendered as an image in the original and not reproduced here) lists the statistical data of the candidate objects matching the preset image features.
The image features of worn articles are obtained from the image data through the deep learning model, and screening is then performed according to the preset image features to obtain the real-time data of the target object, which makes it convenient for a user to track suspicious persons by their characteristics.
| Occurrence probability | Time difference | Time | Information source | Approximate position | Detailed position |
| --- | --- | --- | --- | --- | --- |
| Very high | 1 minute 32 seconds | 2018/09/12 16:18:32 | Monitoring camera | Building 4, south area | Directly above the outside of the restroom |
| High | 0 seconds | 2018/09/12 16:00:00 | Meeting notification | Building 4, middle area | Room 431 |
| Medium | 0 seconds | 2018/09/12 13:35:13 | WIFI transmitter | Building 4, south area | Middle upper part |
| Medium | | | Department seat location | Building 4, south area | BAM in the middle of the south area |

TABLE 3
After obtaining tables 1 and 2, if the target object is determined by clicking a row in table 1 or table 2, the real-time data of the target object is screened from the statistical data. The real-time data may include position data and path data; for the position data, refer to table 3, which includes six elements: probability, time difference, time, information source, approximate position, and detailed position. Probability is the empirically defined importance of each data source; time difference is the interval between the event occurrence time and the current time; time is the event occurrence time; information source is a brief description of the data source; approximate position is the rough position range provided by the information; and detailed position is the most detailed position the information provides.
It should be noted that table 3 presents only the most recent record for each information source. The position data of the target object arranged in the first order may be determined according to the arrangement order of data source importance; for example, the data sources are ordered by high, medium, and low importance levels.
At the same time, the path data of the target object can also be determined, as shown in table 4. The elements in table 4 may include time, information source, approximate position, detailed position, and the like. The path data of the target object arranged in the second order may be determined according to the arrangement order of time, specifically in ascending or descending chronological order.
| Time | Information source | Approximate position | Detailed position |
| --- | --- | --- | --- |
| 2018/09/12 16:17:16 | Card swiping record | Building 1, north area gate | 3rd gate machine from the left |
| 2018/09/12 16:17:20 | Monitoring camera | Building 1, north area | Directly above the gate machine |
| 2018/09/12 16:18:32 | Monitoring camera | Building 4, south area | Above the outside of the restroom |

TABLE 4
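The two orderings described above can be sketched as follows; the importance ranks assigned to each probability level are assumptions:

```python
# Sketch of the two orderings: position records sorted by the empirically
# assigned data source importance (first order), and path records sorted
# chronologically (second order). Importance ranks are assumptions.
IMPORTANCE = {"Very high": 0, "High": 1, "Medium": 2, "Low": 3}

position_rows = [
    {"probability": "High", "source": "Meeting notification", "time": "2018/09/12 16:00:00"},
    {"probability": "Very high", "source": "Monitoring camera", "time": "2018/09/12 16:18:32"},
    {"probability": "Medium", "source": "WIFI transmitter", "time": "2018/09/12 13:35:13"},
]
path_rows = [
    {"time": "2018/09/12 16:18:32", "source": "Monitoring camera"},
    {"time": "2018/09/12 16:17:16", "source": "Card swiping record"},
]

# First order: by data source importance, as in table 3.
position_rows.sort(key=lambda r: IMPORTANCE[r["probability"]])
# Second order: by time, ascending, as in table 4 (the fixed-width
# 'YYYY/MM/DD hh:mm:ss' strings sort correctly as text).
path_rows.sort(key=lambda r: r["time"])
```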
By integrating the data of the first data sources and the second data source through the data integration platform, the user can observe the position distribution of objects in each activity area, distinguish objects by activity area, and, after screening and filtering, quickly find and view the detailed position data of the target object.
The real-time data of tables 3 and 4 is obtained by screening the statistical data integrated by the integration platform, so more comprehensive position data and path data for the target object can be accurately determined, which helps in the auxiliary judgment of the target object's position and improves the accuracy and timeliness of position acquisition. Screening the statistical data also improves the efficiency of position data acquisition. In addition, integrating and analyzing the data generated by all existing systems in a company provides multiple pieces of evidence about real-time positions, helping users make accurate judgments and decisions about target object positions without increasing purchase or hardware costs.
In addition, the real-time data may include image data and image features of the target object. After the target object is determined, the position data, path data, image features, and other information of the target object are all sent to the terminal for display. The terminal may be a computer, a notebook, a mobile phone, or another terminal with a display function; a specific architecture for sending the real-time data to the terminal for display is shown in fig. 6. When displaying the position data, it is displayed in the determined first order, i.e., the table 3 corresponding to the target object can be shown on the terminal; when displaying the path data, it is displayed in the determined second order, i.e., the table 4 corresponding to the target object can be shown.
Referring to fig. 7, when a plurality of objects have been screened by activity area or preset image features, if it is detected that the user clicks the first row in the table, Zhang San is taken as the target object, and Zhang San's real-time data, that is, the position data, path data, image data, and list of worn articles (image features), can be displayed on the terminal. By presenting the monitoring camera picture and the recognition results for worn articles, the object's physical characteristics and the background environment can be observed more clearly, so the real-time data of the target object can be acquired more comprehensively.
It should be added that, throughout the process, permission settings can be applied to login accounts, that is, only users who pass permission authentication can log in to the system to view information such as the target object's real-time data, thereby improving data security.
The present disclosure also provides a data processing apparatus. Referring to fig. 8, the data processing apparatus 800 may include:
a first processing module 801, configured to obtain first type data of all objects in a plurality of activity areas from a plurality of first data sources, and perform extraction processing on the first type data to obtain processed first type data;
a second processing module 802, configured to obtain second type data of all the objects in the multiple activity areas from a second data source, and perform extraction processing on the second type data to obtain processed second type data;
a data integration module 803, configured to integrate the processed first type data and the second type data to obtain statistical data;
the data acquisition module 804 is configured to screen the statistical data to obtain real-time data of a target object, and send the real-time data to a terminal so that the terminal can display the real-time data.
It should be noted that, the specific details of each module in the data processing apparatus have been described in detail in the corresponding data processing method, and therefore are not described herein again.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 900 according to this embodiment of the invention is described below with reference to fig. 9. The electronic device 900 shown in fig. 9 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in fig. 9, the electronic device 900 is embodied in the form of a general purpose computing device. Components of electronic device 900 may include, but are not limited to: the at least one processing unit 910, the at least one memory unit 920, and a bus 930 that couples various system components including the memory unit 920 and the processing unit 910.
Wherein the storage unit stores program code that is executable by the processing unit 910 to cause the processing unit 910 to perform steps according to various exemplary embodiments of the present invention described in the above section "exemplary methods" of this specification. For example, the processing unit 910 may perform the steps shown in fig. 1: in step S110, first type data of all objects in a plurality of activity areas is obtained from a plurality of first data sources, and extraction processing is performed on the first type data to obtain processed first type data; in step S120, second type data of all the objects in the plurality of activity areas is obtained from a second data source, and extraction processing is performed on the second type data to obtain processed second type data; in step S130, the processed first type data and the processed second type data are integrated to obtain statistical data; in step S140, the statistical data is screened to obtain real-time data of the target object, and the real-time data is sent to a terminal for the terminal to display.
The storage unit 920 may include a readable medium in the form of a volatile storage unit, such as a random access memory unit (RAM)9201 and/or a cache memory unit 9202, and may further include a read only memory unit (ROM) 9203.
Storage unit 920 may also include a program/utility 9204 having a set (at least one) of program modules 9205, such program modules 9205 including but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 930 can be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The display unit 940 may be a display with a display function, used to show the processing results obtained by the processing unit 910 performing the method in this exemplary embodiment. The display includes, but is not limited to, a liquid crystal display or other displays.
The electronic device 900 may also communicate with one or more external devices 1100 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 900 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interface 950. Also, the electronic device 900 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 960. As shown, the network adapter 960 communicates with the other modules of the electronic device 900 via the bus 930. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
Referring to fig. 10, a program product 1000 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (14)

1. A data processing method, comprising:
acquiring first type data of all objects in a plurality of activity areas from a plurality of first data sources, and performing extraction processing on the first type data to obtain processed first type data;
acquiring second type data of all objects in the plurality of activity areas from a second data source, and performing extraction processing on the second type data to obtain processed second type data;
integrating the processed first type data and the processed second type data to obtain statistical data;
and screening the statistical data to obtain real-time data of the target object, and sending the real-time data to a terminal to display the real-time data.
2. The data processing method of claim 1, wherein acquiring first type data of all objects in a plurality of activity areas from a plurality of first data sources, and performing extraction processing on the first type data to obtain the processed first type data comprises:
acquiring associated data and an object number from a gate machine database, a meeting database, a wireless network database and a radio frequency identification database, and taking the associated data and the object number as the processed first type data, wherein the first type data is formatted data.
3. The data processing method of claim 2, wherein the method further comprises:
the plurality of active areas are divided based on cage for wild beasts machine distribution locations.
4. The data processing method of claim 1, wherein obtaining the second type of data for all objects in the plurality of activity areas from a second data source comprises:
and acquiring second type data of all the objects in the plurality of activity areas from a monitoring database, wherein the second type data is unformatted data.
5. The data processing method of claim 1, wherein the second type data comprises image data, and performing extraction processing on the second type data to obtain the processed second type data comprises:
judging whether a moving object exists in the image data;
if the moving object exists, carrying out face recognition on the image data of the moving object to determine the object number of the moving object;
after the object number is determined, determining the image characteristics of the image data of the moving object through a deep learning model, and obtaining the processed second type data according to the image characteristics and the object number.
6. The data processing method of claim 5, wherein determining whether a moving object exists in the image data comprises:
obtaining a discrimination index according to a moving pixel discrimination function and a moving object discrimination function; and
if the discrimination index meets a preset value, determining that a moving object exists.
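Claim 6 leaves the two discrimination functions unspecified. One plausible reading, shown below as an assumption, treats the moving pixel discrimination function as inter-frame differencing and the moving object discrimination function as the fraction of moving pixels, compared against the preset value.

```python
# One possible reading of claim 6, assuming frame differencing on
# grayscale frames; the actual discrimination functions are not
# defined in the patent.
import numpy as np

def moving_pixel_mask(prev, curr, pixel_thresh=25):
    """Moving pixel discrimination: a pixel counts as moving when the
    absolute inter-frame difference exceeds pixel_thresh."""
    return np.abs(curr.astype(int) - prev.astype(int)) > pixel_thresh

def discrimination_index(prev, curr, pixel_thresh=25):
    """Moving object discrimination: fraction of moving pixels."""
    return moving_pixel_mask(prev, curr, pixel_thresh).mean()

def has_moving_object(prev, curr, preset=0.01):
    """A moving object exists when the index meets the preset value."""
    return discrimination_index(prev, curr) >= preset
```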
7. The data processing method of claim 1, wherein screening the statistical data to obtain the real-time data of the target object comprises:
determining a target activity area, and screening the statistical data according to the target activity area to obtain the statistical data of all objects contained in the target activity area; and
if a trigger operation acting on the statistical data of the target object is received, determining the real-time data of the target object.
8. The data processing method of claim 1, wherein screening the statistical data to obtain the real-time data of the target object comprises:
screening the statistical data according to a preset image feature to obtain statistical data of a plurality of candidate objects containing the preset image feature; and
if a trigger operation acting on one of the candidate objects is received, taking the candidate object corresponding to the trigger operation as the target object, and determining the real-time data of the target object.
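The two screening paths of claims 7 and 8 could look as follows, reusing the Record and statistics shapes from the claim 1 sketch; the `matches` predicate is a hypothetical placeholder for however an implementation compares stored image features against the preset feature.

```python
# Screening sketches for claims 7 and 8, built on the earlier Record
# shape; `matches` is a hypothetical feature-comparison predicate.

def screen_by_area(stats, target_area):
    """Claim 7: keep statistics of all objects seen in the target area."""
    return {oid: [r for r in recs if r.area == target_area]
            for oid, recs in stats.items()
            if any(r.area == target_area for r in recs)}

def screen_by_feature(stats, preset_feature, matches):
    """Claim 8: keep candidate objects whose records contain the preset
    image feature, in chronological order per claim 9."""
    return {oid: sorted(recs, key=lambda r: r.timestamp)
            for oid, recs in stats.items()
            if any(matches(r, preset_feature) for r in recs)}
```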
9. The data processing method of claim 8, wherein screening the statistical data according to the preset image feature to obtain the statistical data of the plurality of candidate objects containing the preset image feature comprises:
determining the statistical data corresponding to the candidate objects containing the preset image feature in chronological order from earliest to latest.
10. The data processing method of claim 7 or 8, wherein the real-time data comprises position data and path data, and determining the real-time data of the target object comprises:
determining the position data of the target object, arranged in a first order according to the order of importance of the data sources; and
determining the path data of the target object, arranged in a second order according to chronological order.
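Claim 10's two orderings might be realized as below, again on the earlier Record shape. The importance ranking of the data sources is left open by the patent, so the SOURCE_RANK table is an illustrative assumption.

```python
# Claim 10 orderings on the earlier Record shape. SOURCE_RANK is an
# assumed example; the patent does not fix the importance ordering.
SOURCE_RANK = {"rfid": 0, "gate_machine": 1, "wireless_network": 2,
               "conference": 3, "monitoring": 4}

def position_data(records):
    """First order: position records sorted by data-source importance."""
    return sorted(records, key=lambda r: SOURCE_RANK.get(r.source, 99))

def path_data(records):
    """Second order: path records sorted chronologically."""
    return sorted(records, key=lambda r: r.timestamp)
```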
11. The data processing method of claim 10, wherein the real-time data comprises image data and image features, and sending the real-time data to the terminal for display comprises:
sending the position data and the path data to the terminal, so that the terminal displays the position data in the first order and displays the path data in the second order; and
sending the image data and the image features of the target object to the terminal for display by the terminal.
12. A data processing apparatus, comprising:
a first processing module, configured to acquire first type data of all objects in a plurality of activity areas from a plurality of first data sources, and process the first type data to obtain processed first type data;
a second processing module, configured to acquire second type data of all the objects in the plurality of activity areas from a second data source, and process the second type data to obtain processed second type data;
a data integration module, configured to integrate the processed first type data and the processed second type data to obtain statistical data; and
a data acquisition module, configured to screen the statistical data to obtain real-time data of a target object, and send the real-time data to a terminal for display.
13. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the data processing method of any one of claims 1-11 via execution of the executable instructions.
14. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the data processing method of any one of claims 1 to 11.
CN201811367162.3A 2018-11-16 2018-11-16 Data processing method and device, electronic equipment and storage medium Pending CN111198909A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811367162.3A CN111198909A (en) 2018-11-16 2018-11-16 Data processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111198909A 2020-05-26

Family

ID=70745560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811367162.3A Pending CN111198909A (en) 2018-11-16 2018-11-16 Data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111198909A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279740A (en) * 2013-05-15 2013-09-04 吴玉平 Method and system for accelerating intelligent monitoring human face identification by utilizing dynamic database
CN103942850A (en) * 2014-04-24 2014-07-23 中国人民武装警察部队浙江省总队医院 Medical staff on-duty monitoring method based on video analysis and RFID (radio frequency identification) technology
CN104318732A (en) * 2014-10-27 2015-01-28 国网冀北电力有限公司张家口供电公司 Transformer substation field worker monitoring and management system and method based on video analysis and RFID
CN105913037A (en) * 2016-04-26 2016-08-31 广东技术师范学院 Face identification and radio frequency identification based monitoring and tracking system
CN107370983A (en) * 2016-05-13 2017-11-21 腾讯科技(深圳)有限公司 Acquisition methods and device for the whereabouts track of video monitoring system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kong Yinghui et al.: "Research on abandoned object detection methods in intelligent video surveillance systems", Computer Engineering & Science *
Song Huaibo et al.: "Lameness detection method for dairy cows based on slope features of straight lines fitted to head and neck contours", Transactions of the Chinese Society of Agricultural Engineering *
Hu Dewen and Chen Fanglin (eds.): "Biometric Recognition Technologies and Methods", 31 August 2013 *

Similar Documents

Publication Publication Date Title
US11259718B1 (en) Systems and methods for automated body mass index calculation to determine value
WO2018072520A1 (en) Security check system and method for configuring security check device
Wang et al. Facial recognition system using LBPH face recognizer for anti-theft and surveillance application based on drone technology
KR20200098875A (en) System and method for providing 3D face recognition
US20170116469A1 (en) Counting and monitoring method using face detection
US20190213424A1 (en) Image processing system and image processing method
CN108304816B (en) Identity recognition method and device, storage medium and electronic equipment
DE112009000485T5 (en) Object comparison for tracking, indexing and searching
JP2012235415A (en) Image processing system, febrile person identifying method, image processing apparatus, and control method and program thereof
CN102017606A (en) Image processing device, camera, image processing method, and program
WO2017222413A1 (en) Methods and systems for object search in a video stream
US11023714B2 (en) Suspiciousness degree estimation model generation device
US20180158063A1 (en) Point-of-sale fraud detection using video data and statistical evaluations of human behavior
AU2016262874A1 (en) Systems, methods, and devices for information sharing and matching
US11308792B2 (en) Security systems integration
CN111209446A (en) Method and device for presenting personnel retrieval information and electronic equipment
US11113838B2 (en) Deep learning based tattoo detection system with optimized data labeling for offline and real-time processing
US11450186B2 (en) Person monitoring system and person monitoring method
KR20180085505A (en) System for learning based real time guidance through face recognition and the method thereof
CN111598753A (en) Suspect recommendation method and device, electronic equipment and storage medium
US20180089500A1 (en) Portable identification and data display device and system and method of using same
CN111198909A (en) Data processing method and device, electronic equipment and storage medium
US20210334758A1 (en) System and Method of Reporting Based on Analysis of Location and Interaction Between Employees and Visitors
Dessimoz et al. A dedicated framework for weak biometrics in forensic science for investigation and intelligence purposes: The case of facial information
CN111027510A (en) Behavior detection method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2020-05-26