CN112560694B - Data analysis method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112560694B
Authority
CN
China
Prior art keywords
target object
feature set
riding
face
face feature
Prior art date
Legal status
Active
Application number
CN202011499709.2A
Other languages
Chinese (zh)
Other versions
CN112560694A (en)
Inventor
吴克贤
印诚宇
杨琛
张力
Current Assignee
Nanjing Leading Technology Co Ltd
Original Assignee
Nanjing Leading Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Leading Technology Co Ltd filed Critical Nanjing Leading Technology Co Ltd
Priority to CN202011499709.2A priority Critical patent/CN112560694B/en
Publication of CN112560694A publication Critical patent/CN112560694A/en
Application granted granted Critical
Publication of CN112560694B publication Critical patent/CN112560694B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V40/168 Feature extraction; Face representation
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
    • G06Q50/01 Social networking
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Finance (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Development Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Accounting & Taxation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Game Theory and Decision Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Primary Health Care (AREA)
  • Tourism & Hospitality (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides a data analysis method, a data analysis device, electronic equipment and a storage medium, which are used for solving the problem of how to effectively discover social relationships among users so as to improve the utilization rate of processing resources. In the application, the facial features of passengers are stored in association with the target object that initiated the order. For each target object, based on the facial features associated with its orders, the features of the target object and the features of its associated objects can be analyzed through big data analysis, and the identities of unknown objects can be confirmed, so that social relationships among different users are mined and the accuracy of establishing social relationships is improved.

Description

Data analysis method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of big data analysis technologies, and in particular, to a data analysis method and apparatus, an electronic device, and a storage medium.
Background
The online ride-hailing platform serves as a bridge between passengers and drivers and can provide services for different passengers. In order to optimize and expand the different function modules of the ride-hailing platform, the social relationships among users need to be mined.
The inventors found that, in the related art, social relationship mining mostly relies on users sharing information about the ride-hailing platform with other users, or on mining the social relationship between users based on assist (boost) operations performed by different users for the same user. In practice, however, the social relationships discovered in this way are often limited and cannot express the social relationships between different users well, which to some extent wastes the processing resources used for discovering social relationships. Therefore, how to effectively discover social relationships among users so as to improve the utilization rate of processing resources remains to be solved.
Disclosure of Invention
The embodiment of the application provides a data analysis method and device, electronic equipment and a storage medium, aiming to solve the problem of how to effectively discover social relationships among users so as to improve the utilization rate of processing resources.
In one aspect, an embodiment of the present application provides a data analysis method, including:
acquiring pre-stored riding information of a target object, wherein the riding information comprises at least one face feature of at least one passenger;
if the riding information comprises a plurality of facial features, classifying the plurality of facial features to obtain at least one facial feature set;
determining a face feature set of the target object and a face feature set of an associated object having an association relation with the target object based on each face feature set;
for each associated object, matching the facial feature set of the associated object with the facial feature set of a known object;
associating the identity information of the matched known object with the identity information of the associated object;
and constructing an association relationship network between the target object and each associated object based on the identity information of the target object and the identity information of each associated object.
In some embodiments, determining the facial features of the target object and the facial features of the associated object having an association relationship with the target object based on each of the facial feature sets includes:
taking each face feature set as a face feature set of an object;
taking the face feature set with the largest number of elements as the face feature set of the target object;
and respectively taking the face feature sets except the face feature set of the target object as the face feature sets of the associated objects.
In some embodiments, the matching the facial feature set of the associated object with the facial feature set of the known object includes:
respectively determining the average characteristics of the face characteristic sets of the associated object and the known objects;
determining a similarity between the average feature of the associated object and an average feature of the known objects;
and selecting the known object corresponding to the maximum similarity as the known object matched with the associated object.
In some embodiments, before matching the facial feature set of the associated object with the facial feature set of the known object, the method further comprises:
and screening out, from a plurality of candidate objects, a candidate object having the same riding place as the associated object, as the known object.
In some embodiments, after determining the facial feature set of the target object and the facial feature set of the associated object having an association relationship with the target object based on each of the facial feature sets, the method further includes:
determining the total volume of co-riding orders taken by the target object and at least one associated object together in the effective orders of the target object; determining a reference order quantity when the associated object and the target object take a car together in the effective order of the target object;
respectively executing for each of the associated objects:
determining the association degree of the target object to the associated object based on the reference order quantity, the co-passenger order total quantity and the effective order quantity of the target object;
wherein the degree of association is directly proportional to the reference order quantity, inversely proportional to the total co-riding order quantity, and directly proportional to the ratio of the effective order quantity to the reference order quantity.
In some embodiments, the method further comprises:
determining the riding information of the target object based on the following method:
in response to a riding request of the target object, acquiring the facial features of each riding object related to the riding request;
and adding the facial features of each riding object to the riding information.
In a second aspect, the present application also provides a data analysis apparatus, the apparatus comprising:
the riding information acquisition module is used for acquiring pre-stored riding information of a target object, wherein the riding information comprises at least one facial feature of at least one passenger;
the classification processing module is used for, if the riding information comprises a plurality of facial features, classifying the plurality of facial features to obtain at least one facial feature set;
the identification module is used for determining a face feature set of the target object and a face feature set of an associated object having an association relation with the target object based on each face feature set;
the matching module is used for matching the face feature set of the associated object with the face feature set of the known object aiming at each associated object;
the identity determining module is used for associating the identity information of the matched known object with the identity information of the associated object;
and the relationship construction module is used for constructing an association relationship network between the target object and each associated object based on the identity information of the target object and the identity information of each associated object.
In some embodiments, the identification module is to:
taking each face feature set as a face feature set of an object;
taking the face feature set with the largest number of elements as the face feature set of the target object;
and respectively taking the face feature sets except the face feature set of the target object as the face feature sets of the associated objects.
In some embodiments, the matching module is to:
respectively determining the average characteristics of the face characteristic sets of the associated object and the known objects;
determining a similarity between the average feature of the associated object and an average feature of the known objects;
and selecting the known object corresponding to the maximum similarity as the known object matched with the associated object.
In some embodiments, the apparatus further comprises:
the known object screening module may be configured to, before the matching module performs a matching operation on the facial feature set of the associated object and a facial feature set of a known object, screen out a candidate object having the same riding location as the associated object from a plurality of candidate objects as the known object.
In some embodiments, the apparatus further comprises:
the association degree determining module is used for, after the identifying module determines the facial feature set of the target object and the facial feature set of the associated object having an association relation with the target object based on each facial feature set, determining the total volume of co-riding orders in which the target object and at least one associated object ride together among the effective orders of the target object; and determining a reference order quantity, namely the number of effective orders of the target object in which the associated object and the target object ride together;
respectively executing for each of the associated objects:
determining the association degree of the target object to the associated object based on the reference order quantity, the co-passenger order total quantity and the effective order quantity of the target object;
wherein the degree of association is directly proportional to the reference order quantity, inversely proportional to the total co-riding order quantity, and directly proportional to the ratio of the effective order quantity to the reference order quantity.
In some embodiments, the apparatus further comprises:
a ride information maintenance module for determining the ride information of the target object based on the following method:
in response to a riding request of the target object, acquiring the facial features of each riding object related to the riding request;
and adding the facial features of each riding object to the riding information.
In a third aspect, another embodiment of the present application further provides an electronic device, including at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute any data analysis method provided by the embodiment of the application.
In a fourth aspect, another embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program, and the computer program is used to make a computer execute any data analysis method in the embodiments of the present application.
In the embodiment of the application, the facial features of passengers can be stored in association with the target object that initiated the order. For each target object, based on the facial features associated with its orders, the features of the target object and the features of its associated objects can be analyzed through big data analysis, and the identities of unknown objects can be confirmed, so that social relationships among different users are mined and the accuracy of establishing social relationships is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings described below are only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
Fig. 1 is a schematic view of an application scenario of a data analysis method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a data analysis method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating a data analysis method according to an embodiment of the present application;
fig. 4 is another schematic flow chart of a data analysis method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of relationships between users discovered by the present application;
fig. 6 is a schematic structural diagram of a data analysis apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Hereinafter, some terms in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
(1) In the embodiments of the present application, the term "plurality" means two or more, and other terms are similar thereto.
(2) "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
(3) A server: serves the terminal, for example by providing resources to the terminal and storing terminal data; the server corresponds to the application program installed on the terminal and runs in cooperation with the application program on the terminal. The server in the embodiments of the application may be a server of an online ride-hailing platform.
(4) A terminal: may refer to a software APP (Application) or a client. It has a visual display interface and can interact with the user; it corresponds to the server and provides local services for the client. Apart from some applications that run only locally, software applications are generally installed on an ordinary client terminal and need to run in cooperation with a server. With the development of the internet, common application programs include email clients for sending and receiving emails, instant messaging clients, ride-hailing clients, and the like. For such applications, a corresponding server and service program are required in the network to provide corresponding services, such as database services and configuration parameter services, so a specific communication connection needs to be established between the client terminal and the server terminal to ensure the normal operation of the application program. In the embodiments of the application, the terminal mostly refers to the terminals used to provide services for drivers on the ride-hailing platform and the terminals used for riding, such as mobile terminals and vehicle-mounted terminals.
In view of the problem in the related art that processing resources used for discovering social relationships are wasted to a certain extent because the discovered social relationships are not rich enough, the embodiment of the application provides a data analysis method that can effectively utilize processing resources to discover effective social relationships. In the embodiment of the application, any information related to the user is obtained only with the user's authorization.
In order to effectively utilize processing resources to discover effective social relationships, in the embodiment of the application, for any riding order, images of passengers can be collected after a successful pickup. Then, the facial features of each passenger are obtained based on image analysis. The extracted facial features are stored in association with the order. Orders initiated by the same target object and their related facial features can be stored in a database for later use. Then, for each target object, based on the facial features associated with its orders, the features of the target object and the features of its associated objects can be analyzed through big data analysis, and the identities of unknown objects can be confirmed, so that social relationships among different users are mined.
To facilitate understanding of the data analysis method provided by the embodiments of the present application, the following description is made with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an application scenario according to an embodiment of the present application.
As shown in fig. 1, the application scenario may include, for example, a storage system 10, a server 20, and terminals (e.g., 30_1, 30_2, or 30_N in fig. 1). A terminal may be any suitable electronic device capable of network access, including but not limited to a computer, a laptop, a smartphone, a tablet, a smartwatch, a smart band, or another type of terminal. The storage system 10 can store user information (such as user identifiers and user features), geographic information, and other resources required by the ride-hailing platform. The server 20 is used to interact with the terminals to implement the ride-hailing business and to extract relevant information for mining social relationships. For example, the server interacts with the terminal of a riding user to implement operations such as placing an order, dispatching a driver, planning a route, and comparing passenger information.
In implementation, the terminal can collect passenger images and extract the facial features of each passenger from the images, and then the facial features are sent to the server for storage. Or the terminal collects passenger images and sends the passenger images to the server, and the server extracts and stores the facial features of each passenger.
The server can locate the facial features of the passenger initiating the order based on the stored facial features and establish a social relationship between the passenger initiating the order and the passenger co-riding.
In the application scenario shown in fig. 1, terminals (e.g., between 30_1 and 30_2 or 30_ N) may also communicate with each other via the network 40. Network 40 may be a network used for information transfer in a broad sense and may include one or more communication networks such as a wireless communication network, the internet, a private area network, a local area network, a metropolitan area network, a wide area network, or a cellular data network.
The description in this application details only a single server or terminal, but those skilled in the art will understand that the single server 20, terminal, and storage system 10 shown are intended to represent the operation of the technical solution of the present application involving terminals, servers, and storage systems. Describing a single terminal and a single server and storage system is for convenience of description and does not imply any limitation on the number, type, or location of terminals and servers. It should be noted that the underlying concepts of the example embodiments of the present application are not altered if additional modules are added to or removed from the illustrated environment. In addition, although a bidirectional arrow from the storage system 10 to the server 20 is shown in fig. 1 for convenience of explanation, it will be understood by those skilled in the art that the above-described data transmission and reception may also be realized through the network 40.
The server 20 may be a server, a server cluster composed of several servers, or a cloud computing center. The server 20 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform.
Fig. 2 is a schematic diagram illustrating an operating principle of a data analysis method according to an embodiment of the present disclosure. As shown in fig. 2, passenger images may be acquired based on video data acquisition and then facial features of different passengers may be identified based on a face recognition method. In implementation, the facial features extracted from the video can be acquired in real time based on a message subscription mechanism (such as ETL/kafka in fig. 2) and stored in a storage, for example, in a Hive library shown in fig. 2. Then, data mining is carried out on the data in the database to analyze the facial features of the passenger who initiates the order, and then further data mining is carried out on the basis of the facial features of the passenger who initiates the order, so that the social relationship among different users is established. The social relationships may be stored in a graph database for use by subsequent applications. For example, media resource recommendations may be made based on the constructed social relationships, coupons issued to attract more users, and so forth. Of course, in actual use, the constructed social relationship may be used according to actual application requirements, which is not limited in the embodiment of the present application.
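For illustration only, the following is a possible sketch of consuming face-feature messages through a message subscription mechanism such as Kafka, as shown in fig. 2; the kafka-python library, the topic name and the message schema are illustrative assumptions and not part of the original disclosure.

```python
import json

from kafka import KafkaConsumer  # assumes the kafka-python package

# Assumed topic name and message schema; adjust to the actual pipeline.
consumer = KafkaConsumer(
    "face-features",
    bootstrap_servers=["kafka:9092"],
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for message in consumer:
    record = message.value  # e.g. {"order_id": "...", "user_id": "...", "face_feature": [...]}
    # write the record to the face-feature store (e.g. the Hive library in fig. 2); storage call omitted
    print(record["order_id"], len(record["face_feature"]))
```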
As can be seen from fig. 2, in order to effectively discover social relationships between different users in the embodiment of the present application, two parts of content may be mainly included, and on one hand, basic data collection is performed. On the other hand, the social relationship among different users is determined by data analysis based on the collected basic data.
The following explains the data analysis method provided in the embodiments of the present application in view of these two points.
First, basic data acquisition
After any user initiates a riding request on the ride-hailing platform, if the platform successfully dispatches the order, the relevant driver is guided to pick up the passengers. After the pickup succeeds, the vehicle can acquire passenger video. Based on image analysis technology, video frames containing face content can be screened out of the video for subsequent face feature extraction.
For example, the built-in in-vehicle face recognition system detects faces in the video and captures key frames containing a face; face recognition is then performed on the key frames to obtain the individual face frame images. In order to extract the facial features effectively, the face frame images can be preprocessed to obtain qualified face frame images. The preprocessing may include, for example, face orientation (angle) correction, so that the angles of face images that are skewed or captured at different angles are made as uniform as possible. As another example, the lighting of the face frame image (i.e., the brightness of the image) may be corrected, so that the face features can be extracted more accurately.
In implementation, the face frame images can be extracted by the vehicle-mounted system, and each order together with its related face frame images is sent to the ride-hailing background server over a network, such as a 4G/5G network. The server then extracts facial features from each face frame image.
For example, in some embodiments, face keypoints may be extracted as face features. The number of the key points may be 68 key points, 16 key points, etc., which are not limited in the embodiments of the present application.
Of course, in other embodiments, the facial feature information in the facial frame image may also be extracted based on a neural network. That is, any information capable of characterizing the facial features is suitable for the embodiments of the present application.
In some embodiments, obtaining the face feature information may further be implemented as: converting the acquired face frame image into a face vector. For example, the acquired m × m face frame image is flattened into a face vector v of dimension n = m × m, where m is a positive integer. A face frame image is composed of pixels; for example, an 8 × 8 pixel image is composed of 64 pixels, and the flattening can be implemented by concatenating the 64 pixel values row by row from left to right, i.e. converting the image into a 64-dimensional vector v.
The average vector V = (v_1 + v_2 + … + v_N) / N of the N acquired face vectors is then calculated; this average vector V is the "average face" feature vector. The N face vectors may be all the face vectors stored in the Hive library shown in fig. 2. When new face vectors are added, the average vector V can be updated in real time, or V can be recalculated periodically from the face vectors stored in the library. Of course, if the base number N used for calculating the average vector V at the previous stage is already relatively large, the average vector V may not be updated. In implementation, this may be determined according to actual requirements, which is not limited in the embodiments of the present application.
Then, the difference Φ = v − V between the current face and the average face is calculated, the covariance matrix C = Φ·Φ^T is calculated, and eigenvalue decomposition is performed on C; the resulting vector is the face feature used in the subsequent mining of the social relationships of the user to whom the current face belongs.
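For illustration only, the following is a possible numpy sketch of the flattening, average-face and eigendecomposition steps described above; the function names are illustrative, and keeping the eigenvector with the largest eigenvalue is an assumption, since the description only states that the eigendecomposition yields the face feature vector.

```python
import numpy as np

def flatten_face(image: np.ndarray) -> np.ndarray:
    """Flatten an m x m face frame image into an n = m * m dimensional face vector."""
    return image.reshape(-1).astype(np.float64)

def average_face(face_vectors: np.ndarray) -> np.ndarray:
    """Compute the 'average face' vector V from N stacked face vectors (shape N x n)."""
    return face_vectors.mean(axis=0)

def face_feature(face_vector: np.ndarray, avg_face: np.ndarray) -> np.ndarray:
    """Difference from the average face, covariance matrix C, eigenvalue decomposition of C."""
    phi = face_vector - avg_face              # difference between the current face and the average face
    cov = np.outer(phi, phi)                  # C = phi * phi^T
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalue decomposition of C
    # keep the eigenvector with the largest eigenvalue as the stored face feature (assumption)
    return eigvecs[:, np.argmax(eigvals)]

# example with 8 x 8 grayscale crops
crops = np.random.rand(100, 8, 8)
vectors = np.stack([flatten_face(c) for c in crops])
feature = face_feature(vectors[0], average_face(vectors))
```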
Each order and its facial features can be stored in the server in an associated manner. All orders of the same target object and their facial features serve as the riding information of that target object. The riding information of the target object can be supplemented in real time based on the facial features of new orders of the target object acquired in real time.
In implementation, each target object is uniquely identified by its identity information identifier; when storing, the identity information identifier of the target object, the order information of the target object and the facial features associated with each order can be stored in the riding information.
By processing the information of different target objects, the riding information of different target objects can be collected, and basic data is established based on the riding information so as to facilitate subsequent social relationship mining.
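For illustration only, one possible in-memory layout of the riding information described above is sketched below; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np

@dataclass
class RideOrder:
    order_id: str
    face_features: List[np.ndarray]          # one or more facial feature vectors per order

@dataclass
class RidingInfo:
    target_object_id: str                    # identity information identifier of the target object
    orders: List[RideOrder] = field(default_factory=list)

    def add_order(self, order: RideOrder) -> None:
        """Supplement the riding information in real time with a newly stored order."""
        self.orders.append(order)

info = RidingInfo(target_object_id="user1")
info.add_order(RideOrder(order_id="o-1001", face_features=[np.zeros(64)]))
```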
Second, social relationship mining
The social relationship mining method in the embodiment of the application mainly comprises the steps of recognizing the face features of the target object and the face features of the related objects related to the target object from riding information of the target object, and establishing the social relationship based on the recognized different objects.
In practice, as shown in fig. 3, in step 301, the pre-stored riding information of the target object may be acquired.
As described above, the riding information includes at least one facial feature of at least one passenger.
That is, the riding information includes the facial features associated with each order.
If only a small number of orders are included, the amount of data may be insufficient and the subsequent operations may be abandoned. If the order quantity is larger than a preset order quantity, the riding information of the target object comprises a plurality of facial features, where each order corresponds to at least one facial feature.
In step 302, if the riding information includes a plurality of facial features, the facial features are classified to obtain at least one facial feature set.
The target object is an object for initiating an order, and the scenario for initiating an order may roughly include the following four cases.
1. The user books a ride-hailing car for himself or herself, and only the user takes the car.
2. One person books a ride-hailing car for another person, and only that other person takes the car.
3. The user books a ride-hailing car and takes the car together with others.
4. The user books a ride-hailing car for several other people, and those other people take the car together.
In any scenario, facial features are extracted from each successfully executed order (hereinafter referred to as a valid order). Due to the above four situations, the riding information of the same target object includes not only the facial features of the target object but also the facial features of other people, and these passengers who ride together certainly have social relations with the target object.
Therefore, the facial features of different objects can be distinguished through classification processing, and in step 303, the facial feature set of the target object and the facial feature set of the associated object having an association relationship with the target object can be determined based on each facial feature set.
Here, one face feature set corresponds to one object. Since, in most cases, the target object that initiates a riding order also takes the ride itself, the face feature set with the largest number of elements can be determined as the face feature set of the target object, thereby obtaining a plurality of facial features of the target object.
Then, the face feature sets other than the face feature set of the target object are respectively used as the face feature sets of the associated objects, so as to obtain the face feature sets of the associated objects.
In practice, the classification of the plurality of facial features may be performed based on a cluster analysis technique. For example, a density-based cluster analysis method may be used, in which the number of facial feature sets does not need to be fixed in advance, so that the facial features are classified into the respective sets.
Cluster analysis may also be carried out using clustering methods such as k-means or DBSCAN.
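For illustration only, a minimal sketch of grouping face feature vectors with a density-based clustering method is given below; the scikit-learn library and the parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def group_face_features(face_vectors: np.ndarray, rho: float = 0.6):
    """Group face feature vectors into per-object sets without fixing the number of sets."""
    labels = DBSCAN(eps=rho, min_samples=1, metric="euclidean").fit_predict(face_vectors)
    return {label: face_vectors[labels == label] for label in set(labels)}

# example: 20 face vectors of dimension 64
sets = group_face_features(np.random.rand(20, 64))
```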
A simple classification processing method is to group similar facial features into one class. For example, as shown in fig. 4, a possible classification processing scheme is provided:
in step 401, ride information of a target object is acquired.
In step 402, candidate orders containing only one facial feature are screened out from the riding information.
In step 403, it is determined whether the candidate order quantity is smaller than a preset order quantity; if not, step 404 is executed, and if so, step 405 is executed.
That is, when the number of single-passenger orders is small, it does not sufficiently reflect the actual situation of the target object initiating the orders, so classification is performed using the multi-face-feature manner of step 405.
In step 404, among the face features corresponding to the candidate orders, the face feature with the largest total number of occurrences is taken as the face feature of the target object.
In step 405, among all the face features in the riding information of the target object, the face feature with the highest frequency of occurrence is taken as the face feature of the target object.
In the implementation of the method and the device, the face features of the target object can be determined based on the number of the face features associated with the order, and the efficiency of determining the face features of the target object can be improved.
In some embodiments, the classification process as in step 404 may be implemented as:
in step a1, all passenger order information of the target object ID in the last specified number of months is retrieved, and all candidate orders with the number of passengers of 1 (i.e. the number of identified face feature vectors of 1) are pulled.
In step a2, acquiring a passenger face feature set S associated with all candidate orders from a database;
in step a3, any face feature vector v is selectediE S, calculating the remaining vj={vjE is S, and vj≠viAnd viSetting a threshold rho, and when the Euclidean distance is smaller than the rho, considering the two face feature vectors as the face feature vectors of the same object; repeating the calculation until the S is traversed to obtain a result set S1;S1Refer to v in addition to step A3iSet of extrinsic feature vectors, i.e. S1=S-vi
In step a4, v ═ v for S ″iIs e.g. S, and
Figure BDA0002843219020000121
repeating the steps A1-A3 until all face feature vectors are calculated, and assuming that n face feature sets are obtained.
In step a5, the number of elements in each face feature set is calculated, and the set S with the largest number is selectedmaxThe tag is a target object and is associated with an id (identity) of the target object.
In step A6, the remaining n-1 result sets are sequentially marked as n-1 associated objects that are co-multiplied with the target object.
The classification process in step 405 is performed similarly to steps A3-A6. Because more face feature vectors need to be processed, in order to optimize the calculation and improve the efficiency of classification processing, the following differences exist: in step A3, if v of Euclidean distance is calculatediAnd vjSkipping occurs in the same order, i.e., different face feature vectors in the same order do not calculate euclidean distances, because one person cannot ride in the same car as the other person, and thus there is no need to calculate two vectors to determine whether the same person is present.
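For illustration only, the following is a possible sketch of the grouping procedure of steps A1 to A6, including the same-order skipping optimization of step 405; the data layout of (order id, face vector) pairs and the threshold value are illustrative assumptions.

```python
import numpy as np

def group_by_threshold(features, rho=0.6):
    """Group (order_id, face_vector) pairs into per-person sets as in steps A1 to A6.

    Vectors closer than rho (Euclidean distance) are treated as the same person;
    vectors from the same order are never compared (step 405 optimization), since
    one passenger cannot ride together with himself.
    """
    unassigned = list(range(len(features)))
    groups = []
    while unassigned:
        i = unassigned.pop(0)
        order_i, v_i = features[i]
        group = [i]
        for j in unassigned[:]:
            order_j, v_j = features[j]
            if order_i == order_j:                      # same order: different persons, skip
                continue
            if np.linalg.norm(v_i - v_j) < rho:
                group.append(j)
                unassigned.remove(j)
        groups.append(group)
    groups.sort(key=len, reverse=True)                  # largest set: the target object (step A5)
    return groups

# example: two matching vectors from different orders, plus one vector from the seed's own order
example = [("o1", np.zeros(64)), ("o2", np.zeros(64)), ("o1", np.ones(64))]
print(group_by_threshold(example))                      # [[0, 1], [2]]
```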
After the facial features associated with different target objects are classified, the facial feature set of each target object and the associated object associated with the target object can be obtained. As shown in fig. 5, each solid coil represents a set of facial features. As shown in fig. 5, a user1, a user2, and a user3 are each a target object. In fig. 5, it is shown that three associated objects are extracted by the users 1 and 2, respectively, and 2 associated objects are extracted by the user 3. In order to specifically identify the identity of the associated object, in the embodiment of the present application, the unknown user, that is, the associated object with an uncertain identity, may also be matched with the known object to determine the identity of the unknown object.
As shown in fig. 3, in step 304, a candidate object having the same riding place as the associated object may be screened out from the plurality of candidate objects as a known object.
For example, a riding place set of the associated object is obtained, and a riding place set of the candidate object is obtained; and if the intersection of the riding place set of the associated object and the riding place set of the candidate object is not empty, determining the candidate object as a known object.
The known objects are filtered to reduce the number of matching times, for example, if the candidate object and the associated object have no identical life track at all, the candidate object and the associated object are definitely not the same user, and it is not necessary to match whether the candidate object and the associated object are the same user.
In step 305, for each associated object, performing a matching operation on the facial feature set of the associated object and the facial feature set of the known object;
in step 306, the identity information of the matched known object is associated as the identity information of the associated object.
In step 307, an association relationship network between the target object and each associated object is constructed based on the identity information of the target object and the identity information of each associated object.
Continuing with the example shown in fig. 5, as indicated by the dashed arrows in fig. 5, the similarity between the unknown user X and the candidate objects user2 and user3 needs to be calculated as follows:
The set of cities in which the unknown user X took ride-hailing orders together with user1 is extracted as CITY_ID_x = {ID1, ID2, ID3}, where each ID identifies a city.
Similarly, the riding city set of user2 is CITY_ID_user2 = {ID4, ID2, ID5}, and that of user3 is CITY_ID_user3 = {ID4, ID5, ID6}.
Since CITY_ID_x and CITY_ID_user3 have no intersection, i.e. the two have never taken ride-hailing orders in the same city, they are considered to be unrelated. Since CITY_ID_x and CITY_ID_user2 have an intersection, the average of all face feature vectors in the face feature set of the unknown user X is subsequently calculated, the average face feature vector of the face feature set of user2 is calculated, and the Euclidean distance between the two average face feature vectors is then used to determine the similarity d_{x-user2}.
By analogy, assuming that the similarity between the unknown user X and each known object is calculated, the maximum value max(d_{x-user2}, d_{x-user4}, …, d_{x-usern}) is selected; the two corresponding users are considered to be the same person and are therefore associated with each other.
Of course, in another embodiment, when any associated object is taken as an unknown user, the similarity between the average feature of the associated object and the average feature of the known object may also be determined; selecting a maximum similarity value, and if the maximum similarity value is greater than a similarity threshold value, determining that the associated object is matched with the known object; otherwise, if the maximum similarity is less than or equal to the similarity threshold, determining that the associated object does not match the known object.
Of course, the similarity between two facial features can be determined not only by the Euclidean distance but also by the cosine distance between the vectors, which is also applicable to the embodiments of the present application.
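For illustration only, the following sketch combines the riding-place screening and the average-feature matching described above; the distance-to-similarity mapping and the similarity threshold are illustrative assumptions.

```python
import numpy as np

def match_unknown(unknown_cities, unknown_features, candidates, threshold=0.5):
    """Match an unknown associated object against known candidate objects.

    candidates: list of (object_id, riding_city_set, list_of_face_vectors) tuples.
    Candidates with no riding city in common are skipped; among the rest, similarity
    is derived from the Euclidean distance between average face feature vectors, and
    the best match above the threshold is returned.
    """
    unknown_avg = np.mean(np.asarray(unknown_features), axis=0)
    best_id, best_sim = None, 0.0
    for object_id, cities, features in candidates:
        if not (unknown_cities & cities):                       # no common riding place: unrelated
            continue
        avg = np.mean(np.asarray(features), axis=0)
        sim = 1.0 / (1.0 + np.linalg.norm(unknown_avg - avg))   # assumed distance-to-similarity mapping
        if sim > best_sim:
            best_id, best_sim = object_id, sim
    return best_id if best_sim > threshold else None

best = match_unknown(
    {"ID1", "ID2", "ID3"},
    [np.random.rand(64) for _ in range(5)],
    [("user2", {"ID2", "ID4"}, [np.random.rand(64) for _ in range(4)]),
     ("user3", {"ID5", "ID6"}, [np.random.rand(64) for _ in range(3)])],
)
```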
Therefore, social relations of different objects are mined, and an incidence relation network can be constructed.
In other embodiments, in addition to establishing relationships between different users, the degree of acquaintance between users can be further analyzed; hereinafter this is also referred to as the degree of association. For example, the total volume of co-riding orders in which the target object and at least one associated object ride together, among the effective orders of the target object, can be determined; and the reference order quantity, namely the number of effective orders of the target object in which a given associated object rides together with the target object, can be determined.
Then, the following is executed for each associated object: the association degree of the target object towards the associated object is determined based on the reference order quantity, the total co-riding order quantity and the effective order quantity of the target object; the degree of association is directly proportional to the reference order quantity, inversely proportional to the total co-riding order quantity, and directly proportional to the ratio of the effective order quantity to the reference order quantity.
It should be noted that the association degree is one-way, that is, the association degree of the user a to the user B is not equal to the association degree of the user B to the user a, so that the actual situation can be better reflected.
When determining the degree of association, this can be implemented as follows: all completed order data of user1 in the last month are extracted and recorded as a set S_user1; the orders with a passenger number greater than 1, i.e. with more than one face feature vector in the same order, are selected and expanded, and recorded as a set S_num>1.
"Expanding" means: if a certain order is shared by three people, user1, A and B, the order is expanded into two records, user1-A and user1-B. The two records here refer to order data: the platform stores only one order record, for example user1-A-B, indicating that the three persons shared one order; for convenience of processing it is split into two records, user1-A and user1-B.
From the set S_num>1, the completed orders in which user x rides together are extracted and marked as S_x. Denoting the number of elements of a set S by num(S), the acquaintance degree f(user1, x) of user1 towards x is given by formula (1):
f(user1, x) = (num(S_x) / num(S_num>1)) × log(num(S_user1) / num(S_x))    (1)
In formula (1), num(S_x) represents the number of completed orders in which user x and user1 ride together; num(S_user1) represents the number of completed orders of user1; num(S_num>1) represents the number of completed orders in which user1 rides together with other people.
Of course, in another embodiment, the acquaintance may be replaced by a pearson correlation coefficient.
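For illustration only, a small sketch of formula (1) as reconstructed above is given below; the log-weighted form is an assumption based on the stated proportionalities.

```python
import math

def acquaintance(num_s_x: int, num_s_user1: int, num_s_num_gt_1: int) -> float:
    """Acquaintance degree f(user1, x) per formula (1), as reconstructed above.

    num_s_x:        completed orders in which user x rides together with user1
    num_s_user1:    completed orders of user1
    num_s_num_gt_1: completed orders in which user1 rides together with anyone
    """
    if num_s_x == 0 or num_s_num_gt_1 == 0:
        return 0.0
    return (num_s_x / num_s_num_gt_1) * math.log(num_s_user1 / num_s_x)

print(acquaintance(3, 40, 10))   # user1 co-rode with x in 3 of the 10 co-riding orders
```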
A graph database is well suited to storing user relationship data, so after the social relationships are mined, an open-source graph database may be used to store the social relationships between users. In the graph database, the user ID and the average face feature vector are used as vertices, and the acquaintance degree between users is used as an edge interconnecting the vertices.
Of course, it should be noted that any database required for storing data is merely an example, and the present application is not limited thereto.
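For illustration only, an in-memory stand-in for such a graph database is sketched below using networkx; the library choice and the values are illustrative assumptions.

```python
import networkx as nx
import numpy as np

g = nx.DiGraph()                                  # directed, since the association degree is one-way
# vertices: user id, with the average face feature vector stored as a vertex attribute
g.add_node("user1", avg_face=np.zeros(64))        # placeholder average-face vectors
g.add_node("userX", avg_face=np.zeros(64))
# edge: the acquaintance degree of user1 towards userX
g.add_edge("user1", "userX", acquaintance=0.42)
```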
As shown in fig. 6, based on the same inventive concept, there is provided a data analysis apparatus 600 including:
a riding information obtaining module 601, configured to obtain pre-stored riding information of a target object, where the riding information includes at least one facial feature of at least one passenger;
a classification processing module 602, configured to, if the riding information includes multiple facial features, perform classification processing on the multiple facial features to obtain at least one facial feature set;
an identifying module 603, configured to determine, based on each of the facial feature sets, a facial feature set of the target object and a facial feature set of an associated object having an association relationship with the target object;
a matching module 604, configured to perform, for each associated object, a matching operation on the facial feature set of the associated object and a facial feature set of a known object;
an identity determining module 605, configured to associate the identity information of the matched known object with the identity information of the associated object;
a relationship building module 606, configured to build an association relationship network between the target object and each of the associated objects based on the identity information of the target object and the identity information of each of the associated objects.
In some embodiments, the identification module is to:
taking each face feature set as a face feature set of an object;
taking the face feature set with the largest number of elements as the face feature set of the target object;
and respectively taking the face feature sets except the face feature set of the target object as the face feature sets of the associated objects.
In some embodiments, the matching module is to:
respectively determining the average characteristics of the face characteristic sets of the associated object and the known objects;
determining a similarity between the average feature of the associated object and an average feature of the known objects;
and selecting the known object corresponding to the maximum similarity as the known object matched with the associated object.
In some embodiments, the apparatus further comprises:
the known object screening module may be configured to, before the matching module performs a matching operation on the facial feature set of the associated object and a facial feature set of a known object, screen out a candidate object having the same riding location as the associated object from a plurality of candidate objects as the known object.
In some embodiments, the apparatus further comprises:
the association degree determining module is used for, after the identifying module determines the facial feature set of the target object and the facial feature set of the associated object having an association relation with the target object based on each facial feature set, determining the total volume of co-riding orders in which the target object and at least one associated object ride together among the effective orders of the target object; and determining a reference order quantity, namely the number of effective orders of the target object in which the associated object and the target object ride together;
respectively executing for each of the associated objects:
determining the association degree of the target object to the associated object based on the reference order quantity, the total ride-sharing order quantity and the effective order quantity of the target object;
wherein the degree of association is directly proportional to the reference order quantity, inversely proportional to the total co-riding order quantity, and directly proportional to the ratio of the effective order quantity to the reference order quantity.
In some embodiments, the apparatus further comprises:
a ride information maintenance module for determining the ride information of the target object based on the following method:
in response to a riding request of the target object, acquiring the facial features of each riding object related to the riding request;
and adding the facial features of each riding object to the riding information.
For the implementation and beneficial effects of the operations in the data analysis apparatus, reference is made to the description of the foregoing method, and further description is omitted here.
Having described the data analysis method and apparatus of the exemplary embodiments of the present application, an electronic device according to another exemplary embodiment of the present application is next described.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
In some possible implementations, an electronic device according to the present application may include at least one processor, and at least one memory. Wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps of the data analysis method according to various exemplary embodiments of the present application described above in the present specification. For example, the processor may perform the steps as in a data analysis method.
The electronic device 130 according to this embodiment of the present application is described below with reference to fig. 7. The electronic device 130 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the electronic device 130 is represented in the form of a general electronic device. The components of the electronic device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The electronic device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 130, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 130 to communicate with one or more other electronic devices. Such communication may occur via input/output (I/O) interfaces 135. Also, the electronic device 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 136. As shown, network adapter 136 communicates with other modules for electronic device 130 over bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, aspects of a data analysis method provided herein may also be implemented in the form of a program product including program code for causing a computer device to perform the steps of a data analysis method according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for data analysis of embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (e.g., through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into a single step, and/or a single step may be broken down into multiple steps.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the present application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (14)

1. A method of data analysis, the method comprising:
acquiring pre-stored riding information of a target object, wherein the riding information comprises at least one face feature of at least one passenger;
if the riding information comprises a plurality of face features, classifying the plurality of face features to obtain at least one face feature set;
determining a face feature set of the target object and a face feature set of an associated object having an association relation with the target object based on each face feature set;
for each associated object, matching the facial feature set of the associated object with the facial feature set of a known object;
associating the identity information of the matched known object with the identity information of the associated object;
and constructing an association relationship network between the target object and each associated object based on the identity information of the target object and the identity information of each associated object.
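The following is a minimal Python sketch of the flow recited in claim 1, assuming each face feature is a fixed-length numeric vector; the clustering algorithm, its parameters, and all function and variable names are illustrative assumptions, not the claimed implementation.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def split_face_feature_sets(ride_face_features):
        """Classify the face features from the riding information into per-person
        sets, then separate the target object's set from the associated objects'
        sets. ride_face_features is a list of 1-D feature vectors."""
        X = np.stack(ride_face_features)
        # Illustrative clustering choice; the claim only requires that features
        # belonging to the same person end up in the same set.
        labels = DBSCAN(eps=0.4, min_samples=1, metric="cosine").fit_predict(X)
        feature_sets = {}
        for vector, label in zip(ride_face_features, labels):
            feature_sets.setdefault(label, []).append(vector)
        # Per claim 2, the set with the most elements is treated as the target
        # object's set; the remaining sets belong to the associated objects.
        target_label = max(feature_sets, key=lambda k: len(feature_sets[k]))
        target_set = feature_sets[target_label]
        associated_sets = [s for k, s in feature_sets.items() if k != target_label]
        return target_set, associated_sets

Once each associated set has been resolved to a known identity (claims 3 and 4), the association relationship network can be assembled as a simple edge list of (target identity, associated identity) pairs.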
2. The method of claim 1, wherein determining the face feature set of the target object and the face feature set of the associated object having an association relation with the target object based on each of the face feature sets comprises:
taking each face feature set as a face feature set of an object;
taking the face feature set with the largest number of elements as the face feature set of the target object;
and respectively taking the face feature sets except the face feature set of the target object as the face feature sets of the associated objects.
3. The method of claim 1, wherein matching the set of facial features of the associated object with the set of facial features of the known object comprises:
respectively determining the average features of the face feature sets of the associated object and the known objects;
determining a similarity between the average feature of the associated object and an average feature of each of the known objects;
and selecting the known object corresponding to the maximum similarity as the known object matched with the target object.
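A sketch of the matching step in claim 3, under the assumption that the similarity measure is cosine similarity between average features (the claim does not fix a particular measure); all names are illustrative.

    import numpy as np

    def match_known_object(associated_set, known_feature_sets):
        """associated_set: list of feature vectors of one associated object.
        known_feature_sets: dict mapping a known identity to its feature vectors.
        Returns the identity whose average feature is most similar."""
        assoc_mean = np.mean(np.stack(associated_set), axis=0)
        best_identity, best_similarity = None, -1.0
        for identity, vectors in known_feature_sets.items():
            known_mean = np.mean(np.stack(vectors), axis=0)
            similarity = float(np.dot(assoc_mean, known_mean)
                               / (np.linalg.norm(assoc_mean) * np.linalg.norm(known_mean)))
            if similarity > best_similarity:
                best_identity, best_similarity = identity, similarity
        return best_identity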
4. The method of claim 1, wherein prior to matching the set of facial features of the associated object with the set of facial features of the known object, the method further comprises:
and screening out, from a plurality of candidate objects, candidate objects having the same riding place as the associated object, as the known objects.
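The pre-filter in claim 4 can be sketched as a simple intersection on riding places; the candidate record layout below is an assumption made purely for illustration.

    def screen_known_objects(candidates, associated_riding_places):
        """candidates: iterable of (identity, riding_places) pairs.
        Keeps only candidates sharing at least one riding place with the
        associated object; these become the known objects to match against."""
        associated_places = set(associated_riding_places)
        return [identity for identity, places in candidates
                if set(places) & associated_places]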
5. The method of any one of claims 1-4, wherein after determining the set of facial features of the target object and the set of facial features of the associated object having an association relationship with the target object based on each of the set of facial features, the method further comprises:
determining the total quantity of co-riding orders taken by the target object together with at least one associated object among the effective orders of the target object, and determining a reference order quantity of rides taken by the associated object together with the target object among the effective orders of the target object;
respectively executing, for each of the associated objects:
determining the association degree of the target object to the associated object based on the reference order quantity, the total quantity of co-riding orders, and the effective order quantity of the target object;
wherein the association degree is directly proportional to the reference order quantity, inversely proportional to the total quantity of co-riding orders, and directly proportional to the ratio of the effective order quantity to the reference order quantity.
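Claim 5 states proportionalities rather than an explicit formula. The sketch below combines the three stated factors as weighted terms purely for illustration; the multiplicative form, the exponents, and the function name are assumptions, not the disclosed formula.

    def association_degree(reference_orders, total_coriding_orders, effective_orders,
                           alpha=1.0, beta=1.0, gamma=1.0):
        """Directly proportional to the reference order quantity, inversely
        proportional to the total co-riding order quantity, and directly
        proportional to the ratio of effective orders to reference orders,
        with illustrative exponents alpha, beta and gamma."""
        if reference_orders <= 0 or total_coriding_orders <= 0:
            return 0.0
        return (reference_orders ** alpha
                * (1.0 / total_coriding_orders) ** beta
                * (effective_orders / reference_orders) ** gamma)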
6. The method according to any one of claims 1-4, further comprising:
determining the riding information of the target object based on the following method:
in response to a riding request of the target object, acquiring the face features of each riding object related to the riding request;
and adding the face features of each riding object to the riding information.
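Maintaining the riding information as in claim 6 amounts to appending the face features captured for each ride to the target object's stored record; the storage layout and the feature-extraction callable below are assumptions for illustration.

    def update_riding_info(riding_info_store, target_id, passenger_images,
                           extract_face_features):
        """riding_info_store: dict mapping a target identity to its list of face
        feature vectors. passenger_images: images of the riding objects captured
        for this riding request. extract_face_features: callable returning the
        feature vectors of all faces detected in one image."""
        features = []
        for image in passenger_images:
            features.extend(extract_face_features(image))
        riding_info_store.setdefault(target_id, []).extend(features)
        return riding_info_store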
7. A data analysis apparatus, characterized in that the apparatus comprises:
the riding information acquisition module is used for acquiring pre-stored riding information of a target object, wherein the riding information comprises at least one face feature of at least one passenger;
the classification processing module is used for classifying a plurality of face features to obtain at least one face feature set if the riding information comprises the plurality of face features;
the identification module is used for determining a face feature set of the target object and a face feature set of an associated object having an association relation with the target object based on each face feature set;
the matching module is used for matching the facial feature set of the associated object with the facial feature set of the known object aiming at each associated object;
the identity determining module is used for associating the identity information of the matched known object with the identity information of the associated object;
and the relationship construction module is used for constructing an association relationship network between the target object and each associated object based on the identity information of the target object and the identity information of each associated object.
8. The apparatus of claim 7, wherein the identification module is configured to:
taking each face feature set as a face feature set of an object;
taking the face feature set with the largest number of elements as the face feature set of the target object;
and respectively taking the face feature sets except the face feature set of the target object as the face feature sets of the associated objects.
9. The apparatus of claim 7, wherein the matching module is configured to:
respectively determining the average features of the face feature sets of the associated object and the known objects;
determining a similarity between the average feature of the associated object and an average feature of each of the known objects;
and selecting the known object corresponding to the maximum similarity as the known object matched with the target object.
10. The apparatus of claim 7, further comprising:
and the known object screening module is used for screening out a candidate object which has the same riding place as the associated object from a plurality of candidate objects as the known object, before the matching module performs the matching operation on the facial feature set of the associated object and the facial feature set of the known object.
11. The apparatus according to any one of claims 7-10, wherein the apparatus further comprises:
the association degree determining module is used for, after the identification module determines the face feature set of the target object and the face feature set of the associated object having an association relation with the target object based on each face feature set, determining the total quantity of co-riding orders taken by the target object together with at least one associated object among the effective orders of the target object, and determining a reference order quantity of rides taken by the associated object together with the target object among the effective orders of the target object;
respectively executing, for each of the associated objects:
determining the association degree of the target object to the associated object based on the reference order quantity, the total quantity of co-riding orders, and the effective order quantity of the target object;
wherein the association degree is directly proportional to the reference order quantity, inversely proportional to the total quantity of co-riding orders, and directly proportional to the ratio of the effective order quantity to the reference order quantity.
12. The apparatus according to any one of claims 7-10, wherein the apparatus further comprises:
a ride information maintenance module for determining the ride information of the target object based on the following method:
in response to a riding request of the target object, acquiring the face features of each riding object related to the riding request;
and adding the face features of each riding object to the riding information.
13. An electronic device, comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any one of claims 1-6 by executing the instructions stored by the memory.
14. A storage medium, characterized in that the storage medium stores a computer program which, when run on a computer, causes the computer to perform the method according to any one of claims 1-6.
CN202011499709.2A 2020-12-18 2020-12-18 Data analysis method and device, electronic equipment and storage medium Active CN112560694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011499709.2A CN112560694B (en) 2020-12-18 2020-12-18 Data analysis method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011499709.2A CN112560694B (en) 2020-12-18 2020-12-18 Data analysis method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112560694A CN112560694A (en) 2021-03-26
CN112560694B true CN112560694B (en) 2022-05-17

Family

ID=75063264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011499709.2A Active CN112560694B (en) 2020-12-18 2020-12-18 Data analysis method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112560694B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169815A (en) * 2016-03-08 2017-09-15 滴滴(中国)科技有限公司 Method and apparatus for ride-sharing between acquaintances
CN110942619B (en) * 2018-09-21 2024-03-08 杭州海康威视系统技术有限公司 Vehicle determination method, device, system and electronic equipment
CN109493073B (en) * 2018-10-25 2021-07-16 创新先进技术有限公司 Identity recognition method and device based on human face and electronic equipment
CN111368621B (en) * 2019-09-18 2023-09-01 杭州海康威视系统技术有限公司 Method, device, equipment and storage medium for judging personnel relationship

Also Published As

Publication number Publication date
CN112560694A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
US10936915B2 (en) Machine learning artificial intelligence system for identifying vehicles
WO2020199484A1 (en) Video-based course-of-motion tracking method, apparatus, computer device, and storage medium
JP2020504358A (en) Image-based vehicle damage evaluation method, apparatus, and system, and electronic device
US11681744B2 (en) Methods and systems for updating a database based on object recognition
CN110502592A (en) Project domain topic analysis system based on big data analysis technology
US9984309B2 (en) Classifying and grouping electronic images
CN112650875A (en) House image verification method and device, computer equipment and storage medium
WO2023178930A1 (en) Image recognition method and apparatus, training method and apparatus, system, and storage medium
CN113595886A (en) Instant messaging message processing method and device, electronic equipment and storage medium
CN113568934A (en) Data query method and device, electronic equipment and storage medium
CN112560694B (en) Data analysis method and device, electronic equipment and storage medium
CN117252362A (en) Scheduling method and device based on artificial intelligence, computer equipment and storage medium
US10664457B2 (en) System for real-time data structuring and storage
CN108062576B (en) Method and apparatus for output data
CN113487357B (en) Customer file management method and system based on face recognition
CN108776840A (en) Information flow method for pushing, device, electronic equipment and computer readable storage medium
CN110245964A (en) Information-pushing method and device and storage medium
US11995733B2 (en) Method and system for linking unsolicited electronic tips to public-safety data
CN114553684B (en) Method, device, computer equipment and storage medium for network point operation and maintenance
CN115496246B (en) Intelligent searching and flexible distributing method for shared meeting room based on group difference
CN114579873A (en) Big data user portrait processing method and server in digital scene
CN117596018A (en) User identity recognition method and processing equipment
CN117392596A (en) Data processing method, device, electronic equipment and computer readable medium
CN113064935A (en) Data analysis method, apparatus and medium
CN117649305A (en) Personalized claim micro-service management method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant