CN112633244B - Social relationship identification method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112633244B
CN112633244B (application CN202011642514.9A)
Authority
CN
China
Prior art keywords
human body
target person
frame information
body frame
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011642514.9A
Other languages
Chinese (zh)
Other versions
CN112633244A (en)
Inventor
邢玲
余意
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202011642514.9A priority Critical patent/CN112633244B/en
Publication of CN112633244A publication Critical patent/CN112633244A/en
Application granted granted Critical
Publication of CN112633244B publication Critical patent/CN112633244B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9536Search customisation based on social or collaborative filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for identifying social relationships, which comprises the following steps: acquiring a first image and a second image; performing human body detection on the first image to obtain first human body frame information corresponding to a first target person and second human body frame information corresponding to a second target person, and performing human body detection on the second image to obtain third human body frame information corresponding to a third target person and fourth human body frame information corresponding to a fourth target person; matching the first target person with the third target person or the fourth target person to obtain a first matching result if the matching succeeds, and matching the second target person with the third target person or the fourth target person to obtain a second matching result if the matching succeeds; obtaining a first social relationship predicted value corresponding to the first matching result and a second social relationship predicted value corresponding to the second matching result; and determining the social relationship between the first target person and the second target person. The method can improve the accuracy of social relationship identification.

Description

Social relationship identification method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of image relationship identification, in particular to a social relationship identification method and device, electronic equipment and a storage medium.
Background
With the development of artificial intelligence, intelligent monitoring, smart communities and smart cities, smart cameras have been deployed ever more widely, and the era of image-based search ("searching images by images") has moved into an era of fine-grained data archiving, ultimately producing one file per person, one file per vehicle, person-vehicle associations and person-to-person associations. "One file per person" means that all snapshot records of one person are collected into a single personal archive; each person has an archive containing the places visited and the image data captured in each snapshot.
Social relationships are the close connections between people and constitute the basic structure of society. They can be roughly classified as close relationships, general relationships, no relationship, and so on, and have many important applications, such as personnel management in smart communities and analysis of potential criminal partnerships. Existing social relationship identification methods are mainly based on a single image. Even when two images are judged from spatio-temporal activity features according to given rules (for example, under the same camera, two people captured less than N seconds apart, such as N = 3, are considered companions), such rules are simple and coarse. As a result, the screened companion relationships include pairs that do not appear in the same snapshot image, i.e. the two companions appear in two consecutive snapshot images, or one face in a snapshot image cannot be fully recognized. A single-image social relationship identification method cannot judge these cases, and directly discarding the data would throw away a large amount of archive material, so the accuracy of social relationship identification is low.
Disclosure of Invention
The embodiment of the invention provides a method for identifying social relationships, which can improve the accuracy of identifying the social relationships.
In a first aspect, an embodiment of the present invention provides a method for identifying a social relationship, including:
acquiring a first image and a second image, wherein the capturing time interval between the first image and the second image does not exceed the preset time, the first image comprises a first target person and a second target person, and the second image comprises a third target person and a fourth target person;
performing human body detection on the first image to obtain first human body frame information corresponding to a first target person and second human body frame information corresponding to a second target person; and
performing human body detection on the second image to obtain third human body frame information corresponding to a third target person and fourth human body frame information corresponding to a fourth target person;
matching the first target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information and the fourth body frame information, and obtaining a first matching result if the matching is successful; matching the second target person with the third target person or the fourth target person, and obtaining a second matching result if the matching is successful, wherein the first matching result comprises the first body frame information and the second body frame information, and the second matching result comprises the third body frame information and the fourth body frame information;
inputting the first matching result and the second matching result into a pre-trained social relationship recognition network for predicting the social relationship to obtain a first social relationship predicted value corresponding to the first matching result and a second social relationship predicted value corresponding to the second matching result;
determining a social relationship between the first target person and the second target person based on the first predicted social relationship value and the second predicted social relationship value.
Optionally, the human body detection is performed on the first image to obtain first human body frame information corresponding to a first target person and second human body frame information corresponding to a second target person, including:
and respectively inputting the first image and the second image into a pre-trained target detection network, and respectively outputting the first human body frame information, the second human body frame information, the third human body frame information and the fourth human body frame information through the target detection network.
Optionally, matching the first target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information, and the fourth body frame information, and obtaining a first matching result if matching is successful, including:
acquiring the human body characteristics of the first target person from the first image according to the first human body frame information;
acquiring the human body characteristics of the third target person and the human body characteristics of the fourth target person from the second image according to the third human body frame information and the fourth human body frame information;
and carrying out feature matching on the human body features of the first target person and the human body features of the third target person or the human body features of the fourth target person, and obtaining a first matching result if the matching is successful.
Optionally, matching the second target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information, and the fourth body frame information, and obtaining a second matching result if matching is successful, including:
acquiring the human body characteristics of the second target person from the first image according to the second human body frame information;
acquiring the human body features of the third target person and the human body features of the fourth target person from the second image according to the third human body frame information and the fourth human body frame information;
and carrying out feature matching on the human body features of the second target person and the human body features of the third target person or the human body features of the fourth target person, and obtaining a second matching result if matching is successful.
Optionally, the inputting the first matching result into a pre-trained social relationship recognition network for predicting a social relationship to obtain a first predicted social relationship value corresponding to the first matching result includes:
calculating to obtain first joint frame information based on the first human body frame information and the second human body frame information in the first matching result;
and inputting the first image, the first human body frame information, the second human body frame information and the first joint frame information into the pre-trained social relationship recognition network for predicting the social relationship to obtain the first social relationship predicted value.
Optionally, the inputting the second matching result into a pre-trained social relationship recognition network for predicting a social relationship to obtain a second social relationship predicted value corresponding to the second matching result includes:
calculating to obtain second joint frame information based on the third human body frame information and the fourth human body frame information in the second matching result;
and inputting the second image, the third body frame information, the fourth body frame information and the second joint frame information into the pre-trained social relationship recognition network for predicting the social relationship to obtain the second social relationship predicted value.
Optionally, the determining the social relationship between the first target person and the second target person based on the first predicted social relationship value and the second predicted social relationship value includes:
respectively comparing the first social relationship predicted value and the second social relationship predicted value with a preset intimacy degree grade to obtain a first comparison result and a second comparison result;
and determining the social relationship between the first target person and the second target person according to the first comparison result and the second comparison result.
In a second aspect, an embodiment of the present invention provides an apparatus for identifying a social relationship, including:
the acquisition module is used for acquiring a first image and a second image, wherein the capturing time interval between the first image and the second image does not exceed the preset time, the first image comprises a first target person and a second target person, and the second image comprises a third target person and a fourth target person;
the first detection module is used for carrying out human body detection on the first image to obtain first human body frame information corresponding to a first target person and second human body frame information corresponding to a second target person; and
the second detection module is used for carrying out human body detection on the second image to obtain third human body frame information corresponding to a third target person and fourth human body frame information corresponding to a fourth target person;
a matching module, configured to match the first target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information and the fourth body frame information, obtaining a first matching result if the matching is successful, and to match the second target person with the third target person or the fourth target person, obtaining a second matching result if the matching is successful, wherein the first matching result includes the first body frame information and the second body frame information, and the second matching result includes the third body frame information and the fourth body frame information;
the prediction module is used for inputting the first matching result and the second matching result into a pre-trained social relationship recognition network for predicting the social relationship to obtain a first social relationship predicted value corresponding to the first matching result and a second social relationship predicted value corresponding to the second matching result;
a determining module for determining a social relationship between the first target person and the second target person based on the first predicted social relationship value and the second predicted social relationship value.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the social relationship identification method provided by the embodiment of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the method for identifying a social relationship provided by the embodiment of the present invention.
In the embodiment of the invention, two frames of images containing the target persons are obtained and human body detection is performed on each. The human body features of the target persons in the first image are then matched against those of the target persons in the second image. After the matching succeeds, the target persons in the two frames are input into a pre-trained social relationship recognition network for social relationship prediction, yielding two social relationship predicted values, from which the social relationship between the target persons in the first image and in the second image can be determined. Because the identification is performed on two frames of images, no image needs to be discarded, and the accuracy of social relationship identification can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for identifying social relationships according to an embodiment of the present invention;
fig. 2 is a flowchart of a first matching method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a second matching method according to an embodiment of the present invention;
fig. 4 is a flowchart of a first social relationship prediction method according to an embodiment of the present invention;
FIG. 4a is a schematic diagram illustrating training of a social relationship recognition network according to an embodiment of the present invention;
FIG. 5 is a flowchart of a second social relationship predicting method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus for identifying social relationships according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a first matching device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a second matching device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a first social relationship predicting device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a second social relationship predicting apparatus according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a method for identifying social relationships according to an embodiment of the present invention, as shown in fig. 1, including the following steps:
101. Acquire a first image and a second image, wherein the capture time interval between the first image and the second image does not exceed a preset time.
The first image includes a first target person and a second target person, and the second image includes a third target person and a fourth target person.
In the embodiment of the invention, the social relationship identification method can be applied to scenarios of identifying relationships between people under video monitoring. The electronic device on which the method runs can acquire images and transmit data through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, 3G/4G, WiFi (Wireless Fidelity), Bluetooth, WiMAX (Worldwide Interoperability for Microwave Access), ZigBee (a low-power local area network protocol), UWB (Ultra-Wideband), and other wireless connection manners known now or developed in the future.
The first image and the second image may be collected in real time by an image acquisition device, or actively uploaded from a terminal (for example manually), and then transmitted to a server over the wired or wireless network for storage and archiving. For example, a first image or second image may contain N snapshot faces; after face extraction, the N corresponding face thumbnails are obtained, and the snapshot faces are archived to build a file for each person, where the relations between files represent the relations between people. The image acquisition device may include a camera, or an electronic device equipped with a camera that can acquire and transmit images. The first image and the second image are the images whose relationship is to be analysed; they may be obtained from the image acquisition device in real time or from the storage server, and may be captured by the same or by different image acquisition devices, but the capture time interval between them must not exceed a preset time, such as 3 s. This interval can be computed from the timestamps of the two frames. Each frame contains at least two persons: the first image includes a first target person and a second target person, and the second image includes a third target person and a fourth target person. The two target persons in each frame are the persons whose social relationship is to be identified; social relationships can be roughly classified as close, general, and stranger relationships.
It should be noted that the terminal may include, but is not limited to, a Mobile phone, a Tablet Personal Computer (Tablet Personal Computer), a Laptop Computer (Laptop Computer), a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), a Computer, or a notebook Computer.
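The capture-interval check described above can be sketched in a few lines. This is an illustrative Python sketch, not part of the patent; the function name, the ISO timestamp format, and the 3-second preset are assumptions for demonstration.

```python
from datetime import datetime

PRESET_INTERVAL_S = 3.0  # preset time, e.g. 3 seconds as in the example above


def within_capture_interval(ts_a: str, ts_b: str,
                            max_gap: float = PRESET_INTERVAL_S) -> bool:
    """Return True if two frame timestamps are no more than max_gap seconds apart."""
    t_a = datetime.fromisoformat(ts_a)
    t_b = datetime.fromisoformat(ts_b)
    return abs((t_a - t_b).total_seconds()) <= max_gap
```

Only frame pairs passing this check would proceed to human body detection and matching.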
102. And carrying out human body detection on the first image to obtain first human body frame information corresponding to the first target person and second human body frame information corresponding to the second target person.
In an embodiment of the present invention, the first body frame information includes coordinate information of the human body of the first target person in the first image, the second body frame information includes coordinate information of the human body of the second target person in the first image, and the coordinate information may be obtained from the first image by a human body detection model or the like.
Optionally, the step 102 specifically includes:
and inputting the first image into a pre-trained target detection network, and outputting first human body frame information and second human body frame information through the target detection network.
In an embodiment of the present invention, the pre-trained target detection network may be a trained neural network model capable of target identification and localisation, such as Faster R-CNN or YOLO. The first target person and the second target person are detected in the first image by the target detection network, yielding the corresponding human body rectangles, whose coordinate information is output as the first human body frame information and the second human body frame information. Further, the minimum rectangle containing both the first target person and the second target person may be obtained, and its coordinate information taken as the first human body joint frame information.
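The joint frame (the minimum rectangle enclosing both detected bodies) is straightforward to compute from the two body boxes. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) corner coordinates; names are illustrative, not from the patent:

```python
def joint_frame(box_a: tuple, box_b: tuple) -> tuple:
    """Smallest rectangle (x1, y1, x2, y2) enclosing both human body boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return (min(ax1, bx1), min(ay1, by1), max(ax2, bx2), max(ay2, by2))
```

For example, boxes (10, 10, 50, 100) and (40, 20, 90, 120) yield the joint frame (10, 10, 90, 120).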
103. And carrying out human body detection on the second image to obtain third human body frame information corresponding to a third target person and fourth human body frame information corresponding to a fourth target person.
In an embodiment of the present invention, the third body frame information includes coordinate information of the body of the third target person in the second image, the fourth body frame information includes coordinate information of the body of the fourth target person in the second image, and the coordinate information may be obtained from the second image by a method such as a human body detection model.
Optionally, step 103 specifically includes:
and inputting the second image into a pre-trained target detection network, and outputting the third human body frame information and the fourth human body frame information through the target detection network.
In an embodiment of the present invention, the pre-trained target detection network may be a trained neural network model capable of target identification and localisation, such as Faster R-CNN or YOLO. The third target person and the fourth target person are detected in the second image by the target detection network, yielding the corresponding human body rectangles, whose coordinate information is output as the third human body frame information and the fourth human body frame information. Further, the minimum rectangle containing both the third target person and the fourth target person may be obtained, and its coordinate information taken as the second human body joint frame information.
104. And matching the first target person with a third target person or a fourth target person based on the first human body frame information, the second human body frame information, the third human body frame information and the fourth human body frame information, obtaining a first matching result if the matching is successful, matching the second target person with the third target person or the fourth target person if the matching is successful, and obtaining a second matching result if the matching is successful.
The first matching result includes the first body frame information and the second body frame information, and the second matching result includes the third body frame information and the fourth body frame information.
In the embodiment of the present invention, the first body frame information, the second body frame information, the third body frame information and the fourth body frame information can be obtained through steps 102 and 103. The image information of the first target person and of the second target person is obtained from the first image according to the first and second body frame information, and the image information of the third and fourth target persons is obtained from the second image according to the third and fourth body frame information. The image information of the first target person is matched against the image information of the third and fourth target persons by feature matching, and likewise the image information of the second target person is matched against that of the third and fourth target persons by feature comparison. If both matches succeed, the first image and the second image contain the same pair of target persons, i.e. the two target persons in the second image are also the two target persons in the first image, and the pair forms a companion relationship at two different moments. Further, by analysing the distance between, or the actions of, the two companion target persons, their social relationship can be identified as one of the predefined relationship categories, completing the social relationship identification task; the predefined categories may include close relationship, general relationship and unknown relationship.
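The cross-image matching condition above — both persons of the first image must match distinct persons of the second image before the pair is confirmed as co-occurring — can be sketched as follows. This is a simplified Python illustration under the assumption of a precomputed 2x2 similarity matrix; the function name and threshold value are not from the patent:

```python
def confirm_co_occurrence(sim, threshold: float = 0.9):
    """sim[i][j]: feature similarity of person i in image 1 vs person j in image 2.

    Returns (ok, (j1, j2)) where ok is True only if both persons of image 1
    match a distinct person of image 2 above the threshold.
    """
    j1 = max(range(2), key=lambda j: sim[0][j])  # best match for person 1
    j2 = max(range(2), key=lambda j: sim[1][j])  # best match for person 2
    ok = (sim[0][j1] > threshold and sim[1][j2] > threshold and j1 != j2)
    return ok, (j1, j2)
```

Only when `ok` is True are the two matching results passed on to the social relationship recognition network.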
Optionally, referring to fig. 2, fig. 2 is a flowchart of a first matching method provided in an embodiment of the present invention, and as shown in fig. 2, matching the first target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information, and the fourth body frame information, and obtaining a first matching result if matching is successful includes:
201. acquiring human body characteristics of a first target person from the first image according to the first human body frame information;
202. acquiring the human body characteristics of a third target person and the human body characteristics of a fourth target person from the second image according to the third human body frame information and the fourth human body frame information;
203. and carrying out characteristic matching on the human body characteristics of the first target person and the human body characteristics of the third target person or the human body characteristics of the fourth target person, and obtaining a first matching result if the matching is successful.
In an embodiment of the present invention, the image information of the first target person is obtained from the first image according to the first human body frame information, and the image information of the third and fourth target persons is obtained from the second image according to the third and fourth human body frame information. The human body features of the corresponding target persons, that is, the human body features of the first, third, and fourth target persons, are then extracted from these image regions through a convolutional neural network (CNN), and similarity calculation is performed on the features to realize feature matching. Specifically, cosine similarity is calculated, yielding the similarity S1 between the human body features of the first and third target persons, and the similarity S2 between the human body features of the first and fourth target persons. S1 and S2 lie between 0 and 1, with larger values indicating greater similarity. S1 and S2 are then compared with a preset similarity threshold (such as 0.9): if the similarity exceeds the threshold, the matching succeeds and the first matching result is obtained; otherwise, the matching fails and mismatch information is returned.
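The cosine-similarity matching step above can be sketched as follows. This is a minimal illustration assuming CNN feature vectors (e.g., 2048-dimensional embeddings) and the 0.9 threshold mentioned in the description; the function names are hypothetical, not from the patent:

```python
import numpy as np

SIM_THRESHOLD = 0.9  # preset similarity threshold from the description


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_person(query_feat, candidate_feats, threshold=SIM_THRESHOLD):
    """Return the index of the best-matching candidate, or None on failure.

    query_feat: human body feature of the person in the first image
    candidate_feats: human body features of the persons detected in the second image
    """
    sims = [cosine_similarity(query_feat, f) for f in candidate_feats]
    best = int(np.argmax(sims))
    if sims[best] > threshold:
        return best   # matching succeeded: the matching result can be formed
    return None       # matching failed: mismatch information is returned
```

A successful call returns the index of the matched person, from which the first (or second) matching result is assembled.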
Optionally, referring to fig. 3, fig. 3 is a flowchart of a second matching method provided in an embodiment of the present invention, and as shown in fig. 3, matching the second target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information, and the fourth body frame information, and obtaining a second matching result if matching is successful includes:
301. acquiring the human body characteristics of a second target person from the first image according to the second human body frame information;
302. acquiring the human body characteristics of a third target person and the human body characteristics of a fourth target person from the second image according to the third human body frame information and the fourth human body frame information;
303. and carrying out characteristic matching on the human body characteristics of the second target person and the human body characteristics of the third target person or the human body characteristics of the fourth target person, and obtaining a second matching result if the matching is successful.
In an embodiment of the present invention, the image information of the second target person is obtained from the first image according to the second human body frame information, and the image information of the third and fourth target persons is obtained from the second image according to the third and fourth human body frame information. The human body features of the corresponding target persons, that is, the human body features of the second, third, and fourth target persons, are then extracted from these image regions through a convolutional neural network (CNN), and similarity calculation is performed on the features to realize feature matching. Specifically, cosine similarity is calculated, yielding the similarity S3 between the human body features of the second and third target persons, and the similarity S4 between the human body features of the second and fourth target persons. S3 and S4 lie between 0 and 1, with larger values indicating greater similarity. S3 and S4 are then compared with the preset similarity threshold (such as 0.9): if the similarity exceeds the threshold, the matching succeeds and the second matching result is obtained; otherwise, the matching fails and mismatch information is returned.
105. And inputting the first matching result and the second matching result into a pre-trained social relationship recognition network for predicting the social relationship to obtain a first social relationship predicted value corresponding to the first matching result and a second social relationship predicted value corresponding to the second matching result.
In an embodiment of the present invention, the first matching result includes first human body frame information, second human body frame information, and first human body union frame information, the second matching result includes third human body frame information, fourth human body frame information, and second human body union frame information, corresponding images are respectively obtained from the corresponding first image and second image according to the information, a pre-trained social relationship recognition network is input to perform social relationship prediction, and a first social relationship predicted value of two target people in the first image and a second social relationship predicted value of two target people in the second image are correspondingly obtained.
Optionally, referring to fig. 4, fig. 4 is a flowchart of a first social relationship prediction method provided in an embodiment of the present invention, where as shown in fig. 4, the first matching result further includes first human body union box information formed by a first target person and a second target person, and the inputting the first matching result into a pre-trained social relationship recognition network for performing social relationship prediction to obtain a first social relationship prediction value corresponding to the first matching result includes:
401. and acquiring a corresponding first human body image, a corresponding second human body image and a corresponding first human body joint image from the first image based on the first human body frame information, the second human body frame information and the first human body joint frame information.
In an embodiment of the present invention, the first matching result includes the first human body frame information, the second human body frame information, and the first human body union frame information; that is, the human body frame of the first target person, the human body frame of the second target person, and the minimum enclosing frame formed by the two target persons in the first image, each expressed as the coordinates of two diagonal corners. The corresponding first human body image, second human body image, and first human body union image may be obtained from the first image according to these diagonal corner coordinates.
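Assuming each frame is stored as diagonal corner coordinates (x1, y1, x2, y2), cropping the human body images and building the minimum enclosing (union) frame can be sketched as follows; the helper names are illustrative, not from the patent:

```python
import numpy as np


def crop_by_box(image: np.ndarray, box):
    """Crop a region from an H x W x C image given the diagonal corner
    coordinates (x1, y1, x2, y2) of a body frame or union frame."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]


def union_box(box_a, box_b):
    """Smallest frame enclosing two body frames (the 'human body union frame')."""
    return (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
            max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))
```

The three crops (two body images and the union image) then serve as the inputs of the social relationship recognition network.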
402. And inputting the first human body image, the second human body image and the first human body combination image into a pre-trained social relationship recognition network for predicting the social relationship to obtain a first social relationship predicted value.
In an embodiment of the present invention, the social relationship recognition network includes a first extraction network, a second extraction network, a third extraction network, and a full connection layer, and may extract a first feature of the first human body image through the first extraction network, extract a second feature of the second human body image through the second extraction network, and extract a third feature of the first human body joint image through the third extraction network.
The social relationship recognition network is pre-trained on a training set, which may be the PISC (People in Social Context) dataset, one of the large-scale social relationship datasets, consisting mainly of images of common social relationships in daily life. The PISC dataset covers 3 coarse-grained relationships: the intimate relationship, the general relationship, and the unknown (no) relationship, which can be labeled with the numbers 2, 1, and 0, respectively.
Referring to fig. 4a, fig. 4a is a schematic diagram illustrating training of the social relationship recognition network according to an embodiment of the present invention. The first extraction network may be a ResNet-50 neural network, and similarly the second extraction network may also be a ResNet-50, where the first and second extraction networks share network parameters (such as weights), and the feature dimension of the features they extract may be 2048. The third extraction network may also be a ResNet-50, through which the third feature of the first human body union image may be extracted.
Further, the feature dimension of the extracted third feature may also be 2048. The first, second, and third features may be concatenated to obtain a spliced feature of dimension 2048 × 3 (i.e., 6144), and a fully connected classification calculation may then be performed on the spliced feature through the fully connected layer to obtain the first social relationship predicted value, which is one of the numbers 2, 1, and 0 representing the social relationship, thereby yielding the social relationship between the first target person and the second target person in the first image.
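A minimal NumPy sketch of the concatenation and fully connected classification described above. The weights here are random placeholders for illustration only (in the patent they are learned on the PISC training set), and the function name is hypothetical:

```python
import numpy as np

FEAT_DIM = 2048     # feature dimension of each extraction branch
NUM_CLASSES = 3     # intimate (2), general (1), unknown (0)

rng = np.random.default_rng(0)
# Placeholder fully connected layer parameters (assumed, not from the patent).
W = rng.standard_normal((NUM_CLASSES, FEAT_DIM * 3))
b = np.zeros(NUM_CLASSES)


def predict_relation(feat1, feat2, feat_union):
    """Concatenate the three branch features (2048 * 3 = 6144 dims) and
    apply the fully connected classification layer."""
    joint = np.concatenate([feat1, feat2, feat_union])  # shape (6144,)
    logits = W @ joint + b
    return int(np.argmax(logits))  # one of 0, 1, 2
```

The returned class index is the social relationship predicted value for the image pair.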
Optionally, referring to fig. 5, fig. 5 is a flowchart of a second social relationship predicting method provided in the embodiment of the present invention, and as shown in fig. 5, the second matching result further includes second human body combination box information formed by a third target person and a fourth target person, and the inputting the second matching result into a pre-trained social relationship recognition network to perform social relationship prediction to obtain a second social relationship predicted value corresponding to the second matching result includes:
501. and acquiring a corresponding third human body image, a corresponding fourth human body image and a corresponding second human body joint image from the second image based on the third human body frame information, the fourth human body frame information and the second human body joint frame information.
In an embodiment of the present invention, the second matching result includes third human body frame information, fourth human body frame information, and second human body joint frame information, that is, coordinates of a minimum frame formed by the human body frame of the third target person, the human body frame of the fourth target person, and the two target persons in the second image all include four diagonal coordinates, and the corresponding third human body image, fourth human body image, and second human body joint image may be obtained from the second image according to the respective four diagonal coordinates.
502. And inputting the third human body image, the fourth human body image and the second human body joint image into a pre-trained social relationship recognition network for predicting the social relationship to obtain a second social relationship predicted value.
In this embodiment of the present invention, a social relationship recognition network similar to that in step 402 may be used to extract features from the third human body image, the fourth human body image, and the second human body union image. The corresponding features are concatenated to obtain a spliced feature of dimension 2048 × 3, on which a fully connected classification calculation is performed through the fully connected layer to obtain the second social relationship predicted value, which is one of the numbers 2, 1, and 0 representing the social relationship. This yields the social relationship between the third and fourth target persons in the second image, which is also a social relationship predicted value of the first and second target persons at a different moment.
106. And determining the social relationship between the first target person and the second target person based on the first social relationship predicted value and the second social relationship predicted value.
Specifically, the step 106 includes:
and comparing the first social relationship predicted value with the second social relationship predicted value, and taking the larger social relationship predicted value as the social relationship between the first target person and the second target person.
In the embodiment of the present invention, the magnitudes of the first and second social relationship predicted values output by the network may be directly compared, and the larger predicted value determines the social relationship between the first target person and the second target person. For example, if the first social relationship predicted value is 0 and the second is 1, the social relationship corresponding to the second predicted value, that is, the general relationship, is taken as the social relationship between the first and second target persons in the first image, which is also the social relationship between the third and fourth target persons in the second image.
It is worth mentioning that the social relationship identification method of the embodiment of the invention is applicable not only to two images, but also to identifying the social relationships of target persons across multiple images or video data captured over a period of time. For example, suppose the archive data of two persons A and B in a residential community contains 50 co-travel records captured in one month. These 50 records can be identified by the social relationship identification method provided by the embodiment of the invention: for the 25 records in which the two persons are far apart and have no interaction, their social relationship is predicted as unknown; for the 23 records in which they walk side by side, it is predicted as a general relationship; and for the 2 records in which they hold hands, it is predicted as an intimate relationship. Based on these predictions, the social relationship of the two persons is finally determined to be an intimate relationship.
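The aggregation over many co-travel records described above, keeping the largest (closest) predicted relationship, can be sketched as follows (function name illustrative):

```python
UNKNOWN, GENERAL, INTIMATE = 0, 1, 2  # label encoding from the description


def aggregate_relation(predictions):
    """Final social relationship over many co-travel records: keep the
    largest predicted value, i.e., the closest relationship ever observed."""
    return max(predictions)


# Example from the description: 25 unknown, 23 general, 2 intimate records.
records = [UNKNOWN] * 25 + [GENERAL] * 23 + [INTIMATE] * 2
final = aggregate_relation(records)  # INTIMATE
```

This mirrors step 106, which takes the larger of two predicted values, generalized to any number of records.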
In summary, in the embodiments of the present invention, two frames of images containing the target persons are obtained and human body detection is performed on each. The human body features of the target persons in the first image are matched against those of the target persons in the second image; after the matching succeeds, the matched target persons in the two frames are input into a pre-trained social relationship recognition network for social relationship prediction, yielding two social relationship predicted values. Based on these two predicted values, the social relationship between the target persons in the first image and the second image can be determined. Because recognition is based on two frames of images rather than discarding either one, the accuracy of social relationship identification can be improved.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an apparatus for identifying social relationships according to an embodiment of the present invention, and as shown in fig. 6, the apparatus 600 includes:
the acquisition module 601 is configured to acquire a first image and a second image, where a capturing time interval between the first image and the second image does not exceed a preset time, the first image includes a first target person and a second target person, and the second image includes a third target person and a fourth target person;
a first detection module 602, configured to perform human body detection on the first image to obtain first human body frame information corresponding to a first target person and second human body frame information corresponding to a second target person; and
a second detection module 603, configured to perform human body detection on the second image to obtain third human body frame information corresponding to a third target person and fourth human body frame information corresponding to a fourth target person;
a matching module 604, configured to match the first target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information, and the fourth body frame information, where if matching is successful, a first matching result is obtained, then match the second target person with the third target person or the fourth target person, and if matching is successful, a second matching result is obtained, where the first matching result includes the first body frame information and the second body frame information, and the second matching result includes the third body frame information and the fourth body frame information;
the predicting module 605 is configured to input the first matching result and the second matching result into a pre-trained social relationship recognition network to perform social relationship prediction, so as to obtain a first social relationship predicted value corresponding to the first matching result and obtain a second social relationship predicted value corresponding to the second matching result;
a determining module 606, configured to determine a social relationship between the first target person and the second target person based on the first predicted social relationship value and the second predicted social relationship value.
Optionally, the first network model includes a first extraction network, a second extraction network, a third extraction network, a first full connection layer, and a first loss function, where network parameters of the first extraction network and the second extraction network are shared.
Optionally, fig. 7 is a schematic structural diagram of a first matching device according to an embodiment of the present invention, and as shown in fig. 7, the first matching device 700 includes:
a first obtaining module 701, configured to obtain a human body feature of a first target person from a first image according to first human body frame information;
a second obtaining module 702, configured to obtain, from the second image, the human body characteristics of the third target person and the human body characteristics of the fourth target person according to the third human body frame information and the fourth human body frame information;
the matching module 703 is configured to perform feature matching on the human body features of the first target person with the human body features of the third target person or with the human body features of the fourth target person, and if the matching is successful, a first matching result is obtained.
Optionally, fig. 8 is a schematic structural diagram of a second matching apparatus according to an embodiment of the present invention, and as shown in fig. 8, the second matching apparatus 800 includes:
a first obtaining module 801, configured to obtain a human body feature of a second target person from the first image according to the second human body frame information;
a second obtaining module 802, configured to obtain, from the second image, the human body characteristics of the third target person and the human body characteristics of the fourth target person according to the third human body frame information and the fourth human body frame information;
and the matching module 803 is configured to perform feature matching on the human body features of the second target person and the human body features of the third target person or the human body features of the fourth target person, and if the matching is successful, a second matching result is obtained.
Optionally, fig. 9 is a schematic structural diagram of a first social relationship predicting device according to an embodiment of the present invention, and as shown in fig. 9, the first social relationship predicting device 900 includes:
an obtaining module 901, configured to obtain a corresponding first human body image, second human body image, and first human body joint image from a first image based on the first human body frame information, the second human body frame information, and the first human body joint frame information.
The prediction module 902 is configured to input the first human body image, the second human body image, and the first human body joint image into a pre-trained social relationship recognition network to perform social relationship prediction, so as to obtain a first social relationship prediction value.
Optionally, fig. 10 is a schematic structural diagram of a second social relationship predicting device according to an embodiment of the present invention, and as shown in fig. 10, the second social relationship predicting device 1000 includes:
an obtaining module 1001, configured to obtain a third human body image, a fourth human body image, and a second human body joint image from a second image based on third human body frame information, fourth human body frame information, and second human body joint frame information;
the prediction module 1002 is configured to input the third human body image, the fourth human body image, and the second human body joint image into a pre-trained social relationship recognition network to perform social relationship prediction, so as to obtain a second social relationship prediction value.
The invention further provides an electronic device 1100. The electronic device 1100 provided in the embodiment of the invention can implement each process of the social relationship identification method in the above method embodiments and achieve the same beneficial effects; to avoid repetition, details are not repeated here.
As shown in fig. 11, fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, where the electronic device 1100 includes: the social relationship identification method includes a processor 1101, a memory 1102, a network interface 1103, and a computer program stored in the memory 1102 and operable on the processor 1101, wherein the processor 1101 executes the computer program to implement the steps of a social relationship identification method provided by the embodiment. Specifically, the processor 1101 is configured to call the computer program stored in the memory 1102, and execute the following steps:
acquiring a first image and a second image, wherein the capturing time interval between the first image and the second image does not exceed the preset time, the first image comprises a first target person and a second target person, and the second image comprises a third target person and a fourth target person;
performing human body detection on the first image to obtain first human body frame information corresponding to a first target person and second human body frame information corresponding to a second target person; and
performing human body detection on the second image to obtain third human body frame information corresponding to a third target person and fourth human body frame information corresponding to a fourth target person;
matching the first target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information and the fourth body frame information, wherein if the matching is successful, a first matching result is obtained, then matching the second target person with the third target person or the fourth target person is obtained, and if the matching is successful, a second matching result is obtained, wherein the first matching result comprises the first body frame information and the second body frame information, and the second matching result comprises the third body frame information and the fourth body frame information;
inputting the first matching result and the second matching result into a pre-trained social relationship recognition network for predicting social relationship, so as to obtain a first social relationship predicted value corresponding to the first matching result and obtain a second social relationship predicted value corresponding to the second matching result;
determining a social relationship between the first target person and the second target person based on the first predicted social relationship value and the second predicted social relationship value.
Optionally, the performing, by the processor 1101, the human body detection on the first image to obtain first human body frame information corresponding to a first target person and second human body frame information corresponding to a second target person, and the human body detection on the second image to obtain third human body frame information corresponding to a third target person and fourth human body frame information corresponding to a fourth target person includes:
and respectively inputting the first image and the second image into a pre-trained target detection network, and respectively outputting the first human body frame information, the second human body frame information, the third human body frame information and the fourth human body frame information through the target detection network.
Optionally, the matching, performed by the processor 1101, the first target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information, and the fourth body frame information, and if the matching is successful, a first matching result is obtained, including:
acquiring the human body characteristics of the first target person from the first image according to the first human body frame information;
acquiring the human body characteristics of the third target person and the human body characteristics of the fourth target person from the second image according to the third human body frame information and the fourth human body frame information;
and carrying out feature matching on the human body features of the first target person and the human body features of the third target person or the human body features of the fourth target person, and obtaining a first matching result if the matching is successful.
Optionally, the matching, performed by the processor 1101, the second target person and the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information, and the fourth body frame information, and if the matching is successful, obtaining a second matching result, where the matching includes:
acquiring the human body characteristics of the second target person from the first image according to the second human body frame information;
acquiring the human body features of the third target person and the human body features of the fourth target person from the second image according to the third human body frame information and the fourth human body frame information;
and carrying out feature matching on the human body features of the second target person and the human body features of the third target person or the human body features of the fourth target person, and obtaining a second matching result if matching is successful.
Optionally, the performing, by the processor 1101, the first matching result further includes first human body combo box information formed by the first target person and the second target person, and the inputting the first matching result into a pre-trained social relationship recognition network for predicting a social relationship to obtain a first predicted social relationship value corresponding to the first matching result includes:
acquiring a corresponding first human body image, a second human body image and a first human body joint image from the first image based on the first human body frame information, the second human body frame information and the first human body joint frame information;
and inputting the first human body image, the second human body image and the first human body combination image into the pre-trained social relationship recognition network for predicting the social relationship to obtain the first social relationship predicted value.
Optionally, the second matching result executed by the processor 1101 further includes second human body combination box information formed by the third target person and the fourth target person, and the step of inputting the second matching result into a pre-trained social relationship recognition network for predicting a social relationship to obtain a second social relationship predicted value corresponding to the second matching result includes:
acquiring a third human body image, a fourth human body image and a second human body joint image from the second image based on the third human body frame information, the fourth human body frame information and the second human body joint frame information;
and inputting the third human body image, the fourth human body image and the second human body joint image into the pre-trained social relationship recognition network for predicting the social relationship to obtain a second social relationship predicted value.
Optionally, the determining the social relationship between the first target person and the second target person based on the first predicted social relationship value and the second predicted social relationship value, performed by processor 1101, includes:
and comparing the first social relation predicted value with the second social relation predicted value, and taking the larger social relation predicted value as the social relation between the first target person and the second target person.
The electronic device 1100 provided by the embodiment of the present invention can implement each implementation manner in the embodiment of the method for identifying a social relationship, and has corresponding beneficial effects, and for avoiding repetition, details are not repeated here.
It should be noted that only components 1101-1103 are shown in the figures, but it should be understood that not all of the shown components are required; more or fewer components may be implemented instead. As will be understood by those skilled in the art, the electronic device 1100 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes but is not limited to a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device 1100 may be a computing device such as a desktop computer, a notebook computer, or a palmtop computer. The electronic device 1100 may interact with a user via a keyboard, a mouse, a remote control, a touch pad, or a voice control device.
The memory 1102 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., an SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 1102 may be an internal storage unit of the electronic device 1100, such as a hard disk or a memory of the electronic device 1100. In other embodiments, the memory 1102 may also be an external storage device of the electronic device 1100, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device 1100. Of course, the memory 1102 may also include both an internal storage unit and an external storage device of the electronic device 1100. In this embodiment, the memory 1102 is generally used for storing the operating system and various application software installed in the electronic device 1100, such as the program code of the method for identifying social relationships. In addition, the memory 1102 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 1101 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 1101 is typically used to control the overall operation of the electronic device 1100. In this embodiment, the processor 1101 is configured to run the program code stored in the memory 1102 or to process data, for example, to execute the method for identifying social relationships.
The network interface 1103 may include a wireless network interface or a wired network interface, and the network interface 1103 is typically used to establish communication connections between the electronic device 1100 and other electronic devices.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by the processor 1101, it implements each process of the method for identifying a social relationship provided in the embodiments of the present invention and can achieve the same technical effect; to avoid repetition, details are not repeated here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing related hardware. The program of the method for identifying social relationships may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. And the terms "first," "second," and the like in the description and claims of the present application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
The above disclosure is merely a preferred embodiment of the present invention and certainly cannot be taken to limit the scope of the claims; equivalent variations made in accordance with the claims of the present invention still fall within the scope covered by the invention.
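The overall flow of the described embodiments can be summarized in a minimal, non-authoritative sketch; `detect`, `match` and `recognize` are hypothetical stand-ins for the pre-trained target detection network, the cross-image person matching, and the social relationship recognition network, none of which are implemented here:

```python
def identify_social_relationship(first_image, second_image,
                                 detect, match, recognize):
    """Minimal sketch of the described flow: detect persons in both
    images, match persons across images, predict a social relationship
    per capture, and keep the prediction with the larger value."""
    boxes_first = detect(first_image)    # first/second person boxes
    boxes_second = detect(second_image)  # third/fourth person boxes
    first_match = match(boxes_first, boxes_second)
    second_match = match(boxes_second, boxes_first)
    pred_1 = recognize(first_image, first_match)    # (label, value)
    pred_2 = recognize(second_image, second_match)  # (label, value)
    # The larger predicted value determines the social relationship.
    return pred_1 if pred_1[1] >= pred_2[1] else pred_2

# Exercising the control flow with trivial stubs:
result = identify_social_relationship(
    "frame_a", "frame_b",
    detect=lambda img: [(0, 0, 10, 10), (20, 0, 30, 10)],
    match=lambda a, b: (a, b),
    recognize=lambda img, m: ("friends", 0.9) if img == "frame_a"
                             else ("family", 0.4),
)
print(result)  # ('friends', 0.9)
```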

Claims (10)

1. A method for identifying social relationships is characterized by comprising the following steps:
acquiring a first image and a second image, wherein the capturing time interval between the first image and the second image does not exceed the preset time, the first image comprises a first target person and a second target person, and the second image comprises a third target person and a fourth target person;
performing human body detection on the first image to obtain first human body frame information corresponding to a first target person and second human body frame information corresponding to a second target person; and
performing human body detection on the second image to obtain third human body frame information corresponding to a third target person and fourth human body frame information corresponding to a fourth target person;
matching the first target person with the third target person or the fourth target person based on the first human body frame information, the second human body frame information, the third human body frame information and the fourth human body frame information, obtaining a first matching result if the matching is successful, matching the second target person with the third target person or the fourth target person, obtaining a second matching result if the matching is successful, wherein the first matching result comprises the first human body frame information and the second human body frame information, the second matching result comprises the third human body frame information and the fourth human body frame information, the first matching result further comprises first human body joint frame information formed by the first target person and the second target person, and the second matching result further comprises second human body joint frame information formed by the third target person and the fourth target person;
inputting the first matching result and the second matching result into a pre-trained social relationship recognition network for predicting social relationship, so as to obtain a first social relationship predicted value corresponding to the first matching result and obtain a second social relationship predicted value corresponding to the second matching result;
determining a social relationship between the first target person and the second target person based on the first predicted social relationship value and the second predicted social relationship value.
2. The method for identifying social relationships according to claim 1, wherein the performing human body detection on the first image to obtain first human body frame information corresponding to a first target person and second human body frame information corresponding to a second target person, and performing human body detection on the second image to obtain third human body frame information corresponding to a third target person and fourth human body frame information corresponding to a fourth target person includes:
and respectively inputting the first image and the second image into a pre-trained target detection network, and respectively outputting the first human body frame information, the second human body frame information, the third human body frame information and the fourth human body frame information through the target detection network.
3. The method for identifying social relationships according to claim 2, wherein the matching the first target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information, and the fourth body frame information, and if the matching is successful, obtaining a first matching result includes:
acquiring the human body characteristics of the first target person from the first image according to the first human body frame information;
acquiring the human body characteristics of the third target person and the human body characteristics of the fourth target person from the second image according to the third human body frame information and the fourth human body frame information;
and carrying out feature matching on the human body features of the first target person and the human body features of the third target person or the human body features of the fourth target person, and obtaining the first matching result if the matching is successful.
4. The method for identifying social relationships according to claim 2, wherein the matching the second target person with the third target person or the fourth target person based on the first body frame information, the second body frame information, the third body frame information, and the fourth body frame information, and if the matching is successful, obtaining a second matching result includes:
acquiring the human body characteristics of the second target person from the first image according to the second human body frame information;
acquiring the human body characteristics of the third target person and the human body characteristics of the fourth target person from the second image according to the third human body frame information and the fourth human body frame information;
and carrying out feature matching on the human body features of the second target person and the human body features of the third target person or the human body features of the fourth target person, and obtaining a second matching result if matching is successful.
5. The method for identifying social relationships according to any one of claims 1 to 4, wherein the step of inputting the first matching result into a pre-trained social relationship identification network for predicting social relationships to obtain a first predicted social relationship value corresponding to the first matching result includes:
acquiring a corresponding first human body image, a second human body image and a first human body joint image from the first image based on the first human body frame information, the second human body frame information and the first human body joint frame information;
and inputting the first human body image, the second human body image and the first human body joint image into the pre-trained social relationship recognition network for predicting the social relationship to obtain the first social relationship predicted value.
6. The method for identifying social relationships according to claim 5, wherein the step of inputting the second matching result into a pre-trained social relationship identification network for predicting social relationships to obtain a second predicted social relationship value corresponding to the second matching result includes:
acquiring a corresponding third human body image, a corresponding fourth human body image and a corresponding second human body joint image from the second image based on the third human body frame information, the fourth human body frame information and the second human body joint frame information;
and inputting the third human body image, the fourth human body image and the second human body joint image into the pre-trained social relationship recognition network for predicting the social relationship to obtain a second social relationship predicted value.
7. The method for identifying social relationships according to claim 6, wherein the determining the social relationship between the first target person and the second target person based on the first predicted social relationship value and the second predicted social relationship value comprises:
comparing the first social relationship predicted value with the second social relationship predicted value, and taking the larger of the two as the social relationship between the first target person and the second target person.
8. An apparatus for identifying social relationships, comprising:
the acquisition module is used for acquiring a first image and a second image, wherein the capturing time interval between the first image and the second image does not exceed the preset time, the first image comprises a first target person and a second target person, and the second image comprises a third target person and a fourth target person;
the first detection module is used for carrying out human body detection on the first image to obtain first human body frame information corresponding to a first target person and second human body frame information corresponding to a second target person; and
the second detection module is used for carrying out human body detection on the second image to obtain third human body frame information corresponding to a third target person and fourth human body frame information corresponding to a fourth target person;
a matching module, configured to match the first target person with the third target person or the fourth target person based on the first human body frame information, the second human body frame information, the third human body frame information, and the fourth human body frame information, obtain a first matching result if the matching is successful, match the second target person with the third target person or the fourth target person, and obtain a second matching result if the matching is successful, where the first matching result includes the first human body frame information and the second human body frame information, the second matching result includes the third human body frame information and the fourth human body frame information, the first matching result further includes first human body joint frame information formed by the first target person and the second target person, and the second matching result further includes second human body joint frame information formed by the third target person and the fourth target person;
the prediction module is used for inputting the first matching result and the second matching result into a pre-trained social relationship recognition network for predicting the social relationship to obtain a first social relationship predicted value corresponding to the first matching result and a second social relationship predicted value corresponding to the second matching result;
a determination module to determine a social relationship between the first target person and the second target person based on the first predicted social relationship value and the second predicted social relationship value.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor implementing the steps in a method for identifying social relationships according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of a method for identifying social relationships as claimed in any one of claims 1 to 7.
CN202011642514.9A 2020-12-31 2020-12-31 Social relationship identification method and device, electronic equipment and storage medium Active CN112633244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011642514.9A CN112633244B (en) 2020-12-31 2020-12-31 Social relationship identification method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112633244A CN112633244A (en) 2021-04-09
CN112633244B (en) 2023-03-03

Family

ID=75290569

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117010026B (en) * 2023-06-30 2024-04-09 北京交通大学 Abnormal character relation detection and identification method based on federal knowledge network

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109829072A (en) * 2018-12-26 2019-05-31 深圳云天励飞技术有限公司 Construct atlas calculation and relevant apparatus
CN110033388A (en) * 2019-03-06 2019-07-19 百度在线网络技术(北京)有限公司 Method for building up, device and the server of social networks
CN111209776A (en) * 2018-11-21 2020-05-29 杭州海康威视系统技术有限公司 Method, device, processing server, storage medium and system for identifying pedestrians
CN111506825A (en) * 2020-03-12 2020-08-07 浙江工业大学 Visual analysis method for character relationship based on social photos

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
WO2017000300A1 (en) * 2015-07-02 2017-01-05 Xiaoou Tang Methods and systems for social relation identification
CN109543078A (en) * 2018-10-18 2019-03-29 深圳云天励飞技术有限公司 Social relationships determine method, apparatus, equipment and computer readable storage medium

Non-Patent Citations (1)

Title
Photo-based visualization method for social relationships; CHEN Jiazhou et al.; Journal of Chinese Computer Systems; 31 Oct. 2020; Vol. 41, No. 10; pp. 2194-2199 *

Similar Documents

Publication Publication Date Title
US8463025B2 (en) Distributed artificial intelligence services on a cell phone
CN110909630B (en) Abnormal game video detection method and device
WO2024041479A1 (en) Data processing method and apparatus
CN112699297A (en) Service recommendation method, device and equipment based on user portrait and storage medium
CN107786848A (en) The method, apparatus of moving object detection and action recognition, terminal and storage medium
CN111160202A (en) AR equipment-based identity verification method, AR equipment-based identity verification device, AR equipment-based identity verification equipment and storage medium
CN111488855A (en) Fatigue driving detection method, device, computer equipment and storage medium
CN114550053A (en) Traffic accident responsibility determination method, device, computer equipment and storage medium
TW202125332A (en) Method and device for constructing target motion trajectory, and computer storage medium
CN113255557A (en) Video crowd emotion analysis method and system based on deep learning
CN112633244B (en) Social relationship identification method and device, electronic equipment and storage medium
CN111783674A (en) Face recognition method and system based on AR glasses
CN112668509B (en) Training method and recognition method of social relation recognition model and related equipment
CN110795980A (en) Network video-based evasion identification method, equipment, storage medium and device
Shukla et al. Automatic attendance system based on CNN–LSTM and face recognition
CN114038067B (en) Coal mine personnel behavior detection method, equipment and storage medium
CN111797849A (en) User activity identification method and device, storage medium and electronic equipment
CN115424335A (en) Living body recognition model training method, living body recognition method and related equipment
CN114373071A (en) Target detection method and device and electronic equipment
CN113468948A (en) View data based security and protection control method, module, equipment and storage medium
CN112364683A (en) Case evidence fixing method and device
CN112861711A (en) Regional intrusion detection method and device, electronic equipment and storage medium
CN112633063A (en) Person action tracking system and method thereof
CN111694979A (en) Archive management method, system, equipment and medium based on image
Thanh-Du et al. An attendance checking system on mobile devices using transfer learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant