CN107194817B - User social information display method and device and computer equipment - Google Patents


Info

Publication number
CN107194817B
CN107194817B (application CN201710199079.9A)
Authority
CN
China
Prior art keywords
image
user
face
scanning
preset
Prior art date
Legal status (an assumption, not a legal conclusion)
Active
Application number
CN201710199079.9A
Other languages
Chinese (zh)
Other versions
CN107194817A (en)
Inventor
杨田从雨
陈宇
张浩
华有为
薛丰
肖鸿志
冯绪
Current Assignee (the listed assignee may be inaccurate)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710199079.9A
Publication of CN107194817A
Priority to PCT/CN2018/073824 (WO2018177002A1)
Application granted
Publication of CN107194817B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 - Retrieval characterised by using metadata automatically derived from the content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 - Social networking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Abstract

The invention relates to a method, an apparatus and a computer device for displaying social information of a user. The method comprises the following steps: acquiring a frame image in which a preset area of a scanning visible area contains a face image; extracting face feature data of the face image contained in the frame image; querying a user image matched with the face image according to the face feature data, and acquiring a user identifier corresponding to the user image; and acquiring and displaying social information associated with the user identifier. The method, the apparatus and the computer device simplify the operation of displaying a user's social information, and improve the convenience and efficiency with which the social information is displayed.

Description

User social information display method and device and computer equipment
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method and apparatus for displaying social information of a user, and a computer device.
Background
Social information of a user of social software includes the profile the user has set on the social software, the dynamic messages the user has published, and the like. Dynamic messages may be visual information in various forms such as text, audio, video, or web links. Conventional social software displays a specific user's information by providing, on the display interface of a social webpage or social application, a virtual button for displaying that user's social information, and displaying the social information when a click command acting on the virtual button is received.
Reaching and clicking that virtual button usually takes several operations, so the conventional method of displaying a user's social information is cumbersome.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus and a computer device capable of rapidly displaying a user's social information, to address the technical problem that the conventional display method is cumbersome to operate.
A method for displaying social information of a user comprises the following steps:
acquiring a frame image in which a preset area of a scanning visible area contains a face image;
extracting face feature data of the face image contained in the frame image;
querying a user image matched with the face image according to the face feature data, and acquiring a user identifier corresponding to the user image;
and acquiring and displaying social information associated with the user identifier.
A device for displaying social information of a user comprises:
a frame image acquisition module, configured to acquire a frame image in which a preset area of a scanning visible area contains a face image;
a face feature data extraction module, configured to extract face feature data of the face image contained in the frame image;
a user identifier query module, configured to query a user image matched with the face image according to the face feature data, and to acquire a user identifier corresponding to the user image;
and a display module, configured to acquire social information associated with the user identifier and to display the social information.
A computer device comprises a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
acquiring a frame image in which a preset area of a scanning visible area contains a face image;
extracting face feature data of the face image contained in the frame image;
querying a user image matched with the face image according to the face feature data, and acquiring a user identifier corresponding to the user image;
and acquiring and displaying social information associated with the user identifier.
According to the above method, device and computer equipment for displaying a user's social information, a frame image in which a preset area of the scanning visible area contains a face image is acquired; face feature data of the face image contained in the frame image are extracted; a user image matched with the face image is queried according to the face feature data, and a user identifier corresponding to the user image is acquired; and social information associated with the user identifier is acquired and displayed. A user's social information can thus be displayed simply by aiming the camera at the user's face, which simplifies the operation of displaying the social information.
Drawings
FIG. 1 is an application environment diagram of a method for displaying user social information in one embodiment;
FIG. 2 is an internal block diagram of a terminal in one embodiment;
FIG. 3 is a flow diagram of a method of presentation of user social information in one embodiment;
FIG. 4 is a flowchart of a method for displaying social information of a user according to another embodiment;
FIG. 5 is an interface schematic of an image scan portal provided by a social networking application in one embodiment;
FIG. 6 is a schematic diagram of a scanning interface in one embodiment;
FIG. 7 is an interface diagram of a presentation of social information in one embodiment;
FIG. 8 is a block diagram of a display device of user social information in one embodiment;
FIG. 9 is a block diagram of a display device of user social information in another embodiment;
FIG. 10 is a block diagram of a user social information presentation apparatus in yet another embodiment.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The method for displaying a user's social information provided by the embodiments of the invention can be applied in the application environment shown in fig. 1. Referring to fig. 1, a terminal 110 may establish a communication connection with a server 120 through a network. The terminal 110 includes, but is not limited to, a cell phone, a handheld game console, a tablet, a personal digital assistant, or a portable wearable device. The terminal 110 may acquire a frame image in which a preset area of the scanning visible area contains a face image; extract face feature data of the face image contained in the frame image; query a user image matched with the face image according to the face feature data, and acquire a user identifier corresponding to the user image; and obtain social information associated with the user identifier from a local cache or from the server 120 and present the social information. The server 120 may store social information associated with users of the social networking application, including, but not limited to, user profiles, the users' latest dynamic messages, and the like.
Fig. 2 is a schematic diagram of the internal structure of a terminal in one embodiment. The terminal comprises a processor, a nonvolatile storage medium, an internal memory, a network interface, a display screen and a camera connected through a system bus. The nonvolatile storage medium of the terminal stores an operating system and a display device of user social information; the display device is used for implementing the method for displaying user social information provided by the following embodiments. The processor of the terminal provides the computing and control capabilities that support the operation of the entire terminal. The internal memory of the terminal provides an environment for the operation of the display device stored in the nonvolatile storage medium; the internal memory can store computer readable instructions which, when executed by the processor, cause the processor to execute a method for displaying user social information. The network interface of the terminal is used for network communication with the server, such as sending face feature data or pictures to the server and receiving social information sent by the server. The camera of the terminal is used for scanning a target object in the visible area to generate frame images. The display screen of the terminal may be a touch screen, such as a capacitive screen or an electronic screen, and corresponding instructions may be generated by receiving click operations acting on controls displayed on the touch screen. For example, on receiving a click operation on a control for entering the image scanning state, a scanning instruction is generated, and the real scene in the visible area is scanned according to the scanning instruction.
It will be appreciated by those skilled in the art that the structure shown in fig. 2 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the terminal to which the present application is applied, and that a particular terminal may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, as shown in fig. 3, a method for displaying social information of a user is provided, described here as applied to the terminal shown in fig. 1. The method comprises the following steps:
Step S302, acquiring a frame image in which a preset area of the scanning visible area contains a face image.
In one embodiment, the terminal may invoke the camera to start a camera scanning mode, scan the target object in the visible area in real time, and generate frame images in real time at a certain frame rate; the generated frame images may be cached locally at the terminal. The visible area refers to the area displayed on the display interface of the terminal that can be scanned by the camera. The preset area is a local area of the visible area, for example a local area located at the middle of the visible area. The terminal may detect whether a face image exists in the generated frame image at the position corresponding to the preset area of the scanning visible area, and if so, acquire the generated frame image.
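As an illustrative sketch only (not part of the patent text), the check that a detected face lies inside the preset area can be expressed as a simple bounding-box containment test. The box format and all coordinate values below are hypothetical:

```python
def face_in_preset_area(face_box, preset_area):
    """Return True if the detected face bounding box lies entirely
    inside the preset area. Boxes are (left, top, right, bottom)."""
    fl, ft, fr, fb = face_box
    pl, pt, pr, pb = preset_area
    return fl >= pl and ft >= pt and fr <= pr and fb <= pb

# Hypothetical preset area in the middle of a 720x1280 viewfinder.
PRESET = (160, 340, 560, 940)
print(face_in_preset_area((200, 400, 500, 800), PRESET))  # True: face inside
print(face_in_preset_area((0, 0, 300, 300), PRESET))      # False: face outside
```

A frame image would be acquired only when this test succeeds for a face detected in it.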
In one embodiment, the camera may be a camera internal to the terminal or an external camera associated with the terminal. For example, the terminal may be a smart phone, the camera may be a camera on an intelligent wearable device (such as smart glasses), and the terminal receives a real scene scanned by the camera through connection with the intelligent wearable device, and generates a frame image.
Step S304, extracting face feature data of the face image contained in the frame image.
In one embodiment, the image data of the image in the preset area may be extracted, and it may be detected whether the image data includes facial feature data; if so, it is determined that the frame image contains a face image in the corresponding preset area, and the face feature data are further extracted from the image data. The face feature data may be one or more pieces of feature information reflecting a person's sex, face contour, hairstyle, glasses, nose, mouth, the distances between facial organs, and the like.
Step S306, querying the user image matched with the face image according to the face feature data, and acquiring the user identifier corresponding to the user image.
In one embodiment, a corresponding user image is preset for each user identifier in the social application. The user identifier may be the login account of a user of the social application. Social applications include, but are not limited to, instant messaging (Instant Messaging, IM) applications, SNS (Social Network Service) applications, or live-streaming applications. The user identifier is used for uniquely identifying the user in the social application, and may be composed of one or more of digits, letters, special characters and the like of a preset length. The user image may be a real face image reflecting the corresponding user, and has a corresponding relationship with the user identifier. It may be an image the user selects from the profile and historically published picture information uploaded by the user, or a picture the system automatically analyzes and selects as the corresponding user avatar. The terminal can query the user image matched with the face image from the local cache and/or from a background server corresponding to the social application, and obtain the user identifier corresponding to the matched user avatar.
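The matching step can be sketched, under the assumption that face feature data are numeric vectors, as a nearest-neighbour search over a gallery of stored user images. The gallery contents, feature values and distance threshold below are hypothetical and not taken from the patent:

```python
import math

def match_user(query_features, user_gallery, max_distance=0.6):
    """Return the user identifier whose stored face features are closest
    (Euclidean distance) to the query, or None if nothing is close enough."""
    best_id, best_dist = None, max_distance
    for user_id, features in user_gallery.items():
        dist = math.dist(query_features, features)
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id

# Hypothetical gallery mapping user identifiers to feature vectors.
gallery = {"alice": [0.1, 0.2, 0.3], "bob": [0.9, 0.8, 0.7]}
print(match_user([0.1, 0.2, 0.25], gallery))  # "alice"
print(match_user([5.0, 5.0, 5.0], gallery))   # None: no match within threshold
```

In practice such a search could run against the local cache first and fall back to the background server, as the paragraph above describes.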
Step S308, acquiring and displaying social information associated with the user identifier.
In one embodiment, each user identifier is associated with the social information of the corresponding user. Social information includes the profile the user has set on the social software, published dynamic messages, and the like. The profile includes information such as name, nickname, sex, birthday, avatar, interests and location; dynamic messages include content the user publishes on the platform of the social application, and may be visual information in various forms such as text, audio, video, or web links.
In one embodiment, the dynamic messages may exist in the form of a Feeds page. A website that integrates all or part of its information into an RSS (Really Simple Syndication) file is said to provide a Feed. The dynamic messages of each user may be ordered in the Feeds in reverse chronological order of publication time. The terminal may obtain the social information associated with the user identifier from a local cache or from the server and display it; specifically, part or all of the profile associated with the user identifier may be presented, and the dynamic messages presented in reverse chronological order.
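The reverse-chronological ordering of a Feeds page can be sketched as a sort on publication time. The message fields and timestamp values below are hypothetical:

```python
def build_feeds(messages):
    """Order a user's dynamic messages most-recent-first, as on a Feeds page."""
    return sorted(messages, key=lambda m: m["published_at"], reverse=True)

# Hypothetical dynamic messages with integer publication timestamps.
feeds = build_feeds([
    {"id": "a", "published_at": 100},
    {"id": "b", "published_at": 300},
    {"id": "c", "published_at": 200},
])
print([m["id"] for m in feeds])  # ['b', 'c', 'a']
```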
According to the method for displaying a user's social information provided by this embodiment, a frame image in which a preset area of the scanning visible area contains a face image is acquired; face feature data of the face image contained in the frame image are extracted; a user image matched with the face image is queried according to the face feature data, and a user identifier corresponding to the user image is acquired; and social information associated with the user identifier is acquired and displayed. A user's social information can thus be displayed simply by aiming the camera at the user's face, which simplifies the operation and improves the convenience of displaying the social information.
In one embodiment, before step S302, the method for displaying a user's social information further includes: entering an image scanning state through an image scanning portal provided by the social network application. Step S302 then includes: acquiring, in the image scanning state, a frame image in which a preset area of the scanning visible area contains a face image.
In one embodiment, the terminal may provide an image scanning portal on an interface of the social application. Specifically, a scanning instruction may be generated from a detected operation on a control for starting face scanning to view social information, or from a preset gesture or voice command for starting face scanning to view social information; the camera associated with the terminal is then started according to the scanning instruction, and the image scanning state is entered through the image scanning portal.
In the image scanning state, the terminal can scan the camera's visible area in real time and display the scanned frame images on the display screen of the terminal. The scanned object is a human face; when a face image is detected in the preset area of a frame image presented in real time, that frame image is acquired and used for matching a social user.
In one embodiment, the image scanning state entered through the image scanning portal may be an augmented reality image scanning state. In this state, augmented reality processing is performed on the acquired frame image in which the preset area of the scanning visible area contains a face image, so that the processed frame image serves as the background image for the social information presented subsequently.
In one embodiment, providing an image scanning portal makes it convenient to enter the image scanning state, in which a frame image whose preset area within the scanning visible area contains a face image is acquired. At the same time, the user's face is coupled with the user's social information: the face serves as the entrance to the social information, which improves the accuracy of the social information display.
In one embodiment, step S308 includes: acquiring social information associated with the user identifier, and displaying the social information on the frame image in the scanning visible area in the image scanning state.
In one embodiment, the obtained social information is the social information that the identified user has opened to the current login user of the terminal. Open rights include full open rights, partial open rights, no open rights, and so on, over the social information. A user may set external open rights for the social information the user publishes, including setting the same or different open rights for users who have all or part of a social relationship with the user (such as a friend relationship) and for users who have no such relationship. For example, a partial open right may be set to allow users without a social relationship to see the dynamic information or part of the personal information the user publishes.
The terminal can acquire the social information associated with the user identifier corresponding to the matched user image, namely the social information that the open rights corresponding to the user identifier allow to be acquired, and display that social information on the frame image in the scanning visible area.
In one embodiment, the frame image acquired in the scanning state can serve as the background image on which the related social information is displayed: the acquired social information is superimposed on the background image, combining the social information with the real scene in the image scanning state. Displaying the social information in an augmented reality manner, so that a person's social information appears projected around the face, increases the variety of ways social information can be displayed.
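The open-rights filtering described above can be sketched as a per-field permission check. The field names and permission levels below are hypothetical illustrations, not terms from the patent:

```python
def visible_social_info(social_info, permissions, viewer_is_friend):
    """Keep only the fields whose open right admits the viewer.
    permissions maps field name -> "public", "friends" or "private";
    unknown fields default to private."""
    allowed = {"public", "friends"} if viewer_is_friend else {"public"}
    return {field: value for field, value in social_info.items()
            if permissions.get(field, "private") in allowed}

# Hypothetical profile and per-field open rights.
info = {"nickname": "n", "birthday": "b", "feed": "f"}
perms = {"nickname": "public", "birthday": "friends", "feed": "private"}
print(visible_social_info(info, perms, viewer_is_friend=False))  # only nickname
print(visible_social_info(info, perms, viewer_is_friend=True))   # nickname and birthday
```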
In one embodiment, step S302 includes: detecting whether the similarity between the continuously generated preset number of frame images is larger than a similarity threshold value, if yes, acquiring one frame image containing a face image in a preset area of a scanning visual area from the continuously generated preset number of frame images.
In one embodiment, frame images may be generated at a default lower frame rate, the currently generated frame images are compared to their previous preset number of frame images, and the similarity between the currently generated frame images and the previous preset number of frame images is detected.
The terminal is further provided with a similarity threshold value, the detected similarity is compared with the similarity threshold value, and if the similarity between the current frame image and the previous preset number of frame images is larger than the similarity threshold value, the current image scanning state is judged to be in a stable state. Selecting one frame image from the current frame image and the previous preset number of frame images, and detecting that the frame image contains a face image in a preset area.
And acquiring one frame image containing the face image in a preset area of the scanning visual area from the preset number of frame images when the similarity is larger than the preset similarity. The stability of image scanning can be improved.
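The stability check can be sketched as follows. The patent does not specify a similarity measure; mean absolute pixel difference over grayscale frames is one possible choice, used here purely as an illustration, and the threshold value is hypothetical:

```python
def frame_similarity(a, b):
    """Similarity in [0, 1] between two same-sized grayscale frames
    (flat lists of 0-255 intensities), from mean absolute pixel difference."""
    diff = sum(abs(p - q) for p, q in zip(a, b)) / len(a)
    return 1.0 - diff / 255.0

def scan_is_stable(frames, threshold=0.95):
    """True if every consecutive pair of buffered frames is similar enough."""
    return all(frame_similarity(f, g) >= threshold
               for f, g in zip(frames, frames[1:]))

print(scan_is_stable([[10, 10], [10, 10], [10, 10]]))  # True: identical frames
print(scan_is_stable([[0, 0], [255, 255]]))            # False: frames differ maximally
```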
In one embodiment, step S302 includes: detecting whether the offset of a camera of a target object in a scanning visual area is smaller than an offset threshold value within a preset duration, and if so, acquiring a generated frame image containing a face image in a preset area of the scanning visual area from the plurality of frame images within the preset duration.
In one embodiment, the terminal may obtain the offset of the camera detected in real time by a detection device of camera offset data associated with the camera. The offset is used for reflecting real-time variation of the camera in spaces such as front, back, upper, lower, left, right and the like. The detection device may be a gyroscope built into the terminal. And comparing the detected offset of each part between preset time periods with a preset offset threshold value, and judging that the terminal is in a current image scanning state of the terminal and is in a stable state when the detected offset of each part is smaller than the offset threshold value. And when the image is judged to be in a stable state, acquiring one frame image containing a face image in a preset area of the scanning visual area from a plurality of generated frame images within a preset time length.
By detecting the offset of the camera within the preset duration, when the offset is smaller than a preset offset threshold, one frame image containing a face image in a preset area of a scanning visual area is obtained from a plurality of frame images generated within the preset duration, and the stability of image scanning can be improved.
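The offset-based steadiness check can be sketched as below. The sample format (per-axis readings) and the threshold value are hypothetical; real readings would come from a device such as the built-in gyroscope mentioned above:

```python
def camera_is_steady(offset_samples, offset_threshold=0.02):
    """offset_samples: per-axis displacement readings sampled over the
    preset duration, e.g. [(dx, dy, dz), ...]. The camera is judged
    steady only if every reading on every axis stays below the threshold."""
    return all(abs(v) < offset_threshold
               for sample in offset_samples for v in sample)

# Hypothetical gyroscope samples over the preset duration.
print(camera_is_steady([(0.01, 0.0, 0.005), (0.003, 0.01, 0.0)]))  # True
print(camera_is_steady([(0.05, 0.0, 0.0)]))                        # False
```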
In one embodiment, step S304 includes: extracting the face feature data of the face image contained in the frame image when the proportion of the preset area occupied by the face image exceeds a preset proportion and the sharpness of the face image exceeds a sharpness threshold.
The terminal can further detect, in the frame image, the proportion of the preset area occupied by the face image and the sharpness of the face image.
In one embodiment, the preset area is a fixed area within the scanning visible area, so the image of the preset area also occupies a fixed proportion of the frame image. The terminal may count the pixels belonging to the face image within the preset area, detect their proportion of all the pixels contained in the frame image, and, from that proportion and the fixed proportion of the preset area within the frame image, calculate the proportion of the preset area occupied by the face image. This proportion is then compared with the preset proportion.
The terminal further detects whether the sharpness of the face image exceeds a preset sharpness threshold. Sharpness reflects the illumination and resolution of the image: when the illumination intensity is within a certain range, the greater the resolution, the higher the sharpness. The face feature data are extracted only when the proportion of the preset area occupied by the face image exceeds the preset proportion and the sharpness exceeds the preset sharpness threshold, which ensures the quality of the extracted face feature data.
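The two-condition quality gate can be sketched as follows. Both threshold values and the normalized sharpness score are hypothetical; the patent only requires that each quantity exceed its preset threshold:

```python
def face_passes_quality_gate(face_pixels, preset_area_pixels, sharpness,
                             min_ratio=0.5, min_sharpness=0.7):
    """Extract face feature data only when the face image fills enough of
    the preset area AND is sharp enough. sharpness is assumed to be a
    normalized score in [0, 1]; both thresholds are hypothetical."""
    ratio = face_pixels / preset_area_pixels
    return ratio > min_ratio and sharpness > min_sharpness

print(face_passes_quality_gate(6000, 10000, 0.9))  # True: big and sharp enough
print(face_passes_quality_gate(3000, 10000, 0.9))  # False: face too small
print(face_passes_quality_gate(6000, 10000, 0.5))  # False: image too blurry
```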
In one embodiment, as shown in fig. 4, another method for presenting social information of a user is provided. The method specifically comprises the following steps:
Step S402, entering an image scanning state through an image scanning portal provided by the social network application.
In one embodiment, the terminal may provide, on an interface of the social application, a control for starting face scanning to view social information; this control is the image scanning portal through which the image scanning state is entered. When a click operation on the control is detected, a scanning instruction can be generated, the camera associated with the terminal is started according to the scanning instruction, the image scanning state is entered through the portal, and the real scene in the scanning visible area is scanned. The scanned real scene is presented as frame images on the display screen of the terminal.
For example, as shown in FIG. 5, the image scanning portal may be presented on the social-type selection interface of the social application. The personal information display area 510 may display the personal information of the social application user logged in on the terminal, and the area 520 may provide entries for social types including "say", "photo", "video", "live", "check-in", "dynamic album", "log" and "AR camera", displayed as corresponding controls. By receiving click operations on these social-type controls, the viewing interface of the corresponding social type is entered from the corresponding entry, and social information of the relevant type belonging to friends and/or non-friends of the logged-in user is displayed. The "AR camera" is the image scanning portal provided by the social network application; by receiving a click operation on the "AR camera" control, the image scanning state may be entered from this portal. It should be noted that the social-type entries shown in FIG. 5 are merely one example, and this embodiment is not limited to this particular presentation: social types may be added to or removed from the embodiment shown in FIG. 5. Likewise, "AR camera" is just the name used in one embodiment; in other embodiments the portal may be presented in other forms, for example using an ordinary camera as the image scanning portal.
Step S404, acquiring, in the image scanning state, a frame image in which the preset area of the scanning visible area contains a face image.
In one embodiment, in the image scanning state the terminal scans the scanning visible area, generates frame images in real time at a preset frame rate, and displays them on the display screen of the terminal. The terminal can perform augmented reality (Augmented Reality, AR) processing on the real scene in the scanning visible area, generate frame images from the processed real scene, cache the frame images, and present them on the display screen of the terminal.
Specifically, the real scene in the scanning visible area is the real scene, containing a face in the preset area, presented on the terminal's display interface, and frame images are generated in real time at the preset frame rate. As shown in fig. 6, in the scanning state the terminal displays a prompt message instructing the user to aim the scan at a face image; for example, "please aim the viewfinder at the face of the friend to be scanned and start scanning". The "viewfinder" is the preset area 610 in the scanning visible area. The user may aim the camera at the face 620 to be identified in the real scene so that it appears in the preset area 610 of the presentation interface.
In one embodiment, the face feature data of the face image contained in the frame image may be extracted when the current scanning state is detected to be stable. Whether the terminal's current image scanning state is stable can be determined by detecting the similarity between a preset number of continuously generated frame images, or by detecting the offset of the camera.
Specifically, the user keeps the camera aimed at the face to be recognized for a preset duration. The terminal detects whether the similarity between the preset number of continuously generated frame images within that duration exceeds a preset similarity threshold and, if so, judges the current image scanning state to be stable. Alternatively, the terminal may detect whether the offset of the camera within the preset duration is smaller than an offset threshold and, if so, judge the current image scanning state to be stable. When the state is judged not to be stable, the process returns to step S404 above. When the state is judged to be stable, the face feature data of the face image contained in the preset area is extracted.
In one embodiment, step S404 includes: in the image scanning state, detecting whether the similarity between a preset number of continuously generated frame images is greater than a similarity threshold and, if so, obtaining from those frame images one frame image whose preset area within the scanning visible area contains a face image.
The preset number may be 10, 20, or similar, and may be determined according to the frame rate, for example as the number of frame images generated within a preset duration (for example, 1 second, 1.5 seconds, or 2 seconds).
In the image scanning state, the terminal may calculate the similarity between the currently generated frame image and a buffered preset number of previously generated frame images. If the calculated similarity between the current frame image and each of the previous preset number of frame images is greater than the preset similarity threshold, the current image scanning state is judged to be stable. One frame image whose preset area contains a face image is then selected from among the current frame image and the previous preset number of frame images. Specifically, the currently generated frame image may be used as the selected frame image; alternatively, the frame image whose preset area contains the clearest face image among the generated preset number of frame images may be used.
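The patent leaves the similarity measure unspecified. As a minimal sketch, assuming frames are grayscale arrays and using mean absolute pixel difference as a (hypothetical) similarity metric, the stability check over a buffer of consecutive frames could look like this:

```python
import numpy as np

def frame_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [0, 1] based on mean absolute pixel difference."""
    diff = np.abs(a.astype(np.float32) - b.astype(np.float32)).mean()
    return 1.0 - diff / 255.0

def is_stable(frames, similarity_threshold=0.95):
    """Scanning is judged 'stable' when every consecutive pair of the
    buffered frames is more similar than the threshold."""
    return all(
        frame_similarity(frames[i], frames[i + 1]) >= similarity_threshold
        for i in range(len(frames) - 1)
    )
```

The 0.95 threshold and the pixel-difference metric are illustrative; a production system might instead compare histograms or feature points.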
In one embodiment, step S404 includes: in the image scanning state, detecting whether the offset of the camera relative to a target object in the scanning visible area within a preset duration is smaller than an offset threshold and, if so, obtaining from the frame images generated within that duration one frame image whose preset area within the scanning visible area contains a face image.
Specifically, in the image scanning state, the offset of the camera within the preset duration may be detected in real time through a gyroscope built into the terminal. If the offset is smaller than the preset offset threshold, the current image scanning state is judged to be stable, and one of the frame images generated within the preset duration whose preset area within the scanning visible area contains a face image is obtained.
The preset duration may be a default or user-defined duration, for example 1.5 seconds. The currently generated frame image may be used as the selected frame image; alternatively, the frame image whose preset area contains the clearest face image among the generated frame images may be used.
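For the gyroscope-based check, one plausible reading is to integrate the angular-rate samples over the preset duration and compare the resulting rotation magnitude against the offset threshold. The sample interval, axes, and threshold below are illustrative assumptions, not values from the patent:

```python
import math

def camera_offset(gyro_samples, dt):
    """Integrate angular-rate samples (rad/s tuples, one per axis) over
    the window to get a total rotation magnitude in radians."""
    total = [0.0, 0.0, 0.0]
    for wx, wy, wz in gyro_samples:
        total[0] += wx * dt
        total[1] += wy * dt
        total[2] += wz * dt
    return math.sqrt(sum(c * c for c in total))

def is_steady(gyro_samples, dt=0.01, offset_threshold=0.05):
    """Stable when the accumulated rotation stays under the threshold."""
    return camera_offset(gyro_samples, dt) < offset_threshold
```

A real implementation would read these samples from the platform's sensor API rather than a precomputed list.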
Step S406, extracting face feature data of a face image contained in the frame image.
In one embodiment, a scanning instruction may be received in an image scanning state, and face feature data of a face image included in a frame image may be extracted according to the scanning instruction.
Specifically, in the image scanning state a control for triggering a scanning instruction that starts face recognition scanning may be displayed on the display interface. When a click on this control is received, a scanning instruction is generated, and the face feature data of the face image contained in the preset area of a frame image generated after the scanning instruction is received is extracted. As shown in FIG. 6, when a click on the "start scanning" control 630 is detected, a scanning instruction is generated and the face feature data of the face image contained in the preset area 610 of the frame image is extracted.
Because the face feature data is extracted only upon receiving a scanning instruction, the terminal does not need to recognize in real time whether each frame image contains a face, which reduces the terminal resources occupied by face recognition.
In one embodiment, step S406 includes: extracting the face feature data of the face image contained in the frame image, where the proportion of the preset area occupied by the face image exceeds a preset proportion and the sharpness of the face image exceeds a sharpness threshold.
The terminal may score the generated preset number of frame images according to the proportion of the preset area occupied by the face image and the sharpness of the face image, select the frame image with the highest score that also exceeds a preset score threshold, and extract the face feature data of the face image from that frame image. When the score exceeds the threshold, the proportion of the preset area occupied by the face image in the corresponding frame image exceeds the preset proportion and the sharpness of the face image exceeds the sharpness threshold. Extracting the face feature data from the highest-scoring frame image that exceeds the preset score threshold further improves the quality of the face feature data.
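The patent does not define the scoring function. A sketch that combines the face's area ratio within the preset region with a Laplacian-variance sharpness estimate (both the weights and the metric are hypothetical) might be:

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a discrete Laplacian response; higher means sharper."""
    lap = (
        -4.0 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return float(lap.var())

def score_frame(gray, face_box, preset_box, w_ratio=0.5, w_sharp=0.5):
    """face_box / preset_box are (x, y, w, h) tuples; the score mixes the
    area ratio of the face inside the preset region with the sharpness of
    the face crop."""
    fx, fy, fw, fh = face_box
    _, _, pw, ph = preset_box
    ratio = (fw * fh) / float(pw * ph)
    crop = gray[fy:fy + fh, fx:fx + fw].astype(np.float32)
    return w_ratio * ratio + w_sharp * sharpness(crop)
```

The frame whose score is highest and above the preset score threshold would then be passed to feature extraction.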
Step S408, querying a user image matching the face image according to the face feature data, and obtaining the user identifier corresponding to the user image.
In one embodiment, the terminal may preferentially read the locally cached user avatars of users of the social application and detect whether one of them matches the face image. If so, the user identifier corresponding to the locally matched avatar is obtained. Otherwise, the face feature data may be uploaded to the connected background server of the social application; the server queries whether any user's avatar matches the face image, and the user identifier corresponding to the avatar matched on the server is obtained.
Specifically, matching can be performed by comparing the face feature data of the face image with the face feature data contained in each user avatar to be compared. When the matching degree between the face feature data contained in one of the user avatars and the face feature data of the face image is first detected to exceed a preset matching degree threshold, the two are judged to match, and the corresponding user identifier is obtained from the correspondence between user avatars and user identifiers. The terminal or the server may also directly store the face feature data contained in each user avatar, so that when comparing matching degrees the avatar's face feature data can be obtained directly without being repeatedly extracted from the avatar.
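The matching-degree comparison could be sketched as a cosine-similarity search over cached avatar feature vectors. The feature dimensionality, the 0.8 threshold, and the dictionary keyed by user identifier are all assumptions for illustration:

```python
import numpy as np

def match_user(face_vec, avatar_vecs, match_threshold=0.8):
    """Return the user id of the first cached avatar whose cosine
    similarity with the scanned face feature vector exceeds the
    threshold, or None when no avatar matches."""
    f = face_vec / np.linalg.norm(face_vec)
    for user_id, vec in avatar_vecs.items():
        v = vec / np.linalg.norm(vec)
        if float(f @ v) > match_threshold:
            return user_id
    return None
```

In practice the feature vectors would come from a face-recognition model, and the server-side search would use an index rather than a linear scan.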
If no matching user avatar is found, a prompt indicating that no matching user was found may be displayed on the scanning interface, for example: "No corresponding friend found; please rescan".
In one embodiment, the scope of the user avatars queried includes: user avatars corresponding to user identifiers that have a social relationship, such as a friend relationship chain, with the user identifier of the terminal's currently logged-in user. It may further include user avatars corresponding to user identifiers that have no social relationship with the currently logged-in user, that is, the avatars corresponding to the user identifiers of all registered users on the server.
Step S410, obtaining the social information associated with the user identifier, and displaying the social information on the frame image within the scanning visible area in the image scanning state.
In one embodiment, the obtained social information is social information whose access rights are open to the user identifier of the terminal's currently logged-in user. The frame image may be given transparency and/or blurring so that, as the background image, it has a certain transparency, improving the legibility of the social information superimposed on it. Specifically, transparency or blurring may be applied to the local region of the frame image on which social information is superimposed, or the transparency of the whole frame image may be adjusted to a preset transparency.
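As one way to give the background frame a preset transparency, the frame can be alpha-blended toward white before the social information is drawn on top. The blend-toward-white choice is an assumption, since the patent only states that the frame is made partly transparent:

```python
import numpy as np

def fade_background(frame: np.ndarray, transparency: float = 0.4) -> np.ndarray:
    """Blend the frame toward white so overlaid social information stays
    legible; transparency=0 keeps the frame, 1 washes it out fully."""
    white = np.full_like(frame, 255)
    out = (frame.astype(np.float32) * (1.0 - transparency)
           + white.astype(np.float32) * transparency)
    return out.astype(np.uint8)
```

The same blend could be restricted to the sub-region under the overlay rather than the whole frame.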
In one embodiment, parameters such as the shooting angle and offset of the terminal camera may be detected, and the obtained social information displayed on the frame image within the scanning visible area in a display style matched to those parameters, further improving the diversity of social information display.
Specifically, the acquired profile information and dynamic messages may be displayed around the preset area of the frame image. As shown in FIG. 7, the display region for social information may be divided into a profile information display area 640 and a dynamic message display area 650. The profile information display area 640 is placed at the upper and lower portions of the preset area, and the dynamic message display area 650 at the lower portion of the preset area. The profile information display area 640 may present one or more items of brief information such as the user's nickname, avatar, and birthday reminder; for example, the user's nickname and birthday reminder may be displayed above the face 620 in the real scene, and the user's avatar below it. The dynamic message display area 650 may display, in reverse order of publication time, visual information in various forms, such as text, audio, video, and web page links, published by the user on the social application's platform, and may respond to received instructions for interacting with the social information. Interactive operations include sliding the displayed social information, viewing its details, and commenting on or liking it. This embodiment is not limited to this particular form of social information display: relative to the embodiment shown in FIG. 7, the specific social information displayed can be added to or reduced, and social information can be displayed according to other layouts.
In other embodiments, the social information may be displayed in other forms, such as presenting part or all of the profile information at the lower portion of the preset area, and/or presenting part or all of the acquired dynamic messages at the upper portion of the preset area.
According to the above method for displaying a user's social information, the terminal enters an image scanning state through the image scanning portal provided by the social network application; in that state it obtains a frame image whose preset area within the scanning visible area contains a face image and extracts the face feature data of that face image; it queries a user image matching the face image according to the face feature data and obtains the user identifier corresponding to the user image; and it obtains the associated social information and displays it on the frame image within the scanning visible area. This simplifies the operation of displaying a user's social information while combining the displayed social information with the frame image containing the face image, forming an augmented reality display mode and improving the accuracy with which social information is displayed.
In one embodiment, after step S410 the method further includes: closing the display of the social information when it is detected that a frame image generated in real time does not contain a face image in the preset area.
During the display of the social information, the terminal keeps scanning the real scene in the preset area of the scanning visible area, generating frame images and detecting whether the preset area of each frame image contains the face image. If not, the camera has deviated from the face currently aimed at, and the terminal may close the display of the queried social information.
In one embodiment, when the preset area of the frame image is detected not to contain the face image, the deviation duration of the camera is counted. If the deviation duration reaches the preset deviation duration threshold without the same face image being detected in the preset area of the frame images scanned in real time, the display of the queried social information is closed. The deviation duration threshold may be any suitable duration, for example 1 second or 2 seconds, and may be the same as the preset duration described above.
By reserving a deviation duration threshold, sudden temporary deviations such as camera shake can be tolerated, improving the stability of social information display. In addition, a newly aimed face can be matched within the reserved deviation duration, preparing to display the corresponding new user's social information and improving the continuity of switching between different users' social information.
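The deviation-duration logic above amounts to a small per-frame state machine. A sketch (threshold value assumed) that resets the counter on any frame where the face reappears:

```python
class DeviationTracker:
    """Close the social-information overlay only after the face has been
    absent from the preset area for a continuous deviation duration."""

    def __init__(self, deviation_threshold=1.0):
        self.deviation_threshold = deviation_threshold  # seconds
        self._absent_for = 0.0

    def update(self, face_in_preset_area: bool, dt: float) -> bool:
        """Feed one frame; returns True when the display should close."""
        if face_in_preset_area:
            self._absent_for = 0.0  # brief shake: face came back in time
            return False
        self._absent_for += dt
        return self._absent_for >= self.deviation_threshold
```

The reserved window is also where a newly aimed face could be matched before the old overlay is torn down.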
In one embodiment, after step S410 the method further includes: receiving an instruction generated by an interactive operation on the social information; and, in response to the instruction, jumping to the social information display interface corresponding to the instruction and stopping image scanning.
The terminal may also receive instructions generated by interactive operations acting on the social information. Interactive operations include sliding the displayed social information, viewing its details, and commenting on or liking it.
The terminal may jump to the display interface of the corresponding social information according to the instruction and suitably enlarge the displayed social information for the user to view. For example, if the interactive operation is a comment on a dynamic message published by the user, the terminal jumps to the detailed display interface of that dynamic message. Meanwhile, image scanning may be stopped and tracking of the real scene cancelled; after a close instruction for the displayed social information is received, the image scanning state may be restored to display the next user's social information.
In one embodiment, after the user identifier corresponding to the user image is obtained, the method for displaying a user's social information further includes: obtaining user sign data associated with the user identifier, and displaying the user sign data on the frame image displayed within the scanning visible area.
Based on the obtained user identifier, the terminal may further query whether sign data is associated with it. The user sign data includes, but is not limited to, one or more of exercise data, health care data, and the like. Exercise data includes the user's step count, riding mileage, calories burned, and so on; health care data includes the user's heartbeat, body temperature, blood glucose parameters, and so on. The terminal may query other applications associated with the user identifier, such as exercise or health care applications, and obtain the sign data those applications have detected.
For example, the terminal may detect whether the user identifier is also used to identify the user in a certain exercise application and, if so, obtain the user sign data associated with the user identifier from a local cache or from the background server corresponding to that application.
The terminal may display the social information and the user sign data at the same time, displaying the user sign data on the frame image shown within the scanning visible area; for example, the sign data may be presented within the profile information display area 640 shown in FIG. 7. Further displaying the user sign data enriches the information presented about the matched user.
In one embodiment, as shown in fig. 8, a display device for social information of a user is provided. The apparatus includes a frame image acquisition module 802, a face feature data extraction module 804, a user identification query module 806, and a presentation module 808. Wherein:
a frame image acquisition module 802, configured to acquire a frame image including a face image in a preset area of the scan visual area.
The face feature data extraction module 804 is configured to extract face feature data of a face image included in the frame image.
The user identifier query module 806 is configured to query a user image matching with the face image according to the face feature data, and obtain a user identifier corresponding to the user image.
And the display module 808 is used for acquiring and displaying social information associated with the user identifier.
In one embodiment, as shown in fig. 9, another apparatus for displaying social information of a user is provided, and the apparatus further includes:
the image scanning module 810 is configured to enter an image scanning state through an image scanning portal provided by the social network application.
The frame image obtaining module 802 is further configured to obtain, in an image scanning state, a frame image in which a preset area in the scanning visible area includes a face image.
In one embodiment, the user identification query module 806 is further configured to obtain social information associated with the user identification, and display the social information on a frame image within the scan-visible area in the image scan state.
In one embodiment, the frame image obtaining module 802 is further configured to detect whether the similarity between the continuously generated preset number of frame images is greater than a similarity threshold, and if so, obtain one frame image including a face image in a preset area of the scanning visual area from among the continuously generated preset number of frame images; or detecting whether the offset of the camera of the target object in the scanning visible area is smaller than an offset threshold value within a preset duration, and if so, acquiring one frame image containing a face image in a preset area of the scanning visible area from the generated multiple frame images within the preset duration.
In one embodiment, the face feature data extraction module 804 is further configured to extract face feature data of a face image included in a frame image, where a proportion of the face image included in the frame image in a preset area exceeds a preset proportion, and a sharpness of the face image exceeds a sharpness threshold.
In one embodiment, as shown in fig. 10, there is provided a display device of social information of a user, the device further comprising:
the sign data obtaining module 812 is configured to obtain sign data of a user associated with the user identifier.
The presentation module 808 is also configured to display user sign data on a frame image displayed in the scan viewable area.
According to the above display apparatus for a user's social information, a frame image whose preset area within the scanning visible area contains a face image is obtained; the face feature data of the face image contained in the frame image is extracted; a user image matching the face image is queried according to the face feature data, and the user identifier corresponding to the user image is obtained; and the social information associated with the user identifier is obtained and displayed. The user's social information can thus be displayed simply by aiming the camera at the user's face, which simplifies the operation and improves the convenience and efficiency of social information display.
In one embodiment, a computer device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the program:
acquiring a frame image containing a face image in a preset area of a scanning visual area; extracting face feature data of a face image contained in the frame image; inquiring a user image matched with the face image according to the face characteristic data, and acquiring a user identification corresponding to the user image; and acquiring and displaying social information associated with the user identification.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a non-volatile computer readable storage medium, and where the program, when executed, may include processes in the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the invention; they are described in detail but are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the invention, all of which fall within the scope of the invention. Accordingly, the scope of protection of the invention is determined by the appended claims.

Claims (14)

1. The method for displaying the social information of the user is characterized by comprising the following steps of:
acquiring a frame image containing a face image in a preset area of a scanning visual area;
extracting face feature data of a face image contained in the frame image;
inquiring a user image matched with the face image according to the face characteristic data, and acquiring a user identifier corresponding to the user image;
acquiring social information associated with the user identifier, and displaying the social information in the scanning visible area;
when it is detected that a frame image generated in real time does not contain the face image in the preset area, counting a deviation duration of the camera, and closing the display of the social information if the deviation duration reaches a preset deviation duration threshold; the deviation duration being the continuous duration during which the preset area contains no face image in the image scanning process.
2. The method according to claim 1, further comprising, before the acquiring the frame image including the face image in the preset area of the scan-visible area:
entering an image scanning state through an image scanning portal provided by the social network application;
the acquiring the frame image containing the face image in the preset area of the scanning visual area comprises the following steps:
and acquiring a frame image containing a face image in a preset area in the scanning visible area in the image scanning state.
3. The method of claim 2, wherein the obtaining and presenting social information associated with the user identification comprises:
and acquiring social information associated with the user identification, and displaying the social information on a frame image in the scanning visible area in the image scanning state.
4. The method according to claim 1, wherein the acquiring a frame image including a face image in a preset area of the scan-visible area includes:
detecting whether the similarity between a preset number of continuously generated frame images is greater than a similarity threshold and, if so, obtaining from the continuously generated preset number of frame images one frame image whose preset area within the scanning visible area contains a face image; or
detecting whether the offset of the camera relative to a target object in the scanning visible area within a preset duration is smaller than an offset threshold and, if so, obtaining from the frame images generated within the preset duration one frame image whose preset area within the scanning visible area contains a face image.
5. The method according to claim 1, wherein the extracting face feature data of a face image included in the frame image includes:
and extracting face characteristic data of the face image contained in the frame image, wherein the proportion of the face image contained in the frame image in the preset area exceeds the preset proportion, and the definition of the face image exceeds the definition threshold.
6. The method of claim 1, further comprising, after obtaining the user identification corresponding to the user image:
acquiring user sign data associated with the user identifier;
and displaying the user sign data on the frame image displayed in the scanning visible area.
7. A display device for social information of a user, comprising:
the frame image acquisition module is used for acquiring a frame image containing a face image in a preset area of the scanning visual area;
The face feature data extraction module is used for extracting face feature data of a face image contained in the frame image;
the user identification inquiring module is used for inquiring a user image matched with the face image according to the face characteristic data and acquiring a user identification corresponding to the user image;
the display module is configured to obtain the social information associated with the user identifier and display the social information in the scanning visible area; and is further configured to count a deviation duration of the camera when it is detected that a frame image generated in real time does not contain the face image in the preset area, and to close the display of the social information if the deviation duration reaches a preset deviation duration threshold; the deviation duration being the continuous duration during which the preset area contains no face image in the image scanning process.
8. The apparatus of claim 7, wherein the apparatus further comprises:
the image scanning module is used for entering an image scanning state through an image scanning entrance provided by the social network application;
the frame image acquisition module is also used for acquiring a frame image containing a face image in a preset area in the scanning visible area in the image scanning state.
9. The apparatus of claim 8, wherein
the user identification inquiring module is further used for acquiring social information associated with the user identification, and displaying the social information on a frame image in the scanning visible area in the image scanning state.
10. The apparatus of claim 7, wherein
the frame image acquisition module is further configured to detect whether the similarity between a preset number of continuously generated frame images is greater than a similarity threshold and, if so, obtain from the continuously generated preset number of frame images one frame image whose preset area within the scanning visible area contains a face image; or to detect whether the offset of the camera relative to a target object in the scanning visible area within a preset duration is smaller than an offset threshold and, if so, obtain from the frame images generated within the preset duration one frame image whose preset area within the scanning visible area contains a face image.
11. The apparatus of claim 7, wherein
the face feature data extraction module is further configured to extract the face feature data of the face image contained in the frame image, wherein the proportion of the preset area occupied by the face image exceeds a preset proportion and the sharpness of the face image exceeds a sharpness threshold.
12. The apparatus of claim 7, wherein the apparatus further comprises:
the sign data acquisition module is used for acquiring the sign data of the user associated with the user identifier;
the display module is further configured to display the user sign data on a frame image displayed in the scan-visible area.
13. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the following steps when executing the program:
acquiring a frame image containing a face image in a preset area of a scanning visual area;
extracting face feature data of a face image contained in the frame image;
inquiring a user image matched with the face image according to the face characteristic data, and acquiring a user identifier corresponding to the user image;
acquiring social information associated with the user identifier, and displaying the social information in the scanning visible area;
when it is detected that a frame image generated in real time does not contain the face image in the preset area, counting a deviation duration of the camera, and closing the display of the social information if the deviation duration reaches a preset deviation duration threshold; the deviation duration being the continuous duration during which the preset area contains no face image in the image scanning process.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN201710199079.9A 2017-03-29 2017-03-29 User social information display method and device and computer equipment Active CN107194817B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710199079.9A CN107194817B (en) 2017-03-29 2017-03-29 User social information display method and device and computer equipment
PCT/CN2018/073824 WO2018177002A1 (en) 2017-03-29 2018-01-23 Social information display method, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710199079.9A CN107194817B (en) 2017-03-29 2017-03-29 User social information display method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN107194817A CN107194817A (en) 2017-09-22
CN107194817B true CN107194817B (en) 2023-06-23

Family

ID=59871655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710199079.9A Active CN107194817B (en) 2017-03-29 2017-03-29 User social information display method and device and computer equipment

Country Status (2)

Country Link
CN (1) CN107194817B (en)
WO (1) WO2018177002A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194817B (en) * 2017-03-29 2023-06-23 腾讯科技(深圳)有限公司 User social information display method and device and computer equipment
CN108153822A (en) * 2017-12-04 2018-06-12 珠海市魅族科技有限公司 Association method and apparatus, terminal and readable storage medium
KR102543656B1 (en) * 2018-03-16 2023-06-15 삼성전자주식회사 Screen controlling method and electronic device supporting the same
CN108764053A (en) * 2018-04-28 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and electronic equipment
CN109815804A (en) * 2018-12-19 2019-05-28 平安普惠企业管理有限公司 Exchange method, device, computer equipment and storage medium based on artificial intelligence
CN112733575A (en) * 2019-10-14 2021-04-30 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111010527B (en) * 2019-12-19 2021-05-14 易谷网络科技股份有限公司 Method and related device for establishing video call through short message link
CN111064658B (en) * 2019-12-31 2022-04-19 维沃移动通信有限公司 Display control method and electronic equipment
CN111460032A (en) * 2020-03-23 2020-07-28 郑州春泉节能股份有限公司 Cross-platform data synchronization method for epidemic situation prevention and control device
CN111598128B (en) * 2020-04-09 2023-05-12 腾讯科技(上海)有限公司 Control state identification and control method, device, equipment and medium of user interface
CN111813281A (en) * 2020-05-28 2020-10-23 维沃移动通信有限公司 Information acquisition method, information output method, information acquisition device, information output device and electronic equipment
CN113835582B (en) * 2021-09-27 2024-03-15 青岛海信移动通信技术有限公司 Terminal equipment, information display method and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102980570A (en) * 2011-09-06 2013-03-20 上海博路信息技术有限公司 Live-scene augmented reality navigation system
CN103412953A (en) * 2013-08-30 2013-11-27 苏州跨界软件科技有限公司 Social contact method on the basis of augmented reality
CN103426003A (en) * 2012-05-22 2013-12-04 腾讯科技(深圳)有限公司 Implementation method and system for augmented reality interaction
CN104572732A (en) * 2013-10-22 2015-04-29 腾讯科技(深圳)有限公司 Method and device for inquiring user identification and method and device for acquiring user identification
CN105302428A (en) * 2014-07-29 2016-02-03 腾讯科技(深圳)有限公司 Social network-based dynamic information display method and device
CN105320407A (en) * 2015-11-12 2016-02-10 上海斐讯数据通信技术有限公司 Pictured people social moment information acquisition method and apparatus
CN105323252A (en) * 2015-11-16 2016-02-10 上海璟世数字科技有限公司 Method and system for realizing interaction based on augmented reality technology and terminal
CN106484737A (en) * 2015-09-01 2017-03-08 腾讯科技(深圳)有限公司 Network social networking method and device
CN106503262A (en) * 2016-11-22 2017-03-15 张新民 Social face memory recognition method and device

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009259238A (en) * 2008-03-26 2009-11-05 Fujifilm Corp Storage device for image sharing and image sharing system and method
CN102916986A (en) * 2011-08-01 2013-02-06 环达电脑(上海)有限公司 Searching system and searching method for face recognition
CN103186590A (en) * 2011-12-30 2013-07-03 牟颖 Method for acquiring identity information of wanted criminal on run through mobile phone
CN103365922A (en) * 2012-03-30 2013-10-23 北京千橡网景科技发展有限公司 Method and device for associating images with personal information
CN102819726B (en) * 2012-06-27 2016-08-24 宇龙计算机通信科技(深圳)有限公司 photo processing system and method for mobile terminal
CN103513890B (en) * 2012-06-28 2016-04-13 腾讯科技(深圳)有限公司 A kind of exchange method based on picture, device and server
CN103076879A (en) * 2012-12-28 2013-05-01 中兴通讯股份有限公司 Multimedia interaction method and device based on face information, and terminal
CN103207890A (en) * 2013-02-21 2013-07-17 北京百纳威尔科技有限公司 Method and device for acquiring contact person information
CN103218600B (en) * 2013-03-29 2017-05-03 四川长虹电器股份有限公司 Real-time face detection algorithm
CN103532826A (en) * 2013-07-10 2014-01-22 北京百纳威尔科技有限公司 User state setting method and device in instant communication tool
CN103744858B (en) * 2013-12-11 2017-09-22 深圳先进技术研究院 A kind of information-pushing method and system
CN103810248B (en) * 2014-01-17 2017-02-08 百度在线网络技术(北京)有限公司 Method and device for searching for interpersonal relationship based on photos
CN104618803B (en) * 2014-02-26 2018-05-08 腾讯科技(深圳)有限公司 Information-pushing method, device, terminal and server
CN104820665A (en) * 2014-03-17 2015-08-05 腾讯科技(北京)有限公司 Method, terminal and server for exhibiting recommendation information
CN105303149B (en) * 2014-05-29 2019-11-05 腾讯科技(深圳)有限公司 The methods of exhibiting and device of character image
CN104780167B (en) * 2015-03-27 2018-11-27 深圳创维数字技术有限公司 A kind of account login method and terminal
CN104852908A (en) * 2015-04-22 2015-08-19 中国建设银行股份有限公司 Recommending method and apparatus for service information
CN104808921A (en) * 2015-05-08 2015-07-29 三星电子(中国)研发中心 Information reminding method and device
CN105117463B (en) * 2015-08-24 2019-08-06 北京旷视科技有限公司 Information processing method and information processing unit
CN105574498A (en) * 2015-12-15 2016-05-11 重庆凯泽科技有限公司 Face recognition system and recognition method based on customs security check
CN105574155A (en) * 2015-12-16 2016-05-11 广东欧珀移动通信有限公司 Photo search method and device
CN105488726A (en) * 2015-12-23 2016-04-13 北京奇虎科技有限公司 Method and device for inviting friend to join social group
CN105591885B (en) * 2016-01-21 2019-01-08 腾讯科技(深圳)有限公司 resource sharing method and device
CN106203391A (en) * 2016-07-25 2016-12-07 上海蓝灯数据科技股份有限公司 Face identification method based on intelligent glasses
CN106294681B (en) * 2016-08-05 2019-11-05 腾讯科技(深圳)有限公司 The methods, devices and systems of multiple-exposure
CN107194817B (en) * 2017-03-29 2023-06-23 腾讯科技(深圳)有限公司 User social information display method and device and computer equipment


Also Published As

Publication number Publication date
CN107194817A (en) 2017-09-22
WO2018177002A1 (en) 2018-10-04

Similar Documents

Publication Publication Date Title
CN107194817B (en) User social information display method and device and computer equipment
CN108234591B (en) Content data recommendation method and device based on identity authentication device and storage medium
US9984729B2 (en) Facial detection, recognition and bookmarking in videos
EP3063730B1 (en) Automated image cropping and sharing
CN110263642B (en) Image cache for replacing portions of an image
CN107911736B (en) Live broadcast interaction method and system
CN108108012B (en) Information interaction method and device
CN109905593B (en) Image processing method and device
CN115735229A (en) Updating avatar garments in messaging systems
US8538093B2 (en) Method and apparatus for encouraging social networking through employment of facial feature comparison and matching
CN105354231B (en) Picture selection method and device, and picture processing method and device
CN110753933A (en) Reactivity profile portrait
US10499097B2 (en) Methods, systems, and media for detecting abusive stereoscopic videos by generating fingerprints for multiple portions of a video frame
KR102370699B1 (en) Method and apparatus for acquiring information based on an image
US11856255B2 (en) Selecting ads for a video within a messaging system
CN112866577B (en) Image processing method and device, computer readable medium and electronic equipment
CN113850627A (en) Elevator advertisement display method and device and electronic equipment
US9407864B2 (en) Data processing method and electronic device
JP6318289B1 (en) Related information display system
CN110661693A (en) Methods, computing device-readable storage media, and computing devices facilitating media-based content sharing performed in a computing device
CN113965792A (en) Video display method and device, electronic equipment and readable storage medium
CN117453635A (en) Image deletion method, device, electronic equipment and readable storage medium
CN117459662A (en) Video playing method, video identifying method, video playing device, video playing equipment and storage medium
CN114285988A (en) Display method, display device, electronic equipment and storage medium
CN117315756A (en) Image recognition method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant