CN111813281A - Information acquisition method, information output method, information acquisition device, information output device and electronic equipment - Google Patents

Information acquisition method, information output method, information acquisition device, information output device and electronic equipment Download PDF

Info

Publication number
CN111813281A
Authority
CN
China
Prior art keywords
information
target object
communication prompt
communication
prompt information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010471234.XA
Other languages
Chinese (zh)
Inventor
漆一磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010471234.XA priority Critical patent/CN111813281A/en
Publication of CN111813281A publication Critical patent/CN111813281A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04812Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an information acquisition method, an information output method, corresponding apparatuses, and an electronic device, and belongs to the technical field of communication. An embodiment of the method comprises: acquiring person tag information of a target object, wherein the target object is an object in a viewfinder interface of smart glasses, and the person tag information is obtained by performing face recognition on the target object; acquiring communication prompt information corresponding to the target object based on the person tag information; and sending the communication prompt information to the smart glasses. The embodiment can acquire information conveniently and quickly, and the acquired information can effectively assist the user during communication.

Description

Information acquisition method, information output method, information acquisition device, information output device and electronic equipment
Technical Field
The embodiments of the present application relate to the technical field of communication, and in particular to an information acquisition method, an information output method, corresponding apparatuses, and an electronic device.
Background
In many scenarios, a user needs to acquire related information of a person. For example, in a social scenario, a user often needs to determine information about the identity, hobbies, etc. of people in the field of view in order to communicate with others.
In the prior art, the related information of a person can be determined by capturing an image of the person in the current scene and comparing it with person images in an album or with friends' avatars in a social application, so as to communicate with that person. In the process of implementing the present application, the inventor found that the prior art has at least the following problems: information acquisition is not convenient and fast enough, and sufficient, effective information may not be obtained at all, so that the user's communication process cannot be effectively assisted.
Content of application
The embodiments of the present application aim to provide an information acquisition method, an information output method, corresponding apparatuses, and an electronic device, which can solve the technical problems that information acquisition is not convenient and quick enough and that the acquired information cannot effectively assist the user's communication process.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an information acquisition method, applied to an electronic device, including: acquiring person tag information of a target object, wherein the target object is an object in a viewfinder interface of smart glasses, and the person tag information is obtained by performing face recognition on the target object; acquiring communication prompt information corresponding to the target object based on the person tag information; and sending the communication prompt information to the smart glasses.
In a second aspect, an embodiment of the present application provides an information output method, applied to smart glasses, including: acquiring target object information in a viewfinder frame; sending the target object information, or person tag information matched with the target object information, to an electronic device, and receiving communication prompt information returned by the electronic device; and outputting the communication prompt information.
In a third aspect, an embodiment of the present application provides an information acquisition apparatus, applied to an electronic device, including: a first acquisition unit configured to acquire person tag information of a target object, wherein the target object is an object in a viewfinder interface of smart glasses, and the person tag information is obtained after face recognition is performed on the target object; a second acquisition unit configured to acquire communication prompt information corresponding to the target object based on the person tag information; and a first sending unit configured to send the communication prompt information to the smart glasses.
In a fourth aspect, an embodiment of the present application provides an information output apparatus, applied to smart glasses, including: an acquisition unit configured to acquire target object information in a viewfinder frame; a sending unit configured to send the target object information, or person tag information matched with the target object information, to an electronic device and to receive communication prompt information returned by the electronic device; and an output unit configured to output the communication prompt information.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which, when executed by the processor, implement the steps of the method described in the first or second aspect.
In a sixth aspect, embodiments of the present application provide a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method as described in the first or second aspect.
In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method described in the first or second aspect.
In the embodiment of the application, person tag information of a target object in the viewfinder interface of the smart glasses is acquired, communication prompt information corresponding to the target object is then acquired based on the person tag information, and the communication prompt information is finally sent to the smart glasses. In this way, communication prompt information for a target object in the viewfinder interface of the smart glasses can be acquired conveniently and quickly, the acquired information can effectively assist the user during communication, and the embarrassment of failing to recognize a familiar face is relieved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a flowchart of an information acquisition method provided in an embodiment of the present application;
FIG. 2 is a flow chart of an information output method provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of an information acquisition apparatus provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an information output apparatus according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware structure of an electronic device suitable for implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application are capable of operating in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The information acquisition method, information output method, corresponding apparatuses, and electronic device provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings, through specific embodiments and their application scenarios.
Please refer to fig. 1, which shows a flowchart of an information obtaining method provided in an embodiment of the present application. The information acquisition method provided by the embodiment of the application can be applied to electronic equipment. In practice, the electronic device may be a smartphone, a tablet computer, a laptop, etc.
The process 100 of the information acquisition method provided by the embodiment of the application includes the following steps:
Step 101, acquiring the person tag information of the target object.
In this embodiment, the electronic device may be communicatively connected to the smart glasses and receive information sent by the smart glasses. Smart glasses are an emerging head-worn wearable smart device. Like a smartphone, smart glasses have an independent operating system, allow the user to install software such as applications and games, can perform functions such as adding schedule items, map navigation, interacting with friends, taking photos and videos, and making video calls with friends, and can access a wireless network through a mobile communication network. The smart glasses may be equipped with an image capture device (e.g., a camera) to capture images in the viewfinder interface.
In this embodiment, the target object is an object in the viewfinder interface of the smart glasses. The person tag information may be obtained by performing face recognition on the target object, and may characterize the identity of the target object. For example, the person tag information may include, but is not limited to, name, gender, age, occupation, and the like.
In some scenarios, the smart glasses may perform face recognition on the target object to obtain the person tag information. In this case, the electronic device can directly acquire the recognized person tag information from the smart glasses. In this scenario, the smart glasses may have a face recognition function. For example, a face detection model and a face recognition model may be stored in the smart glasses, both obtained by pre-training with a machine learning method. The face detection model detects a face detection frame in the viewfinder interface, and the face recognition model identifies the face within the detection frame to obtain face features, from which the person tag information can be derived.
In practice, different people have different face features and person tag information. Therefore, in this scenario, a plurality of pre-stored face features labeled with person tag information may be stored in the smart glasses in advance. These pre-stored face features may include those of the user and of other people (e.g., friends, relatives, colleagues, clients, etc.). The smart glasses can perform face recognition on the target object in the viewfinder frame to obtain target face features, match the target face features against each pre-stored face feature, and use the person tag information corresponding to the matching pre-stored face feature (e.g., one whose similarity exceeds a preset threshold) as the person tag information of the target object.
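The matching step described above can be sketched as follows. This is a minimal illustration only: the feature vectors, names, and the 0.8 similarity threshold are assumptions for the example, and in practice the vectors would come from a trained face recognition model rather than being hard-coded.

```python
import math

# Illustrative pre-stored face features labeled with person tag information.
# In a real system these vectors would be produced by a face recognition model.
PRESTORED = [
    ([0.11, 0.80, 0.59], {"name": "Zhang San", "occupation": "engineer"}),
    ([0.92, 0.10, 0.38], {"name": "Li Si", "occupation": "designer"}),
]

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_person_tag(target_feature, prestored=PRESTORED, threshold=0.8):
    """Return the person tag information of the best-matching pre-stored
    face feature, or None when no similarity exceeds the preset threshold."""
    best_tag, best_sim = None, threshold
    for feature, tag in prestored:
        sim = cosine_similarity(target_feature, feature)
        if sim > best_sim:
            best_tag, best_sim = tag, sim
    return best_tag
```

When no pre-stored feature is similar enough, the function returns None, corresponding to the case where the target object is a stranger with no stored person tag information.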
It should be noted that the smart glasses may obtain the plurality of pre-stored face features labeled with person tag information in advance in a variety of ways.
As an example, the user may upload his or her own avatar, or the avatars of others, together with the related person tag information to a cloud server. The server can extract face features from the avatars and establish a correspondence between the face features and the person tag information. When the smart glasses are started, they can obtain the face features and the corresponding person tag information stored by the server via network transmission, and store the acquired face features as pre-stored face features to serve as a comparison reference.
As yet another example, a user may set person tag information for person images in the album of an electronic device (e.g., a mobile phone) to which the smart glasses are connected. The person tag information and the corresponding person images can then be transmitted to the smart glasses via Bluetooth or another transfer method. After obtaining the person images, the smart glasses can extract face features from them, store the features as pre-stored face features, and store the corresponding person tag information at the same time.
As yet another example, the smart glasses may have an image collection function, collecting person images and recognizing face features. The user may annotate a captured image with person tag information. The smart glasses can then store the face features from the collected person images as pre-stored face features together with the person tag information noted by the user.
In other scenarios, the electronic device may perform the face recognition operation on the target object to obtain the person tag information. In this case, the electronic device acquires the target object information of the viewfinder interface from the smart glasses. The target object information here may be an image containing the target object, which may be part or all of the viewfinder interface. The electronic device can then perform face recognition on the target object in the image to obtain the person tag information.
In this scenario, the electronic device may store the face detection model and the face recognition model. The face detection frame of the target object is detected by the face detection model, and the face within the detection frame is identified by the face recognition model to obtain face features, from which the person tag information can be derived. For example, the electronic device may store in advance a plurality of pre-stored face features labeled with person tag information, match the face features of the target object against them, and use the person tag information corresponding to the matching pre-stored face feature (e.g., one whose similarity exceeds a preset threshold) as the person tag information of the target object.
Step 102, acquiring the communication prompt information corresponding to the target object based on the person tag information.
In this embodiment, the communication prompt information may be used to provide topics available for conversation while the user communicates with the target object. For example, the communication prompt information may include, but is not limited to, point-of-interest information of the target object, backlog items shared with the target object, and the like. The point-of-interest information may be information such as topics the target object is interested in. Acquiring the point-of-interest information helps the user find topics for conversation and can markedly improve communication. Acquiring the backlog helps the user keep track of matters pending between the two parties and avoid omitting important items.
In this embodiment, since the person tag information may represent the identity of the target object, and people with different identities generally have different hobbies and the like, the electronic device may determine the communication prompt information based on the person tag information. For example, a correspondence between objects of different identities and communication prompt information may be set in advance, and the communication prompt information corresponding to the target object determined from that correspondence.
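The pre-set correspondence described above can be as simple as a lookup table from an identity attribute in the person tag information to candidate topics. The occupations, topics, and fallback value below are illustrative assumptions, not part of this application:

```python
# Illustrative correspondence between an identity attribute (occupation)
# and candidate communication prompt topics.
TOPIC_TABLE = {
    "engineer": ["new gadgets", "open-source projects"],
    "designer": ["typography", "exhibition openings"],
}

def communication_prompts(person_tag, table=TOPIC_TABLE):
    """Look up candidate topics for the target object's occupation;
    fall back to a generic topic when no correspondence is set."""
    return table.get(person_tag.get("occupation"), ["recent news"])
```

A richer implementation could key the table on several attributes (age group, shared group membership) rather than occupation alone.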
In some optional implementations of this embodiment, the communication prompt information corresponding to the target object may be acquired through the following steps:
First, information related to the target object is acquired, based on the person tag information, from the usage information of applications installed on the electronic device.
The usage information may refer to various information created, recorded, received, or generated while the user uses an application, such as interaction information and behavior information. The related information is the information associated with the target object within the usage information, and may include, but is not limited to, call records, short message records, chat records with the target object, and replies by the target object on a social platform (e.g., a circle of friends).
Second, the acquired related information is input into a pre-trained information generation model to obtain the communication prompt information corresponding to the target object. Here, the information generation model may be used to represent the correspondence between the related information and the communication prompt information, and may be trained in advance by a machine learning method (e.g., supervised training).
Various existing text generation models may be employed to train the information generation model. In practice, an LSTM (Long Short-Term Memory) network, an encoder-decoder model, or the like can be used.
It should be noted that, based on the related information of the target object, the communication prompt information corresponding to the target object may also be determined in other ways. As an example, keywords of various types may be extracted from the retrieved related information and used as, or combined into, the communication prompt information. For example, if the chat history with the target object contains the name of a celebrity and the name occurs more often than a preset count, that celebrity can be used as the communication prompt information. For another example, if a memo contains the name of the target object and a corresponding to-do record, keywords can be extracted from the to-do record and used as the communication prompt information.
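The keyword-frequency approach described above can be sketched as follows. The sample records, candidate names, and the threshold of 2 occurrences are assumptions chosen for illustration:

```python
from collections import Counter

def extract_prompt_keywords(records, candidates, min_count=2):
    """Count how often each candidate keyword (e.g., a celebrity's name)
    appears across chat records; keep those reaching the preset count."""
    counts = Counter()
    for record in records:
        for keyword in candidates:
            if keyword in record:
                counts[keyword] += 1
    return [kw for kw, c in counts.items() if c >= min_count]
```

Candidate keywords would in practice come from a named-entity or keyword extractor run over the records, rather than being supplied by hand.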
In some optional implementations of this embodiment, when the viewfinder frame contains a plurality of target objects, the electronic device may query, from a social application installed on it, the target group to which the target objects belong in common, based on the person tag information of each target object. The related information of each target object, such as the messages each has sent, can then be obtained from the chat records of the target group. In practice, when obtaining the related information of each target object from the chat records, the records may be filtered by time, for example to keep only those within a certain period (e.g., the last month). In this way, communication prompt information can be determined separately for each target object based on that object's related information.
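The per-sender, time-filtered retrieval described above might look like the following sketch. The record structure (sender, time, text) and the 30-day window are illustrative assumptions:

```python
from datetime import datetime, timedelta

def recent_messages_by_sender(chat_records, sender, now, days=30):
    """Return the messages a given target object sent to the group
    within the last `days` days, for building that object's prompts."""
    cutoff = now - timedelta(days=days)
    return [r["text"] for r in chat_records
            if r["sender"] == sender and r["time"] >= cutoff]
```

Calling this once per recognized target object yields a separate pool of related information for each, from which each object's communication prompt information can then be derived independently.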
Step 103, sending the communication prompt information to the smart glasses.
In this embodiment, after obtaining the communication prompt information, the electronic device may send it to the smart glasses so that the smart glasses output it (for example, by displaying it or by voice broadcast). In this way, when the user communicates with others, topics available for conversation can be prompted automatically. In particular, when the user cannot recall the identity of, or past interactions with, the other party, information that can support the conversation is provided in time, which helps the user.
In some optional implementations of this embodiment, the communication prompt information may include point-of-interest information. After obtaining the communication prompt information corresponding to the target object, the electronic device may also query the Internet for information associated with the point of interest. For example, if the point-of-interest information is a celebrity, the latest news about that celebrity can be queried from the Internet. The associated information may then be sent to the smart glasses for display in the lenses, further enriching the information content displayed by the smart glasses.
According to the method provided by this embodiment of the application, person tag information of a target object in the viewfinder interface of the smart glasses is acquired, communication prompt information corresponding to the target object is then acquired based on the person tag information, and the communication prompt information is finally sent to the smart glasses. In this way, communication prompt information for a target object in the viewfinder interface of the smart glasses can be acquired conveniently and quickly, the acquired information can effectively assist the user during communication, and the embarrassment of failing to recognize a familiar face is relieved.
Referring further to fig. 2, a flowchart of an information output method provided by an embodiment of the present application is shown. The information output method can be applied to smart glasses. In practice, smart glasses are an emerging head-worn wearable smart device. Like a smartphone, smart glasses have an independent operating system, allow the user to install software such as applications and games, can perform functions such as adding schedule items, map navigation, interacting with friends, taking photos and videos, and making video calls with friends, and can access a wireless network through a mobile communication network. The smart glasses may be equipped with an image capture device (e.g., a camera) to capture images in the viewfinder interface.
The process 200 of the information output method provided by the embodiment of the application includes the following steps:
Step 201, acquiring target object information in a viewfinder frame.
In this embodiment, the smart glasses may be equipped with an image capture device (such as a camera) to capture the current view, which serves as the viewfinder frame. An object in the viewfinder frame may be referred to as a target object. The target object information may be an image containing the target object, which may be part or all of the viewfinder interface.
Step 202, sending the target object information, or person tag information matched with the target object information, to the electronic device, and receiving the communication prompt information returned by the electronic device.
In this embodiment, the smart glasses may send the target object information, or person tag information matched with it, to an electronic device (e.g., a smartphone or tablet computer) communicatively connected to them, and receive the communication prompt information returned by the electronic device. The communication prompt information may be used to provide information prompts during the user's communication with the target object, and may include, but is not limited to, point-of-interest information, backlog items for both parties, and the like. The point-of-interest information may be information such as topics the target object is interested in.
In some scenarios, the smart glasses may have a face recognition function. The smart glasses can perform face recognition on the target object in the viewfinder frame to obtain the person tag information, then send the person tag information to the electronic device and receive the communication prompt information returned by it.
As an example, a face detection model and a face recognition model may be stored in the smart glasses, both obtained by pre-training with a machine learning method. The smart glasses can detect a face detection frame in the viewfinder interface through the face detection model, and identify the face within the detection frame through the face recognition model to obtain face features, against which the person tag information can then be matched.
In practice, different people have different face features and person tag information. Therefore, in some optional implementations of this embodiment, a plurality of pre-stored face features labeled with person tag information may be stored in the smart glasses in advance. These pre-stored face features may include those of the user and of other people (e.g., friends, relatives, colleagues, clients, etc.). The smart glasses can perform face recognition on the target object in the viewfinder frame to obtain target face features, match them against each pre-stored face feature, and use the person tag information corresponding to the matching pre-stored face feature (e.g., one whose similarity exceeds a preset threshold) as the person tag information of the target object.
It should be noted that the plurality of pre-stored face features labeled with person tag information may be obtained in a variety of ways.
As an example, the user may upload his or her own avatar, or the avatars of others, together with the related person tag information to a cloud server. The server can extract face features from the avatars and establish a correspondence between the face features and the person tag information. When the smart glasses are started, they can obtain the face features and the corresponding person tag information stored by the server via network transmission, and store the acquired face features as pre-stored face features to serve as a comparison reference.
As yet another example, a user may set person tag information for a person image in an album of an electronic device (e.g., a mobile phone) to which the smart glasses are connected. The person tag information and the corresponding person images can then be transmitted to the smart glasses via Bluetooth or another transmission method. After obtaining the person images, the smart glasses can extract face features from them, store those features as pre-stored face features, and store the corresponding person tag information alongside.
As yet another example, the smart glasses may have an image-capture function. The smart glasses can capture person images and recognize the face features in them. The user may annotate a captured image with person tag information. The smart glasses can then take the face features in the captured person image as pre-stored face features and store them together with the person tag information annotated by the user.
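All three acquisition routes above (cloud sync, album transfer, on-device capture) converge on one local store of labeled features. The class and method names below are illustrative assumptions, not part of this application:

```python
class FaceFeatureStore:
    """Minimal local store of pre-stored face features keyed by person tag.

    Whichever route a feature arrives by, it is recorded with the same
    add() call, so later matching code sees one uniform collection.
    """

    def __init__(self):
        self._entries = []  # list of (tag_info, feature_vector)

    def add(self, tag_info, feature_vector):
        self._entries.append((tag_info, list(feature_vector)))

    def all_entries(self):
        return list(self._entries)

store = FaceFeatureStore()
store.add("friend: Wang", [0.2, 0.5, 0.1])     # e.g. synced from the server
store.add("colleague: Zhao", [0.7, 0.1, 0.3])  # e.g. captured on the glasses
print(len(store.all_entries()))  # → 2
```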
Here, the manner in which the electronic device determines the communication prompt information based on the person tag information may refer to step 102 in the embodiment corresponding to fig. 1, and is not described again in this embodiment.
In other scenarios, the smart glasses may send the target object information to the electronic device and then receive the communication prompt information returned by the electronic device. Here, the manner in which the electronic device determines the communication prompt information based on the target object information may refer to step 102 in the embodiment corresponding to fig. 1, and is not described again in this embodiment.
And step 203, outputting the communication prompt information.
In this embodiment, after receiving the communication prompt information returned by the electronic device, the smart glasses may use it in one or more ways. As an example, the smart glasses may display the communication prompt information in the lenses. As yet another example, a speaker may be mounted in the smart glasses, or an earphone may be connected to them, and the smart glasses can broadcast the communication prompt information through the speaker or the earphone.
In some optional implementations of this embodiment, the smart glasses may first acquire the imaging position of the target object, that is, locate the specific coordinates at which the person is imaged. The display position of the communication prompt information in the lens can then be determined based on the imaging position. In practice, a position near the imaging position that does not occlude it may be taken as the display position. The communication prompt information can then be displayed at that display position, so that the target object and the communication prompt information are visible at the same time, making it convenient for the user to read the communication prompt information corresponding to the target object.
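One simple way to realize "near the imaging position but not occluding it" is to anchor the label beside the face bounding box and clamp it to the display bounds. The box format, pixel offsets, and screen size below are illustrative assumptions:

```python
def label_position(face_box, label_w, label_h, screen_w, screen_h, margin=8):
    """Place a label to the right of the face box if it fits on screen,
    otherwise to the left, never overlapping the box.

    face_box: (x, y, w, h) of the detected face in screen coordinates.
    Returns the label's top-left corner (lx, ly).
    """
    x, y, w, h = face_box
    if x + w + margin + label_w <= screen_w:   # fits on the right side
        lx = x + w + margin
    else:                                      # fall back to the left side
        lx = max(0, x - margin - label_w)
    ly = min(max(0, y), screen_h - label_h)    # align with top of face, clamped
    return lx, ly

print(label_position((100, 50, 80, 80), 120, 40, 640, 480))  # → (188, 50)
```

When the face sits near the right edge of the viewfinder, the same call falls back to placing the label on the left, so the prompt never covers the person being labeled.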
According to the method provided by this embodiment of the application, the target object information in the viewfinder image is acquired, then the target object information, or the person tag information matched with it, is sent to the electronic device, the communication prompt information returned by the electronic device is received, and finally the communication prompt information is output. In this way, the communication prompt information for the target object in the viewing interface can be output conveniently and quickly through the smart glasses; the output information can effectively assist the user during communication and alleviates the difficulty of failing to recognize a familiar face.
It should be noted that, in the information acquisition method provided in this embodiment of the application, the execution subject may be an information acquisition apparatus, or a control module in the information acquisition apparatus configured to execute the information acquisition method. In this embodiment of the application, the information acquisition method provided herein is described by taking an information acquisition apparatus executing the method as an example.
As shown in fig. 3, the information acquisition apparatus 300 of this embodiment includes: a first acquisition unit 301 configured to acquire person tag information of a target object, where the target object is an object in a viewing interface of smart glasses and the person tag information is obtained based on face recognition of the target object; a second obtaining unit 302 configured to obtain communication prompt information corresponding to the target object based on the person tag information; and a first sending unit 303 configured to send the communication prompt information to the smart glasses.
In some optional implementations of this embodiment, the second obtaining unit 302 is further configured to: query related information of the target object from usage information of applications installed in the electronic device based on the person tag information; and determine the communication prompt information corresponding to the target object based on the related information and an information generation model obtained through pre-training, where the information generation model is used to represent the correspondence between the related information and the communication prompt information.
In some optional implementations of this embodiment, the related information includes at least one of: call records, short message records, chat records, and reply messages on social platforms.
In some optional implementations of this embodiment, the second obtaining unit 302 is further configured to: when a plurality of target objects are included in the viewfinder image, query, based on the person tag information of each target object, a target group in which the plurality of target objects are jointly located from a social application installed on the electronic device; and acquire the related information of each target object from the chat records of the target group.
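The group lookup can be sketched as a membership test followed by a per-person filter over the group's chat records. The dictionary layout for a group, and all names and messages, are illustrative assumptions:

```python
def find_common_group(groups, target_tags):
    """Return the first group whose member set contains every target tag,
    or None if the targets share no group.

    groups: list of dicts like
        {"name": str, "members": set, "chat": [(tag, message), ...]}.
    """
    targets = set(target_tags)
    for group in groups:
        if targets <= group["members"]:  # all targets are members
            return group
    return None

def related_info(group, target_tags):
    """Collect each target's messages from the group's chat records."""
    return {tag: [m for t, m in group["chat"] if t == tag]
            for tag in set(target_tags)}

groups = [{"name": "Project A",
           "members": {"Zhang", "Li", "Wang"},
           "chat": [("Zhang", "Demo is ready"), ("Li", "Reviewing now")]}]
g = find_common_group(groups, ["Zhang", "Li"])
print(g["name"])  # → Project A
```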
In some optional implementations of this embodiment, the communication prompt information includes point-of-interest information. The apparatus further includes: a query unit configured to query association information of the point-of-interest information from the Internet; and a second sending unit configured to send the association information to the smart glasses.
The information acquisition device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a kiosk, and the like, and the embodiments of the present application are not particularly limited.
The information acquisition device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited.
The information acquisition device provided in the embodiment of the present application can implement each process implemented by the information acquisition method in the method embodiment of fig. 1, and is not described here again to avoid repetition.
According to the apparatus provided by this embodiment of the application, the person tag information of the target object in the viewing interface of the smart glasses is acquired, then the communication prompt information corresponding to the target object is obtained based on the person tag information, and finally the communication prompt information is sent to the smart glasses. In this way, the communication prompt information for the target object in the viewing interface of the smart glasses can be acquired conveniently and quickly; the acquired information can effectively assist the user during communication and alleviates the difficulty of failing to recognize a familiar face.
It should be noted that, in the information output method provided in this embodiment of the application, the execution subject may be an information output device, or a control module in the information output device configured to execute the information output method. In this embodiment of the application, the information output method provided herein is described by taking an information output device executing the method as an example. The information output device can be applied to smart glasses.
As shown in fig. 4, the information output apparatus 400 of this embodiment includes: an acquisition unit 401 configured to acquire target object information in a viewfinder image; a sending unit 402 configured to send the target object information, or the person tag information matched with it, to an electronic device and receive communication prompt information returned by the electronic device; and an output unit 403 configured to output the communication prompt information.
In some optional implementations of this embodiment, the person tag information is obtained through the following steps: performing face recognition on a target object in the viewfinder image to obtain target face features; and matching the target face features against each pre-stored face feature labeled in advance with person tag information, and acquiring the person tag information of the pre-stored face feature that matches the target face features.
In some optional implementations of the present embodiment, the output unit 403 is further configured to: acquiring an imaging position of the target object; determining the display position of the communication prompt information in the lens based on the imaging position; and displaying the communication prompt information at the display position.
In some optional implementations of the present embodiment, the output unit 403 is further configured to: and voice broadcasting the communication prompt information.
The information output device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a kiosk, and the like, and the embodiments of the present application are not particularly limited.
The information output device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited.
The information output device provided in the embodiment of the present application can implement each process implemented by the information output method in the method embodiment of fig. 2, and is not described here again to avoid repetition.
The device provided by the above embodiment of the application acquires the target object information in the viewfinder image, then sends the target object information, or the person tag information matched with it, to the electronic device, receives the communication prompt information returned by the electronic device, and finally outputs the communication prompt information. In this way, the communication prompt information for the target object in the viewing interface can be output conveniently and quickly through the smart glasses; the output information can effectively assist the user during communication and alleviates the difficulty of failing to recognize a familiar face.
Optionally, an electronic device is further provided in this embodiment of the present application, and includes a processor 510, a memory 509, and a program or an instruction stored in the memory 509 and capable of being executed on the processor 510, where the program or the instruction is executed by the processor 510 to implement each process of the above-mentioned embodiment of the information acquisition method or the information output method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and the like.
Those skilled in the art will appreciate that the electronic device 500 may further include a power supply (e.g., a battery) for supplying power to the various components; the power supply may be logically connected to the processor 510 via a power management system, which implements functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described again here.
The processor 510 is configured to acquire person tag information of a target object, where the target object is an object in a viewing interface of smart glasses and the person tag information is obtained based on face recognition of the target object, and to acquire communication prompt information corresponding to the target object based on the person tag information. The audio output unit 503 is configured to send the communication prompt information to the smart glasses.
In this embodiment of the application, the person tag information of the target object in the viewing interface of the smart glasses is acquired, then the communication prompt information corresponding to the target object is obtained based on the person tag information, and finally the communication prompt information is sent to the smart glasses. In this way, the communication prompt information for the target object in the viewing interface of the smart glasses can be acquired conveniently and quickly; the acquired information can effectively assist the user during communication and alleviates the difficulty of failing to recognize a familiar face.
Optionally, the processor 510 is further configured to query, based on the person tag information, related information of the target object from usage information of applications installed in the electronic device, and to determine the communication prompt information corresponding to the target object based on the related information and an information generation model obtained through pre-training, where the information generation model is used to represent the correspondence between the related information and the communication prompt information.
Optionally, the processor 510 is further configured to, when a plurality of target objects are included in the viewfinder image, query, based on the person tag information of each target object, a target group in which the plurality of target objects are jointly located from a social application installed in the electronic device, and to acquire the related information of each target object from the chat records of the target group.
Optionally, the processor 510 is further configured to, after the communication prompt information of the target object is acquired, query association information of the point-of-interest information from the Internet; and the audio output unit 503 is further configured to send the association information to the smart glasses. In this way, the information content displayed in the smart glasses can be further enriched.
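Enriching a prompt with point-of-interest association information can be sketched as a lookup per point of interest. Since this application does not specify any particular internet search API, the lookup is injected as a callable here; all field names and the stub data are illustrative assumptions:

```python
def enrich_with_poi_info(prompt, fetch_poi_summary):
    """Attach association information for each point of interest in the
    communication prompt. fetch_poi_summary(poi) stands in for whatever
    internet query the device actually performs."""
    enriched = dict(prompt)  # leave the original prompt untouched
    enriched["poi_details"] = {poi: fetch_poi_summary(poi)
                               for poi in prompt.get("points_of_interest", [])}
    return enriched

# Stub lookup standing in for a real internet query.
fake_lookup = {"tennis": "Local club hosts open sessions on weekends."}
prompt = {"text": "Li recently mentioned tennis",
          "points_of_interest": ["tennis"]}
result = enrich_with_poi_info(prompt, fake_lookup.get)
print(result["poi_details"]["tennis"])  # → Local club hosts open sessions on weekends.
```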
In addition, when the electronic device is a pair of smart glasses, the processor 510 may be configured to acquire target object information in the viewfinder image. The radio frequency unit 501 may be configured to send the target object information, or the person tag information matched with it, to an electronic device, and to receive the communication prompt information returned by the electronic device. The display unit 506 may be configured to display the communication prompt information.
In this embodiment of the application, the target object information in the viewfinder image is acquired, then the target object information, or the person tag information matched with it, is sent to the electronic device, the communication prompt information returned by the electronic device is received, and finally the communication prompt information is output. In this way, the communication prompt information for the target object in the viewing interface can be output conveniently and quickly through the smart glasses; the output information can effectively assist the user during communication and alleviates the difficulty of failing to recognize a familiar face.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium. When the program or the instruction is executed by a processor, it implements each process of the above information acquisition method or information output method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, and the communication interface is coupled to the processor. The processor is configured to execute a program or an instruction to implement each process of the above information acquisition method or information output method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (13)

1. An information acquisition method applied to an electronic device, comprising:
acquiring person tag information of a target object, wherein the target object is an object in a viewing interface of smart glasses, and the person tag information is obtained by performing face recognition on the target object;
acquiring communication prompt information corresponding to the target object based on the person tag information; and
sending the communication prompt information to the smart glasses.
2. The method of claim 1, wherein the obtaining of the communication prompt information corresponding to the target object based on the person tag information comprises:
inquiring relevant information of the target object from use information of an application installed in the electronic equipment based on the person tag information;
and determining the communication prompt information corresponding to the target object based on the relevant information and an information generation model obtained by pre-training, wherein the information generation model is used for representing the corresponding relation between the relevant information and the communication prompt information.
3. The method of claim 2, wherein the related information comprises at least one of: call records, short message records, chat records, and reply messages on social platforms.
4. The method of claim 2, wherein the querying the relevant information of the target object from the usage information of the application installed in the electronic device based on the personal tag information comprises:
in a case that a plurality of target objects are included in the viewfinder image, querying, based on the person tag information of each target object, a target group in which the plurality of target objects are jointly located from a social application installed on the electronic device; and
and acquiring the related information of each target object from the chat records of the target group.
5. The method of claim 1, wherein the communication prompt information comprises point-of-interest information; and
after the obtaining of the communication prompt information of the target object, the method further comprises:
querying association information of the point-of-interest information from the Internet; and
sending the association information to the smart glasses.
6. An information output method applied to smart glasses, comprising:
acquiring target object information in a viewfinder image;
sending the target object information, or person tag information matched with the target object information, to an electronic device, and receiving communication prompt information returned by the electronic device; and
and outputting the communication prompt information.
7. The method of claim 6, wherein the person tag information is obtained through the following steps:
performing face recognition on a target object in the viewfinder image to obtain target face features; and
matching the target face features against each pre-stored face feature labeled in advance with person tag information, and acquiring the person tag information of the pre-stored face feature that matches the target face features.
8. The method of claim 6, wherein the outputting the communication prompt information comprises:
acquiring an imaging position of the target object;
determining a display position of the communication prompt information in a lens based on the imaging position; and
displaying the communication prompt information at the display position.
9. The method of claim 6, wherein the outputting the communication prompt information comprises:
broadcasting the communication prompt information by voice.
10. An information acquisition apparatus applied to an electronic device, comprising:
a first acquisition unit configured to acquire person tag information of a target object, wherein the target object is an object in a viewing interface of smart glasses, and the person tag information is obtained by performing face recognition on the target object;
a second obtaining unit configured to obtain the communication prompt information corresponding to the target object based on the person tag information;
a first sending unit configured to send the communication prompt information to the smart glasses.
11. An information output device applied to smart glasses, comprising:
an acquisition unit configured to acquire target object information in a viewfinder image;
a sending unit configured to send the target object information, or person tag information matched with the target object information, to an electronic device and receive communication prompt information returned by the electronic device; and
an output unit configured to output the communication prompt information.
12. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the method of any one of claims 1-9.
13. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the method according to any one of claims 1-9.
CN202010471234.XA 2020-05-28 2020-05-28 Information acquisition method, information output method, information acquisition device, information output device and electronic equipment Pending CN111813281A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010471234.XA CN111813281A (en) 2020-05-28 2020-05-28 Information acquisition method, information output method, information acquisition device, information output device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010471234.XA CN111813281A (en) 2020-05-28 2020-05-28 Information acquisition method, information output method, information acquisition device, information output device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111813281A true CN111813281A (en) 2020-10-23

Family

ID=72847827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010471234.XA Pending CN111813281A (en) 2020-05-28 2020-05-28 Information acquisition method, information output method, information acquisition device, information output device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111813281A (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927350A (en) * 2014-04-04 2014-07-16 百度在线网络技术(北京)有限公司 Smart glasses based prompting method and device
CN104375650A (en) * 2014-12-02 2015-02-25 上海恩凡物联网科技有限公司 Social contact identification method and system based on intelligent wearable device
CN104850213A (en) * 2014-02-13 2015-08-19 索尼公司 Wearable electronic device and information processing method used for the same
US20160055371A1 (en) * 2014-08-21 2016-02-25 Coretronic Corporation Smart glasses and method for recognizing and prompting face using smart glasses
US20160295038A1 (en) * 2004-01-30 2016-10-06 Ip Holdings, Inc. Image and Augmented Reality Based Networks Using Mobile Devices and Intelligent Electronic Glasses
CN106096912A (en) * 2016-06-03 2016-11-09 广州视源电子科技股份有限公司 The face identification method of intelligent glasses and intelligent glasses
CN106156788A (en) * 2015-04-24 2016-11-23 小米科技有限责任公司 Face identification method, device and intelligent glasses
CN106201424A (en) * 2016-07-08 2016-12-07 北京甘为乐博科技有限公司 A kind of information interacting method, device and electronic equipment
CN107194817A (en) * 2017-03-29 2017-09-22 腾讯科技(深圳)有限公司 Methods of exhibiting, device and the computer equipment of user social contact information
CN108600632A (en) * 2018-05-17 2018-09-28 Oppo(重庆)智能科技有限公司 It takes pictures reminding method, intelligent glasses and computer readable storage medium
CN108958869A (en) * 2018-07-02 2018-12-07 京东方科技集团股份有限公司 A kind of intelligent wearable device and its information cuing method
CN109061903A (en) * 2018-08-30 2018-12-21 Oppo广东移动通信有限公司 Data display method, device, intelligent glasses and storage medium
CN109509109A (en) * 2017-09-15 2019-03-22 阿里巴巴集团控股有限公司 The acquisition methods and device of social information
CN109685610A (en) * 2018-12-14 2019-04-26 深圳壹账通智能科技有限公司 Product method for pushing, device, computer equipment and storage medium
CN110062362A (en) * 2018-01-19 2019-07-26 尹寅 A kind of Bluetooth pairing connection method of intelligent glasses
CN110837299A (en) * 2019-11-11 2020-02-25 上海萃钛智能科技有限公司 Activity management intelligent device, system and method

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160295038A1 (en) * 2004-01-30 2016-10-06 Ip Holdings, Inc. Image and Augmented Reality Based Networks Using Mobile Devices and Intelligent Electronic Glasses
CN104850213A (en) * 2014-02-13 2015-08-19 索尼公司 Wearable electronic device and information processing method used for the same
CN103927350A (en) * 2014-04-04 2014-07-16 百度在线网络技术(北京)有限公司 Smart glasses based prompting method and device
US20160055371A1 (en) * 2014-08-21 2016-02-25 Coretronic Corporation Smart glasses and method for recognizing and prompting face using smart glasses
CN104375650A (en) * 2014-12-02 2015-02-25 上海恩凡物联网科技有限公司 Social contact identification method and system based on intelligent wearable device
CN106156788A (en) * 2015-04-24 2016-11-23 小米科技有限责任公司 Face recognition method and device, and smart glasses
CN106096912A (en) * 2016-06-03 2016-11-09 广州视源电子科技股份有限公司 Face recognition method for smart glasses, and smart glasses
CN106201424A (en) * 2016-07-08 2016-12-07 北京甘为乐博科技有限公司 Information interaction method, apparatus, and electronic device
CN107194817A (en) * 2017-03-29 2017-09-22 腾讯科技(深圳)有限公司 Method, apparatus, and computer device for displaying user social information
CN109509109A (en) * 2017-09-15 2019-03-22 阿里巴巴集团控股有限公司 Method and apparatus for acquiring social information
CN110062362A (en) * 2018-01-19 2019-07-26 尹寅 Bluetooth pairing connection method for smart glasses
CN108600632A (en) * 2018-05-17 2018-09-28 Oppo(重庆)智能科技有限公司 Photographing reminder method, smart glasses, and computer-readable storage medium
CN108958869A (en) * 2018-07-02 2018-12-07 京东方科技集团股份有限公司 Intelligent wearable device and information prompting method thereof
CN109061903A (en) * 2018-08-30 2018-12-21 Oppo广东移动通信有限公司 Data display method and apparatus, smart glasses, and storage medium
CN109685610A (en) * 2018-12-14 2019-04-26 深圳壹账通智能科技有限公司 Product pushing method, apparatus, computer device, and storage medium
CN110837299A (en) * 2019-11-11 2020-02-25 上海萃钛智能科技有限公司 Intelligent activity management device, system, and method

Similar Documents

Publication Publication Date Title
CN105095873A (en) Picture sharing method and apparatus
RU2640632C2 (en) Method and device for delivery of information
US20120086792A1 (en) Image identification and sharing on mobile devices
CN106331761A (en) Live broadcast list display method and apparatuses
CN105302315A (en) Image processing method and device
CN105069075A (en) Photo sharing method and device
CN107784045B (en) Quick reply method and device
CN104112119A (en) Face identification-based communication method and apparatus
CN107423386B (en) Method and device for generating electronic card
CN106453528A (en) Method and device for pushing message
US20170249513A1 (en) Picture acquiring method, apparatus, and storage medium
CN106777016B (en) Method and device for information recommendation based on instant messaging
CN106547850B (en) Expression annotation method and device
CN105549300A (en) Automatic focusing method and device
CN105095868A (en) Picture matching method and apparatus
CN105335714A (en) Photograph processing method, device and apparatus
CN110019897B (en) Method and device for displaying picture
CN106130873A (en) Information processing method and device
CN106331328B (en) Information prompting method and device
CN105611341A (en) Image transmission method, device and system
CN105426904A (en) Photo processing method, apparatus and device
CN110213062B (en) Method and device for processing message
CN112087653A (en) Data processing method and device and electronic equipment
CN105701245A (en) Picture recommendation method and device
CN108027821A (en) Method and device for processing pictures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201023)