CN110174924B - Friend making method based on wearable device and wearable device - Google Patents

Friend making method based on wearable device and wearable device

Info

Publication number
CN110174924B
CN110174924B
Authority
CN
China
Prior art keywords
image
camera module
face image
friend
account information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811157912.4A
Other languages
Chinese (zh)
Other versions
CN110174924A (en)
Inventor
郑发
施锐彬
饶盛添
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201811157912.4A
Publication of CN110174924A
Application granted
Publication of CN110174924B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 - Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 - Constructional details or arrangements
    • G06F1/1613 - Constructional details or arrangements for portable computers
    • G06F1/163 - Wearable computers, e.g. on a belt
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78 - Detection of presence or absence of voice signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/61 - Control of cameras or camera modules based on recognised objects
    • H04N23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/66 - Remote control of cameras or camera parts, e.g. by remote control devices
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Hardware Design (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)
  • Telephone Function (AREA)

Abstract

The invention relates to the technical field of wearable devices, and discloses a friend making method based on a wearable device and a wearable device. The friend making method comprises the following steps: when a friend making instruction input by a user is detected, identifying a first face image contained in a first image shot by a first camera module arranged on the top side of the host and a second face image contained in a second image shot by a second camera module arranged on the bottom side of the host; acquiring first account information corresponding to the first face image and second account information corresponding to the second face image; and setting the account relationship between the first account information and the second account information as a friend relationship. By implementing the embodiments of the invention, the different account information corresponding to the face images contained in the different images shot by the dual camera modules of the wearable device can be identified, and the relationship between the identified accounts can be set as a friend relationship, so the user does not need to manually enter the account information of the friend to be searched, which increases the speed of adding friends.

Description

Friend making method based on wearable device and wearable device
Technical Field
The invention relates to the technical field of wearable equipment, in particular to a friend making method based on wearable equipment and the wearable equipment.
Background
At present, a user typically adds a friend on a wearable device such as a smart watch or sports bracelet as follows: the user logs in to his or her account on the wearable device and enters search information for the friend to be added, and the wearable device searches for the account of the friend to be added according to the search information, so that the user can successfully add that friend. In practice, however, this friend-adding process is very cumbersome and takes a lot of time, which reduces the speed of adding friends.
Disclosure of Invention
The embodiment of the invention discloses a friend making method based on wearable equipment and the wearable equipment, which can improve the speed of adding friends.
The embodiment of the invention discloses a friend making method based on wearable equipment in a first aspect, wherein the wearable equipment comprises an intelligent host, the intelligent host comprises a host top side and a host bottom side which are arranged oppositely, a first camera module is arranged on the host top side, a second camera module is arranged on the host bottom side, and the method comprises the following steps:
when a friend making instruction input by a user is detected, controlling the first camera module and the second camera module to shoot to obtain a first image shot by the first camera module and a second image shot by the second camera module;
identifying a first face image contained in the first image and a second face image contained in the second image, and acquiring first account information corresponding to the first face image and second account information corresponding to the second face image;
and determining the account relationship between the first account information and the second account information as a friend relationship.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before controlling the first camera module and the second camera module to shoot when a friend making instruction input by a user is detected, so as to obtain a first image shot by the first camera module and a second image shot by the second camera module, the method further includes:
when a sound in the environment where the wearable device is located is obtained, judging whether the sound is a human voice through a human voice identification technology;
if the sound is a human voice, performing semantic recognition on the human voice, determining text information corresponding to the human voice, and detecting whether the text information contains an instruction word corresponding to a friend making instruction;
and if so, determining that the text information is the friend making instruction input by the user of the wearable device.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the identifying a first face image included in the first image and a second face image included in the second image, and acquiring first account information corresponding to the first face image and second account information corresponding to the second face image includes:
detecting whether the first image contains a first face image or not;
if the first image comprises the first face image, identifying first identity information of a user corresponding to the first face image, acquiring first account information according to the first identity information, and judging whether the second image comprises a second face image;
and if the second image comprises the second face image, identifying second identity information of the user corresponding to the second face image, and acquiring second account information according to the second identity information.
As an alternative implementation manner, in the first aspect of the embodiment of the present invention, the detecting whether the first image includes a first face image includes:
identifying semantic features in the first image through a deep learning algorithm;
judging whether the semantic features comprise semantic features corresponding to human faces or not;
and if so, determining that the first image comprises the first face image.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, when a friend making instruction input by a user is detected, the controlling the first camera module and the second camera module to shoot to obtain a first image shot by the first camera module and a second image shot by the second camera module includes:
when a friend making instruction input by a user is detected, controlling the first camera module to shoot to obtain a first image containing the first face image, and detecting whether a shooting range of the second camera module contains a second face image;
if yes, controlling the second camera module to shoot to obtain a second image containing the second face image;
if not, when the shooting range of the second camera module comprises the second face image, controlling the second camera module to shoot to obtain a second image comprising the second face image.
A second aspect of an embodiment of the present invention discloses a wearable device, where the wearable device includes an intelligent host, the intelligent host includes a host top side and a host bottom side that are arranged opposite to each other, the host top side is provided with a first camera module, the host bottom side is provided with a second camera module, and the intelligent host further includes:
the shooting unit is used for controlling the first camera module and the second camera module to shoot when a friend making instruction input by a user is detected, and obtaining a first image shot by the first camera module and a second image shot by the second camera module;
the identification unit is used for identifying a first face image contained in the first image and a second face image contained in the second image, and acquiring first account information corresponding to the first face image and second account information corresponding to the second face image;
and the first determining unit is used for determining the account relationship between the first account information and the second account information as the friend relationship.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the wearable device further includes:
the judging unit is used for judging, through a human voice recognition technology, whether a sound in the environment where the wearable device is located is a human voice when the sound is obtained, before the shooting unit, upon detecting the friend making instruction input by the user, controls the first camera module and the second camera module to shoot to obtain the first image shot by the first camera module and the second image shot by the second camera module;
the detection unit is used for performing semantic recognition on the human voice when the judgment result of the judging unit is yes, determining the text information corresponding to the human voice, and detecting whether the text information contains an instruction word corresponding to the friend making instruction;
and the second determining unit is used for determining that the text information is the friend making instruction input by the user of the wearable device when the detection result of the detecting unit is positive.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the identification unit includes:
the first detection subunit is used for detecting whether the first image contains a first face image;
a determining subunit, configured to, if a result detected by the first detecting subunit is yes, identify first identity information of a user corresponding to the first facial image, acquire first account information according to the first identity information, and determine whether the second image includes a second facial image;
and the identification subunit is configured to identify second identity information of the user corresponding to the second face image and acquire second account information according to the second identity information when the judgment result of the judgment subunit is yes.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the first detection subunit includes:
the recognition module is used for recognizing semantic features in the first image through a deep learning algorithm;
the judging module is used for judging whether the semantic features comprise semantic features corresponding to human faces or not;
and the determining module is used for determining that the first image contains a first face image when the judgment result of the judging module is positive.
As an alternative implementation, in a second aspect of the embodiment of the present invention, the shooting unit includes:
the second detection subunit is used for controlling the first camera module to shoot a first image containing the first face image when a friend making instruction input by a user is detected, and detecting whether a shooting range of the second camera module contains a second face image or not;
the first shooting subunit is used for controlling the second camera module to shoot a second image containing the second face image when the detection result of the second detection subunit is positive;
and the second shooting subunit is used for controlling the second camera module to shoot to obtain a second image containing the second face image when the detection result of the second detection subunit is negative and the shooting range of the second camera module contains the second face image.
A third aspect of an embodiment of the present invention discloses another wearable device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform part or all of the steps of any one of the methods of the first aspect.
A fourth aspect of the present embodiments discloses a computer-readable storage medium storing a program code, where the program code includes instructions for performing part or all of the steps of any one of the methods of the first aspect.
A fifth aspect of embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the present embodiment discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where the computer program product is configured to, when running on a computer, cause the computer to perform part or all of the steps of any one of the methods in the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, when a friend making instruction input by a user is detected, a first camera module arranged on the top side of a host and a second camera module arranged on the bottom side of the host are controlled to shoot, so that a first image shot by the first camera module and a second image shot by the second camera module are obtained; identifying a first face image contained in the first image and a second face image contained in the second image, and acquiring first account information corresponding to the first face image and second account information corresponding to the second face image; and setting the account relation between the first account information and the second account information as a friend relation. Therefore, by implementing the embodiment of the invention, different account information corresponding to the face images contained in different images can be identified according to different images shot by the double camera modules of the wearable device, the identified relationship of different accounts is set as the friend relationship, and the user does not need to manually input the account information of the friend to be searched, so that the speed of adding the friend is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a friend making method based on a wearable device according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a wearable device based on dual camera modules according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of another friend making method based on a wearable device according to an embodiment of the disclosure;
fig. 4 is a schematic flow chart of another friend making method based on a wearable device according to the embodiment of the present invention;
fig. 5 is a schematic structural diagram of a wearable device disclosed in the embodiment of the invention;
FIG. 6 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present invention;
FIG. 7 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present invention;
fig. 8 is a schematic structural diagram of another wearable device disclosed in the embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
The embodiment of the invention discloses a friend making method based on wearable equipment and the wearable equipment, which can improve the speed of adding friends. The following are detailed below.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a friend making method based on a wearable device according to an embodiment of the present invention. As shown in fig. 1, the wearable device-based friend making method may include the steps of:
101. when a friend making instruction input by a user is detected, the intelligent host controls the first camera module and the second camera module to shoot, and a first image shot by the first camera module and a second image shot by the second camera module are obtained.
In the embodiment of the invention, the wearable device comprises an intelligent host, the intelligent host comprises a host top side and a host bottom side which are oppositely arranged, the host top side is provided with a first camera shooting module, and the host bottom side is provided with a second camera shooting module.
In the embodiment of the invention, the wearable device can be a phone watch, a sports bracelet, or the like, and the first camera module and the second camera module of the wearable device can be the front camera and the rear camera of the wearable device respectively: when the first camera module is the front camera, the second camera module is the rear camera; when the first camera module is the rear camera, the second camera module is the front camera.
In the embodiment of the present invention, the method for the user to input the friend making instruction may be that the user triggers a friend making module of a display interface of the wearable device, so as to trigger the friend making instruction, or that the user triggers the friend making instruction by pressing a button on the wearable device, or that the user inputs a voice including the friend making instruction by using a voice, which is not limited in the embodiment of the present invention.
As an optional implementation manner, before the intelligent host executes step 101, the following steps may also be executed:
the intelligent host detects whether a key representing the friend making instruction is pressed;
if yes, the intelligent host calculates, through a timer, the target time length for which the key representing the friend making instruction is pressed, and judges whether the target time length is greater than a preset time length;
if the target time length is not greater than the preset time length, the intelligent host can consider that the key representing the friend making instruction was touched by mistake;
if the target time length is greater than the preset time length, the intelligent host can consider that the user has input the friend making instruction by pressing the key representing the friend making instruction.
By implementing this implementation manner, the efficiency with which the user inputs the friend making instruction can be improved while ensuring that the friend making instruction is not triggered by mistake.
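For illustration only, the key-press check described above can be sketched in Python as follows; the 2-second preset time length, the key_down/key_up callbacks and the on_friend_instruction handler are assumptions made for the sketch and are not prescribed by this embodiment.

import time

PRESET_DURATION_S = 2.0  # assumed preset time length for a deliberate press

class FriendKeyMonitor:
    def __init__(self, on_friend_instruction):
        self._pressed_at = None
        self._on_friend_instruction = on_friend_instruction

    def key_down(self):
        # start timing when the key representing the friend making instruction is pressed
        self._pressed_at = time.monotonic()

    def key_up(self):
        if self._pressed_at is None:
            return
        target_duration = time.monotonic() - self._pressed_at
        self._pressed_at = None
        if target_duration > PRESET_DURATION_S:
            self._on_friend_instruction()   # long enough: treat as a friend making instruction
        # otherwise the press is treated as an accidental touch and ignored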
Referring to fig. 2, fig. 2 is a schematic view illustrating a structure of a wearable device based on dual camera modules according to an embodiment of the present invention. The wearable device comprises an intelligent host 201, a chassis support 203 and a side belt 204, wherein the chassis support 203 is connected to the side belt 204, and the intelligent host 201 is movably arranged on the chassis support 203.
The intelligent host 201 includes a host top side and a host bottom side which are arranged opposite to each other. For a single-display-screen intelligent host, the side provided with the display screen (not marked in the figure) is the host top side; for a multi-display-screen intelligent host, the side provided with the main display screen is the host top side. A first camera module 2011 is arranged on the host top side, and a second camera module 2012 is arranged on the host bottom side. Because the first camera module 2011 and the second camera module 2012 are respectively located on the host top side and the host bottom side, their shooting orientations are opposite.
In the embodiment of the present invention, the intelligent host 201 is rotatably disposed on the chassis support 203 so as to take different postures, so that the first camera module 2011 and the second camera module 2012 obtain different shooting angles. Specifically, the intelligent host 201 is rotatably disposed on the chassis support 203 through the rotating shaft 202, and a side belt end of the side belt 204, a first end of the chassis support 203 and one end of the intelligent host 201 share a coaxial rotating design around the rotating shaft 202, so that the number of components of the wearable device can be reduced, the component assembly process of the wearable device is simplified, and the structure of the wearable device is more compact.
102. The intelligent host identifies a first face image contained in the first image and a second face image contained in the second image, and acquires first account information corresponding to the first face image and second account information corresponding to the second face image.
In the embodiment of the invention, a social application client can be pre-installed in the intelligent host of the wearable device, and the first account information and the second account information acquired by the intelligent host can be used on the basis of the social application client. When the first camera module is the front camera and the second camera module is the rear camera, the first face image contained in the first image can be the face image of the current user of the wearable device, so the first account information corresponding to the first face image identified by the intelligent host can be the account information of the current user of the wearable device; the second face image contained in the second image can be the face image of the friend that the current user of the wearable device wants to add, and the second account information corresponding to the second face image identified by the intelligent host can be the account information of the friend to be added.
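A minimal sketch of step 102 under stated assumptions is given below; the face_recognizer and account_service interfaces and their method names (detect_face, recognize_identity, lookup_account) are hypothetical stand-ins for the recognition and account-lookup capabilities described above, not APIs defined by this embodiment.

def acquire_account_pair(first_image, second_image, face_recognizer, account_service):
    """Sketch of step 102: map the two captured images to two accounts.

    face_recognizer and account_service are assumed interfaces: the former
    extracts a face and an identity, the latter resolves an identity to
    account information of the social application client.
    """
    first_face = face_recognizer.detect_face(first_image)    # wearer's face (e.g. front camera)
    second_face = face_recognizer.detect_face(second_image)  # prospective friend's face (e.g. rear camera)
    if first_face is None or second_face is None:
        return None  # at least one image contains no face; the flow ends

    first_identity = face_recognizer.recognize_identity(first_face)
    second_identity = face_recognizer.recognize_identity(second_face)

    first_account = account_service.lookup_account(first_identity)
    second_account = account_service.lookup_account(second_identity)
    return first_account, second_account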
103. And the intelligent host determines the account relation between the first account information and the second account information as the friend relation.
In the embodiment of the invention, the intelligent host of the wearable device can directly set the account relationship between the first account information and the second account information as a friend relationship. The user can also set permission information of the account for adding friends; for example, the user can allow the intelligent host to add friends directly, allow the intelligent host to ask for confirmation before adding a friend, or forbid the intelligent host from adding any account as a friend.
As an optional implementation manner, before the smart host executes step 103, the following steps may also be executed:
the intelligent host detects whether the account relation between the first account information and the second account information is a friend relation;
if not, the intelligent host executes step 103;
if so, the intelligent host outputs prompt information, and the prompt information is used for prompting the user of the wearable device to indicate whether a remark needs to be added for the already-added friend;
when an instruction which is input by a user and used for indicating that the added friend needs to be remarked is detected, the intelligent host acquires remark information input by the user and binds the remark information with account information of the added friend of the user of the wearable device.
By implementing this implementation manner, repeated friend adding operations between the first account information and the second account information can be avoided, and the user of the wearable device can confirm the remark information of an added friend in time.
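The optional pre-check and remark binding described above could look roughly like the following sketch; are_friends, add_friend, bind_remark and the ui helper are assumed interfaces of a hypothetical social application backend, not APIs defined by this embodiment.

def set_friend_relation(first_account, second_account, backend, ui):
    # Step 103 plus the optional pre-check: skip adding if the two accounts
    # are already friends, and offer to bind a remark instead.
    if not backend.are_friends(first_account, second_account):
        backend.add_friend(first_account, second_account)
        return
    if ui.confirm("Friend already added. Add a remark for this friend?"):
        remark = ui.read_remark()
        backend.bind_remark(owner=first_account, friend=second_account, remark=remark)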
In the method described in fig. 1, different account information corresponding to the face images contained in different images can be identified according to the different images shot by the dual camera modules of the wearable device, and the relationship between the identified accounts is set as a friend relationship, so that the user does not need to manually enter the account information of the friend to be searched, which increases the speed of adding friends. Images containing both the user of the wearable device and the friend to be added can be acquired at the same time by using the two camera modules of the wearable device, which simplifies the shooting process of the intelligent host of the wearable device. In addition, the acquired first account information and second account information can be directly determined to have a friend relationship, which simplifies the process of the user manually confirming the addition of a friend.
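Putting the three steps together, the end-to-end flow of example one can be sketched as follows, reusing the acquire_account_pair and set_friend_relation sketches above; the host object and its attributes are assumptions made for the example.

def friend_making_flow(host):
    # Step 101: on a detected friend making instruction, shoot with both camera modules.
    first_image = host.first_camera.capture()
    second_image = host.second_camera.capture()
    # Step 102: recognize the faces and map them to account information.
    accounts = acquire_account_pair(first_image, second_image,
                                    host.face_recognizer, host.account_service)
    if accounts is None:
        return
    # Step 103: set the account relation between the two accounts as a friend relation.
    set_friend_relation(accounts[0], accounts[1], host.social_backend, host.ui)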
Example two
Referring to fig. 3, fig. 3 is a schematic flow chart illustrating another friend making method based on a wearable device according to an embodiment of the present invention. The friend making method based on the wearable device described in fig. 3 can be applied to the wearable device described in the foregoing embodiment. As described in the foregoing embodiment, the wearable device includes the intelligent host, the intelligent host includes the host top side and the host bottom side which are arranged opposite to each other, the host top side is provided with the first camera module, and the host bottom side is provided with the second camera module. As shown in fig. 3, the wearable device-based friend making method may include the following steps:
301. when the sound in the environment where the wearable device is located is obtained, the intelligent host judges whether the sound is the voice through a voice recognition technology, and if so, the step 302 is executed; if not, the flow is ended.
In the embodiment of the invention, various noises can occur in the environment where the wearable device is located, such as car horns, the sound of an air conditioner running, and the sound of people walking. In order to reduce the power consumption of the wearable device, the wearable device needs to recognize the human voice among the various sounds received by the microphone, so that the intelligent host performs voice recognition only on the human voice. The intelligent host can pick out the human voice from the various sounds in the external environment through the human voice recognition technology.
302. The intelligent host performs semantic recognition on the human voice, determines the text information corresponding to the human voice, and detects whether the text information contains an instruction word corresponding to a friend making instruction; if so, steps 303 to 305 are executed; if not, the flow is ended.
In the embodiment of the invention, the instruction word corresponding to the friend making instruction can be preset by the intelligent host, and can also be set by the user of the wearable device. If the user needs to input the friend-making instruction in a voice mode, the intelligent host can recognize the text information corresponding to the voice, and the text information can contain instruction words corresponding to the friend-making instruction.
303. The intelligent host determines that the text information is a friend making instruction input by a user of the wearable device.
In the embodiment of the present invention, by implementing steps 301 to 303, whether the sound detected by the intelligent host is a human voice can be identified through the human voice identification technology; if the sound is a human voice, the intelligent host can trigger the friend making function of the wearable device according to the instruction word corresponding to the friend making instruction contained in the human voice, which improves the convenience of starting the friend making function on the intelligent host of the wearable device.
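Steps 301 to 303 can be sketched as follows; the voice-activity detector, the speech recognizer and the instruction word list are assumptions made for the example rather than components specified by this embodiment.

INSTRUCTION_WORDS = ("make friends", "add friend")  # assumed instruction words

def detect_friend_instruction(audio_frame, vad, asr):
    # Step 301: ignore anything that is not a human voice to save power.
    if not vad.is_human_voice(audio_frame):
        return None
    # Step 302: semantic recognition, producing the text information.
    text = asr.transcribe(audio_frame)
    # Step 303: the text is treated as the instruction if it carries an instruction word.
    if any(word in text.lower() for word in INSTRUCTION_WORDS):
        return text
    return None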
304. The intelligent host controls the first camera module to shoot to obtain a first image containing a first face image, detects whether a shooting range of the second camera module contains a second face image or not, and if so, executes step 305; if not, step 306 to step 308 are executed.
In the embodiment of the invention, the first camera module of the wearable device can be the front camera or the rear camera of the wearable device. When the first camera module is the front camera, the first face image can be the face image of the current user of the wearable device; when the first camera module is the rear camera, the first face image can be the face image of the friend that the current user of the wearable device wants to add. The display interface of the intelligent host of the wearable device can have two regions: a first region in which the preview image captured by the first camera module can be previewed, and a second region in which the preview image captured by the second camera module can be previewed.
As an optional implementation manner, the manner in which the smart host controls the first camera module to capture the first image including the first facial image may include the following steps:
the intelligent host acquires a pre-stored rotation speed;
the intelligent host controls the intelligent host of the wearable device to rotate around the rotating shaft of the wearable device at a constant speed equal to the rotation speed, and detects whether the first preview image corresponding to the first camera module on the display interface of the wearable device contains a face image;
if so, the intelligent host controls the intelligent host of the wearable device to stop rotating, and controls the first camera module to shoot to obtain the first image containing the first face image.
By implementing this implementation manner, the preview image of the shooting region of the first camera module can be viewed on the display interface of the intelligent host, and whether the shooting region of the first camera module contains a face image can be confirmed through the preview image. If the shooting region of the first camera module contains a face image, the intelligent host can control the intelligent host of the wearable device to stop rotating, so that the first camera module shoots the first image containing the face image, thereby improving the accuracy with which the intelligent host of the wearable device recognizes the face in the image.
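A rough sketch of the rotate-until-a-face-appears behaviour is given below, assuming the rotating shaft is motorized and exposed through hypothetical host.rotate_at and host.stop_rotation calls; the rotation speed, timeout and helper names are likewise assumptions made for the example.

import time

def capture_first_face_image(host, first_camera, face_detector,
                             rotation_speed_dps=15.0, timeout_s=10.0):
    # Rotate the intelligent host about its shaft at a constant, pre-stored speed
    # until the first camera module's preview contains a face, then stop and shoot.
    host.rotate_at(rotation_speed_dps)      # assumed motor-control call
    deadline = time.monotonic() + timeout_s
    try:
        while time.monotonic() < deadline:
            preview = first_camera.preview_frame()
            if face_detector.contains_face(preview):
                return first_camera.capture()   # first image containing the first face image
    finally:
        host.stop_rotation()                # always stop the rotation
    return None                             # no face found within the timeout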
305. The intelligent host controls the second camera module to shoot a second image containing a second face image, and executes the steps 307 to 308.
306. When the intelligent host identifies that the shooting range of the second camera module contains the second face image, the intelligent host controls the second camera module to shoot to obtain a second image containing the second face image.
In the embodiment of the invention, if the shooting range of the second camera module does not contain the second face image after the first camera module shoots the first image, the intelligent host needs to control the intelligent host of the wearable device to continue rotating around the rotating shaft of the wearable device, and when the shooting range of the second camera module contains the second face image, the intelligent host controls the intelligent host of the wearable device to stop rotating.
In the embodiment of the present invention, by implementing the steps 304 to 306, the first camera module and the second camera module of the wearable device may be adjusted, so that the shooting ranges of the first camera module and the second camera module both include the face image, thereby improving the accuracy of the intelligent host of the wearable device in recognizing the face image in the image.
307. The intelligent host identifies a first face image contained in the first image and a second face image contained in the second image, and acquires first account information corresponding to the first face image and second account information corresponding to the second face image.
308. And the intelligent host determines the account relation between the first account information and the second account information as the friend relation.
In the method described in fig. 3, different account information corresponding to the face images contained in different images can be identified according to the different images shot by the dual camera modules of the wearable device, and the relationship between the identified accounts is set as a friend relationship, so that the user does not need to manually enter the account information of the friend to be searched, which increases the speed of adding friends. In addition, the voice information input by the user can be converted into text information, so that the intelligent host can judge more accurately whether the voice information contains an instruction word corresponding to the friend making instruction. Moreover, the intelligent host can change the shooting ranges of the first camera module and the second camera module according to the different positions of the user and the friend to be added, which increases the speed at which the intelligent host of the wearable device recognizes the account information of the user and the friend to be added.
EXAMPLE III
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating another friend making method based on a wearable device according to an embodiment of the present invention. The friend making method based on the wearable device described in fig. 4 can be applied to the wearable device described in the foregoing embodiment. As described in the foregoing embodiment, the wearable device includes the intelligent host, the intelligent host includes the host top side and the host bottom side which are arranged opposite to each other, the host top side is provided with the first camera module, and the host bottom side is provided with the second camera module. As shown in fig. 4, the wearable device-based friend making method may include the following steps:
401. when a friend making instruction input by a user is detected, the intelligent host controls the first camera module and the second camera module to shoot, and a first image shot by the first camera module and a second image shot by the second camera module are obtained.
402. The intelligent host detects whether the first image contains the first face image; if so, step 403 is executed; if not, the flow is ended.
In the embodiment of the invention, the images shot by the first camera module and the second camera module of the wearable device may or may not contain face images, so the intelligent host needs to detect the first image and the second image shot by the first camera module and the second camera module and judge whether the first image and the second image both contain a face image.
As an alternative embodiment, the manner of detecting whether the first image includes the first face image by the smart host may include the following steps:
the intelligent host identifies semantic features in the first image through a deep learning algorithm;
the intelligent host judges whether the semantic features comprise semantic features corresponding to the human face or not;
if so, the intelligent host determines that the first image comprises the first face image.
By implementing this implementation manner, the first face image contained in the first image is further identified only after semantic features corresponding to a human face have been identified in the first image through the deep learning algorithm, which improves the accuracy with which the intelligent host of the wearable device identifies face images.
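One possible realization of the semantic-feature check is an off-the-shelf convolutional face detector; the sketch below uses the open-source face_recognition package purely as an illustration and is not the specific deep learning algorithm contemplated by this embodiment.

import face_recognition  # open-source package built on dlib

def first_image_contains_face(first_image_path):
    image = face_recognition.load_image_file(first_image_path)
    # model="cnn" selects a convolutional detector; each returned location
    # corresponds to face-like semantic features found in the image
    face_locations = face_recognition.face_locations(image, model="cnn")
    return len(face_locations) > 0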
403. The intelligent host identifies first identity information of a user corresponding to the first face image, acquires first account information according to the first identity information, judges whether the second image comprises the second face image, and executes steps 404-405 if the second image comprises the second face image; if not, the flow is ended.
In the embodiment of the invention, identity information can be stored in a service device (such as a cloud server) in advance, and account information can also be stored in the service device in advance. The intelligent host of the wearable device can be bound with the service device in advance, so the intelligent host can identify the first face image and determine, from the service device, the first identity information corresponding to the first face image, and the intelligent host can also acquire the first account information from the service device according to the first identity information.
404. And the intelligent host identifies second identity information of the user corresponding to the second face image and acquires second account information according to the second identity information.
In the embodiment of the present invention, by implementing steps 402 to 404, the second account information contained in the second image is acquired only after the first account information contained in the first image has been detected, so that the intelligent host identifies the second account information in the second image only after determining that the first image contains the first account information. This avoids the intelligent host continuing to identify the second account information in the second image when the first image does not contain the first account information, and improves the account identification efficiency of the intelligent host of the wearable device.
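Steps 402 to 404 can be sketched as the sequential lookup below; the HTTP endpoints, field names and the use of the requests package are invented for the example, and a real deployment would use whatever interface the bound service device actually exposes.

import requests  # assumed HTTP access to the bound service device (cloud server)

SERVICE_URL = "https://example-service.invalid/api"  # placeholder endpoint

def lookup_account_for_face(face_image_bytes):
    # Ask the service device to identify the face, then resolve the identity to an account.
    resp = requests.post(f"{SERVICE_URL}/identify",
                         files={"face": face_image_bytes}, timeout=5)
    resp.raise_for_status()
    identity = resp.json().get("identity")
    if identity is None:
        return None
    resp = requests.get(f"{SERVICE_URL}/accounts/{identity}", timeout=5)
    resp.raise_for_status()
    return resp.json()  # account information

def acquire_accounts_in_order(first_face_bytes, second_face_bytes):
    # Mirror the ordering of steps 402 to 404: the second image is only
    # processed after the first image has yielded account information.
    first_account = lookup_account_for_face(first_face_bytes)
    if first_account is None:
        return None
    second_account = lookup_account_for_face(second_face_bytes)
    if second_account is None:
        return None
    return first_account, second_account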
405. And the intelligent host determines the account relation between the first account information and the second account information as the friend relation.
In the method described in fig. 4, different account information corresponding to the face images contained in different images can be identified according to different images shot by the dual camera modules of the wearable device, and the identified relationship between the different accounts is set as a friend relationship, so that a user does not need to manually input account information of friends to be searched, and the speed of adding friends is increased. The semantic features of the face contained in the image can be determined through a deep learning algorithm, so that the accuracy of recognizing the face image by an intelligent host of the wearable device is improved. In addition, the account information can be sequentially acquired according to the identified identity information, and the accuracy of acquiring the account information by the intelligent host of the wearable device is improved.
Example four
Referring to fig. 5, fig. 5 is a schematic structural diagram of a wearable device according to an embodiment of the present invention. As shown in fig. 5, the wearable device comprises an intelligent host, the intelligent host comprises a host top side and a host bottom side which are oppositely arranged, a first camera module is arranged on the host top side, and a second camera module is arranged on the host bottom side. The intelligent host may include:
The shooting unit 501 is used for controlling the first camera module and the second camera module to shoot when a friend making instruction input by a user is detected, so as to obtain a first image shot by the first camera module and a second image shot by the second camera module.
In the embodiment of the invention, the wearable device comprises an intelligent host, the intelligent host comprises a host top side and a host bottom side which are oppositely arranged, the host top side is provided with a first camera shooting module, and the host bottom side is provided with a second camera shooting module.
As an optional implementation, the shooting unit 501 may further be configured to:
detecting whether a key representing a friend making instruction is pressed;
if yes, calculating, through a timer, the target time length for which the key representing the friend making instruction is pressed, and judging whether the target time length is greater than a preset time length;
if the target time length is not greater than the preset time length, the key representing the friend making instruction is considered to have been touched by mistake;
if the target time length is greater than the preset time length, the user is considered to have input the friend making instruction by pressing the key representing the friend making instruction.
By implementing this implementation manner, the efficiency with which the user inputs the friend making instruction can be improved while ensuring that the friend making instruction is not triggered by mistake.
The identifying unit 502 is configured to identify a first face image included in the first image captured by the capturing unit 501 and a second face image included in the second image, and acquire first account information corresponding to the first face image and second account information corresponding to the second face image.
A first determining unit 503, configured to determine, as a friend relationship, an account relationship between the first account information and the second account information that are identified by the identifying unit 502.
As an optional implementation manner, the first determining unit 503 may be further configured to:
detecting whether the account relation between the first account information and the second account information is a friend relation;
if not, determining the account relationship between the first account information and the second account information identified by the identifying unit 502 as the friend relationship;
if yes, outputting prompt information, where the prompt information is used for prompting the user of the wearable device to indicate whether a remark needs to be added for the already-added friend;
when an instruction which is input by a user and used for indicating that the added friend needs to be remarked is detected, remark information input by the user is obtained, and the remark information is bound with account information of the added friend of the user of the wearable device.
By implementing this implementation manner, repeated friend adding operations between the first account information and the second account information can be avoided, and the user of the wearable device can confirm the remark information of an added friend in time.
Therefore, by implementing the wearable device described in fig. 5, different account information corresponding to the face images contained in different images can be identified according to the different images shot by the dual camera modules of the wearable device, and the relationships between the identified accounts are set as friend relationships, so that the user does not need to manually enter the account information of the friend to be searched, which increases the speed of adding friends. Images containing both the user of the wearable device and the friend to be added can also be obtained simultaneously by using the dual camera modules of the wearable device, which simplifies the shooting process of the wearable device. In addition, the acquired first account information and second account information can be directly determined to have a friend relationship, which simplifies the process of the user manually confirming the addition of a friend.
EXAMPLE five
Referring to fig. 6, fig. 6 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention. The wearable device shown in fig. 6 is optimized on the basis of the wearable device shown in fig. 5. Compared with the wearable device shown in fig. 5, the wearable device shown in fig. 6 may further include:
The judging unit 504 is configured to judge, through a human voice recognition technology, whether a sound in the environment where the wearable device is located is a human voice when the sound is obtained, before the shooting unit 501 detects the friend making instruction input by the user and controls the first camera module and the second camera module to shoot to obtain the first image shot by the first camera module and the second image shot by the second camera module.
A detecting unit 505, configured to perform semantic recognition on the voice, determine text information corresponding to the voice, and detect whether an instruction word corresponding to the friend making instruction is included in the text information when the result of the determination by the determining unit 504 is yes.
A second determining unit 506, configured to determine that the text information is a friend making instruction input by the user of the wearable device when the result detected by the detecting unit 505 is yes.
In the embodiment of the invention, whether the sound detected by the intelligent host is a human voice can be identified through the human voice identification technology; if the sound is a human voice, the intelligent host can trigger the friend making function of the wearable device according to the instruction word corresponding to the friend making instruction contained in the human voice, which improves the convenience of starting the friend making function of the wearable device.
As an alternative embodiment, the photographing unit 501 of the wearable device shown in fig. 6 may include:
the second detection subunit 5011 is configured to, when a friend making instruction input by a user is detected, control the first camera module to capture a first image including a first face image, and detect whether a second face image is included in a capture range of the second camera module;
the first shooting subunit 5012 is configured to, when the result of the detection by the second detecting subunit 5011 is yes, control the second camera module to shoot a second image including a second face image;
and the second shooting subunit 5013 is configured to, if the result of the detection by the second detecting subunit 5011 is negative, and if it is recognized that the shooting range of the second camera module includes the second face image, control the second camera module to shoot a second image including the second face image.
By implementing this embodiment, the first camera module and the second camera module of the wearable device can be adjusted so that the shooting ranges of the first camera module and the second camera module both contain a face image, thereby improving the accuracy with which the intelligent host of the wearable device recognizes face images in the images.
As an optional implementation manner, a manner in which the shooting unit 501 controls the first camera module to shoot the first image including the first face image may specifically be:
acquiring a pre-stored rotation speed;
controlling the intelligent host of the wearable device to rotate around the rotating shaft of the wearable device at a constant speed equal to the rotation speed, and detecting whether the first preview image corresponding to the first camera module on the display interface of the wearable device contains a face image;
if so, controlling the intelligent host of the wearable device to stop rotating, and controlling the first camera module to shoot to obtain the first image containing the first face image.
By implementing this implementation manner, the preview image of the shooting region of the first camera module can be viewed on the display interface of the intelligent host, and whether the shooting region of the first camera module contains a face image can be confirmed through the preview image. If the shooting region of the first camera module contains a face image, the intelligent host can control the intelligent host of the wearable device to stop rotating, so that the first camera module shoots the first image containing the face image, thereby improving the accuracy with which the intelligent host of the wearable device recognizes the face in the image.
Therefore, by implementing the wearable device described in fig. 6, different account information corresponding to the face images contained in different images can be identified according to the different images shot by the dual camera modules of the wearable device, and the relationships between the identified accounts are set as friend relationships, so that the user does not need to manually enter the account information of the friend to be searched, which increases the speed of adding friends. The voice information input by the user can also be converted into text information, so that the intelligent host can judge more accurately whether the voice information contains an instruction word corresponding to the friend making instruction. In addition, the intelligent host can change the shooting ranges of the first camera module and the second camera module according to the different positions of the user and the friend to be added, which increases the speed at which the intelligent host of the wearable device recognizes the account information of the user and the friend to be added.
EXAMPLE six
Referring to fig. 7, fig. 7 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention. The wearable device shown in fig. 7 is optimized by the wearable device shown in fig. 6. Compared to the wearable device shown in fig. 6, the identification unit 502 of the wearable device shown in fig. 7 may include:
the first detecting subunit 5021 is configured to detect whether the first image captured by the capturing unit 501 includes a first face image.
The determining subunit 5022 is configured to, if the result of the detection by the first detecting subunit 5021 is yes, identify first identity information of the user corresponding to the first face image, obtain first account information according to the first identity information, and determine whether the second image includes the second face image.
The identifying subunit 5023 is configured to identify second identity information of the user corresponding to the second face image and obtain second account information according to the second identity information when the result determined by the determining subunit 5022 is yes.
In the embodiment of the invention, after the first account information contained in the first image is detected, the second account information contained in the second image can be acquired, so that the intelligent host can identify the second account information in the second image after the first account information contained in the first image is determined, the intelligent host can be prevented from continuously identifying the second account information in the second image when the first account information is not contained in the first image, and the account identification efficiency of the intelligent host of the wearable device is improved.
As an alternative embodiment, the first detection subunit 5021 of the smart host shown in fig. 7 may include:
the recognition module 50211 is used for recognizing semantic features in the first image shot by the shooting unit 501 through a deep learning algorithm;
the judging module 50212 is configured to judge whether the semantic features identified by the identifying module 50211 include semantic features corresponding to a human face;
a determining module 50213, configured to determine that the first image includes the first face image when the determination result of the determining module 50212 is yes.
By implementing this implementation, the first face image contained in the first image is identified only after semantic features corresponding to a human face have been recognized in the first image through the deep learning algorithm, which improves the accuracy with which the intelligent host of the wearable device recognizes face images.
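As one possible concrete form of such a deep-learning face check (not the specific algorithm of this embodiment), the open-source face_recognition library can locate faces with a CNN-based detector:

```python
import face_recognition

def first_image_contains_face(image_path: str) -> bool:
    """Return True when a CNN-based detector finds at least one face region in the image."""
    image = face_recognition.load_image_file(image_path)
    face_locations = face_recognition.face_locations(image, model="cnn")  # CNN-derived features
    return len(face_locations) > 0
```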
Therefore, by implementing the wearable device described in fig. 7, different account information corresponding to the face images contained in different images can be identified according to different images shot by the dual camera modules of the wearable device, and the identified relationships of the different accounts are set to be friend relationships, so that a user does not need to manually input account information of friends to be searched, and the speed of adding friends is increased. The semantic features of the face contained in the image can be determined through a deep learning algorithm, so that the accuracy of recognizing the face image by the intelligent host is improved. In addition, the account information can be sequentially acquired according to the identified identity information, and the accuracy of acquiring the account information by the intelligent host of the wearable device is improved.
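Putting the pieces together, the following is a minimal sketch of the overall friend-making flow summarized above; all helper objects (the camera modules, identify_accounts and friend_service) are assumptions for illustration rather than interfaces defined by this embodiment.

```python
def make_friends(first_camera, second_camera, identify_accounts, friend_service):
    """Shoot with both camera modules, resolve each face to an account,
    then set the two accounts as friends."""
    first_image = first_camera.capture()     # image shot by the first camera module (host top side)
    second_image = second_camera.capture()   # image shot by the second camera module (host bottom side)

    accounts = identify_accounts(first_image, second_image)
    if accounts is None:
        return False                         # no usable face/account pair found
    first_account, second_account = accounts
    friend_service.set_friend_relationship(first_account, second_account)
    return True
```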
EXAMPLE seven
Referring to fig. 8, fig. 8 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present invention. As shown in fig. 8, the wearable device comprises an intelligent host, the intelligent host comprises a host top side and a host bottom side which are oppositely arranged, a first camera module is arranged on the host top side, and a second camera module is arranged on the host bottom side. As shown in fig. 8, the smart host may include:
a memory 801 in which executable program code is stored;
a processor 802 coupled with the memory 801;
wherein the processor 802 calls the executable program code stored in the memory 801 to perform some or all of the steps of the methods in the above method embodiments.
The embodiment of the invention also discloses a computer-readable storage medium, wherein the computer-readable storage medium stores program code comprising instructions for executing part or all of the steps of the method in the above method embodiments.
Embodiments of the present invention also disclose a computer program product, wherein, when the computer program product is run on a computer, the computer is caused to execute part or all of the steps of the method as in the above method embodiments.
The embodiment of the present invention also discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "an embodiment of the present invention" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in an embodiment of the invention" in various places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary embodiments, and that the actions and modules involved are not necessarily required to practice the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A, and that B can be determined from A. It should also be understood, however, that determining B from A does not mean that B is determined from A alone; B may also be determined from A and/or other information.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present invention, in essence the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described method of each embodiment of the present invention.
The friend making method based on the wearable device and the wearable device disclosed by the embodiment of the invention are described in detail, a specific example is applied in the description to explain the principle and the implementation of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A friend making method based on wearable equipment is characterized in that the wearable equipment comprises an intelligent host, a side belt and a chassis support, the chassis support is connected to the side belt, the intelligent host comprises a host top side and a host bottom side which are oppositely arranged, a first camera module is arranged on the host top side, a second camera module is arranged on the host bottom side, the intelligent host is rotatably arranged on the chassis support through a rotating shaft, and a coaxial rotating design of the rotating shaft is formed among one side belt end of the side belt, a first end of the chassis support and one end of the intelligent host, so that the first camera module and the second camera module obtain different shooting angles, and the method comprises the following steps:
when a friend making instruction input by a user is detected, controlling the first camera module and the second camera module to shoot to obtain a first image shot by the first camera module and a second image shot by the second camera module;
identifying a first face image contained in the first image and a second face image contained in the second image, and acquiring first account information corresponding to the first face image and second account information corresponding to the second face image;
determining account relation between the first account information and the second account information as friend relation;
wherein the controlling, when the friend making instruction input by the user is detected, the first camera module and the second camera module to shoot to obtain the first image shot by the first camera module and the second image shot by the second camera module comprises:
when a friend making instruction input by a user is detected, controlling the first camera module to shoot to obtain a first image containing the first face image, and detecting whether a shooting range of the second camera module contains a second face image;
if yes, controlling the second camera module to shoot to obtain a second image containing the second face image;
if not, when the shooting range of the second camera module comprises the second face image, controlling the second camera module to shoot to obtain a second image comprising the second face image;
the controlling the first camera module to shoot to obtain the first image containing the first face image comprises:
acquiring a pre-stored rotation speed;
controlling the intelligent host to rotate around the rotating shaft at the rotating speed at a constant speed, and detecting whether a first preview image corresponding to the first camera module on a display interface contains a face image or not;
and if the first preview image contains the face image, controlling the intelligent host to stop rotating, and controlling the first camera module to shoot to obtain the first image containing the first face image.
2. The method according to claim 1, wherein before the controlling, when the friend making instruction input by the user is detected, the first camera module and the second camera module to shoot to obtain the first image shot by the first camera module and the second image shot by the second camera module, the method further comprises:
when sound in the environment where the wearable device is located is obtained, whether the sound is human voice is judged through a human voice identification technology;
if the voice is the voice, performing semantic recognition on the voice, determining character information corresponding to the voice, and detecting whether the character information contains an instruction word corresponding to a friend making instruction;
and if so, determining that the text information is the friend making instruction input by the user of the wearable equipment.
3. The method according to claim 1 or 2, wherein the recognizing a first face image included in the first image and a second face image included in the second image and acquiring first account information corresponding to the first face image and second account information corresponding to the second face image includes:
detecting whether the first image contains a first face image or not;
if the first image comprises the first face image, identifying first identity information of a user corresponding to the first face image, acquiring first account information according to the first identity information, and judging whether the second image comprises a second face image;
and if the second image comprises the second face image, identifying second identity information of the user corresponding to the second face image, and acquiring second account information according to the second identity information.
4. The method of claim 3, wherein the detecting whether the first image includes a first face image comprises:
identifying semantic features in the first image through a deep learning algorithm;
judging whether the semantic features comprise semantic features corresponding to human faces or not;
and if so, determining that the first image comprises the first face image.
5. A wearable device, characterized in that the wearable device comprises an intelligent host, a side belt and a chassis support, the chassis support is connected to the side belt, the intelligent host comprises a host top side and a host bottom side which are oppositely arranged, a first camera module is arranged on the host top side, a second camera module is arranged on the host bottom side, and the intelligent host is rotatably arranged on the chassis support, wherein the intelligent host is arranged on the chassis support in a rotatable manner through a rotating shaft, and a coaxial rotating design sharing the rotating shaft is formed among one side belt end of the side belt, a first end of the chassis support and one end of the intelligent host, so that the first camera module and the second camera module obtain different shooting angles, and the intelligent host further comprises:
the shooting unit is used for controlling the first camera module and the second camera module to shoot when a friend making instruction input by a user is detected, and obtaining a first image shot by the first camera module and a second image shot by the second camera module;
the identification unit is used for identifying a first face image contained in the first image and a second face image contained in the second image, and acquiring first account information corresponding to the first face image and second account information corresponding to the second face image;
a first determining unit, configured to determine an account relationship between the first account information and the second account information as a friend relationship;
the shooting unit comprises:
the second detection subunit is used for controlling the first camera module to shoot to obtain a first image containing the first face image when a friend making instruction input by a user is detected, and detecting whether a shooting range of the second camera module contains a second face image or not;
the first shooting subunit is used for controlling the second camera module to shoot to obtain a second image containing the second face image when the detection result of the second detection subunit is positive;
the second shooting subunit is used for controlling the second camera module to shoot to obtain a second image containing the second face image when the detection result of the second detection subunit is negative and the shooting range of the second camera module contains the second face image;
a manner in which the second detection subunit controls the first camera module to shoot to obtain the first image containing the first face image is specifically:
acquiring a pre-stored rotation speed; controlling the intelligent host to rotate around the rotating shaft at the rotation speed at a constant speed, and detecting whether a first preview image corresponding to the first camera module on a display interface contains a face image or not; and if the first preview image contains the face image, controlling the intelligent host to stop rotating, and controlling the first camera module to shoot to obtain the first image containing the first face image.
6. The wearable device of claim 5, wherein the smart host further comprises:
the judging unit is used for, before the shooting unit detects the friend making instruction input by the user and controls the first camera module and the second camera module to shoot to obtain the first image shot by the first camera module and the second image shot by the second camera module, and when sound in the environment where the wearable device is located is obtained, judging whether the sound is human voice through a human voice recognition technology;
the detection unit is used for carrying out semantic recognition on the voice when the judgment result of the judgment unit is yes, determining character information corresponding to the voice, and detecting whether the character information contains an instruction word corresponding to a friend making instruction or not;
and the second determining unit is used for determining that the text information is the friend making instruction input by the user of the wearable device when the detection result of the detecting unit is positive.
7. Wearable device according to claim 5 or 6, characterized in that the identification unit comprises:
the first detection subunit is used for detecting whether the first image contains a first face image;
a determining subunit, configured to, if a result detected by the first detecting subunit is yes, identify first identity information of a user corresponding to the first facial image, acquire first account information according to the first identity information, and determine whether the second image includes a second facial image;
and the identification subunit is configured to identify second identity information of the user corresponding to the second face image and acquire second account information according to the second identity information when the judgment result of the judgment subunit is yes.
8. The wearable device of claim 7, wherein the first detection subunit comprises:
the recognition module is used for recognizing semantic features in the first image through a deep learning algorithm;
the judging module is used for judging whether the semantic features comprise semantic features corresponding to human faces or not;
and the determining module is used for determining that the first image contains a first face image when the judgment result of the judging module is positive.
CN201811157912.4A 2018-09-30 2018-09-30 Friend making method based on wearable device and wearable device Active CN110174924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811157912.4A CN110174924B (en) 2018-09-30 2018-09-30 Friend making method based on wearable device and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811157912.4A CN110174924B (en) 2018-09-30 2018-09-30 Friend making method based on wearable device and wearable device

Publications (2)

Publication Number Publication Date
CN110174924A CN110174924A (en) 2019-08-27
CN110174924B true CN110174924B (en) 2021-03-30

Family

ID=67689134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811157912.4A Active CN110174924B (en) 2018-09-30 2018-09-30 Friend making method based on wearable device and wearable device

Country Status (1)

Country Link
CN (1) CN110174924B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111182202B (en) * 2019-11-08 2022-05-27 广东小天才科技有限公司 Content identification method based on wearable device and wearable device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103970830A (en) * 2014-03-31 2014-08-06 小米科技有限责任公司 Information recommendation method and device
CN104766608A (en) * 2014-01-07 2015-07-08 深圳市中兴微电子技术有限公司 Voice control method and voice control device
CN105429848A (en) * 2015-10-23 2016-03-23 广东小天才科技有限公司 Method and system for adding friends via taking photos, and social system of social server
CN106557742A (en) * 2016-10-24 2017-04-05 宇龙计算机通信科技(深圳)有限公司 Group sets up and management method and system
CN206627775U (en) * 2017-04-11 2017-11-10 台州蜂时电子科技有限公司 Intelligent dial plate and intelligent watch
CN107644209A (en) * 2017-09-21 2018-01-30 百度在线网络技术(北京)有限公司 Method for detecting human face and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8249218B2 (en) * 2009-01-29 2012-08-21 The Invention Science Fund I, Llc Diagnostic delivery service
CN104932660B (en) * 2014-03-17 2018-10-12 联想(北京)有限公司 The method of a kind of electronic equipment and control electronics


Also Published As

Publication number Publication date
CN110174924A (en) 2019-08-27

Similar Documents

Publication Publication Date Title
JP6388706B2 (en) Unmanned aircraft shooting control method, shooting control apparatus, and electronic device
WO2017185630A1 (en) Emotion recognition-based information recommendation method and apparatus, and electronic device
CN109167877B (en) Terminal screen control method and device, terminal equipment and storage medium
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN107666536B (en) Method and device for searching terminal
US10824891B2 (en) Recognizing biological feature
CN110177242B (en) Video call method based on wearable device and wearable device
CN112532885B (en) Anti-shake method and device and electronic equipment
CN110705356B (en) Function control method and related equipment
US20210201478A1 (en) Image processing methods, electronic devices, and storage media
CN108647633B (en) Identification tracking method, identification tracking device and robot
CN108090424B (en) Online teaching investigation method and equipment
CN111182204B (en) Shooting method based on wearable device and wearable device
CN110177239B (en) Video call method based on remote control and wearable device
CN108780568A (en) A kind of image processing method, device and aircraft
CN109684993B (en) Face recognition method, system and equipment based on nostril information
CN106060383B (en) A kind of method and system that image obtains
CN110174924B (en) Friend making method based on wearable device and wearable device
CN112189330A (en) Shooting control method, terminal, holder, system and storage medium
CN112887615B (en) Shooting method and device
CN111079503B (en) Character recognition method and electronic equipment
CN109740557B (en) Object detection method and device, electronic equipment and storage medium
CN112437231A (en) Image shooting method and device, electronic equipment and storage medium
CN111756960B (en) Shooting control method based on wearable device and wearable device
CN110174923B (en) Friend making method based on wearable device and wearable device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant