CN111768785B - Control method of smart watch and smart watch


Info

Publication number
CN111768785B
Authority
CN
China
Prior art keywords
payment
watch
screen
smart watch
image
Prior art date
Legal status
Active
Application number
CN201911004176.3A
Other languages
Chinese (zh)
Other versions
CN111768785A (en)
Inventor
施锐彬
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201911004176.3A priority Critical patent/CN111768785B/en
Publication of CN111768785A publication Critical patent/CN111768785A/en
Application granted granted Critical
Publication of CN111768785B publication Critical patent/CN111768785B/en

Classifications

    • G06Q 20/3276 - Short range or proximity payments by means of M-devices using a pictured code, e.g. barcode or QR-code, being read by the M-device
    • G06Q 20/40145 - Transaction verification; identity check for transactions; biometric identity checks
    • G06V 40/161 - Human faces: detection; localisation; normalisation
    • G06V 40/172 - Human faces: classification, e.g. identification
    • H04N 23/611 - Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
    • H04N 23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Abstract

A control method of a smart watch and a smart watch. The smart watch includes a watch body provided with a screen, and the watch body can rotate by any angle within a range of 360° when standing up. The method includes: detecting an externally emitted voice signal; identifying the sound source direction corresponding to the voice signal; recognizing the semantics of the voice signal; and, if the semantics indicate that the screen needs to be viewed, controlling the upright watch body to rotate until the screen faces the sound source direction. According to the embodiments of the application, the screen of the smart watch can be conveniently turned toward the user without the user having to twist an arm, which improves the user experience when using the smart watch.

Description

Control method of smart watch and smart watch
Technical Field
The application relates to the technical field of intelligent watches, in particular to a control method of an intelligent watch and the intelligent watch.
Background
Currently, smart watches can implement multiple functions through their built-in intelligent systems and can therefore offer a good use experience. In practice, however, many functions of a smart watch depend on its screen, and when a user needs to view the content displayed on the screen, the user usually has to turn the arm wearing the watch so that the screen faces the user. Turning the arm for a long time increases arm fatigue and reduces the experience of using the smart watch.
Disclosure of Invention
The embodiments of the application disclose a control method of a smart watch and a smart watch, which make it possible to conveniently turn the screen of the smart watch toward the user without twisting an arm, helping to improve the user experience when using the smart watch.
A first aspect of an embodiment of the present application discloses a method for controlling a smart watch, where the smart watch includes a watch body provided with a screen, and the watch body can rotate by any angle within a range of 360 ° when standing up, and the method includes:
detecting an externally sent voice signal;
recognizing a sound source direction corresponding to the voice signal;
recognizing the semantics of the voice signal;
and if the semantic representation needs to watch the screen, controlling the erected watch body to rotate until the screen faces to the sound source direction.
As an optional implementation manner, in the first aspect of the embodiment of the present application, a front camera is disposed on a top side of a watch body where the screen is located, and a rear camera is disposed on a bottom side of the watch body, where the method further includes:
if the semantic representation utilizes the front camera to carry out self-shooting, controlling the erected watch body to rotate to the screen and the front camera to face the sound source direction at the same time; and controlling the front camera to capture a first human image towards the sound source direction; and identifying whether the face features of the first person image are matched with the face features of the wearer of the intelligent watch, if so, controlling the front camera to shoot the first person image, acquiring a self-shot image of the front camera and outputting the self-shot image to the screen for display.
As another optional implementation manner, in the first aspect of the embodiment of the present application, a front camera is disposed on a top side of a watch body where the screen is located, and a rear camera is disposed on a bottom side of the watch body, where the method further includes:
if the semantic meaning shows that the rear camera is used for carrying out self-shooting, controlling the erected watch body to rotate until the rear camera faces the sound source direction; controlling the rear camera to face the sound source direction to capture a second person image; and identifying whether the face features of the second person image are matched with the face features of the wearer of the smart watch, and if so, controlling the rear camera to shoot the second person image, acquiring a self-shot image of the rear camera and outputting the self-shot image to the screen for display.
As another optional implementation manner, in the first aspect of the embodiment of the present application, a front camera is disposed on a top side of a watch body where the screen is located, and a rear camera is disposed on a bottom side of the watch body, where the method further includes:
if the semantic representation executes code scanning payment operation, controlling the erected watch body to rotate until the rear camera faces a certain payment code, detecting whether the front camera captures a preset legal payment face image of the smart watch, and if the front camera captures the preset legal payment face image of the smart watch, controlling the rear camera to perform code scanning operation on the payment code, acquiring a payment interface and outputting the payment interface to the screen for display; the payment interface at least comprises a payment amount and a payee account identification;
and reporting the payment amount, the payee account identification and the legal payment face image to a third party payment platform so that the third party payment platform deducts the payment amount from a payment account corresponding to the legal payment face image and adds the deducted payment amount to a payee account corresponding to the payee account identification.
As another optional implementation manner, in the first aspect of this embodiment of the present application, after the obtaining of the payment interface and the outputting to the screen display, and before the reporting of the payment amount, the payee account id, and the legal payment face image to the third party payment platform, the method further includes:
and detecting whether a preset blinking action occurs to the user to which the legal payment face image belongs through the front-facing camera, and if so, executing the step of reporting the payment amount, the payee account identification and the legal payment face image to a third-party payment platform.
A second aspect of the embodiments of the present application discloses a smart watch. The smart watch includes a watch body provided with a screen, and the watch body can rotate by any angle within a range of 360° when standing up. The smart watch includes:
a first detection unit for detecting an externally emitted voice signal;
the first identification unit is used for identifying the sound source direction corresponding to the voice signal;
a second recognition unit for recognizing the semantic meaning of the voice signal;
a first control unit, configured to control the erected watch body to rotate to a position where the screen faces the sound source direction when the second identification unit identifies that the semantic representation requires viewing of the screen.
As an optional implementation manner, in the second aspect of the embodiment of the present application, a front camera is disposed on a top side of a watch body where the screen is located, a rear camera is disposed on a bottom side of the watch body, and the first control unit is further configured to control the erected watch body to rotate to the screen and the front camera simultaneously face the sound source direction when the second identification unit identifies that the semantic representation performs self-photographing by using the front camera;
the smart watch further includes:
the second control unit is used for controlling the front camera to capture a first human image towards the sound source direction;
a third identification unit, configured to identify whether a facial feature of the first person image matches a facial feature of a wearer of the smart watch;
and the third control unit is used for controlling the front camera to shoot the first person image when the third identification unit identifies that the face characteristics of the first person image are matched with the face characteristics of the wearer of the intelligent watch, so as to obtain a self-shot image of the front camera and output the self-shot image to the screen for display.
As another optional implementation manner, in the second aspect of the embodiment of the present application, a front camera is disposed on a top side of a watch body where the screen is located, a rear camera is disposed on a bottom side of the watch body, and the first control unit is further configured to control the erected watch body to rotate to the rear camera toward the sound source direction when the second identification unit identifies that the semantic representation utilizes the rear camera to perform self-photographing;
the smart watch further includes:
the fourth control unit is used for controlling the rear camera to capture a second person image towards the sound source direction;
the fourth identification unit is used for identifying whether the face features of the second person image are matched with the face features of the wearer of the smart watch;
and the fifth control unit is used for controlling the rear camera to shoot the second person image when the fourth identification unit identifies that the face features of the second person image are matched with the face features of the wearer of the smart watch, acquiring a self-portrait image of the rear camera and outputting the self-portrait image to the screen for display.
As another optional implementation manner, in the second aspect of the embodiment of the present application, a front camera is disposed on a top side of the watch body where the screen is located, a rear camera is disposed on a bottom side of the watch body, and the first control unit is further configured to, when the second identification unit identifies that the semantic representation indicates a code scanning payment operation, control the standing watch body to rotate until the rear camera faces a certain payment code;
the smart watch further includes:
the second detection unit is used for detecting whether the front camera captures a preset legal payment face image of the smart watch or not after the first control unit controls the standing watch body to rotate to the position where the rear camera faces a certain payment code;
the sixth control unit is used for controlling the rear camera to perform code scanning operation on the payment code to obtain a payment interface and outputting the payment interface to the screen for display when the second detection unit detects that the front camera captures a legal payment face image preset by the smart watch; wherein the payment interface at least comprises a payment amount and a payee account identifier;
and the payment processing unit is used for reporting the payment amount, the payee account identification and the legal payment face image to a third-party payment platform so that the third-party payment platform deducts the payment amount from a payment account corresponding to the legal payment face image and adds the deducted payment amount to a payee account corresponding to the payee account identification.
As another optional implementation manner, in a second aspect of this embodiment of the present application, the smart watch further includes:
and the third detection unit is used for detecting whether a preset blinking motion occurs in the user to which the legal payment face image belongs through the front-facing camera before the payment processing unit reports the payment amount, the payee account identification and the legal payment face image to a third-party payment platform after the sixth control unit obtains a payment interface and outputs the payment interface to the screen for display, and if the preset blinking motion occurs, the payment processing unit is triggered to report the payment amount, the payee account identification and the legal payment face image to the third-party payment platform.
The third aspect of the embodiments of the present application discloses another smart watch, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute all or part of the steps of any one of the methods for controlling a smart watch disclosed in the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application discloses a computer-readable storage medium, which stores a computer program, where the computer program enables a computer to execute all or part of the steps in any one of the methods for controlling a smart watch disclosed in the first aspect of the embodiments of the present application.
A fifth aspect of embodiments of the present application discloses a computer program product, which, when running on a computer, causes the computer to execute all or part of the steps in any one of the methods for controlling a smart watch in the first aspect of embodiments of the present application.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
in the embodiments of the application, the smart watch includes a watch body provided with a screen, and the watch body can rotate by any angle within a range of 360° when standing up. The smart watch first detects an externally emitted voice signal, then identifies the sound source direction corresponding to the voice signal and recognizes the semantics of the voice signal; if the semantics indicate that the screen needs to be viewed, the upright watch body is controlled to rotate until the screen faces the sound source direction. Thus, by implementing the embodiments of the application, a user can interact with the smart watch by voice, and the smart watch automatically identifies the direction in which the user is located, so that the screen of the smart watch can be conveniently turned toward the user without twisting the arm, which helps to improve the user experience when using the smart watch.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a control method of a smart watch according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another control method for a smart watch according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a control method of another smart watch disclosed in an embodiment of the present application;
fig. 4 is a schematic block diagram of a smart watch disclosed in an embodiment of the present application;
fig. 5 is a modular schematic diagram of another smart watch disclosed in an embodiment of the present application;
fig. 6 is a modular schematic diagram of yet another smart watch disclosed in an embodiment of the present application;
fig. 7 is a schematic block diagram of another smart watch disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Furthermore, the terms "mounted," "disposed," "provided," and "connected" are to be construed broadly. For example, a connection may be a fixed connection, a removable connection, or a unitary construction; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intervening medium, or internal communication between two devices, elements or components. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
The embodiments of the application disclose a control method of a smart watch and a smart watch, which make it possible to conveniently turn the screen of the smart watch toward the user without twisting an arm, helping to improve the user experience when using the smart watch. The following detailed description is made with reference to the accompanying drawings.
In order to better understand the control method of the smart watch disclosed in the embodiments of the present application, the smart watch to which the method applies is described first. The smart watch applicable to the control method disclosed in the embodiments of the present application may include a watch body, a bottom bracket and a watchband; the first end of the bottom bracket is detachably plugged into the first end of the watchband, and the second end of the bottom bracket is likewise detachably plugged into the second end of the watchband; the first end (rotating end) of the watch body is movably connected with the first end of the bottom bracket through a rotating ball, and the second end (free end) of the watch body is unconnected. In the usual case, the watch body lies flat on the bottom bracket, i.e. the bottom side of the watch body rests against the upper surface of the bottom bracket; when the watch body is tilted up at different angles relative to the bottom bracket via the rotating ball, the bottom side of the watch body forms a certain angle with the upper surface of the bottom bracket (the angle can be adjusted between 0 and 180 degrees).
When the watch body is erected (for example, with its bottom side at 90 degrees to the upper surface of the bottom bracket), the watch body can also rotate by any angle within a range of 360 degrees via the rotating ball.
It should be noted that the above-described smart watch is only one implementation of the smart watch to which the control method of the smart watch disclosed in the embodiment of the present application is applied, and should not be construed as a limitation to the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart of a control method of a smart watch disclosed in an embodiment of the present application. The intelligent watch comprises a watch body provided with a screen, and the watch body can rotate at any angle within the range of 360 degrees when erected. As shown in fig. 1, the control method may include the steps of:
101. the intelligent watch detects an externally sent voice signal.
In the embodiment of the application, the smart watch is provided with a plurality of microphones to form a microphone group, so that the environment sound can be continuously detected; further, the smart watch may determine whether the environmental sound includes a target voice signal meeting a preset condition, and may specifically include the following steps: preprocessing the detected environmental sounds; judging whether the preprocessed environmental sound contains a user voice signal; if yes, judging whether the volume of the voice of the user meets a volume condition preset by the smart watch; if yes, judging whether the voiceprint characteristics of the user voice are matched with the voiceprint characteristics of a legal user preset by the intelligent watch; and if the voice signals are matched with each other, determining the voice signals of the user as target voice signals meeting preset conditions, and using the target voice signals as the externally sent voice signals detected by the intelligent watch.
The preprocessing may include end point detection, pre-emphasis, framing, windowing, and other subdividing steps, which is not specifically limited in the embodiment of the present application.
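For illustration only, the screening pipeline above could be sketched as follows in Python; `preprocess`, `extract_voice` and `voiceprint_similarity` are hypothetical placeholders for whatever signal-processing and voiceprint models the watch actually uses, and the two thresholds are assumed values.

```python
import numpy as np

VOLUME_THRESHOLD_DB = -30.0       # assumed preset volume condition (dBFS)
VOICEPRINT_MATCH_THRESHOLD = 0.8  # assumed similarity threshold for the legal user

def rms_db(samples):
    """Root-mean-square level of the signal in dBFS (samples scaled to [-1, 1])."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(rms + 1e-12)

def detect_target_voice(env_sound, preprocess, extract_voice, voiceprint_similarity):
    """Return the user's voice segment if it passes every preset condition, else None.

    preprocess(signal)           -> cleaned signal (endpoint detection, pre-emphasis,
                                    framing, windowing, ...)
    extract_voice(signal)        -> the user-voice segment, or None if no speech found
    voiceprint_similarity(voice) -> similarity score against the enrolled legal user
    """
    cleaned = preprocess(env_sound)          # step 1: pre-process the ambient sound
    voice = extract_voice(cleaned)           # step 2: does it contain a user voice signal?
    if voice is None:
        return None
    if rms_db(voice) < VOLUME_THRESHOLD_DB:  # step 3: preset volume condition
        return None
    if voiceprint_similarity(voice) < VOICEPRINT_MATCH_THRESHOLD:  # step 4: voiceprint match
        return None
    return voice                             # treated as the externally emitted voice signal
```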
102. The intelligent watch identifies the sound source direction corresponding to the voice signal.
In the embodiment of the present application, the smart watch is provided with a microphone group including a plurality of microphones, and for example, the plurality of microphones may be independently provided at different positions of the smart watch, such as an edge position of the top side of the watch body, an edge position of the bottom side of the watch body, and the like, and the embodiment of the present application is not limited in particular.
In the embodiment of the application, the smart watch can simultaneously detect the externally emitted voice signal through the plurality of microphones to obtain a plurality of sub-voice signals respectively corresponding to each microphone; then, the smart watch may determine the sound source direction corresponding to the voice signal according to the endpoint time (the time of the start point of the voice signal) and the volume of each of the plurality of sub-voice signals. Furthermore, after the sound source direction corresponding to the voice signal has been determined, the smart watch can record the variation amplitude of the sound source direction within a preset time period; if the maximum variation amplitude does not exceed the preset rotation precision of the smart watch, no operation is performed; if it exceeds the preset rotation precision, the smart watch re-identifies the sound source direction of the voice signal.
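A minimal sketch of this direction estimation, assuming each microphone's mounting angle on the watch body is known and that an earlier speech onset plus a higher volume indicate a microphone closer to the source; the weighting scheme is an illustrative assumption, not a formula prescribed by the embodiment.

```python
import math

def estimate_source_direction(sub_signals):
    """Estimate the sound source direction (degrees) from per-microphone observations.

    sub_signals: list of dicts with keys
        'angle'    - microphone mounting angle on the watch body, in degrees
        'onset_ms' - endpoint time: when speech starts in this channel
        'volume'   - measured volume (e.g. RMS) in this channel
    Microphones that hear the voice earlier and louder get a larger weight.
    """
    earliest = min(s['onset_ms'] for s in sub_signals)
    x = y = 0.0
    for s in sub_signals:
        delay_penalty = 1.0 / (1.0 + (s['onset_ms'] - earliest))  # earlier onset -> larger weight
        w = s['volume'] * delay_penalty
        x += w * math.cos(math.radians(s['angle']))
        y += w * math.sin(math.radians(s['angle']))
    return math.degrees(math.atan2(y, x)) % 360.0

def needs_reidentification(direction_history, rotation_precision_deg=5.0):
    """Re-identify the source if its direction varies by more than the watch's
    rotation precision within the preset window (wrap-around at 0/360 ignored)."""
    max_swing = max(direction_history) - min(direction_history)
    return max_swing > rotation_precision_deg
```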
103. The smart watch recognizes the semantics of the speech signal.
illustratively, the semantics may be obtained based on a keyword wake-up technology or a speech recognition and natural language processing technology, which may specifically include watching a screen of a smart watch, taking a self-portrait by using a front camera of the smart watch, and the like, and several examples described in the embodiments of the present application should not be considered as limitations on a recognizable semantic range of the smart watch.
104. If the semantic meaning indicates that the screen of the smart watch needs to be watched, the smart watch controls the standing watch body to rotate until the screen faces the sound source direction.
As an optional implementation manner, after the smart watch controls the standing watch body to rotate until the screen faces the sound source direction, the method may further include the following steps: determining the sound source distance corresponding to the voice signal according to the volume of the voice signal; acquiring a font size matched with the sound source distance; and adjusting the font size of the text content displayed on the screen to that font size, so as to suit the actual viewing distance of the user at the sound source, allowing the user to obtain a comfortable viewing experience without moving the arm.
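The mapping from sound source distance to font size could look like the sketch below; the propagation-loss model, the distance thresholds and the point sizes are all assumptions, since the embodiment only states that the font size is matched to the sound source distance.

```python
def estimate_distance_m(volume_db, reference_db=-20.0):
    """Rough distance estimate from the received volume, assuming roughly 6 dB of
    loss per doubling of distance relative to a 1 m reference level (assumption)."""
    return 2.0 ** ((reference_db - volume_db) / 6.0)

# Hypothetical lookup: the farther the user, the larger the on-screen text.
FONT_SIZE_BY_DISTANCE = [
    (0.5, 12),          # within 0.5 m: 12 pt
    (1.0, 16),
    (2.0, 22),
    (float('inf'), 28), # beyond 2 m: 28 pt
]

def font_size_for(volume_db):
    """Pick the font size matched to the estimated sound source distance."""
    d = estimate_distance_m(volume_db)
    for max_d, size in FONT_SIZE_BY_DISTANCE:
        if d <= max_d:
            return size
```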
Therefore, by implementing the control method described in fig. 1, interaction with the smart watch can be performed in a voice manner, and the smart watch can automatically recognize the direction in which the user is located, so that the screen of the smart watch can be conveniently controlled to face the user without twisting the arm, and the experience of the user when using the smart watch can be improved.
Referring to fig. 2, fig. 2 is a schematic flowchart of another control method for a smart watch according to an embodiment of the present application. The intelligent watch comprises a watch body provided with a screen, wherein the watch body can rotate at any angle within the range of 360 degrees when being erected; in addition, the top side of the watch body where the screen is located is provided with a front camera, and the bottom side of the watch body is provided with a rear camera. As shown in fig. 2, the control method may include the steps of:
201. the intelligent watch detects an externally sent voice signal.
202. The intelligent watch identifies the sound source direction corresponding to the voice signal.
203. The smart watch recognizes the semantics of the speech signal.
204. If the semantics indicate that a self-portrait is to be taken with the front camera, the smart watch controls the standing watch body to rotate until the screen and the front camera simultaneously face the sound source direction.
205. The intelligent watch controls a front camera to capture a first person image towards the sound source direction.
In the embodiment of the application, the smart watch can control the front camera to face the sound source direction to obtain first framing content; next, the smart watch can perform face detection on the first framing content; if the first framing content is detected to contain more than one face, the smart watch can screen out, according to preset screening conditions, the face with the highest priority as the first target face; on this basis, the smart watch can take the image of the person to whom the first target face belongs as the first person image; optionally, if the proportion of the first person image in the first framing content does not fall within the preset optimal self-portrait proportion range, the smart watch can further zoom until the first person image satisfies that condition, so as to obtain a better self-portrait effect.
The smart watch screens the faces in the first framing content according to the preset screening conditions and takes the face with the highest priority as the first target face. Specifically, each face is scored according to the position and size of the image block to which it belongs in the first framing content, and the faces are ranked by score; the face with the highest score, i.e. the highest priority, is taken as the first target face. For example, taking the center of the first framing content as the coordinate origin (0, 0) and the center pixel coordinate of the image block to which a face belongs as (x, y), the distance between the two is √(x² + y²); the smaller this distance, the closer the image block to which the face belongs lies to the center of the first framing content, and the higher its position score a. Likewise, the larger the number B of pixels contained in the image block to which the face belongs, the larger the proportion of that image block in the first framing content, and the higher the score. The final score can be calculated as S = a + θ·B, where θ is an adjustment parameter, which is not specifically limited in the embodiments of the present application.
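To make the scoring concrete, the following sketch ranks candidate faces with S = a + θ·B, under the assumption that the position score a is simply the negated centre distance (one reasonable reading of "the smaller the distance, the higher the score"); the value of θ and the example data are illustrative.

```python
import math

def select_target_face(faces, frame_w, frame_h, theta=0.001):
    """Pick the highest-priority face from the detected faces.

    faces: list of (cx, cy, pixel_count) where (cx, cy) is the centre of the
           image block containing the face and pixel_count is its size B.
    The centre of the framing content is treated as the coordinate origin (0, 0).
    """
    def score(face):
        cx, cy, pixels = face
        x = cx - frame_w / 2.0
        y = cy - frame_h / 2.0
        dist = math.hypot(x, y)      # distance to the centre of the framing content
        a = -dist                    # assumption: position score grows as the distance shrinks
        return a + theta * pixels    # S = a + theta * B
    return max(faces, key=score)

# Example: three detected faces in a 640x480 frame.
faces = [(320, 240, 5000), (100, 100, 9000), (500, 400, 3000)]
print(select_target_face(faces, 640, 480))   # the centred, reasonably large face wins
```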
206. The intelligent watch identifies whether the face features of the first person image are matched with the face features of a wearer of the intelligent watch, and if so, step 207 is executed; otherwise, the flow is ended.
207. The intelligent watch controls the front camera to shoot the first person image, obtains a self-shot image of the front camera and outputs the self-shot image to the screen for display.
As an optional implementation manner, if the semantics of the voice signal indicate that a self-portrait is to be taken with the rear camera, the smart watch may control the standing watch body to rotate until the rear camera faces the sound source direction; next, the smart watch may control the rear camera to capture a second person image toward the sound source direction; on this basis, the smart watch can identify whether the face features of the second person image match the face features of the wearer of the smart watch, and if they match, the smart watch can control its rear camera to shoot the second person image, obtain a rear-camera self-portrait image and output it to the screen for display.
As another optional embodiment, after the smart watch performs steps 201 to 207 to take a self-portrait with the front camera and outputs the front-camera self-portrait image to the screen for display, the standing watch body can further be controlled to rotate clockwise or counterclockwise by a certain angle while the front camera captures a plurality of environment images; on this basis, the smart watch can synthesize the front-camera self-portrait image and the plurality of environment images into a panoramic picture and output it to the screen for display.
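One possible way to realize this panorama synthesis is sketched below, using OpenCV's general-purpose stitcher as a stand-in for whatever image synthesis the watch actually performs; `rotate_body` and `capture_front` are hypothetical device-control callables, and the step count and angle are assumed values.

```python
import cv2

def capture_panorama(rotate_body, capture_front, selfie, steps=4, step_deg=30):
    """Rotate the standing watch body in fixed increments, capture an
    environment image at each position, then stitch everything together."""
    images = [selfie]
    for _ in range(steps):
        rotate_body(step_deg)            # hypothetical: rotate the watch body by step_deg
        images.append(capture_front())   # hypothetical: grab a frame from the front camera
    stitcher = cv2.Stitcher_create()
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        return selfie                    # fall back to the plain self-portrait on failure
    return panorama
```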
By implementing the above method, the smart watch can be conveniently controlled to take self-portraits in multiple modes. The user does not need to twist or move the arm in the process, nor to turn the head to follow the camera or focus manually, so the self-portrait operation can be completed easily, which helps to make the smart watch feel more high-tech and more enjoyable to use.
Therefore, by implementing the control method described in fig. 2, the screen of the smart watch can be conveniently controlled to face the user without twisting the arm, which is beneficial to improving the experience of the user when using the smart watch.
In addition, by implementing the control method described in fig. 2, the smart watch can be conveniently controlled to take self-portraits in multiple modes, which helps to make the smart watch feel more high-tech and more enjoyable to use.
Referring to fig. 3, fig. 3 is a schematic flowchart of another control method for a smart watch according to an embodiment of the present application. The intelligent watch comprises a watch body provided with a screen, wherein the watch body can rotate at any angle within the range of 360 degrees when being erected; in addition, the top side of the watch body where the screen is located is provided with a front-mounted camera, and the bottom side of the watch body is provided with a rear-mounted camera. As shown in fig. 3, the control method may include the steps of:
301. the intelligent watch detects an externally sent voice signal.
302. The intelligent watch identifies the sound source direction corresponding to the voice signal.
303. The smart watch recognizes the semantics of the speech signal.
304. If the semantics indicate that a code scanning payment operation is to be performed, the smart watch controls the erected watch body to rotate until the rear camera faces a payment code.
305. The smart watch detects whether a front camera of the smart watch captures a preset legal payment face image of the smart watch, and if so, the steps 306 to 307 are executed; otherwise, the flow is ended.
Illustratively, the legal payment facial image may be a facial image of a wearer of the smart watch, which is recorded in advance, or a facial image of a guardian of the wearer of the smart watch.
For example, the guardian may set a security region in advance on the management device of the smart watch, and when the smart watch executes a code scanning payment operation in the security region, the face image of the wearer of the smart watch is a legal payment face image; when the smart watch executes code scanning payment operation outside the safety area, the face image of the guardian is set as a legal payment face image, so that unsafe consumption behaviors of a wearer of the smart watch are avoided.
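The guardian-configured rule above can be summarised by the following sketch: inside the pre-set security region the wearer's own face authorises payment, outside it only the guardian's face does. The circular geofence and the face identifiers are assumptions made for illustration.

```python
import math

def inside_region(lat, lon, region):
    """Simple circular geofence: region = (centre_lat, centre_lon, radius_m)."""
    c_lat, c_lon, radius_m = region
    # Equirectangular approximation is sufficient at watch-scale distances.
    dx = math.radians(lon - c_lon) * math.cos(math.radians(c_lat)) * 6_371_000
    dy = math.radians(lat - c_lat) * 6_371_000
    return math.hypot(dx, dy) <= radius_m

def legal_payment_faces(lat, lon, security_region, wearer_face_id, guardian_face_id):
    """Return the set of face templates accepted as the 'legal payment face image'
    for a code-scanning payment at the given location."""
    if inside_region(lat, lon, security_region):
        return {wearer_face_id}       # the wearer may pay inside the security region
    return {guardian_face_id}         # outside it, only the guardian's face is accepted
```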
306. The smart watch controls a rear camera to scan the payment code, a payment interface is obtained, and the payment interface is output to the screen for display; wherein the payment interface includes at least a payment amount and a payee account identification.
307. The smart watch detects whether a preset blinking action occurs to the user to which the legal payment face image belongs through a front camera of the smart watch, and if so, the step 308 is executed; otherwise, the flow is ended.
Optionally, the smart watch may also detect, through the front camera thereof, whether the user to which the legal payment face image belongs makes a preset mouth shape gesture and/or whether a preset gesture is made, and if so, execute step 308; otherwise, the flow is ended.
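The blink confirmation in step 307 could be implemented on top of any face-landmark detector that yields a per-frame eye-openness value from the front camera, as in the sketch below; the thresholds, the window length and the openness metric itself are assumptions.

```python
def blink_detected(openness_sequence, closed_threshold=0.2, open_threshold=0.3):
    """Detect an open -> closed -> open transition in a short window of
    per-frame eye-openness values (e.g. eye aspect ratios in [0, 1])."""
    state = 'open'
    for v in openness_sequence:
        if state == 'open' and v < closed_threshold:
            state = 'closed'                      # the eye has closed
        elif state == 'closed' and v > open_threshold:
            return True                           # ...and re-opened: that counts as a blink
    return False

# Example: a short window sampled from the front camera.
print(blink_detected([0.35, 0.34, 0.15, 0.10, 0.33, 0.36]))  # True
```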
308. And the smart watch reports the payment amount, the payee account identification and the legal payment face image to a third party payment platform, so that the third party payment platform deducts the payment amount from the payment account corresponding to the legal payment face image and adds the deducted payment amount to the payee account corresponding to the payee account identification.
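A sketch of the report the watch might send to the third-party payment platform and of the settlement it triggers; the field names, the JSON encoding and the `match_face_to_account` stub are illustrative assumptions rather than an actual payment API.

```python
import base64, json

def build_payment_report(amount, payee_account_id, face_image_bytes):
    """Package the payment amount, the payee account identifier and the captured
    legal payment face image for upload to the third-party payment platform."""
    return json.dumps({
        "amount": str(amount),
        "payee_account_id": payee_account_id,
        "payer_face_image": base64.b64encode(face_image_bytes).decode("ascii"),
    })

def settle(report_json, accounts, match_face_to_account):
    """Platform-side bookkeeping: resolve the payer from the face image, deduct the
    amount from the payer's account and credit it to the payee's account.

    accounts: dict mapping account_id -> balance
    match_face_to_account(face_image_bytes) -> payer account_id (face-recognition stub)
    """
    report = json.loads(report_json)
    face = base64.b64decode(report["payer_face_image"])
    payer = match_face_to_account(face)
    amount = float(report["amount"])
    accounts[payer] -= amount                          # deduct from the payer
    accounts[report["payee_account_id"]] += amount     # credit the payee
    return accounts
```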
As an optional implementation manner, after the smart watch performs step 306, obtains a payment interface at least including the payment amount and the payee account id, and outputs the payment interface to the screen for display, the following steps may be further performed: the method comprises the steps of obtaining the instant position of the intelligent watch, and sending a payment verification request to management equipment of the intelligent watch, wherein the payment verification request comprises the instant position, payment amount and payee account identification of the intelligent watch; and detecting whether a payment permission instruction sent by the management equipment is received, and if so, continuing to execute the step 307 to the step 308 to finish the payment operation.
When the management device receives the payment verification request sent by the smart watch, it checks whether the instant position, the payment amount and the payee account identifier respectively satisfy the preset conditions (the position lies within the allowed payment area, the amount lies within the allowed payment amount range, and the payee is not on the payee blacklist / is on the payee whitelist), and then sends response information to the smart watch, where the response information may be a payment permission instruction, a payment delay instruction or a payment refusal instruction.
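The decision logic on the management device could be sketched as below, assuming the guardian has pre-configured an allowed payment area, a maximum amount and a payee blacklist; the instruction names mirror the three responses mentioned above, and the delay-versus-refuse split is an assumption.

```python
def verify_payment(request, allowed_region_check, max_amount, payee_blacklist):
    """Decide how to answer a payment verification request from the watch.

    request: dict with 'position', 'amount' and 'payee_account_id'.
    allowed_region_check: callable(position) -> bool for the permitted payment area.
    Returns one of 'ALLOW_PAYMENT', 'DELAY_PAYMENT', 'REFUSE_PAYMENT'.
    """
    if request['payee_account_id'] in payee_blacklist:
        return 'REFUSE_PAYMENT'                 # blacklisted payee: refuse outright
    if not allowed_region_check(request['position']):
        return 'REFUSE_PAYMENT'                 # outside the allowed payment area
    if request['amount'] > max_amount:
        return 'DELAY_PAYMENT'                  # over the limit: hold for guardian review
    return 'ALLOW_PAYMENT'
```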
By implementing the above method, unsafe consumption by the wearer of the smart watch can be avoided, and the security of code-scanning payment performed with the smart watch is improved; meanwhile, once configured in advance, this safeguard operates automatically and does not require the guardian to intervene again, so it does not detract from the convenience of code-scanning payment with the smart watch.
Therefore, by implementing the control method described in fig. 3, the screen of the smart watch can be conveniently controlled to face the user without twisting the arm, which is beneficial to improving the experience of the user when using the smart watch.
In addition, the control method described in fig. 3 can improve the security when the code scanning payment is executed by using the smart watch.
Referring to fig. 4, fig. 4 is a schematic modular diagram of a smart watch disclosed in an embodiment of the present application, where the smart watch includes a watch body provided with a screen, and the watch body can rotate within a range of 360 ° when standing up; in addition, the top side of the watch body where the screen is located is provided with a front camera, and the bottom side of the watch body is provided with a rear camera. As shown in fig. 4, the smart watch may include:
a first detection unit 401 for detecting an externally uttered voice signal;
a first identifying unit 402, configured to identify a sound source direction corresponding to the speech signal;
a second recognition unit 403 for recognizing the semantics of the speech signal;
a first control unit 404, configured to control the upright watch body to rotate until the screen faces the sound source direction when the second recognition unit 403 recognizes that the semantic representation requires viewing of the screen.
As an alternative embodiment, after the first control unit 404 controls the standing watch body to rotate until the screen faces the sound source direction, the smart watch may further: determine the sound source distance corresponding to the voice signal according to the volume of the voice signal detected by the first detection unit 401; acquire a font size matched with the sound source distance; and adjust the font size of the text content displayed on the screen to that font size, so as to suit the actual viewing distance of the user at the sound source, allowing the user to obtain a comfortable viewing experience without moving the arm.
Therefore, the intelligent watch described in the embodiment of fig. 4 can conveniently control the screen of the intelligent watch to face the user without twisting the arm, and is beneficial to improving the experience of the user when using the intelligent watch.
Referring to fig. 5, fig. 5 is a schematic block diagram of another smart watch disclosed in the embodiment of the present application. The smart watch shown in fig. 5 is optimized from the smart watch shown in fig. 4. Compared to the smart watch shown in fig. 4, in the smart watch shown in fig. 5:
the first control unit 404 is further configured to, when the second recognition unit 403 recognizes that the semantics of the voice signal indicate that a self-portrait is to be taken with the front camera of the smart watch, control the standing watch body to rotate until the screen and the front camera simultaneously face the sound source direction of the voice signal;
in addition, the smart watch shown in fig. 5 further includes:
a second control unit 405a for controlling the front camera to capture a first human image toward the sound source direction;
a third identifying unit 406a, configured to identify whether the face features of the first person image match with the face features of the wearer of the smart watch;
a third control unit 407a, configured to control the front camera to shoot the first person image when the third identification unit 406a identifies that the facial features of the first person image match with the facial features of the wearer of the smart watch, obtain a self-portrait image of the front camera, and output the self-portrait image to the screen for display.
As an alternative implementation, in the smart watch shown in fig. 5:
the first control unit 404 may be further configured to, when the second recognition unit 403 recognizes that the semantics of the voice signal indicate that a self-portrait is to be taken with the rear camera of the smart watch, control the standing watch body to rotate until the rear camera faces the sound source direction of the voice signal;
in addition, the smart watch shown in fig. 5 may further include:
a fourth control unit 405b for controlling the rear camera to capture a second person image toward the sound source direction;
a fourth identifying unit 406b, configured to identify whether the facial features of the second person image match facial features of a wearer of the smart watch;
a fifth control unit 407b, configured to control the rear camera to shoot the second person image when the fourth identification unit 406b identifies that the facial feature of the second person image matches the facial feature of the wearer of the smart watch, obtain a self-portrait image of the rear camera, and output the self-portrait image to the screen for display.
As another optional implementation, after the third control unit 407a obtains the self-portrait image of the front camera and outputs it to the screen for display, the first control unit 404 may further control the standing watch body to rotate clockwise or counterclockwise by a certain angle while the third control unit 407a controls the front camera to obtain a plurality of environment images; on this basis, the smart watch can synthesize the front-camera self-portrait image and the plurality of environment images into a panoramic picture and output it to the screen for display.
By implementing the above smart watch, the smart watch can be conveniently controlled to take self-portraits in multiple modes. The user does not need to twist or move the arm in the process, nor to turn the head to follow the camera or focus manually, so the self-portrait operation can be completed easily, which helps to make the smart watch feel more high-tech and more enjoyable to use.
Therefore, the intelligent watch described in the embodiment of fig. 5 can conveniently control the screen of the intelligent watch to face the user without twisting the arm, and is beneficial to improving the experience of the user when using the intelligent watch.
In addition, implementing the smart watch described in fig. 5 makes it possible to conveniently control the smart watch to take self-portraits in multiple modes, which helps to make the smart watch feel more high-tech and more enjoyable to use.
Referring to fig. 6, fig. 6 is a schematic block diagram of another smart watch disclosed in the embodiment of the present application. The smart watch shown in fig. 6 is optimized from the smart watch shown in fig. 4. Compared to the smart watch shown in fig. 4, in the smart watch shown in fig. 6:
the first control unit 404 may be further configured to, when the second recognition unit 403 recognizes that the semantic representation of the voice signal performs a code scanning payment operation, control the erected watch body to rotate to a position where the rear camera of the smart watch faces a certain payment code;
in addition, the smart watch shown in fig. 6 further includes:
a second detecting unit 408, configured to detect whether the front camera of the smartwatch captures a preset legal payment face image of the smartwatch after the first controlling unit 404 controls the standing watch body to rotate until the rear camera faces a certain payment code;
a sixth control unit 409, configured to control the rear camera to perform a code scanning operation on the payment code when the second detection unit 408 detects that the front camera captures a legal payment face image preset by the smart watch, obtain a payment interface, and output the payment interface to a screen of the smart watch for display; the payment interface at least comprises a payment amount and a payee account identification;
the third detecting unit 410 is configured to detect whether a preset blinking motion occurs to a user to which the legal payment face image belongs through the front-facing camera.
Optionally, the third detecting unit 410 may also be configured to detect, through the front-facing camera, whether the user to which the legal payment face image belongs makes a preset mouth-shaped gesture, and/or whether a preset gesture is made.
A payment processing unit 411, configured to report the payment amount, the payee account identifier and the legal payment face image to a third-party payment platform when the third detecting unit 410 detects that the user to which the legal payment face image belongs performs the preset blinking action, and/or makes the preset mouth gesture, and/or makes the preset hand gesture, so that the third-party payment platform deducts the payment amount from the payment account corresponding to the legal payment face image and adds the deducted payment amount to the payee account corresponding to the payee account identifier.
As an optional implementation manner, after the sixth control unit 409 obtains a payment interface at least including the payment amount and the payee account id and outputs the payment interface to the screen for display, the smart watch may further: the method comprises the steps of obtaining the instant position of the intelligent watch, and sending a payment verification request to management equipment of the intelligent watch, wherein the payment verification request comprises the instant position, payment amount and payee account identification of the intelligent watch; whether a payment permission instruction sent by the management device is received or not is detected, and if the payment permission instruction is received, the third detection unit 410 and the payment processing unit 411 continue to complete the payment operation.
When the management device receives the payment verification request sent by the smart watch, it checks whether the instant position, the payment amount and the payee account identifier respectively satisfy the preset conditions (the position lies within the allowed payment area, the amount lies within the allowed payment amount range, and the payee is not on the payee blacklist / is on the payee whitelist), and then sends response information to the smart watch, where the response information may be a payment permission instruction, a payment delay instruction or a payment refusal instruction.
By implementing the above smart watch, unsafe consumption by the wearer of the smart watch can be avoided, and the security of code-scanning payment performed with the smart watch is improved; meanwhile, once configured in advance, this safeguard operates automatically and does not require the guardian to intervene again, so it does not detract from the convenience of code-scanning payment with the smart watch.
It can be seen that, by implementing the smart watch described in fig. 6, the screen of the smart watch can be conveniently controlled to face the user without twisting the arm, which is beneficial to improving the experience of the user when using the smart watch.
In addition, the implementation of the smart watch described in fig. 6 can improve the security when code scanning payment is performed by using the smart watch.
Referring to fig. 7, fig. 7 is a schematic block diagram of another smart watch disclosed in an embodiment of the present application, where the smart watch may include a watch body provided with a screen, and the watch body may rotate within a range of 360 ° when standing up; in addition, the top side of the watch body where the screen is located is provided with a front-mounted camera, and the bottom side of the watch body is provided with a rear-mounted camera. As shown in fig. 7, the smart watch includes:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute all or part of the steps in the control method of the smart watch in any one of fig. 1 to 3.
In addition, the embodiment of the application further discloses a computer readable storage medium storing a computer program for electronic data exchange, wherein the computer program enables a computer to execute all or part of the steps in the control method of the smart watch in any one of fig. 1 to 3.
In addition, the embodiment of the present application further discloses a computer program product, which, when running on a computer, causes the computer to execute all or part of the steps in the control method of a smart watch in any one of fig. 1 to 3.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium, which includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic tape storage, or any other computer-readable medium capable of storing data.
The control method of the smart watch and the smart watch disclosed in the embodiments of the present application are described in detail, and specific examples are applied in the description to explain the principle and the implementation of the present invention, and the description of the embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (12)

1. A control method of a smart watch including a watch body provided with a screen and a microphone set provided with a plurality of microphones, characterized in that the watch body is capable of rotating by any angle within a range of 360 ° when standing up, the method comprising:
simultaneously detecting externally-emitted voice signals through the plurality of microphones to obtain a plurality of sub-voice signals respectively corresponding to each microphone;
identifying the sound source direction corresponding to the voice signals according to the endpoint time and the volume of the plurality of sub-voice signals; recording the variation amplitude of the sound source direction within a preset time length, and re-identifying the sound source direction if the maximum variation amplitude in the variation amplitudes exceeds a preset rotation precision;
recognizing the semantics of the voice signal;
if the semantic representation needs to watch the screen, controlling the erected table body to rotate until the screen faces the sound source direction;
judging the sound source distance corresponding to the voice signal according to the volume of the voice signal; and adjusting the character size of the character content displayed on the screen to the character size matched with the distance of the sound source.
2. The control method according to claim 1, wherein a front camera is arranged on the top side of the watch body where the screen is located, and a rear camera is arranged on the bottom side of the watch body, and the method further comprises:
if the semantics indicate that a self-portrait is to be taken with the front camera, controlling the erected watch body to rotate until the screen and the front camera simultaneously face the sound source direction; controlling the front camera to capture a first person image in the sound source direction; and identifying whether facial features of the first person image match facial features of a wearer of the smart watch, and if so, controlling the front camera to shoot the first person image, acquiring a self-portrait image from the front camera and outputting the self-portrait image to the screen for display.
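As an informal illustration of the verify-before-shoot behaviour recited in claim 2, the sketch below gates the shutter on a simple feature-vector comparison. The cosine-similarity measure, the 0.8 threshold, and the callback names are assumptions for illustration only, not an interface defined by this application.

```python
# Assumed decision threshold for "same wearer"; the real watch may use a
# different matcher and threshold.
SIMILARITY_THRESHOLD = 0.8

def cosine_similarity(a, b):
    """Plain cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def maybe_take_selfie(preview_features, wearer_features, take_photo, show_on_screen):
    """Trigger the shutter only when the face captured in the preview frame
    matches the enrolled wearer, then push the self-portrait to the screen."""
    if cosine_similarity(preview_features, wearer_features) >= SIMILARITY_THRESHOLD:
        image = take_photo()        # hypothetical front-camera shutter callback
        show_on_screen(image)       # hypothetical display callback
        return image
    return None                     # no match: do not shoot
```

A rear-camera self-portrait as in claim 3 would follow the same gate, with the preview features taken from the rear camera instead of the front camera.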
3. The control method according to claim 1, wherein a front camera is arranged on the top side of the watch body where the screen is arranged, and a rear camera is arranged on the bottom side of the watch body, and the method further comprises:
if the semantics indicate that a self-portrait is to be taken with the rear camera, controlling the erected watch body to rotate until the rear camera faces the sound source direction; controlling the rear camera to capture a second person image in the sound source direction; and identifying whether facial features of the second person image match facial features of the wearer of the smart watch, and if so, controlling the rear camera to shoot the second person image, acquiring a self-portrait image from the rear camera and outputting the self-portrait image to the screen for display.
4. The control method according to claim 1, wherein a front camera is arranged on the top side of the watch body where the screen is located, and a rear camera is arranged on the bottom side of the watch body, and the method further comprises:
if the semantics indicate that a code-scanning payment operation is to be executed, controlling the erected watch body to rotate until the rear camera faces a certain payment code, detecting whether the front camera captures a legal payment face image preset by the smart watch, and if the front camera captures the preset legal payment face image, controlling the rear camera to perform a code scanning operation on the payment code, acquiring a payment interface and outputting the payment interface to the screen for display; wherein the payment interface at least comprises a payment amount and a payee account identifier;
and reporting the payment amount, the payee account identifier and the legal payment face image to a third-party payment platform so that the third-party payment platform deducts the payment amount from a payment account corresponding to the legal payment face image and adds the deducted amount to a payee account corresponding to the payee account identifier.
5. The control method of claim 4, wherein after acquiring the payment interface and outputting the payment interface to the screen for display, and before reporting the payment amount, the payee account identifier and the legal payment face image to a third-party payment platform, the method further comprises:
detecting, through the front camera, whether the user to whom the legal payment face image belongs performs a preset blinking action, and if so, executing the step of reporting the payment amount, the payee account identifier and the legal payment face image to the third-party payment platform.
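As an informal illustration of the safeguards recited in claims 4 and 5, the sketch below only scans the payment code while the preset legal payment face is captured, and only reports to the third-party payment platform after the blinking confirmation. All callback names and the returned payment dictionary are assumptions for illustration, not an interface defined by this application.

```python
def scan_and_pay(legal_face_present, detect_blink,
                 scan_payment_code, show_on_screen, report_to_platform):
    """Scan the payment code only while the preset legal payment face is in
    front of the watch, show the payment interface, and report to the
    third-party platform only after a preset blinking action is detected."""
    if not legal_face_present():
        return "aborted: legal payment face not captured by the front camera"

    payment = scan_payment_code()          # e.g. {"amount": 12.5, "payee_id": "shop-001"}
    show_on_screen(payment)                # display at least the amount and payee identifier

    if not detect_blink():
        return "aborted: no blink confirmation from the paying user"

    # Only now is the deduction request sent to the third-party payment platform.
    report_to_platform(payment["amount"], payment["payee_id"])
    return "payment reported to the third-party platform"
```

Keeping the blink check between displaying the payment interface and reporting it means the user always sees the amount and the payee account identifier before any deduction request leaves the watch.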
6. A smart watch including a watch body provided with a screen and a microphone set provided with a plurality of microphones, characterized in that the watch body is capable of rotating by any angle within a range of 360° when erected, the smart watch comprising:
a first detection unit, configured to simultaneously detect an externally emitted voice signal through the plurality of microphones to obtain a plurality of sub-voice signals respectively corresponding to the microphones;
a first identification unit, configured to identify the sound source direction corresponding to the voice signal according to the endpoint times and the volumes of the plurality of sub-voice signals, record the variation amplitude of the sound source direction within a preset time length, and re-identify the sound source direction if the maximum of the variation amplitudes exceeds a preset rotation precision;
a second identification unit, configured to recognize the semantics of the voice signal;
a first control unit, configured to control the erected watch body to rotate until the screen faces the sound source direction when the second identification unit identifies that the semantics indicate that the screen needs to be viewed;
and a unit configured to perform the following:
judging the sound source distance corresponding to the voice signal according to the volume of the voice signal; and adjusting the font size of the text content displayed on the screen to a font size matched with the sound source distance.
7. The smart watch of claim 6, wherein a front camera is disposed on the top side of the watch body where the screen is disposed, and a rear camera is disposed on the bottom side of the watch body, wherein:
the first control unit is further configured to control the erected watch body to rotate until the screen and the front camera simultaneously face the sound source direction when the second identification unit identifies that the semantics indicate that a self-portrait is to be taken with the front camera;
and the smart watch further comprises:
a second control unit, configured to control the front camera to capture a first person image in the sound source direction;
a third identification unit, configured to identify whether facial features of the first person image match facial features of a wearer of the smart watch;
and a third control unit, configured to control the front camera to shoot the first person image when the third identification unit identifies that the facial features of the first person image match the facial features of the wearer of the smart watch, so as to acquire a self-portrait image from the front camera and output the self-portrait image to the screen for display.
8. The smart watch of claim 6, wherein a front camera is disposed on the top side of the watch body where the screen is disposed, and a rear camera is disposed on the bottom side of the watch body, wherein:
the first control unit is further configured to control the erected watch body to rotate until the rear camera faces the sound source direction when the second identification unit identifies that the semantics indicate that a self-portrait is to be taken with the rear camera;
and the smart watch further comprises:
a fourth control unit, configured to control the rear camera to capture a second person image in the sound source direction;
a fourth identification unit, configured to identify whether facial features of the second person image match facial features of the wearer of the smart watch;
and a fifth control unit, configured to control the rear camera to shoot the second person image when the fourth identification unit identifies that the facial features of the second person image match the facial features of the wearer of the smart watch, so as to acquire a self-portrait image from the rear camera and output the self-portrait image to the screen for display.
9. The smart watch of claim 6, wherein a front camera is disposed on the top side of the watch body where the screen is disposed, and a rear camera is disposed on the bottom side of the watch body, wherein:
the first control unit is further configured to control the erected watch body to rotate until the rear camera faces a certain payment code when the semantics indicate that a code-scanning payment operation is to be executed;
and the smart watch further comprises:
a second detection unit, configured to detect whether the front camera captures a legal payment face image preset by the smart watch after the first control unit controls the erected watch body to rotate until the rear camera faces the payment code;
a sixth control unit, configured to control the rear camera to perform a code scanning operation on the payment code when the second detection unit detects that the front camera captures the legal payment face image preset by the smart watch, so as to acquire a payment interface and output the payment interface to the screen for display, wherein the payment interface at least comprises a payment amount and a payee account identifier;
and a payment processing unit, configured to report the payment amount, the payee account identifier and the legal payment face image to a third-party payment platform so that the third-party payment platform deducts the payment amount from a payment account corresponding to the legal payment face image and adds the deducted amount to a payee account corresponding to the payee account identifier.
10. The smart watch of claim 9, further comprising:
and a third detection unit, configured to detect, through the front camera, whether the user to whom the legal payment face image belongs performs a preset blinking action after the sixth control unit acquires the payment interface and outputs the payment interface to the screen for display and before the payment processing unit reports the payment amount, the payee account identifier and the legal payment face image to the third-party payment platform, and if so, to trigger the payment processing unit to report the payment amount, the payee account identifier and the legal payment face image to the third-party payment platform.
11. A smart watch, comprising:
a memory storing executable program code;
a processor coupled with the memory;
wherein the processor calls the executable program code stored in the memory to execute the control method of the smart watch according to any one of claims 1 to 5.
12. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the control method of the smart watch according to any one of claims 1 to 5.
CN201911004176.3A 2019-10-22 2019-10-22 Control method of smart watch and smart watch Active CN111768785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911004176.3A CN111768785B (en) 2019-10-22 2019-10-22 Control method of smart watch and smart watch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911004176.3A CN111768785B (en) 2019-10-22 2019-10-22 Control method of smart watch and smart watch

Publications (2)

Publication Number Publication Date
CN111768785A CN111768785A (en) 2020-10-13
CN111768785B true CN111768785B (en) 2022-12-27

Family

ID=72718900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911004176.3A Active CN111768785B (en) 2019-10-22 2019-10-22 Control method of smart watch and smart watch

Country Status (1)

Country Link
CN (1) CN111768785B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112748803A (en) * 2020-12-31 2021-05-04 读书郎教育科技有限公司 Telephone watch switching event triggering system and method
CN113467342A (en) * 2021-08-09 2021-10-01 重庆宗灿科技发展有限公司 Smart elderly-care system and smart watch based on the Internet of Things

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004130427A (en) * 2002-10-09 2004-04-30 Sony Corp Robot device and method for controlling operation of robot device
CN104698827A (en) * 2013-12-08 2015-06-10 崔允太 Screen for smart watch
EP2989523A1 (en) * 2013-04-22 2016-03-02 LG Electronics Inc. Smart watch and control method for the same
CN107797718A (en) * 2016-09-06 2018-03-13 中兴通讯股份有限公司 The method of adjustment and device in screen display direction
CN108737719A (en) * 2018-04-04 2018-11-02 深圳市冠旭电子股份有限公司 Camera filming control method, device, smart machine and storage medium
CN109561002A (en) * 2018-12-06 2019-04-02 珠海格力电器股份有限公司 The sound control method and device of household appliance
CN110135837A (en) * 2018-09-29 2019-08-16 广东小天才科技有限公司 A kind of payment management method and wearable device based on wearable device

Also Published As

Publication number Publication date
CN111768785A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
US10083710B2 (en) Voice control system, voice control method, and computer readable medium
US9906725B2 (en) Portable video communication system
CN104580992B (en) A kind of control method and mobile terminal
US8063938B2 (en) Photographed image process changeover apparatus of a video telephone function
KR100556856B1 (en) Screen control method and apparatus in mobile telecommunication terminal equipment
CN108470169A (en) Face identification system and method
CN107346661B (en) Microphone array-based remote iris tracking and collecting method
US10887548B2 (en) Scaling image of speaker's face based on distance of face and size of display
US11627007B2 (en) Mobile information terminal
CN105426723A (en) Voiceprint identification, face identification and synchronous in-vivo detection-based identity authentication method and system
CN111767785A (en) Man-machine interaction control method and device, intelligent robot and storage medium
CN111079513B (en) Posture reminding method and device, mobile terminal and storage medium
JP2012014394A (en) User instruction acquisition device, user instruction acquisition program and television receiver
CN111768785B (en) Control method of smart watch and smart watch
KR20190121758A (en) Information processing apparatus, information processing method, and program
KR20140055819A (en) Appparatus and method for face recognition
CN111179923B (en) Audio playing method based on wearable device and wearable device
US11076091B1 (en) Image capturing assistant
CN208351494U (en) Face identification system
CN109151309A (en) A kind of method for controlling rotation of camera, device, equipment and storage medium
CN112286364A (en) Man-machine interaction method and device
CN110032921B (en) Adjusting device and method of face recognition equipment
JP2008004050A (en) Personal information authentication system, personal information authentication method, program, and recording medium
CN116665111A (en) Attention analysis method, system and storage medium based on video conference system
CN110941381A (en) Display screen brightness adjusting method and system of conference all-in-one machine and conference all-in-one machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant