CN111182201A - Method for adjusting sound box camera module and sound box - Google Patents
Method for adjusting sound box camera module and sound box
- Publication number
- CN111182201A (application CN201911003421.9A)
- Authority
- CN
- China
- Prior art keywords
- sound box
- camera module
- target
- image information
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/147—Details of sensors, e.g. sensor lenses
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/02—Casings; Cabinets ; Supports therefor; Mountings therein
- H04R1/028—Casings; Cabinets ; Supports therefor; Mountings therein associated with devices performing functions other than acoustics, e.g. electric candles
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Acoustics & Sound (AREA)
- Vascular Medicine (AREA)
- General Health & Medical Sciences (AREA)
- Studio Devices (AREA)
Abstract
The embodiment of the invention relates to the technical field of electronic equipment, and discloses a method for adjusting a sound box camera module and a sound box. The method comprises the following steps: when it is detected that the sound box starts a target function that needs to use the camera module, determining, according to the target function, a target machine position to which the camera module of the sound box needs to be adjusted; determining, according to the initial machine position at which the camera module is currently located and the target machine position, the target amplitude by which the camera module needs to pop up and/or the target angle to which it needs to rotate; and controlling the camera module to pop up by the target amplitude and/or rotate to the target angle, so that the camera module can acquire the image information required for executing the target function. By implementing the embodiment of the invention, the success rate of the sound box in executing functions can be improved.
Description
Technical Field
The invention relates to the technical field of electronic equipment, in particular to an adjusting method of a sound box camera module and a sound box.
Background
With the development of sound box manufacturing technology, today's sound boxes not only amplify and output a sound source but also offer new functions such as switching sound sources by recognizing gestures and waking the device when a user's face is recognized.
It has been found in practice that when a sound box performs a function that needs the camera module, such as gesture recognition or face recognition, the camera module of a traditional sound box can only shoot scenes at a certain fixed height, while a user's hand and face are often not at the same height. As a result, the camera module of a traditional sound box often cannot collect the effective image information needed to execute the function, which is detrimental to improving the success rate of the sound box in executing functions.
Disclosure of Invention
The embodiment of the invention discloses a method for adjusting a sound box camera module and a sound box, which can improve the success rate of the sound box in executing functions.
The first aspect of the embodiment of the invention discloses a method for adjusting a sound box camera module, which comprises the following steps:
when it is detected that a sound box starts a target function that needs to use a camera module, determining, according to the target function, a target machine position to which the camera module of the sound box needs to be adjusted;
determining, according to an initial machine position at which the camera module is currently located and the target machine position, a target amplitude by which the camera module needs to pop up and/or a target angle to which the camera module needs to rotate;
and controlling the camera module to pop up by the target amplitude and/or rotate to the target angle, so that the camera module can acquire image information required for executing the target function.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after controlling the camera module to pop up by the target amplitude and/or rotate to the target angle, the method further includes:
if the target function is playing a sleep-aid song, controlling the camera module to shoot person image information around the sound box, and identifying facial features of a person in the person image information;
judging whether the facial features of the person in the person image information match preset facial features, where the preset facial features are facial features indicating that the person is in a sleep state;
and if they match, controlling the sound box to stop playing the sleep-aid song.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after determining that the facial feature of the person in the person image information matches a preset facial feature, the method further includes:
controlling the camera module to acquire the light intensity of lamps around the sound box;
judging whether the light intensity of the lamps around the sound box is higher than a preset intensity threshold value or not;
and if the light intensity of the lamps around the sound box is higher than a preset intensity threshold value, controlling the lamps to be turned off, wherein the lamps are in communication connection with the sound box.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after controlling the camera module to pop up by the target amplitude and/or rotate to the target angle, the method further includes:
if the camera module collects hand image information of a user, judging whether the wrist of the user wears intelligent wearable equipment or not according to the hand image information;
if the wrist of the user wears the intelligent wearable device, controlling Bluetooth of the sound box to turn on and Bluetooth of the intelligent wearable device to turn on, so that the sound box establishes a Bluetooth connection with the intelligent wearable device.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after controlling Bluetooth of the sound box and Bluetooth of the intelligent wearable device to turn on so that the sound box establishes a Bluetooth connection with the intelligent wearable device, the method further includes:
if the camera module monitors that the user makes a preset distress action, receiving physiological characteristic data of the user, which is acquired by the intelligent wearable device;
and judging whether the physiological characteristic data of the user exceeds the range of preset normal physiological characteristic data, if so, reading a specified alarm number from the address list of the intelligent wearable device, and calling the specified alarm number.
A second aspect of the embodiments of the present invention discloses a sound box, including:
the first determining unit is used for, when it is detected that the sound box starts a target function that needs to use the camera module, determining, according to the target function, a target machine position to which the camera module of the sound box needs to be adjusted;
the second determining unit is used for determining, according to the initial machine position at which the camera module is currently located and the target machine position, a target amplitude by which the camera module needs to pop up and/or a target angle to which the camera module needs to rotate;
the first control unit is used for controlling the camera module to pop up by the target amplitude and/or rotate to the target angle, so that the camera module can acquire image information required for executing the target function.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the sound box further includes:
the identification unit is used for, after the first control unit controls the camera module to pop up by the target amplitude and/or rotate to the target angle, controlling the camera module to shoot person image information around the sound box and identifying the facial features of the person in the person image information if it is determined that the target function is playing a sleep-aid song;
the first judgment unit is configured to judge whether the facial features of the person in the person image information match preset facial features, where the preset facial features are facial features indicating that the person is in a sleep state;
and the second control unit is used for controlling the sound box to stop playing the sleep-aid song when the first judgment unit judges that the facial features of the person in the person image information match the preset facial features.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the sound box further includes:
the acquisition unit is used for controlling the camera module to acquire the light intensity of lamps around the sound box after the first judgment unit judges that the facial features of the person in the person image information are matched with the preset facial features;
the second judgment unit is used for judging whether the light intensity of the lamps around the sound box is higher than a preset intensity threshold value or not;
and the third control unit is used for controlling the lamp to be turned off when the second judging unit judges that the light intensity of the lamp around the sound box is higher than a preset intensity threshold value, and the lamp is in communication connection with the sound box.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the sound box further includes:
the third judging unit is used for judging whether the wrist of the user wears the intelligent wearable device or not according to the hand image information if the camera module collects the hand image information of the user after the first control unit controls the camera module to pop up and/or rotate to the target angle by the target amplitude;
the establishing unit is used for controlling the Bluetooth of the sound box to be opened and the Bluetooth of the intelligent wearable device to be opened when the third judging unit judges that the wrist of the user wears the intelligent wearable device, so that the sound box is connected with the intelligent wearable device through Bluetooth.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the sound box further includes:
the receiving unit is used for receiving the physiological characteristic data of the user, which is acquired by the intelligent wearable device, if the camera module monitors that the user makes a preset help seeking action after the establishing unit controls the Bluetooth opening of the sound box and the Bluetooth opening of the intelligent wearable device so that the sound box is connected with the intelligent wearable device in a Bluetooth mode;
the fourth judging unit is used for judging whether the physiological characteristic data of the user exceeds the range of preset normal physiological characteristic data or not;
and the calling unit is used for reading a specified alarm number from the address list of the intelligent wearable device and calling the specified alarm number when the fourth judging unit judges that the physiological characteristic data of the user exceeds the range of preset normal physiological characteristic data.
A third aspect of the embodiments of the present invention discloses a sound box, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the adjustment method of the sound box camera module disclosed by the first aspect of the embodiment of the invention.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium, which stores a computer program, wherein the computer program enables a computer to execute the method for adjusting a speaker camera module disclosed in the first aspect of the embodiments of the present invention.
A fifth aspect of the embodiments of the present invention discloses a computer program product, which, when running on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect of the embodiments of the present invention.
A sixth aspect of the embodiments of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, and when the computer program product runs on a computer, the computer is caused to perform part or all of the steps of any one of the methods in the first aspect of the embodiments of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
In the embodiment of the invention, when the sound box detects that it has started a target function that needs to use the camera module, the sound box can determine, according to the target function, the target machine position to which its camera module needs to be adjusted; determine, according to the initial machine position at which the camera module is currently located and the target machine position to be adjusted to, the target amplitude by which the camera module needs to pop up and/or the target angle to which it needs to rotate; and then control the camera module to pop up by the target amplitude and/or rotate to the target angle, so that the camera module can acquire the image information required for executing the target function. It can be seen that, compared with a traditional sound box whose camera module can only shoot scenes at a certain fixed height, the sound box of the embodiment of the invention can adjust its camera module to different machine positions according to the different functions that are started, so that the camera module can shoot scenes at different heights and in different planes. The camera module can therefore collect the effective image information required when the sound box executes different functions, which improves the success rate of the sound box in executing functions.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a method for adjusting a sound box camera module according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of another method for adjusting a sound box camera module according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a sound box disclosed in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another sound box disclosed in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another sound box disclosed in an embodiment of the present invention;
Fig. 6 is a top view of a sound box disclosed in an embodiment of the present invention;
Fig. 7 is a right side view of a sound box disclosed in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", "third" and "fourth" etc. in the description and claims of the present invention are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a method for adjusting a sound box camera module and a sound box, which can improve the success rate of the sound box in executing functions.
The technical solution of the present invention will be described in detail with reference to specific examples.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating an adjusting method of a speaker camera module according to an embodiment of the present invention. As shown in fig. 1, the method for adjusting the speaker camera module may include the following steps:
101. When it is detected that the sound box starts a target function that needs to use the camera module, determine, according to the target function, the target machine position to which the camera module of the sound box needs to be adjusted.
In the embodiment of the present invention, an execution main body for executing the adjustment method of the sound box camera module disclosed in the embodiment of the present invention may be a sound box, a control center in communication connection with the sound box, and the like.
It should be noted that: the sound box disclosed by the embodiment of the invention can comprise components such as a loudspeaker module, a camera module, a display screen, a light projection module, a battery module, a wireless communication module (such as a mobile communication module, a WIFI module, a Bluetooth module and the like), a sensor module (such as a proximity sensor, a pressure sensor and the like), an input module (such as a microphone and a key) and a user interface module (such as a charging interface, an external power supply interface, a card slot, a wired earphone interface and the like); wherein, the module of making a video recording that this audio amplifier configured can be can follow and pop out and the module of making a video recording that can rotate in the audio amplifier, for example the module of making a video recording can be through the buckle built-in with the audio amplifier in to can pop out at the audio amplifier or the module during operation of making a video recording, and the module of making a video recording can be connected with the box of audio amplifier through rotatable connecting axle, makes the module of making a video recording can rotate at the box based on the audio amplifier of during.
In the embodiment of the invention, the target function may be a function of the sound box that needs to use the camera module, such as gesture recognition or face recognition. For example, if it is detected that the sound box starts the face recognition function, it may be determined that the camera module of the sound box needs to be adjusted to a target machine position at which the face image of the user can be captured; for another example, if it is detected that the sound box starts the gesture recognition function, it may be determined that the camera module of the sound box needs to be adjusted to a target machine position at which the hand image of the user can be captured.
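For illustration only, one way to organize such a function-to-machine-position lookup is sketched below in Python; the function names, position labels, and the resolve_target_position helper are hypothetical and are not part of the disclosed embodiment.

```python
# Hypothetical sketch: mapping a started target function to the machine
# position the camera module should be adjusted to. Labels are illustrative.
TARGET_POSITION_BY_FUNCTION = {
    "face_recognition": "face_height_front",     # aim where a user's face is expected
    "gesture_recognition": "hand_height_front",  # aim where a user's hand is expected
    "play_sleep_aid_song": "face_height_front",  # facial features are checked later
}

def resolve_target_position(target_function: str) -> str:
    """Return the machine position required by the started target function."""
    try:
        return TARGET_POSITION_BY_FUNCTION[target_function]
    except KeyError:
        raise ValueError(f"unknown target function: {target_function}")
```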
As an optional implementation manner, when identifying verification information (for example, a two-dimensional code) on an authenticated finger stall, the sound box may further acquire real-time coordinates of the finger stall, and determine a machine position capable of shooting the real-time coordinates of the finger stall as a target machine position; then, the sound box can control the camera module of the sound box to adjust to a target machine position (for example, ascending, descending or rotating) based on the real-time coordinate of the finger stall, so that the camera module can track the real-time coordinate of the finger stall to move, and the camera module of the sound box can shoot image information of the position where the finger stall is located.
For example, the user can wear a finger stall that has passed the sound box's authentication and point it at the camera module; after recognizing the verification information on the finger stall, the camera module can lock onto the coordinates of the finger stall and move as the finger stall moves. The user can thus control the camera position of the camera module with a finger, which makes it convenient for the user to control the camera position of the camera module.
By implementing the method, the sound box can control the camera module to move along with the movement of the finger stall when the authenticated finger stall is identified, so that a user can control the camera position of the camera module through the finger, and the user can remotely control the camera position of the camera module.
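A minimal sketch of the tracking behavior described above, assuming hypothetical camera-module methods (verify_fingerstall, fingerstall_visible, read_fingerstall_coordinates, move_toward) that the embodiment does not name:

```python
import time

def track_fingerstall(camera, fingerstall_code: str, poll_interval: float = 0.1):
    """Keep the camera module aimed at an authenticated finger stall.

    The methods called on `camera` are assumed interfaces used only for
    illustration; the patent does not specify a concrete API.
    """
    if not camera.verify_fingerstall(fingerstall_code):  # e.g. a two-dimensional code
        return
    while camera.fingerstall_visible():
        x, y = camera.read_fingerstall_coordinates()  # real-time coordinates
        camera.move_toward(x, y)                      # pop up, lower, or rotate as needed
        time.sleep(poll_interval)
```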
As another optional implementation, while controlling the camera module of the sound box to adjust to the target machine position based on the real-time coordinates of the finger stall so that the camera module moves by tracking the real-time coordinates of the finger stall, the sound box can also judge, from the hand image information of the user wearing the finger stall fed back by the camera module, whether the user makes a preset gesture indicating that the video recording function should start; if such a gesture is detected, the video recording function of the camera module is started. While the video recording function is in use, if a preset gesture indicating that video recording should stop is detected, the camera module can be controlled to stop recording;
or, while the camera module is moving by tracking the real-time coordinates of the finger stall, the sound box can judge, from the hand image information of the user wearing the finger stall fed back by the camera module, whether the user makes a preset gesture indicating that the photographing function should start; if such a gesture is detected, the photographing function of the camera module is started and a photo is taken.
By implementing the method, the sound box can also start the video recording function or the photographing function of the camera module when it detects that the user makes a preset gesture, so as to perform the subsequent video recording or photographing work; using a gesture as the trigger condition of the video recording function or the photographing function is therefore more convenient than triggering with a traditional button, and improves the user experience.
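The gesture handling could be organized as a small dispatch routine like the one below; the gesture labels and camera interface are assumed for illustration only.

```python
def handle_gesture(camera, gesture: str):
    """Dispatch preset gestures to camera actions (illustrative labels)."""
    if gesture == "start_recording" and not camera.is_recording():
        camera.start_recording()
    elif gesture == "stop_recording" and camera.is_recording():
        camera.stop_recording()   # the video is kept in the cache for later sending
    elif gesture == "take_photo":
        camera.take_photo()
```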
As another optional implementation manner, after detecting that the user makes a preset gesture indicating that the video recording function is stopped and controlling the camera module to stop recording, the sound box may store the video in the cache, determine a mailbox address bound by the verification information according to the verification information on the identified finger stall, and send the video stored in the cache to the mailbox address for the user to inquire and use.
By implementing the method, the sound box can also automatically send the video shot by the camera module to the mailbox address set by the user for the user to inquire and use, so that the use experience of the user is improved.
102. Determine, according to the initial machine position at which the camera module is currently located and the target machine position, the target amplitude by which the camera module needs to pop up and/or the target angle to which it needs to rotate.
In the embodiment of the invention, the machine position of the camera module may be understood as the shooting position of the camera module. Based on the structural features of the sound box and the camera module described in step 101, the following illustrates how the target amplitude by which the camera module needs to pop up and/or the target angle to which it needs to rotate are determined according to the initial machine position at which the camera module is currently located and the target machine position. For example, if the shooting position of the initial machine position at which the camera module is currently located is directly in front of the sound box (as shown in fig. 6, a top view of the sound box disclosed in the embodiment of the present invention) and the shooting position of the target machine position is to the left of the sound box, it can be determined from the initial machine position and the target machine position that the camera of the camera module needs to rotate 90 degrees to the left about the rotatable connecting shaft; the rotated target machine position is also shown in fig. 6, and the camera module can then shoot the scene on the left side of the sound box. For another example, if the shooting position of the initial machine position at which the camera module is currently located is directly in front of the sound box (as shown at c in fig. 7, a right side view of the sound box disclosed in the embodiment of the present invention) and the shooting position of the target machine position is below the sound box, it can be determined from the initial machine position and the target machine position that the camera of the camera module and the rotatable connecting shaft need to pop out 90 degrees downward (as shown at d in fig. 7); the camera module can then shoot the scene below the sound box.
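As a rough numerical illustration of the two examples above (and not the patent's own encoding), the sketch below assumes each machine position is represented as a (pop-up amplitude in degrees, horizontal rotation angle in degrees) pair and derives the adjustment between them.

```python
def compute_adjustment(initial_position, target_position):
    """Return (pop_up_delta, rotation_delta) between two machine positions.

    Each position is assumed to be a (pop_up_deg, horizontal_deg) pair, e.g.
    front = (0, 0), left side = (0, 90), downward = (90, 0).
    """
    pop_up_delta = target_position[0] - initial_position[0]
    rotation_delta = target_position[1] - initial_position[1]
    return pop_up_delta, rotation_delta

# e.g. front -> left side: compute_adjustment((0, 0), (0, 90)) == (0, 90)
#      front -> downward:  compute_adjustment((0, 0), (90, 0)) == (90, 0)
```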
103. Control the camera module to pop up by the target amplitude and/or rotate to the target angle, so that the camera module can acquire the image information required for executing the target function.
In the embodiment of the invention, after the target amplitude by which the camera module needs to pop up and/or the target angle to which it needs to rotate are determined according to the initial machine position and the target machine position to be adjusted to, the sound box can control its camera module to pop up by the target amplitude and/or rotate to the target angle, so that the camera module can acquire the image information required for executing the target function.
As an optional implementation manner, after controlling the camera module to pop up by the target amplitude and/or rotate to the target angle, if the sound box judges that the target function is playing a sleep-aid song, it can control the camera module of the sound box to shoot person image information around the sound box and identify the facial features of the person in the captured person image information; judge whether the facial features of the person in the person image information match preset facial features, where the preset facial features are facial features indicating that the person is in a sleep state; and, if they match, control the sound box to stop playing the sleep-aid song.
It should be noted that: the hypnotic songs may include, but are not limited to, light music for hypnosis, white noise for hypnosis (e.g., rain, bird calls, etc.). Facial features that indicate that a person is asleep may include, but are not limited to: closing the eyes, etc.
By implementing the method, the sound box can judge whether the user has fallen asleep while it plays a sleep-aid song, and control the sound box to stop playing the sleep-aid song if it judges that the user has fallen asleep, thereby creating a comfortable sleep environment for the user and improving the user experience.
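A sketch of the stop-playback decision, assuming a hypothetical extract_facial_features detector and a simple eyes-closed check standing in for the preset facial features:

```python
def maybe_stop_sleep_aid_song(speaker, camera, extract_facial_features):
    """Stop the sleep-aid song once the person near the sound box appears asleep.

    `extract_facial_features` is an assumed face-analysis callable returning an
    object with an `eyes_closed` flag, standing in for the preset facial features.
    """
    frame = camera.capture_surroundings()      # person image information around the box
    features = extract_facial_features(frame)
    asleep = features is not None and features.eyes_closed
    if asleep and speaker.is_playing_sleep_aid_song():
        speaker.stop_playback()
    return asleep
```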
As another optional implementation, after it is determined that the facial features of the person in the captured person image information match the preset facial features, the sound box may further control its camera module to collect the light intensity of the lamps around the sound box, judge whether that light intensity is higher than a preset intensity threshold, and, if it is, control the lamps to turn off, where the lamps are in communication connection with the sound box.
It should be noted that: the preset intensity threshold value can be set by a developer according to a large amount of experimental data, and the specific data can be the light intensity of the ambient environment when a person sleeps healthily; the lamp can be an intelligent lamp in communication connection with the sound box, and the lamp is installed in the coverage range of the control signal of the sound box so as to ensure that the sound box can control the sound box to be started or closed.
By implementing the method, the sound box can also turn off the lamps around the sound box when it judges that the user has fallen asleep, thereby creating a comfortable sleep environment for the user and improving the user experience.
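Continuing the sketch, the lamp check could look like the following; PRESET_INTENSITY_THRESHOLD and the lamp interface are placeholders, since the embodiment leaves the concrete threshold to the developer.

```python
PRESET_INTENSITY_THRESHOLD = 30.0  # placeholder value, e.g. in lux

def maybe_turn_off_lamps(camera, lamps):
    """Turn off communicatively connected lamps if the surroundings are too bright."""
    intensity = camera.measure_light_intensity()  # light intensity around the sound box
    if intensity > PRESET_INTENSITY_THRESHOLD:
        for lamp in lamps:                        # lamps paired with the sound box
            lamp.turn_off()
```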
It can be seen that, by implementing the method described in fig. 1, when it is detected that the sound box starts a target function that needs to use the camera module, the target machine position to which the camera module of the sound box needs to be adjusted can be determined according to the target function; the target amplitude by which the camera module needs to pop up and/or the target angle to which it needs to rotate can be determined according to the initial machine position at which the camera module is currently located and the target machine position to be adjusted to; and the camera module can then be controlled to pop up by the target amplitude and/or rotate to the target angle, so that it can acquire the image information required for executing the target function. Compared with a traditional sound box whose camera module can only shoot scenes at a certain fixed height, the sound box of the embodiment of the invention can adjust its camera module to different machine positions according to the different functions that are started, so that the camera module can shoot scenes at different heights and in different planes. The camera module can therefore collect the effective image information required when the sound box executes different functions, which improves the success rate of the sound box in executing functions.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating another adjusting method for a speaker camera module according to an embodiment of the present invention. As shown in fig. 2, the method for adjusting the speaker camera module may include the following steps:
201. When it is detected that the sound box starts a target function that needs to use the camera module, determine, according to the target function, the target machine position to which the camera module of the sound box needs to be adjusted.
202. Determine, according to the initial machine position at which the camera module is currently located and the target machine position, the target amplitude by which the camera module needs to pop up and/or the target angle to which it needs to rotate.
203. Control the camera module to pop up by the target amplitude and/or rotate to the target angle, so that the camera module can acquire the image information required for executing the target function.
204. If the camera module collects hand image information of the user, judging whether the wrist of the user wears intelligent wearable equipment or not according to the hand image information; if yes, go to step 205; if not, the flow is ended.
In the embodiment of the present invention, the sound box may recognize image information collected by the camera module through an Optical Character Recognition (OCR) technology, and if it is recognized that the image information includes features of the hand (for example, fingers), it may further recognize whether the wrist in the image information including the hand wears the smart wearable device, if so, execute step 205; if not, the flow is ended.
It should be noted that: the OCR technology converts characters of various bills, newspapers, books, documents and other printed matters into image information by means of optical input methods such as scanning, and then converts the image information into a usable computer input technology by means of a character recognition technology.
It needs to be further explained that: smart wearable devices may include, but are not limited to: smart watch, smart bracelet, smart armguard, etc.
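Purely as an illustration of the check in steps 204-205, the sketch below assumes hypothetical recognition callables rather than any particular OCR or vision library.

```python
def wrist_has_wearable(hand_image, detect_hand, detect_wrist_wearable) -> bool:
    """Decide whether the user's wrist wears a smart wearable device.

    `detect_hand` and `detect_wrist_wearable` are assumed recognition callables
    standing in for the image-recognition step described in the embodiment.
    """
    if not detect_hand(hand_image):            # e.g. finger features present in the image
        return False
    return detect_wrist_wearable(hand_image)   # watch, bracelet, arm guard, etc.
```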
205. Control the Bluetooth of the sound box to turn on and the Bluetooth of the smart wearable device to turn on, so that the sound box and the smart wearable device establish a Bluetooth connection.
In the embodiment of the invention, Bluetooth function modules can be provided in both the sound box and the smart wearable device to perform Bluetooth functions. Bluetooth is a wireless technology standard that enables short-distance data exchange (using UHF radio waves in the 2.4-2.485 GHz ISM band) between fixed devices, mobile devices and building personal area networks.
After the Bluetooth of the sound box and the Bluetooth of the smart wearable device are both turned on, if the sound box and the smart wearable device are each within the other's Bluetooth coverage, the sound box and the smart wearable device can be controlled to establish a Bluetooth connection, which facilitates data transmission and further interaction between the sound box and the smart wearable device.
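A minimal sketch of the pairing step, using an assumed device interface (enable, in_range_of, pair) rather than any specific Bluetooth stack API:

```python
def connect_speaker_and_wearable(speaker, wearable) -> bool:
    """Turn on Bluetooth on both sides and pair them when in mutual range."""
    speaker.bluetooth.enable()
    wearable.bluetooth.enable()
    if speaker.bluetooth.in_range_of(wearable):  # both within each other's coverage
        return speaker.bluetooth.pair(wearable)  # establish the Bluetooth connection
    return False
```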
As an optional implementation manner, after the Bluetooth of the sound box and the Bluetooth of the smart wearable device are controlled to turn on so that the sound box and the smart wearable device are connected via Bluetooth, if the camera module monitors that the user makes a preset distress action, the sound box can receive the physiological characteristic data of the user collected by the smart wearable device, and judge whether the physiological characteristic data of the user exceeds the preset range of normal physiological characteristic data; if so, it reads a specified alarm number from the address book of the smart wearable device and calls the specified alarm number.
It should be noted that: the preset distress action may be: the camera module is used for shooting the camera module, and the camera module is used for shooting the camera module.
It needs to be further explained that: the audio amplifier can have the scope of the normal physiological characteristic data of predetermineeing in the storage, and after the wearable equipment of intelligence feedbacked user's physiological characteristic data, the audio amplifier can inquire whether the physiological characteristic data of the user who wears this wearable equipment of intelligence surpassed the scope of the normal physiological characteristic data of predetermineeing, if surpass, read appointed alarm number from the address list of wearable equipment of intelligence to call out appointed alarm number.
For example: the normal heart rate range (namely the range of normal physiological characteristic data) stored in the sound box is 60-100 times/min, and when the heart rate of a user is 50-60 times/min or 100-110 times/min, the user is in a sub-health state; a dangerous situation is when the heart rate of the user is less than 45 beats/minute or more than 115 beats/minute. Therefore, when the wearable device judges that the heart rate data of the wearer exceeds 60-100 times/minute, the specified alarm number can be read from the address list of the intelligent wearable device, and the specified alarm number is called.
By implementing the method, when the user of the smart wearable device is unwell (for example, has a headache) or in danger (for example, a robbery at home), makes the preset distress action, and has physiological characteristic data that exceeds the preset range of normal physiological characteristic data, the sound box can automatically read the specified alarm number from the address book of the smart wearable device and call that number. This reduces the personal safety risk to the wearer of the smart wearable device and helps prevent tragedies.
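The distress-handling branch might be organized as sketched below; the 60-100 beats/minute range is the example range quoted above, and the camera, speaker, and wearable interfaces are assumptions for illustration.

```python
NORMAL_HEART_RATE_RANGE = (60, 100)  # example normal range from the description, beats/minute

def handle_distress(camera, speaker, wearable):
    """Call the specified alarm number if a distress action plus abnormal data is detected."""
    if not camera.detected_preset_distress_action():
        return
    heart_rate = wearable.read_heart_rate()          # physiological characteristic data
    low, high = NORMAL_HEART_RATE_RANGE
    if not (low <= heart_rate <= high):              # outside the preset normal range
        number = wearable.address_book.get("alarm")  # the specified alarm number
        if number:
            speaker.place_call(number)
```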
It can be seen that, compared with the method described in fig. 1, implementing the method described in fig. 2 further allows the sound box, when it determines from the image information collected by the camera module that the user wears a smart wearable device, to automatically establish a Bluetooth connection with the smart wearable device worn by the user, which facilitates data transmission and further interaction between the sound box and the smart wearable device.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of a sound box according to an embodiment of the present invention. As shown in fig. 3, the sound box may include:
the first determining unit 301 is configured to, when it is detected that the sound box starts a target function that needs to use the camera module, determine, according to the target function, a target machine position to which the camera module of the sound box needs to be adjusted;
the second determining unit 302 is configured to determine, according to the initial machine position at which the camera module is currently located and the target machine position, a target amplitude by which the camera module needs to pop up and/or a target angle to which it needs to rotate;
the first control unit 303 is configured to control the camera module to pop up by the target amplitude and/or rotate to the target angle, so that the camera module can acquire the image information required for executing the target function.
It can be seen that, with the sound box described in fig. 3, when it is detected that the sound box starts a target function that needs to use the camera module, the target machine position to which the camera module of the sound box needs to be adjusted can be determined according to the target function; the target amplitude by which the camera module needs to pop up and/or the target angle to which it needs to rotate can be determined according to the initial machine position at which the camera module is currently located and the target machine position to be adjusted to; and the camera module can then be controlled to pop up by the target amplitude and/or rotate to the target angle, so that it can acquire the image information required for executing the target function. Compared with a traditional sound box whose camera module can only shoot scenes at a certain fixed height, the sound box of the embodiment of the invention can adjust its camera module to different machine positions according to the different functions that are started, so that the camera module can shoot scenes at different heights and in different planes. The camera module can therefore collect the effective image information required when the sound box executes different functions, which improves the success rate of the sound box in executing functions.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of another sound box disclosed in the embodiment of the present invention. The sound box shown in fig. 4 is optimized from the sound box shown in fig. 3. Compared to the loudspeaker shown in fig. 3, the loudspeaker shown in fig. 4 may further comprise:
the identification unit 304 is configured to, after the first control unit 303 controls the camera module to pop up by the target amplitude and/or rotate to the target angle, control the camera module to shoot person image information around the sound box and identify the facial features of the person in the person image information if it is determined that the target function is playing a sleep-aid song;
a first judgment unit 305, configured to judge whether the facial features of the person in the person image information match preset facial features, where the preset facial features are facial features indicating that the person is in a sleep state;
and a second control unit 306, configured to control the sound box to stop playing the sleep-aid song when the first judgment unit 305 determines that the facial features of the person in the person image information match the preset facial features.
As an alternative embodiment, the sound box shown in fig. 4 may further include:
the acquisition unit 307 is configured to control the camera module to acquire the light intensity of the lamps around the sound box after the first judgment unit 305 determines that the facial features of the person in the person image information match the preset facial features;
a second judging unit 308, configured to judge whether light intensity of a lamp around the sound box is higher than a preset intensity threshold;
a third control unit 309, configured to control the lamp to turn off when the second determining unit 308 determines that the light intensity of the lamp around the sound box is higher than the preset intensity threshold, where the lamp is in communication connection with the sound box.
By implementing the method, the sound box can also turn off the lamps around the sound box when it judges that the user has fallen asleep, thereby creating a comfortable sleep environment for the user and improving the user experience.
As an alternative embodiment, the sound box shown in fig. 4 may further include:
the third judging unit 310 is configured to judge whether the wrist of the user wears the intelligent wearable device according to the hand image information if the camera module collects the hand image information of the user after the first control unit 303 controls the camera module to pop up and/or rotate to a target angle by a target amplitude;
the establishing unit 311 is configured to, when the third determining unit 310 determines that the smart wearable device is worn on the wrist of the user, control bluetooth of the speaker to be turned on and bluetooth of the smart wearable device to be turned on, so that the speaker and the smart wearable device establish bluetooth connection.
By implementing the method, when the sound box judges that the intelligent wearable equipment is worn by the user in the image information acquired by the camera module, the sound box is automatically controlled to establish Bluetooth connection with the intelligent wearable equipment worn by the user, so that the sound box and the intelligent wearable equipment can conveniently perform data transmission and further interaction.
As an alternative embodiment, the sound box shown in fig. 4 may further include:
the receiving unit 312 is configured to, after the establishing unit 311 controls bluetooth activation of the sound box and bluetooth activation of the smart wearable device, establish bluetooth connection between the sound box and the smart wearable device, receive physiological characteristic data of the user acquired by the smart wearable device if it is monitored that the user performs a preset distress operation through the camera module;
a fourth judging unit 313, configured to judge whether the physiological characteristic data of the user exceeds a preset range of normal physiological characteristic data;
a calling unit 314, configured to read a specified alarm number from the address book of the smart wearable device and call the specified alarm number when the fourth determining unit 313 determines that the physiological characteristic data of the user exceeds the preset range of the normal physiological characteristic data.
By implementing the method, when the user of the smart wearable device is unwell (for example, has a headache) or in danger (for example, a robbery at home), makes the preset distress action, and has physiological characteristic data that exceeds the preset range of normal physiological characteristic data, the sound box can automatically read the specified alarm number from the address book of the smart wearable device and call that number. This reduces the personal safety risk to the wearer of the smart wearable device and helps prevent tragedies.
As an optional implementation manner, the sound box shown in fig. 4 may further obtain the real-time coordinates of the authenticated finger stall when identifying verification information (e.g., a two-dimensional code) on the finger stall, and determine a machine position capable of shooting the real-time coordinates of the finger stall as a target machine position; then, the sound box can control the camera module of the sound box to adjust to a target machine position (for example, ascending, descending or rotating) based on the real-time coordinate of the finger stall, so that the camera module can track the real-time coordinate of the finger stall to move, and the camera module of the sound box can shoot image information of the position where the finger stall is located.
By implementing the method, the sound box can control the camera module to move along with the movement of the finger stall when the authenticated finger stall is identified, so that a user can control the camera position of the camera module through the finger, and the user can remotely control the camera position of the camera module.
As another optional implementation manner, while the sound box shown in fig. 4 controls its camera module to adjust to the target machine position based on the real-time coordinates of the finger stall so that the camera module moves by tracking the real-time coordinates of the finger stall, the sound box can judge, from the hand image information of the user wearing the finger stall fed back by the camera module, whether the user makes a preset gesture indicating that the video recording function should start; if such a gesture is detected, the video recording function of the camera module is started. While the video recording function is in use, if a preset gesture indicating that video recording should stop is detected, the camera module can be controlled to stop recording;
or, while the camera module is moving by tracking the real-time coordinates of the finger stall, the sound box can judge, from the hand image information of the user wearing the finger stall fed back by the camera module, whether the user makes a preset gesture indicating that the photographing function should start; if such a gesture is detected, the photographing function of the camera module is started.
By implementing the method, the sound box can also start the video recording function or the photographing function of the camera module when it detects that the user makes a preset gesture, so as to perform the subsequent video recording or photographing work; using a gesture as the trigger condition of the video recording function or the photographing function is therefore more convenient than triggering with a traditional button, and improves the user experience.
As another optional implementation manner, after the sound box shown in fig. 4 detects that the user makes a preset gesture indicating that the video recording function is stopped and controls the camera module to stop recording, the sound box may further store the video in the cache, determine a mailbox address bound by the verification information according to the verification information on the identified finger stall, and send the video stored in the cache to the mailbox address for the user to query and use.
By implementing the method, the sound box can also automatically send the video shot by the camera module to the mailbox address set by the user for the user to inquire and use, so that the use experience of the user is improved.
It can be seen that, compared with the sound box shown in fig. 3, the sound box shown in fig. 4 can also judge whether the user has fallen asleep while playing a sleep-aid song, and control the sound box to stop playing the sleep-aid song if it judges that the user has fallen asleep, thereby creating a comfortable sleep environment for the user and improving the user experience.
EXAMPLE five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another sound box disclosed in the embodiment of the present invention. As shown in fig. 5, the sound box may include:
a memory 501 in which executable program code is stored;
a processor 502 coupled to a memory 501;
the processor 502 calls the executable program code stored in the memory 501 to execute the method for adjusting the speaker camera module shown in any one of fig. 1 to 2.
It should be noted that, in this embodiment of the application, the sound box shown in fig. 5 may further include components not shown in the figure, such as a speaker module, a camera module, a display screen, a light projection module, a battery module, a wireless communication module (such as a mobile communication module, a WIFI module, a Bluetooth module, etc.), a sensor module (such as a proximity sensor, a pressure sensor, etc.), an input module (such as a microphone and a button), and a user interface module (such as a charging interface, an external power supply interface, a card slot, a wired earphone interface, etc.).
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute an adjusting method of a sound box camera module shown in any one of figures 1-2.
The embodiment of the present invention also discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary and alternative embodiments, and that the acts and modules illustrated are not required in order to practice the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, and may specifically be a processor in the computer device) to execute all or part of the steps of the methods of the embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium, where the storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, a magnetic disk, a magnetic tape, or any other medium that can be used to carry or store data and that can be read by a computer.
The method for adjusting the sound box camera module and the sound box disclosed in the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation on the present invention.
Claims (10)
1. A method for adjusting a sound box camera module is characterized by comprising the following steps:
when it is detected that the sound box starts a target function that requires the camera module, determining, according to the target function, a target machine position to which the camera module of the sound box needs to be adjusted;
determining, according to an initial machine position where the camera module is currently located and the target machine position, a target amplitude by which the camera module needs to pop up and/or a target angle to which the camera module needs to rotate;
and controlling the camera module to pop up by the target amplitude and/or rotate to the target angle, so that the camera module can acquire the image information required for executing the target function.
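For illustration only (this is not part of the claims), a minimal sketch of the claimed adjustment flow, assuming a fixed table that maps each target function to the camera position it needs; the function names, positions, and print-based actuator calls are placeholders.

```python
# Illustrative sketch of the adjustment flow in claim 1. The mapping table and
# the numbers are assumptions; motor control is a print() placeholder.
from dataclasses import dataclass

@dataclass
class CameraPosition:
    height_mm: float   # how far the camera module is popped up
    angle_deg: float   # rotation angle of the camera module

# Assumed mapping from target function to the machine position it needs.
TARGET_POSITIONS = {
    "video_call":    CameraPosition(height_mm=30.0, angle_deg=0.0),
    "sleeping_song": CameraPosition(height_mm=15.0, angle_deg=45.0),
    "gesture_watch": CameraPosition(height_mm=30.0, angle_deg=90.0),
}

def adjust_camera(current: CameraPosition, target_function: str) -> CameraPosition:
    target = TARGET_POSITIONS[target_function]
    pop_up = target.height_mm - current.height_mm   # target amplitude
    rotate = target.angle_deg - current.angle_deg   # target rotation
    if pop_up:
        print(f"pop up camera module by {pop_up} mm")   # placeholder actuator call
    if rotate:
        print(f"rotate camera module by {rotate} deg")  # placeholder actuator call
    return target

# Example: the sound box starts a video call from the fully retracted position.
adjust_camera(CameraPosition(0.0, 0.0), "video_call")
```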
2. The method of claim 1, wherein after controlling the camera module to pop up by the target amplitude and/or rotate to the target angle, the method further comprises:
if the target function is playing a sleeping song, controlling the camera module to capture person image information around the sound box, and identifying facial features of a person in the person image information;
judging whether the facial features of the person in the person image information match preset facial features, wherein the preset facial features are facial features indicating that the person is in a sleep state;
and if they match, controlling the sound box to stop playing the sleeping song.
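For illustration only, a minimal sketch of the sleep check in claim 2, assuming the preset facial feature is modelled as an eyes-open ratio held below a threshold for several consecutive frames; the classes, threshold, and frame values are assumptions, not the patent's implementation.

```python
# Sketch: stop the sleeping song once captured facial features look "asleep".
from dataclasses import dataclass

@dataclass
class FacialFeatures:
    eyes_open_ratio: float  # 0.0 = fully closed, 1.0 = fully open

ASLEEP_EYES_THRESHOLD = 0.2   # assumed preset facial feature
CONSECUTIVE_FRAMES = 5        # assumed debounce against blinking

def looks_asleep(history: list[FacialFeatures]) -> bool:
    recent = history[-CONSECUTIVE_FRAMES:]
    return (len(recent) == CONSECUTIVE_FRAMES and
            all(f.eyes_open_ratio < ASLEEP_EYES_THRESHOLD for f in recent))

# Usage with a made-up frame sequence: eyes drift shut, playback stops.
frames = [FacialFeatures(0.8), FacialFeatures(0.3)] + [FacialFeatures(0.1)] * 5
history: list[FacialFeatures] = []
for f in frames:
    history.append(f)
    if looks_asleep(history):
        print("user judged asleep -> stop playing the sleeping song")
        break
```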
3. The method according to claim 2, wherein after it is determined that the facial features of the person in the person image information match the preset facial features, the method further comprises:
controlling the camera module to acquire the light intensity of lamps around the sound box;
judging whether the light intensity of the lamps around the sound box is higher than a preset intensity threshold;
and if the light intensity of the lamps around the sound box is higher than the preset intensity threshold, controlling the lamps to be turned off, wherein the lamps are in communication connection with the sound box.
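For illustration only, a minimal sketch of the lamp control in claim 3, assuming a lux threshold and a simple lamp object standing in for lamps that have a communication connection with the sound box; the values and the turn-off call are placeholders.

```python
# Sketch: if the measured light intensity exceeds the preset threshold,
# turn off every connected lamp.
from dataclasses import dataclass

PRESET_INTENSITY_THRESHOLD = 50.0  # assumed threshold in lux

@dataclass
class Lamp:
    name: str
    is_on: bool = True

    def turn_off(self) -> None:        # placeholder for the real lamp command
        self.is_on = False
        print(f"{self.name} lamp turned off")

def handle_asleep_lighting(measured_lux: float, lamps: list[Lamp]) -> None:
    if measured_lux > PRESET_INTENSITY_THRESHOLD:
        for lamp in lamps:
            lamp.turn_off()

handle_asleep_lighting(120.0, [Lamp("bedside"), Lamp("ceiling")])
```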
4. The method of claim 1, wherein after controlling the camera module to pop up by the target amplitude and/or rotate to the target angle, the method further comprises:
if the camera module collects hand image information of a user, judging, according to the hand image information, whether the wrist of the user is wearing an intelligent wearable device;
and if the wrist of the user is wearing the intelligent wearable device, controlling the Bluetooth of the sound box to be turned on and the Bluetooth of the intelligent wearable device to be turned on, so that a Bluetooth connection is established between the sound box and the intelligent wearable device.
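For illustration only, a minimal sketch of the wearable check in claim 4; the image classifier and the Bluetooth pairing calls are placeholders, since the patent does not specify them.

```python
# Sketch: classify a hand image for a wrist-worn device and, if found,
# trigger Bluetooth pairing between the sound box and the wearable.
def wrist_has_wearable(hand_image: bytes) -> bool:
    # Placeholder classifier: a real system would run object detection here.
    return b"watch" in hand_image

def maybe_pair_wearable(hand_image: bytes) -> bool:
    if wrist_has_wearable(hand_image):
        print("enable sound box Bluetooth")            # placeholder
        print("request wearable to enable Bluetooth")  # placeholder
        print("establish Bluetooth connection")        # placeholder
        return True
    return False

maybe_pair_wearable(b"...hand wearing a watch...")
```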
5. The method of claim 4, wherein after controlling the Bluetooth of the sound box and the Bluetooth of the intelligent wearable device to be turned on so that a Bluetooth connection is established between the sound box and the intelligent wearable device, the method further comprises:
if the camera module monitors that the user makes a preset distress action, receiving physiological characteristic data of the user collected by the intelligent wearable device;
and judging whether the physiological characteristic data of the user exceeds the range of preset normal physiological characteristic data, and if so, reading a specified alarm number from the address list of the intelligent wearable device and calling the specified alarm number.
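For illustration only, a minimal sketch of the alarm flow in claim 5, assuming example normal ranges for heart rate and blood oxygen; the ranges, contact list, fallback number, and dialling call are assumptions.

```python
# Sketch: after a distress gesture, check the wearable's physiological data
# against preset normal ranges and dial the designated alarm number if abnormal.
NORMAL_RANGES = {                 # assumed preset normal physiological ranges
    "heart_rate": (50, 120),      # beats per minute
    "blood_oxygen": (94, 100),    # percent SpO2
}

def out_of_range(data: dict[str, float]) -> bool:
    return any(not (lo <= data.get(key, lo) <= hi)
               for key, (lo, hi) in NORMAL_RANGES.items())

def handle_distress(data: dict[str, float], contacts: dict[str, str]) -> None:
    if out_of_range(data):
        number = contacts.get("alarm", "112")   # designated alarm number (assumed fallback)
        print(f"dialling {number}")             # placeholder for the real call

handle_distress({"heart_rate": 150, "blood_oxygen": 91}, {"alarm": "120"})
```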
6. A sound box, comprising:
a first determining unit, configured to determine, according to a target function, a target machine position to which the camera module of the sound box needs to be adjusted when it is detected that the sound box starts the target function that requires the camera module;
a second determining unit, configured to determine, according to an initial machine position where the camera module is currently located and the target machine position, a target amplitude by which the camera module needs to pop up and/or a target angle to which the camera module needs to rotate;
and a first control unit, configured to control the camera module to pop up by the target amplitude and/or rotate to the target angle, so that the camera module can acquire the image information required for executing the target function.
7. The sound box according to claim 6, further comprising:
an identification unit, configured to, after the first control unit controls the camera module to pop up by the target amplitude and/or rotate to the target angle, control the camera module to capture person image information around the sound box and identify facial features of a person in the person image information if the target function is playing a sleeping song;
a first judging unit, configured to judge whether the facial features of the person in the person image information match preset facial features, wherein the preset facial features are facial features indicating that the person is in a sleep state;
and a second control unit, configured to control the sound box to stop playing the sleeping song when the first judging unit judges that the facial features of the person in the person image information match the preset facial features.
8. The sound box according to claim 7, further comprising:
an acquisition unit, configured to control the camera module to acquire the light intensity of lamps around the sound box after the first judging unit judges that the facial features of the person in the person image information match the preset facial features;
a second judging unit, configured to judge whether the light intensity of the lamps around the sound box is higher than a preset intensity threshold;
and a third control unit, configured to control the lamps to be turned off when the second judging unit judges that the light intensity of the lamps around the sound box is higher than the preset intensity threshold, wherein the lamps are in communication connection with the sound box.
9. The sound box according to claim 6, further comprising:
a third judging unit, configured to, after the first control unit controls the camera module to pop up by the target amplitude and/or rotate to the target angle, judge, according to hand image information of a user collected by the camera module, whether the wrist of the user is wearing an intelligent wearable device;
and an establishing unit, configured to control the Bluetooth of the sound box and the Bluetooth of the intelligent wearable device to be turned on when the third judging unit judges that the wrist of the user is wearing the intelligent wearable device, so that a Bluetooth connection is established between the sound box and the intelligent wearable device.
10. The sound box according to claim 9, further comprising:
a receiving unit, configured to receive physiological characteristic data of the user collected by the intelligent wearable device if the camera module monitors that the user makes a preset distress action, after the establishing unit controls the Bluetooth of the sound box and the Bluetooth of the intelligent wearable device to be turned on so that a Bluetooth connection is established between the sound box and the intelligent wearable device;
a fourth judging unit, configured to judge whether the physiological characteristic data of the user exceeds the range of preset normal physiological characteristic data;
and a calling unit, configured to read a specified alarm number from the address list of the intelligent wearable device and call the specified alarm number when the fourth judging unit judges that the physiological characteristic data of the user exceeds the range of preset normal physiological characteristic data.
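For illustration only, a rough sketch of how the units claimed in claims 6 to 10 could map onto a single class; the method names mirror the claim language, and the bodies are intentionally left empty because the patent does not prescribe an implementation.

```python
# Illustrative class skeleton for the claimed sound box units (claims 6-10).
class SoundBox:
    def determine_target_position(self, target_function):      # first determining unit
        ...
    def determine_pop_up_and_angle(self, current, target):     # second determining unit
        ...
    def control_camera_module(self, amplitude, angle):         # first control unit
        ...
    def stop_sleeping_song_if_asleep(self, facial_features):   # first judging + second control unit
        ...
    def turn_off_bright_lamps(self, light_intensity):          # acquisition + second judging + third control unit
        ...
    def pair_wearable_if_worn(self, hand_image):               # third judging + establishing unit
        ...
    def call_alarm_if_abnormal(self, physiological_data):      # receiving + fourth judging + calling unit
        ...
```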
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911003421.9A CN111182201A (en) | 2019-10-22 | 2019-10-22 | Method for adjusting sound box camera module and sound box |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111182201A true CN111182201A (en) | 2020-05-19 |
Family
ID=70651866
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911003421.9A Pending CN111182201A (en) | 2019-10-22 | 2019-10-22 | Method for adjusting sound box camera module and sound box |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111182201A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103582188A (en) * | 2012-07-30 | 2014-02-12 | 飞雕电器集团有限公司 | Mobile gateway device |
CN106097651A (en) * | 2016-08-17 | 2016-11-09 | 广东小天才科技有限公司 | Automatic alarm method based on wearable device and wearable device |
CN106960530A (en) * | 2017-03-21 | 2017-07-18 | 广东欧珀移动通信有限公司 | Alarm method, device and terminal based on vital sign parameter |
CN107750012A (en) * | 2017-10-27 | 2018-03-02 | 珠海市魅族科技有限公司 | A kind of video broadcasting method and device |
CN108156542A (en) * | 2017-12-25 | 2018-06-12 | 广州市尊浪电器有限公司 | A kind of intelligence outdoor video speaker |
CN208457505U (en) * | 2018-07-06 | 2019-02-01 | 深圳中科信迅信息技术有限公司 | A kind of testimony of a witness verification terminal with Intelligent telescopic rotating camera |
CN110087164A (en) * | 2019-05-21 | 2019-08-02 | 出门问问信息科技有限公司 | A kind of speaker |
Similar Documents
Publication | Title
---|---
CN108833818B (en) | Video recording method, device, terminal and storage medium
CN105323648B (en) | Caption concealment method and electronic device
CN108509033B (en) | Information processing method and related product
CN110740262A (en) | Background music adding method and device and electronic equipment
CN106685459B (en) | Wearable device operation control method and wearable device
CN109144245B (en) | Equipment control method and related product
CN108848394A (en) | Net cast method, apparatus, terminal and storage medium
CN108959273B (en) | Translation method, electronic device and storage medium
CN109214301A (en) | Control method and device based on recognition of face and gesture identification
CN112990909A (en) | Voice payment method and electronic equipment
KR102395888B1 (en) | Method for detecting input using audio signal and apparatus thereof
CN110096251A (en) | Exchange method and device
KR20200092207A (en) | Electronic device and method for providing graphic object corresponding to emotion information thereof
EP3793275B1 (en) | Location reminder method and apparatus, storage medium, and electronic device
CN110177239B (en) | Video call method based on remote control and wearable device
CN111176435A (en) | User behavior-based man-machine interaction method and sound box
CN109117819B (en) | Target object identification method and device, storage medium and wearable device
CN110337030B (en) | Video playing method, device, terminal and computer readable storage medium
CN110174988B (en) | Learning method based on wearable device and wearable device
CN111182201A (en) | Method for adjusting sound box camera module and sound box
CN110197569B (en) | Safety monitoring method based on wearable device and wearable device
CN112133296A (en) | Full-duplex voice control method, device, storage medium and voice equipment
CN111652624A (en) | Ticket buying processing method, ticket checking processing method, device, equipment and storage medium
CN112305927A (en) | Equipment control method and device
CN112218196A (en) | Earphone and earphone control method
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200519 |