CN112367431A - Method for controlling intelligent electronic equipment, intelligent electronic equipment and intelligent earphone - Google Patents

Info

Publication number
CN112367431A
CN112367431A
Authority
CN
China
Prior art keywords
trigger
triggering
sensor
electronic equipment
intelligent electronic
Prior art date
Legal status
Granted
Application number
CN202011427340.4A
Other languages
Chinese (zh)
Other versions
CN112367431B (en)
Inventor
Not disclosed (不公告发明人)
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202011427340.4A
Publication of CN112367431A
Application granted
Publication of CN112367431B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448: with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454: according to context-related or environment-related conditions
    • H04M 1/72403: with means for local support of applications that increase the functionality
    • H04M 1/72406: by software upgrading or downloading
    • H04M 1/72409: by interfacing with external accessories

Abstract

The invention relates to a method for controlling intelligent electronic equipment. Using a sensor or sensor group, and building on a trigger coding table formed from short, long and longer trigger durations, the method combines directional triggering, pressure triggering and/or speed triggering of the sensor or sensor group to form a two- to multi-dimensional trigger coding table. The codes are combined with the functions and states of the controlled intelligent electronic equipment or of an APP running on it to form trigger instructions, so that under the corresponding function and state the equipment is operated and controlled by triggering the sensor. Instruction triggering is generally accompanied by prompt information the user can perceive, such as voice, sound, TTS, vibration and visual information.

Description

Method for controlling intelligent electronic equipment, intelligent electronic equipment and intelligent earphone
Technical Field
The method uses a sensor or sensor group that can sense the trigger duration of an external object, combined with the ability to sense direction, speed and pressure. External triggers are monitored and identified, and a multi-dimensional trigger coding table is formed by making the trigger duration orthogonal to any one, two or more of directional triggering, speed triggering and pressure triggering. The multi-dimensional trigger coding table is combined with the function and state of the target controlled electronic equipment, or with the function, state and usage scenario of an APP running on it, to form instruction codes. After a corresponding instruction trigger is detected under the corresponding state and function, the corresponding instruction and function are executed, so that a user of an intelligent terminal or other intelligent electronic equipment can operate and control it through contact or non-contact triggering of the sensor or sensor group, under the prompting of signals the user can recognize, such as voice, sound, vibration and TTS. The method is aimed not merely at controlling certain functions of the intelligent electronic equipment, but at operating it effectively as a whole.
The method is particularly suitable for driving, sports and other non-static or quasi-static scenes in which intelligent electronic equipment such as an intelligent terminal or smartphone cannot be controlled by hand. It is an effective solution to the problem that existing intelligent electronic equipment cannot be used safely and conveniently while moving or driving, and an effective supplement for operating electronic equipment in non-static scenes. It is also suitable for controlling electromechanical equipment that is driven by intelligent electronic equipment through a sensor, and is especially useful in special fields. In military sniping, for example, the method would let a sniper stay in close communication with a rear command unit without disturbing the aim; today one hand typically controls the rifle while the other aims, so communicating with command over a walkie-talkie reduces aiming accuracy and loses fighting opportunities. Likewise, as individual-soldier combat systems grow ever more intelligent, a fighter who must hold a weapon while operating electronic equipment such as a communication system with one hand loses many good fighting opportunities.
Background
People usually control electronic devices through "keys", which are typically physical keys or "keys" defined on a touch screen. As intelligent electronic devices, intelligent terminals, smartphones and the like have gained increasingly complex functions, users must locate a "key" with their eyes and press it with a finger, a control pattern everyone has long been familiar with. Traffic-accident statistics from major countries indicate that using a mobile phone while driving is now one of the leading causes of traffic accidents. In many sports scenes people cannot do without intelligent electronic equipment, yet controlling it still requires the cooperation of eyes and fingers, forcing the user to slow down, change or interrupt the original motion. For people who work outdoors, such as couriers, rain wets the screens of smartphones and terminals, making the devices impossible or inconvenient to operate. In cold weather, people wearing gloves must take them off and pull out the device to operate it. The industry has no practical solution for controlling intelligent electronic equipment in these scenes. Although voice recognition can be used for control, it cannot meet normal control requirements under outdoor wind noise, vehicle body noise while driving and similar conditions, and an individual-soldier system cannot rely on voice recognition during combat.
Voice recognition is limited by the signal-to-noise ratio of its usage scene, and noise during motion and driving cannot be avoided, so a more practical technology and method is needed. The present method uses a device containing a sensor or sensor group connected to the intelligent electronic equipment by wire or wirelessly, or a sensor or sensor group built into the equipment itself. Triggering of the sensor or sensor group is monitored, and a trigger code is formed from the trigger direction, trigger speed, trigger strength and trigger duration; this code is combined (orthogonally) with the state and function of the intelligent electronic equipment to form a trigger instruction. The user triggers the sensor under the prompting of voice, sound, vibration or other recognizable signals, and when the corresponding trigger is detected in the corresponding state and function, the corresponding instruction or function is executed. A simple touch technique suitable for sports and driving is thereby brought into daily life, letting people use intelligent electronic equipment in many scenes more safely, freely and conveniently. Intelligent electronic equipment usually contains a central processing unit (CPU) for control, and the user interacts with it through a human-computer interface to operate and control it. Such equipment includes smartphones, intelligent terminals, intelligent walkie-talkies, smart watches, smart headsets, control panels in automobiles, and other electronic equipment whose control system contains a CPU or similar program-control capability.
The method is an optimization of and supplement to the present inventor's earlier invention WO2016192622A1 and granted patent CN2016103632799 for more comprehensive scenes. By adding pressure and speed triggering, and making the duration-based (or duration-and-direction-based) trigger coding further orthogonal to trigger speed or trigger pressure, more usage scenes are covered. For example, in a trekking-pole scene, adding the pressure dimension makes the scene clearly more usable than duration-only control; in a military scene, a pressure-type trigger instruction lets a soldier in individual combat control electronic equipment without interfering with fighting, removing a constraint that exists in actual combat today.
The industry has explored voice recognition, lip-reading recognition and gesture radar. In sports, driving, all-weather and medium-to-fast motion scenes, however, the recognition rate of voice recognition cannot meet the basic requirements of controlling intelligent electronic equipment, because of environmental noise and wind noise (caused by natural wind or by the speed of motion). Lip-reading recognition requires good illumination and dedicated equipment, space and position for capturing lip movement, which is clearly infeasible while running or otherwise in motion, and at night it suffers from lack of lighting (the all-weather problem). Lip-reading fits the driving scene better than voice recognition, but it depends on in-vehicle lighting, and it is well known that excessive in-vehicle lighting at night directly impairs observation of outside objects and causes traffic accidents. Gesture radar is clearly unsuitable while running or doing other sports, because in motion it is difficult to keep the hand inside the radar's detection area to perform continuous gestures, and gloves must be removed in winter, so the technology cannot be adopted in many practical scenes.
Touch technology has been neglected by the industry. The essence of the present method is to combine several dimensions of sensor triggering orthogonally with the state and function of the controlled equipment, and to control specific functions or state switches by triggering the sensor under the prompting of voice, sound and vibration, so that intelligent electronic equipment can still serve people in scenes where it cannot be used well today. In special environments such as military combat, technologies like voice recognition cannot be used in at least the individual-soldier link, but the method effectively lets an operator control individual-soldier electronic equipment such as communications without affecting combat. Today, controlling such equipment in actual combat occupies one hand that should be operating the firearm, making the operator lose opportunities or incur unnecessary risks, a problem that so far has had no effective solution.
Disclosure of Invention
To overcome the problem that current intelligent electronic equipment forces the user to rely on hands and eyes, the method uses a sensor or sensor group that can sense the trigger direction, trigger speed, trigger strength and trigger duration of an external object. By monitoring and identifying external triggers, a trigger code is formed from the trigger duration, direction, speed, strength and so on, and is combined (orthogonally) with the functions and states of the intelligent terminal, smartphone or other intelligent electronic equipment, or with the functions and states of an APP running on it, to form instruction codes. When the corresponding trigger is detected in the corresponding state and function, the corresponding instruction is identified and the corresponding instruction and function are executed. A user can therefore control the equipment by triggering the sensor under combined audible, tactile and visual prompts such as voice, sound and vibration. The method lets a user control intelligent electronic equipment through a sensor or sensor group under the prompting of voice, sound, vibration and TTS, completing operations that would otherwise require eyes and fingers without the participation of eyes, fingers or even hands.
Various sensors can sense the external trigger direction, such as radar, gesture sensors, two or more proximity sensors separated by a distance (e.g., 5 cm), conductive fiber fabrics, touch screens and the like. These can be classified by trigger type into contact triggering and non-contact triggering; for example, a conductive fiber fabric sensor must be touched, while radar and gesture radar are triggered without contact.
Many sensors can sense speed. For example, with two high-precision proximity sensors 5 cm apart, if the first is triggered at time t1 and the second at time t2, the speed is 5 cm/(t2-t1). For a radar sensor of sufficient precision, two or more receivers of the reflected wave can be used to calculate the speed of the triggering object. Approach speed can also be measured, for example by a high-precision proximity sensor or a laser-pulse reflection sensor. On a touch pad or screen, when a finger slides from point A (x1, y1) to point B (x2, y2), the distance between the two points divided by the elapsed time distinguishes fast from slow movement; in music control, for example, a fast swipe can mean fast forward and a slow swipe the next song, and combined with directionality this yields fast forward (fast rewind) or next (previous) song. This is a very useful touch method for both contact and non-contact sensors. In cycling, or in a sports helmet for downhill skiing, the trigger duration can be made orthogonal to the trigger direction and speed, so that the user can control a communication or entertainment player by non-contact triggering without removing sports gloves; today, a downhill skier cannot operate intelligent electronic equipment at all while in motion.
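As a minimal illustration of the speed calculation above, the following sketch derives a trigger speed from two proximity sensors mounted 5 cm apart and maps fast/slow plus direction to a music command, as in the fast-forward versus next-song example. The speed threshold and command names are assumptions, not taken from the patent.

```python
# Sketch (assumed threshold and command names): trigger speed from two
# proximity sensors 5 cm apart, mapped to a music command by speed + direction.

SENSOR_SPACING_CM = 5.0      # distance between the two sensors (from the text)
FAST_THRESHOLD_CM_S = 25.0   # assumed boundary between "fast" and "slow"

def trigger_speed(t1: float, t2: float) -> float:
    """Speed of the passing object: 5 cm / (t2 - t1), times in seconds."""
    return SENSOR_SPACING_CM / (t2 - t1)

def music_command(t1: float, t2: float, direction: str) -> str:
    """Combine speed class with directionality, as the text describes."""
    fast = trigger_speed(t1, t2) >= FAST_THRESHOLD_CM_S
    if direction == "forward":
        return "fast_forward" if fast else "next_track"
    return "rewind" if fast else "previous_track"

print(music_command(0.00, 0.10, "forward"))  # 50 cm/s -> fast_forward
print(music_command(0.00, 0.50, "forward"))  # 10 cm/s -> next_track
```

A slow backward swipe would analogously yield "previous_track"; only the threshold separates the two speed classes.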
Pressure sensors are typically used to measure changes in pressure value, but in the present method the light and heavy pressure values, together with the trigger duration, trigger time and the relation between adjacent triggers, can also be used to form a trigger code rather than merely a pressure measurement.
All such sensors or sensor groups can measure trigger duration, so the method can form multi-dimensional trigger codes by orthogonally combining two or more trigger attributes such as duration, direction, speed and pressure. Any sensor based on acoustic, optical, electrical or magnetic fields, or any combination of them, can be used to control intelligent electronic equipment under this method, as long as it can accurately and sensitively provide the trigger duration, trigger direction, trigger pressure and trigger speed, and an orthogonal combination of two or more of them, to form a trigger code that is then combined with the state and function of the controlled object.
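The orthogonal combination described above can be pictured as a lookup keyed by several trigger dimensions at once, together with the device state. In this hypothetical sketch, every table entry, state name and command name is an illustrative assumption:

```python
# Sketch: a multi-dimensional trigger code as an orthogonal combination of
# duration, direction and pressure, looked up together with the device state.
# All entries below are assumptions for illustration.

COMMANDS = {
    # (state, duration, direction, pressure) -> command
    ("music_playing", "short", "forward", "light"): "next_track",
    ("music_playing", "long",  "forward", "light"): "fast_forward",
    ("music_playing", "short", "forward", "heavy"): "volume_up",
    ("phone_ringing", "short", None,      "light"): "answer_call",
}

def decode(state, duration, direction=None, pressure="light"):
    """Return the command for this state + trigger-code tuple, or None."""
    return COMMANDS.get((state, duration, direction, pressure))

print(decode("music_playing", "short", "forward"))  # next_track
print(decode("phone_ringing", "short"))             # answer_call
```

The same physical trigger (a short forward swipe, say) decodes to different commands in different states, which is exactly the state-orthogonality the method relies on.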
Drawings
The method is further explained below with reference to the accompanying drawings.
FIG. 1: schematic diagram of the method.
Table 1.
FIG. 2: trigger duration pulse patterns corresponding to table 1 proximity sensor example.
FIG. 3: embodiments for controlling telephone communications with a proximity sensor on a fully wireless headset.
FIG. 4: embodiments of controlling music with a proximity sensor on a fully wireless headset.
FIG. 5: embodiments of concealed distress using a proximity sensor on a fully wireless headset.
Table 2.
FIG. 6: embodiments of video control are implemented on a treadmill based on a directional sensor.
Table 3.
Table 4.
Detailed Description
The embodiments and specific parameters such as times and codes described below do not represent all embodiments consistent with the present method; they are merely examples consistent with the method as described in the appended claims. The method uses a separate sensing device interconnected with the intelligent electronic equipment by wire or wirelessly, or a sensor or sensor group integrated into the equipment itself, monitors its triggering, and forms a trigger code from two or more orthogonal attributes among trigger direction, trigger speed, trigger strength and trigger duration. The trigger code is combined with the state and function of the equipment, or with the function, state and usage scenario of an APP running on it, to form a trigger instruction. The user triggers the sensor under the prompting of voice, TTS or vibration, and when the corresponding trigger code together with the corresponding state and function is detected and identified, the corresponding instruction or function is executed, so that the user can use intelligent electronic equipment in more scenes instead of being limited to certain scenes as today.
As shown in fig. 1, S101 initializes the sensor. The purpose is to make the sensor work appropriately for the target scene, for example by setting the sampling frequency, trigger distance, trigger threshold and so on. Initialization is usually performed when the device starts up or when the sensor is first called; once it succeeds, this step is not needed again.
S102 monitors the sensor, i.e. watches for its triggering. When the sensor is integrated into the intelligent electronic equipment, monitoring is usually done by the operating system or by the equipment's circuits and programs. If a separate device containing the sensor controls the main intelligent electronic equipment through a wireless or wired link, circuits or programs in that device must do the monitoring, and the monitored data are fed back to the intelligent electronic equipment over the wired or wireless link for the next step of processing.
Step S103 identifies the trigger. When the user wants to control the equipment through the sensor, step S102 detects the trigger, and step S103 must analyze whether it matches a previously defined trigger code: the trigger itself, its duration, the orthogonal relationship of each dimension, whether it falls inside the corresponding instruction window, and the removal of false triggers. In this step, several parallel timers/clocks are needed to analyze and identify the trigger, or the instruction corresponding to the trigger (or a subsequent instruction) is executed after the trigger time window ends.
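The instruction-window idea in step S103 can be sketched as grouping successive triggers until a silence gap closes the instruction. The 800 ms gap value below is an assumption, not a figure from the patent:

```python
# Sketch of the S103 instruction window: successive trigger symbols are
# collected into one instruction until the gap to the next trigger exceeds
# the window. The 800 ms gap value is an assumption.

WINDOW_GAP_MS = 800

def group_into_instructions(triggers):
    """triggers: list of (start_ms, symbol). Returns instruction strings."""
    instructions, current, last_t = [], [], None
    for t, symbol in triggers:
        if last_t is not None and t - last_t > WINDOW_GAP_MS:
            instructions.append("".join(current))  # window closed
            current = []
        current.append(symbol)
        last_t = t
    if current:
        instructions.append("".join(current))      # flush the final window
    return instructions

# A short then long trigger, then (after a long gap) a lone short trigger:
print(group_into_instructions([(0, "."), (500, "-"), (2000, ".")]))
# ['.-', '.']
```

A real implementation would run this incrementally against a clock rather than over a finished list, but the windowing logic is the same.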
Step S104 identifies the instruction. After the trigger code is recognized, the method must determine, from the state and function of the equipment at that moment combined with the trigger code identified in step S103 and matched against the preset instruction codes, whether it constitutes a legal instruction; that is, state and function are orthogonal to the trigger code. Step S104 therefore needs two inputs, one from S107 and one from S103, and when it confirms an instruction code, step S105 executes the instruction or the corresponding function.
Step S105 executes the instruction or its corresponding function. After execution, the state and function of the equipment usually change; for example, while the music function is in the playing state, if the trigger instruction received is "pause", then after step S105 the state of the music function changes from playing to paused.
Step S106 represents the running state/function of the controlled device, system and APP, such as the telephone function in the dialing state. In intelligent electronic equipment the operating system generally manages all functions and states and links them when they change: for example, if music is playing before a call comes in, the music pauses automatically when the phone rings and resumes automatically after the call ends. A smartphone operating system defines many state flags for system functions or APP calls. An APP must itself manage and monitor the states and functions it defines when they change. For example, when the system plays music, S106 indicates the music function in the playing state; when paused, the music function in the paused state. A hidden SOS function, by contrast, is not currently provided by intelligent terminals or smartphones and can only be managed by a memory-resident APP: when the hidden SOS function is entered from the music state, the music pauses by default, the function-state change is monitored through S107, and S108 prompts the user that the hidden SOS function has been entered or reports its current state.
Step S107 covers monitoring of states that are not defined by the system's own functions, states, or combinations of the two; it provides the monitoring result to step S104 and the corresponding function and state data to step S108. When all functions and states are standard ones of the equipment's system, S106 and S107 are usually integrated and handled by the operating system (typically in different functional modules); when a required function or state is not covered by the operating system, it must be managed and monitored by a program outside it.
Step S108 prompts the user with sound, voice, vibration and so on, that is, with prompts the user can recognize. For example, playing music is itself a status prompt; a ring tone is the status prompt of an incoming call; when menu functions are switched, the user is informed of the menu item through TTS (Text To Speech); and in hidden SOS the user is informed by vibration, so that a voice prompt does not expose and endanger the person seeking help. Some prompts in step S108 therefore follow system procedures, while others must be specially defined, combining TTS, sound, vibration and the like according to the data and scenes provided by step S107. Because the human-computer interaction of this method is oriented to non-static scenes, whereas traditional GUI interaction is oriented to static scenes, and non-static scenes have far more variable factors, more prompting methods are needed. For example, TTS in a mobile phone system supports only a limited set of languages, so a pre-recorded voice clip may have to be played, or the menu information sent over the internet to a server providing TTS service and the returned sound file played. This combination of prompting with the scene is a characteristic of the method that neither multi-touch interaction (invented in 2004) nor mouse-keyboard-plus-GUI interaction (invented in 1963) possesses, because the state of the user changes during movement, driving, combat and similar activities, and the basic premise is that the user cannot rely on eyes and fingers in these scenes.
In step S109, the user determines the current function and state from the prompt in step S108, and triggers the sensor according to the user's needs to switch to the desired function and state.
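Steps S101-S109 can be sketched as a minimal event loop over recognized trigger codes. The sensor input, instruction table, state names and prompt strings below are stubbed assumptions; a real implementation would read the sensor and emit audio or vibration prompts:

```python
# Sketch of the S101-S109 control loop from fig. 1 as a minimal event loop.
# The instruction table maps (state, trigger_code) -> (new_state, prompt).

def control_loop(events, command_table, state="idle"):
    """events: iterable of recognized trigger codes (S102/S103 output)."""
    prompts = []
    for trigger_code in events:                           # S102: monitor
        command = command_table.get((state, trigger_code))  # S103/S104
        if command is None:
            continue                                      # not a legal instruction
        state, prompt = command                           # S105: execute; S106/S107: state change
        prompts.append(prompt)                            # S108: prompt the user
    return state, prompts

# Assumed example table: a long trigger starts music, short triggers toggle pause.
table = {
    ("idle", "long"):           ("music_playing", "beep: play"),
    ("music_playing", "short"): ("music_paused",  "beep: pause"),
    ("music_paused", "short"):  ("music_playing", "beep: play"),
}
final_state, prompts = control_loop(["long", "short", "short"], table)
print(final_state)  # music_playing
```

Note how the same "short" trigger maps to different commands depending on the current state, mirroring the orthogonality of trigger code and state in steps S103-S104.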
It should be noted that when a trigger occurs in step S103, the trigger itself can be prompted through step S108: for example, a recognized short trigger can produce a short "drip" sound and a recognized long trigger a "click" sound, letting the user confirm each trigger and easing human-computer interaction. Short triggers, long triggers and longer-duration triggers are listed in table 1.
Trigger code table exemplified by proximity sensor
Table 1 is a duration-based trigger code table, exemplified by a proximity sensor. It can be formed by circuits and systems that measure the trigger pulse width with a sensor clock, such as a capacitive screen combined with a clock circuit, or by an equivalent strategy such as a key switch with a clock; in intelligent electronics, however, mechanical keys often degrade the experience, for example through mechanical noise, pressure and discomfort in a smart headset. In a smartphone, the proximity sensor is used to switch the screen off and on during a call, preventing the face and ear from falsely triggering the touch screen and saving power.
In this embodiment the proximity sensor of a smartphone is used: approach is read as 1 and departure as 0, with a detection distance of 5 cm. Let t1 be the time a trigger starts and t2 the time it ends. A trigger with t2-t1 <= 450 ms is defined as a short trigger, i.e. the pulse P101 in fig. 2 with width t2-t1 <= Δt1, Δt1 = 450 ms; a trigger with 450 ms < t2-t1 <= 1200 ms is defined as a long trigger, i.e. the pulse P102 in fig. 2 with Δt1 < t2-t1 <= Δt2, Δt2 = 1200 ms. Fig. 2 shows the corresponding pulse shapes P101 and P102. If the short trigger is written "·" (dot) and the long trigger "-" (dash), this is exactly the encoding basis of Morse code, so the trigger codes can take the Morse form shown in table 1. They can equally take binary form, e.g. 0 for a short trigger and 1 for a long trigger, which is the binary column of table 1. In practice neither the Morse nor the binary notation is what matters: because short and long triggers are distinguished and multiple bits are used, a 2-dimensional trigger code table arises naturally, and which notation to adopt depends on the scenario. It should be noted that when the controlled device, in a given state or function, needs very few instructions, or long and short triggers need not be distinguished, the trigger code may be a counting code, i.e. every trigger simply counts once; the hidden SOS embodiment later in this specification explains why long and short triggers are then dispensed with. The longer-duration trigger P103 in fig. 2 is a trigger exceeding the upper limit of the long trigger, i.e. greater than Δt2 in this example, that is, greater than 1200 ms. It is also a 1-bit instruction, but is typically used for a state switch or some specific function, e.g. voice input in walkie-talkie mode, or recording: recording starts once the trigger exceeds the longer threshold, e.g. greater than 1200 ms (the upper limit of the long trigger), and ends when the trigger is released (T_end in P103). Of course, if in a given state or function there are no long or short instructions at all and only a longer-duration trigger is needed, the 1200 ms threshold can be dropped. The threshold exists to avoid false triggering between the long and the longer-duration trigger: for example, a 900 ms long trigger would, without the threshold, already have executed the longer-duration instruction before it was found to be a long trigger rather than a longer-duration trigger. When no such discrimination is needed, i.e. when no short or long triggers coexist with the longer-duration trigger, a threshold-free policy may be adopted, which is the special case t1 = t2 in P103.
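The pulse-width classification described above can be sketched as follows, using the thresholds Δt1 = 450 ms and Δt2 = 1200 ms from this embodiment; the function and variable names are illustrative, not from the patent.

```python
# Illustrative sketch: classify one trigger pulse by its width t2 - t1.
# Thresholds follow this embodiment (450 ms / 1200 ms); names are assumptions.

DT1_MS = 450    # upper bound of a short trigger ("." / dot)
DT2_MS = 1200   # upper bound of a long trigger ("-" / dash)

def classify_trigger(t1_ms: float, t2_ms: float) -> str:
    """Classify one pulse from its start time t1 and end time t2 (ms)."""
    width = t2_ms - t1_ms
    if width <= 0:
        raise ValueError("trigger must end after it starts")
    if width <= DT1_MS:
        return "short"    # P101-style pulse, "." in the Morse-style table
    if width <= DT2_MS:
        return "long"     # P102-style pulse, "-" in the Morse-style table
    return "longer"       # P103-style pulse, e.g. state switch or recording

print(classify_trigger(0, 300))    # short
print(classify_trigger(0, 900))    # long
print(classify_trigger(0, 1500))   # longer
```

A 900 ms pulse is reported as "long" rather than "longer", which is exactly the confusion the 1200 ms threshold exists to prevent.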
If "·" is read as 0 and "-" as 1, a multi-bit binary code results. If we do not care whether a trigger is "·" or "-", i.e. do not distinguish long from short triggers, which suits simple control scenarios, the number of triggers alone can form the trigger code. A still more flexible coding mode uses the relation between a trigger's duration and its timing relative to adjacent triggers to form the codes, and the coding is chosen to suit the scene.
If, for a specific scenario, t2-t1 is compared against further thresholds, multilevel codes can be formed from Δt1, Δt2, Δt3 and so on. Table 1 does not list multilevel codes, but in practice the method organizes the code table flexibly according to the scenario, i.e. according to the triggers and their durations; table 1 lists only the most basic elements of the trigger codes.
From "·" and "-", the four 2-bit codes "··", "·-", "-·" and "--" are realized very simply; adding the 1-bit codes "·" and "-" and the longer-duration trigger, up to 7 trigger codes are available with at most two bits. When variable-length trigger codes are used in a given state or function, i.e. when 1-bit, 2-bit or even 3-bit instructions coexist in one state, a trigger-driven instruction window must be set, and the trigger code is entered within that window. The most intuitive example: in state A the first trigger opens an instruction window of 3 seconds; if 0 and 0 are entered within the 3 seconds, the instruction "00" is executed; if only 0 is entered within the 3 seconds, the instruction "0" is executed. Without an instruction window, entering "00" would instead execute the "0" instruction once and then execute it again, rather than executing the instruction corresponding to "00". The instruction window is usually tied to the state and function, so the corresponding operation is executed within the window period, at its end, or after it ends. If the user enters "00" within, say, 2 seconds and the current function only ever takes two-bit instructions, the system need not wait for the full 3 seconds: it executes as soon as "00" is received and closes the instruction window without further judgment. If only a "0" is entered and nothing follows, the system waits for the 3 seconds (or 3 seconds minus some value) before executing the instruction and closing the window.
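The fixed 3-second instruction window with early execution can be sketched as below; the window length and the 2-bit maximum follow the example above, while the function name and input representation are illustrative assumptions.

```python
# Illustrative sketch: a 3-second instruction window opened by the first
# trigger, collecting "0"/"1" symbols, with early execution once the
# maximum code length for the current state (assumed 2 bits) is reached.

WINDOW_MS = 3000   # window length from the example above
MAX_BITS = 2       # longest instruction in this state (assumption)

def decode_window(triggers):
    """triggers: list of (start_ms, symbol) with start_ms relative to the
    first trigger. Returns the code string collected in the window."""
    code = ""
    for start_ms, symbol in triggers:
        if start_ms >= WINDOW_MS:
            break                 # outside the window: next instruction
        code += symbol
        if len(code) == MAX_BITS:
            break                 # full-length code: execute immediately
    return code

print(decode_window([(0, "0"), (1200, "0")]))   # "00", executed before 3 s
print(decode_window([(0, "0")]))                # "0", executed at window end
```

Without the window (or with MAX_BITS = 1), the same input would be read as two separate "0" instructions, which is the failure mode the text describes.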
The instruction window may also be parallel: each trigger opens a window of e.g. 800 ms, within which the next trigger may arrive; if no trigger arrives in the window, the instruction input ends. If there is a next trigger and the current state allows more bits, the second trigger again opens an 800 ms window to determine whether a further trigger follows or the instruction has ended; if the function only ever takes two bits, the second trigger need not open a window. These are only two examples of instruction windows, and they are identical in nature: both decide whether an instruction has ended by judging triggers within a trigger-driven time limit. Other arrangements of the same nature are straightforward, e.g. opening a window from t1 or t2 of the first trigger, with the window period set by the longest instruction, the maximum interval between adjacent triggers, or that maximum interval plus the longest trigger duration. Because fixed-length instructions (e.g. every instruction being 3-bit) are far less convenient in a touch scenario than variable-length instructions (1-bit, 2-bit or longer codes mixed within one function or state), the instruction window is what makes variable-length trigger codes practical: without it, multi-bit variable-length instructions could not be recognized at all.
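The parallel per-trigger window can be sketched as below, using the 800 ms value from this example; the input representation and function name are illustrative assumptions.

```python
# Illustrative sketch of the parallel instruction window: after each trigger
# an 800 ms window opens; a next trigger starting inside it continues the
# same instruction, otherwise the instruction has ended.

DT3_MS = 800  # per-trigger parallel window length from this example

def first_instruction_len(pulses):
    """pulses: list of (start_ms, end_ms) for successive triggers.
    Returns how many leading pulses belong to the first instruction."""
    count = 1
    for (s0, e0), (s1, e1) in zip(pulses, pulses[1:]):
        if s1 - e0 <= DT3_MS:
            count += 1   # next trigger falls inside the parallel window
        else:
            break        # window expired: instruction ended
    return count

# Two quick pulses, then a third one well outside the 800 ms window:
print(first_instruction_len([(0, 300), (700, 1000), (2500, 2800)]))  # 2
```

The third pulse starts 1500 ms after the second one ends, so it opens a new instruction rather than extending the first.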
Fig. 2 further defines a longer-duration trigger P103, where T1 is the trigger start time and T2 is the threshold for judging a longer-duration trigger (in fact greater than Δt2 in fig. 2): if the trigger is still held past the threshold, it is judged to be a longer-duration trigger. After T2 the user is prompted, by sound, voice, TTS or vibration, that the "recording function" is available. If the user releases the trigger on hearing this, i.e. T_end falls between T2 and T3, the instruction to enter the recording function is executed and the recording function is entered. In fig. 2, T2 is when the prompt is given, and T3 is the end of the prompt plus a human reaction time such as 2 seconds, during which the system waits for the user's choice. If the trigger is still not released past T3, the next function, e.g. "SOS", is prompted: releasing between T2 and T3 selects the first function, while not releasing moves on to the other functions, and so on, so a longer-duration trigger can select among many functions and states, such as entering a "speech recognition" function where the scene permits, after which the intelligent electronic device can be operated by speech recognition. T_end in fig. 2 is the falling edge of de-triggering: falling between T2 and T3 selects the function prompted at T2, and falling between T3 and T4 selects the function or state prompted at T3.
The prompt is sound, voice, TTS or vibration, and the prompt duration plus the user's reaction time together form the selection interval between functions. For example, at T2 playing the "recording" prompt takes 2 seconds plus 2 seconds of reaction, so T2 to T3 is 4 seconds; after T3 the "SOS" prompt takes 1 second, but since the function is important, 4 seconds of reaction time are left, so T3 to T4 is 5 seconds. Whether T_end falls between T2 and T3 or between T3 and T4 determines which function is selected and executed. It should be noted that if there is only one function or state to switch, the prompt (sound, voice, TTS, vibration) is optional, because the user already knows that a longer-duration trigger unambiguously executes that one instruction, state or function. In that case the instruction can be executed as soon as T in P103 exceeds the T2 threshold, without waiting for T_end, i.e. de-triggering: the longer-duration trigger is simply any trigger exceeding the T2 threshold, and once it does, the instruction executes. For example, a longer-duration trigger can switch the intelligent electronic device on or off: once the trigger duration exceeds T2, e.g. 3 seconds, the system powers off or on.
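The hold-to-select behaviour described above can be sketched as a threshold menu: each function is announced at its threshold time, and releasing the trigger between two thresholds selects the most recently announced function. The threshold times and the third menu entry are illustrative assumptions; only "recording" at T2 and "SOS" after T3 come from the text.

```python
# Illustrative sketch: longer-duration trigger scrolling through functions.
# Releasing (T_end) between two thresholds picks the last announced entry.
# Times below are assumptions built from the example (T2=1200 ms, T2->T3=4 s,
# T3->T4=5 s); "speech recognition" as the next entry is also assumed.

MENU = [
    (1200, "recording"),            # T2: prompt "recording function"
    (5200, "SOS"),                  # T3 = T2 + 4 s
    (10200, "speech recognition"),  # T4 = T3 + 5 s (assumed next entry)
]

def select_function(t_end_ms: float):
    """Return the function chosen by de-triggering at t_end_ms, or None if
    released before the first prompt threshold."""
    chosen = None
    for threshold, name in MENU:
        if t_end_ms > threshold:
            chosen = name   # this prompt was already played; keep scanning
        else:
            break
    return chosen

print(select_function(3000))   # released between T2 and T3 -> "recording"
print(select_function(7000))   # released between T3 and T4 -> "SOS"
```

Releasing before 1200 ms returns None, i.e. the pulse was a short or long trigger, not a longer-duration one.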
In our prior invention WO2016192622A1, instruction windows are used extensively; in this specification their use is taken further, as with P104 in fig. 2. The P104 pulse form means that within a trigger-driven parallel window period Δt3 there is a next trigger, and both triggers are short; with Δt3 = 800 ms as in the example above, i.e. an interval between two triggers of less than 800 ms, P104 defines the code "·S·" (where S stands for a short interval between triggers). P105, by contrast, defines two short triggers within a fixed window Δt4 = 3 seconds: if one short trigger is at most 450 ms, two short triggers occupy at most 900 ms, so the interval t3-t2 between them can be up to 2100 ms (3000 ms minus 2 × 450 ms); taking Δt3 < t3-t2 <= 2100 ms as the interval between the two pulses in the 3-second window yields the code "··". When both instruction windows are used in parallel in one state or function, "·S·" and "··" are distinct commands: the P105 window is one window and the P104 window nests inside it. If only one trigger is entered, there is no ambiguity and the command executes at the end of the window period. If the second trigger arrives within the P104 window, the "·S·" command executes immediately; if the second trigger arrives only after the P104 window period has passed, the command corresponding to "··" executes. In use this simply means that a continuous fast pair of triggers (very short interval) and a continuous but unhurried pair (normal interval) execute different instructions: triggers with different rhythms execute different instructions. Taking two-bit coding as an example, this yields 8 trigger codes, "··", "·-", "-·", "--", "·S·", "·S-", "-S·" and "-S-": applying two instruction windows in parallel effectively shortens the trigger instruction length, or multiplies the controllable instructions, under a given function or state. In CN2016103632799 only the single instruction window is used, not the extended trigger codes formed by nesting instruction windows; adding this trigger capability lets much shorter instructions serve more complex functions and scenes.
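The rhythm-sensitive two-bit coding can be sketched as below: each bit is short "·" or long "-", and the gap between the two triggers is either fast ("S", gap within Δt3) or normal (gap beyond Δt3 but inside the 3-second window). The 800 ms threshold follows this embodiment; the function name and string encoding are illustrative.

```python
# Illustrative sketch: rhythm-sensitive two-bit trigger codes. The gap
# between the pulses distinguishes the fast "S" codes from the normal ones,
# giving 8 codes from two pulse types and two gap types.

DT3_MS = 800  # parallel window / fast-gap threshold from this embodiment

def encode_pair(p1: str, p2: str, gap_ms: float) -> str:
    """p1, p2 in {".", "-"}; gap_ms is the interval between the two pulses.
    A gap within DT3_MS inserts the "S" marker, otherwise the bits join
    directly (the normal-rhythm code)."""
    sep = "S" if gap_ms <= DT3_MS else ""
    return p1 + sep + p2

print(encode_pair(".", "-", 400))    # ".S-"  fast rhythm
print(encode_pair(".", "-", 1500))   # ".-"   normal rhythm
```

Enumerating both pulse types and both gap types reproduces the 8-code set mentioned above (4 fast codes with "S" plus 4 normal codes).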
Fig. 2 thus defines short and long triggers, the longer-duration trigger, and the logical relation between instruction windows of a given duration and instruction windows between adjacent triggers, so that trigger duration and trigger correlation can better serve the user of intelligent electronics in non-static scenes; this is the foundation of the method. The embodiment uses the smartphone's built-in proximity sensor, but any other sensor or sensor group that can sense trigger duration, such as a pressure sensor, can supply the same information. The pulse form need not be a standard square wave: a pressure sensor may produce sawtooth or irregular pulses, but the trigger start and end times can still be monitored. Electronic equipment that applies this method, using a sensor or key plus a clock to monitor trigger duration in the same way and realize operation and control, is simply another specific embodiment of the method, not a degraded function.
It should be emphasized that a trigger code formed from trigger durations is not by itself a trigger instruction. A trigger instruction is formed by combining the trigger code with the state and function of the target controlled device, and with the scene: the trigger code table is orthogonally combined with tables such as the function and state tables, and the instruction most convenient to trigger in the corresponding scene is compiled (so that the trigger instruction is unambiguous).
In 2016 Apple introduced the fully wireless earphones AirPods, a product controlled by voice recognition. In practice, such earphones cannot be controlled well outdoors in wind, when moving fast, in noisy environments, or where one cannot speak, making this a non-all-scene product among electronic equipment. Other earphone manufacturers later followed with similar products and, in view of this weakness of Apple's AirPods, added control keys to make up for the defect. But fully wireless earphones are light and small, their keys are tiny, and control during movement is essentially impossible: to operate them, the user must change, interrupt or stop the movement. These products also split the controls across the left and right ears, so part of the functionality is lost when a single ear is used, an obvious product defect; and none of these technologies allow the future earphone to act as a controller and feedback device that operates the smartphone in reverse. The present method effectively cures the defects of fully wireless earphones typified by Apple's: it works in all scenes and lets the earphone serve as a controller and feedback device for the smartphone without occupying the user's hands and eyes.
Again taking the proximity sensor as the example: a proximity sensor or touch surface (AirPods contain one) is installed in the fully wireless headset, and triggering it controls answering, rejecting and hanging up the phone.
Fig. 3 shows an embodiment of the method implementing call answering, rejection and hang-up with a wireless headset. A201, the left-hand frame, is the call function and the basic states of the smartphone, i.e. steps S106 and S107 of the method. Since the call function is managed by the operating system, in this example S106 and S107 are both completed by the OS, and steps such as S104 simply read the corresponding state and function from the OS interfaces.
The call function usually has these states: S201, in-call, i.e. a conversation is in progress; S202, caller ring-back, i.e. the local phone has dialed the other party, which is ringing but has not picked up; S203, called-party ringing, i.e. the local phone is being called and has not answered; and S204, not in a call. These correspond to the functions and states of the telephone. This is a simplified example; for the details of telephony, the specific ITU-T documents, which are followed globally, may be consulted.
The middle large frame is the instruction-identification step S104, where S205 to S208 are 4 groups of recognized trigger codes using nested instruction windows. S104 identifies the specific instruction from the current function and state and the recognized trigger; then step S105, i.e. S209 to S211 in the right-hand large frame, is executed, answering, rejecting or hanging up the phone, and after the corresponding instruction executes, the state in the left-hand frame A201 changes accordingly.
S201 is the in-call state. If the smartphone user wants to hang up, the code of S205 or S206 is triggered, i.e. the proximity sensor or touch surface on the fully wireless headset is triggered. S103 recognizes the trigger and submits it to S104; S104 identifies the command from the preset mapping of state and trigger code and executes the S210 hang-up command, whereupon the state changes to S204, not in a call.
S202 is the caller ring-back state, i.e. the local phone has dialed a number as the caller and is hearing ring-back. If the caller no longer wants the call, S205 or S206 is triggered; S104 receives the identified trigger code and, combined with the state, executes the S210 instruction.
S203 is the called-party ringing state, i.e. the phone is being called. To answer, S205 or S206 is triggered; S104 identifies the state and trigger as the defined instruction and executes the S209 answer instruction, the call is established, and the state changes from S203 to S201, in-call. If the called party does not want to answer, S207 or S208 is triggered; S104 executes the reject command defined by the state and trigger code, and after execution the state changes from S203 to S204.
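The fig. 3 mapping from state and trigger to command can be sketched as a small lookup table. The S2xx labels follow the text, but the concrete trigger-code strings used as keys are illustrative assumptions, since the exact codes of S205 to S208 appear only in the figure.

```python
# Illustrative sketch of fig. 3: (current state, trigger code) -> (command,
# next state). The code strings "S205".."S208" stand in for the actual
# trigger codes, which this text does not spell out.

RULES = {
    ("S201_in_call",        "S205"): ("S210_hang_up", "S204_idle"),
    ("S202_caller_ringing", "S205"): ("S210_hang_up", "S204_idle"),
    ("S203_called_ringing", "S205"): ("S209_answer",  "S201_in_call"),
    ("S203_called_ringing", "S207"): ("S211_reject",  "S204_idle"),
}

def handle_trigger(state: str, code: str):
    """Return (command, new_state); unknown combinations leave the state
    unchanged and execute nothing, as S104 would ignore them."""
    return RULES.get((state, code), (None, state))

print(handle_trigger("S203_called_ringing", "S205"))  # answer -> in call
print(handle_trigger("S201_in_call", "S205"))         # hang up -> idle
```

The same trigger code (S205) maps to different commands depending on the state, which is exactly the orthogonal combination of trigger code and state table that the method describes.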
In this embodiment, 4 groups of trigger codes, S205 to S208, are used, although in fact the two groups S205 and S207 alone could implement answering, rejecting and hanging up. The trigger combinations with and without "S" are used here to illustrate the flexibility of the trigger codes of table 1: when a trigger instruction is unambiguous in a given state, the codes can be chosen relatively freely.
In this answer/reject/hang-up embodiment, when the called party rings, S108 looks up the specific caller in the phone book by the incoming number, and TTS announces the caller's name; if the caller is not a contact in the phone book, TTS announces the incoming number, letting the called party decide whether to answer or reject. No current smartphone has this function; in this method it is simply the specific prompt behaviour of S108 on the called side.
The embodiment of fig. 3 does not use nested instruction windows to differentiate functions, but the function can be extended. Suppose a call is in progress and a second call comes in; the called party then has several options: 1, hang up the original call and answer the new one; 2, reject the new call and continue the original one; 3, hold the original call and take the new one. In this practical scenario, because the targets differ while the state is the same, 3 groups of instructions are normally required. With nested instruction windows, option 1 can be executed by e.g. "·S-", option 2 by "· -" (the same bits at a normal rhythm), and option 3 by two long triggers. The instructions are easy to remember: the short trigger means disconnect and the long trigger means connect. "·S-" means disconnect the original call and connect the new one; afraid of missing the caller or something important, the user's rhythm is naturally fast. "· -" means drop the new call and continue the original one; the operation can be slow, since nothing presses, matching the user's actual state of mind for option 2. For option 3 both calls are kept connected, so two long triggers are used, and both window modes support it. The user therefore need not memorize many fixed instructions; the instructions follow the scene and the user's state of mind. This, in the phone-control scenario of fig. 3, is another significant feature of the method: the combination of scenarios covers not only the usage scenario but also the user's likely emotional scenario.
Such functions have obviously not been considered in today's human-computer interaction, because the prior art is confined to interaction via mouse, keyboard and GUI, or multi-touch GUIs, and does not consider the problems a user may face in non-static scenes.
As shown in fig. 4, we define the three simplest states of the music function: S301 music paused, S302 music playing, and S303 music stopped. The embodiment of fig. 4 adopts only the instruction window of P104 in fig. 2.
If the user of the fully wireless headset wants to start listening to music in the S301 paused state, S306 is triggered; when the "·S" code defined for S306 is recognized while in the music-paused state, the instruction is identified as play, S311 is executed to play music, and after S311 the state changes to S302.
S302 is the playing state. To skip to the next song, the S304 code is triggered: one short trigger with no further trigger within S after it ends (no other trigger inside the instruction window). When this is satisfied in the playing state, the instruction is identified as next song and S309 is executed; after the switch the state remains S302. To return to the previous song, the S305 code is triggered: one long trigger with no further trigger within S (no other trigger inside the instruction window); when this is satisfied in the playing state, the instruction is previous song and S310 is executed. To pause during playback, S306 is triggered; if this trigger is recognized in the playing state, the S308 instruction executes, pausing the music and changing the state to S301, music paused. To stop playback from S302, S307 is triggered; the S312 music-stop instruction executes and the state changes to S303, music stopped. To play music from the S303 stopped state, S306 is triggered; when the trigger is recognized, the S311 play instruction executes, the state switches to S302, and the music plays.
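The state/trigger mapping walked through above can be sketched as below. The codes for S304 ("·"), S305 ("-") and S306 ("·S") are the ones given in the text; the code assumed here for S307 (stop) is purely illustrative, since the text does not spell it out.

```python
# Illustrative sketch of fig. 4 music control: recognized trigger code plus
# current state selects the command. The "-S" code for stop is an assumption;
# ".", "-" and ".S" follow the text.

def music_command(state: str, code: str):
    """Return the command for a trigger code in the given music state,
    or None if the combination is undefined."""
    if state == "playing":                       # S302
        return {".":  "next_song",               # S304 -> S309
                "-":  "previous_song",           # S305 -> S310
                ".S": "pause",                   # S306 -> S308
                "-S": "stop"}.get(code)          # S307 -> S312 (code assumed)
    if state in ("paused", "stopped"):           # S301 / S303
        return "play" if code == ".S" else None  # S306 -> S311
    return None

print(music_command("playing", "."))    # next_song
print(music_command("paused", ".S"))    # play
```

Note how the 1-bit codes "·" and "-" coexist with 2-bit codes in the same playing state, which is only possible because the P104 instruction window delimits each instruction.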
In this embodiment, 1-bit and 2-bit triggers are used in the same state, with the instruction-window definition of P104 in fig. 2 and the "S" interval distinguishing them; orthogonal to state and function, this is in fact a multi-dimensional trigger code, so the method realizes control of the intelligent electronic device with simple 1-bit or 2-bit triggers. In the fully wireless earphone scenario, music control under noise and motion is easily realized through a sensor, whereas the prior-art AirPods cannot be controlled in such conditions (voice-recognition control only, with volume adjustment during noise or conversation an obvious product defect), and similar products in the industry are likewise inconvenient to control in motion and noise. Of course, the instruction window defined by P105 could also be used in fig. 4 with the same result: if the duration between t4 and t1 of P105 is reduced so that t3-t2 <= Δt3 in fig. 2, P104 becomes fully equivalent to P105. So, as stated earlier in this specification, such instruction windows are identical in intent and differ only in where the window starts and when it ends. P104 and P105 are both defined in fig. 2 to illustrate the two window-applying modes, with t3-t2 > Δt3 defining P105; when two windows need not be nested, P105 need not be defined this way.
Fig. 5, still in the fully wireless earphone scene, realizes a hidden SOS function. No smartphone in the industry currently has one, so a victim who is attacked or robbed cannot use the smartphone and its accessories to send a hidden SOS and thereby reduce injury and loss. A hidden SOS is needed because in many incidents the victim never gets the chance to open the screen, dial a number and state a location, and in these scenes being seen using the phone can directly aggravate the harm. With a hidden help function, the positioning and voice-acquisition functions of the intelligent electronic equipment send the scene's location and sound to the rescuer in real time: the rescuer can act effectively, and bystanders do not know that a distress message has been sent, winning enough time to avoid or reduce possible harm. For the covert rescue method see my prior patent CN201510835747.3.
Fig. 5 takes the wireless headset as the example for the hidden help function, which no current headset has. After the help function is entered, the functions and states of hidden distress in the left-hand large frame of fig. 5 apply. Since hidden distress does not exist in the systems of today's smart electronic devices such as smartphones, a memory-resident APP, or an APP with this function, manages and monitors the state in real time, so S106 and S107 are both realized by the corresponding APP (obviously the SOS function may become a standard function of intelligent electronic equipment in the future). After entry there are 4 states: S401 distress triggering, S402 distress sending, S403 distress sent, and S404 distress responded.
In state S401 the user has entered the hidden help function; if the person seeking help triggers the S405 or S406 code, step S104 identifies the instruction as S407 first-level distress or S408 second-level distress.
State S401 is usually entered by a longer-duration trigger: when the user hears, or feels via vibration, the SOS prompt, releasing the trigger enters S401. Thus any APP, intelligent system or operating system can add the SOS function, triggered by a built-in or external sensor.
In the present embodiment, S405 and S406 use a counting trigger code, i.e. the number of triggers within a given instruction window is identified: S405 is 2 triggers and S406 is 3 triggers. In an emergency only triggering matters, and not everyone can control triggers precisely under stress, so triggering a sufficient number of times is all that is required. This is another example of the method's flexible trigger coding, meeting users' needs across scenes.
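The counting code can be sketched as below. The counts (2 triggers for S405, 3 for S406) follow the text; the window length and function name are illustrative assumptions.

```python
# Illustrative sketch of the counting trigger code for hidden SOS: within
# the instruction window opened by the first trigger, only the number of
# triggers matters (long vs short is not distinguished).

WINDOW_MS = 3000  # assumed instruction-window length

def sos_level(trigger_times_ms):
    """Count triggers inside the window opened by the first trigger and
    map the count to a distress level: None, 1 (S405 -> S407 first-level)
    or 2 (S406 -> S408 second-level)."""
    if not trigger_times_ms:
        return None
    t0 = trigger_times_ms[0]
    count = sum(1 for t in trigger_times_ms if t - t0 < WINDOW_MS)
    if count >= 3:
        return 2   # S406: second-level distress (e.g. police)
    if count == 2:
        return 1   # S405: first-level distress (preset contacts)
    return None

print(sos_level([0, 500]))        # 1
print(sos_level([0, 400, 900]))   # 2
```

Using `>=` for the higher level reflects the point above: a panicked user who over-triggers still lands on the more serious distress grade rather than on nothing.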
As for why distress is graded in S407 and S408: the person seeking help alerts preset rescuers according to the situation. For a small problem, first-level distress asks preset contacts for help; in a severe situation, second-level distress goes directly to judicial institutions such as the police. After the corresponding distress instruction executes, i.e. the first- or second-level distress of S105, the state switches to S402, distress sending. Some foreign distress APPs send help by SMS, but in fact they cannot detect whether the message was delivered; in a complete distress flow the person seeking help must know which stage the distress has reached in order to act accordingly. If sending succeeds, a response message indicates that the distress information has reached a preset rescuer; if the rescuer begins to respond, the state switches to S404, and the person can wait for rescue with much more reassurance.
In the states S402 to S404, if the person seeking help is still able to trigger, the S405 and S406 codes can be triggered again, letting the rescuer know both that the person can, for now, still send distress information and that the situation is urgent.
In this embodiment S108, the prompt, is changed to a short vibration (on the smartphone) rather than speech or sound, keeping the person seeking help concealed instead of being exposed by an audible message; on the headset it may be a vibration or a voice prompt.
Through the above embodiments it is clear how the method implements operation and control of telephone, music and hidden distress from the wireless headset; of course, the method can also be implemented directly in intelligent electronic equipment such as the mobile phone itself. Both the smartphone and the fully wireless earphone can carry the functions of the above embodiments.
The method is described below taking a sensor that can sense directionality as the example. Such sensors are contact or non-contact: non-contact sensors usually identify the trigger direction through optical, acoustic or magnetic reflection, or through changes in an electromagnetic field; contact sensors usually judge the contact position from changes of voltage, current or resistance at a specific (x, y) point on the sensor, or from changes of voltage, resistance, capacitance and so on at that point after triggering, the track of (x, y) over time being the movement direction. It should therefore be emphasized that the method is not tied to any specific sensor: two proximity sensors can sense a trigger in the A-to-B or B-to-A direction without any coordinate system, whereas touch screens, conductive fibers and capacitive screens need coordinates to define trigger points and trajectories. In any case, all the method requires is that one of the identified triggers is a "directional" trigger.
Directional triggers represent spatial displacement: up, down, left, right, forward, backward, and of course clockwise and counterclockwise, not all listed here. A single such trigger is a one-dimensional trigger code, but combining directional triggers forms a 2-dimensional trigger code table. For example, if one forward trigger is the instruction for the next song, two consecutive forward triggers within the time window can mean music fast-forward; and the directional trigger table can further be made orthogonal to table 1, yielding a far richer trigger code table.
Table 2 illustrates a directional trigger code table, taking a sensor that can sense a trigger in the A-to-B or B-to-A direction as an example. In this specification directions are also orthogonal, so the directions themselves can form a two-dimensional code table; which code is used for which purpose is determined by the functions and scenes. In practice a gesture sensor, a gesture radar, or a sensor group of two or more proximity sensors arranged in a triangle or diamond can sense triggers in many directions (not only the two cases A-to-B and B-to-A). Table 2 is meant only to illustrate the method, not to enumerate every trigger direction, so only two directions are used as an example; a two-dimensional table formed by directional triggers in practice is far larger than one formed by duration triggers. The two-dimensional table used here to describe the method's trigger coding therefore does not represent all combinations of direction codes; on the contrary, it is only the simplest embodiment of the method. From Table 2 it can be seen that a single-bit direction trigger forms a one-dimensional trigger code table (left, right, forward and backward triggers), while a two-bit trigger forms a two-dimensional trigger table, as shown in Table 2. Because we combine the codes in the trigger table with functions, states and usage scenarios, we select the codes that suit them: for music fast forward, two right triggers can be used, and for music fast rewind two left triggers.
Under certain state functions, the direction of a directional trigger is not needed; only the number of occurrences needs to be monitored, i.e., triggers are counted without distinguishing their direction.
[Table 2] Trigger code table using a sensor capable of sensing directionality as an example
For convenience of description, we define the trigger in the B-to-A direction as "→" and the trigger in the A-to-B direction as "←", so that the trigger encoding of Table 2 can be formed according to directionality. The encoding method is consistent with Table 1, except that the triggers of Table 1 take the duration form while those of Table 2 take the direction form.
Table 1 and Table 2 can be combined into a trigger code table of up to four dimensions, but in practice trigger instructions do not need to reach 4 bits: 1-, 2- and 3-bit codes from the combined duration-and-direction table satisfy common usage scenarios. Moreover, a sensor capable of sensing directionality can generally also sense non-directional triggers, for example a proximity trigger or a coverage trigger rather than a displacement trigger, and such triggers can form the trigger codes of Table 1. The combination and orthogonality of Table 1 and Table 2 thus form the basic trigger codes of a sensor or sensor group that senses both directional triggers and trigger duration; together with the aforementioned instruction window and its nested use, very simple and effective operation and control of intelligent electronic devices can be realized.
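A minimal sketch of this orthogonal combination (the four-symbol alphabet below is an assumption: "." and "-" stand for the Table 1 short and long duration triggers, "→" and "←" for the Table 2 direction triggers):

```python
from itertools import product

# Combined alphabet: duration-type symbols (Table 1 style) plus
# direction-type symbols (Table 2 style).
ALPHABET = [".", "-", "→", "←"]

# Codes of 1 to 3 bits, since the text notes that 4-bit trigger
# instructions are rarely needed in common usage scenarios.
codes = [seq for n in (1, 2, 3) for seq in product(ALPHABET, repeat=n)]

print(len(codes))  # 4 + 16 + 64 = 84
```

Even this smallest mixed alphabet yields 84 distinct codes before being combined with device state and function, which is the sense in which the orthogonal table is "richer" than either table alone.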
At present many people watch videos on their mobile phones while on a treadmill, but once the treadmill reaches a certain speed it becomes difficult for the exerciser to control the video: the runner must keep pace with the belt, the body's bobbing while running makes it impossible to accurately tap a control key on the screen for fine adjustment, and sweaty or moist fingers make the touch screen hard to operate, at which point multi-touch technology fails. The method can realize fine video control in this scene.
Video, especially network video, usually has intros and credits, in-video advertisements, and content the viewer does not want to see or wants to see again, yet to date no one has studied how to adapt video control to a runner on a treadmill.
Fig. 6 shows the most common states of the video function: S501 video pause, S502 video play, S503 video stop, and S504 fast forward/fast rewind. S505 to S510 are trigger codes, adopting the combination and orthogonality of Table 1 and Table 2, and S511 to S516 are instructions controlling video execution.
S501 is the video pause state. When the user wants to play the video, S507 is triggered, i.e., the "·S" code of Table 1 using the P104 instruction window of Fig. 2. When the S507 trigger is recognized and the video is in the pause state, the play instruction S514 is executed, after which the video function changes from S501 to the S502 play state. In the S502 play state, switching to the next video is done by triggering S505, the "→S" direction trigger, meaning no other trigger occurs within S seconds after the direction trigger (again a P104-type instruction window in Fig. 2); when this trigger is recognized in the S502 state, S512 is executed to switch to the next video, and the S502 play state continues. Switching to the previous video uses trigger S506, the direction trigger "←S"; when it is recognized in the S502 state, S513 is executed to switch to the previous video, and play continues in S502. To pause, S507 is triggered; after the play state recognizes S507, the S511 pause instruction is executed, and the video returns to the S501 pause state. In the S502 state, if the viewer wants to stop the video, S508 is triggered; on receiving S508 the S515 instruction stops playback, and the video function moves to the S503 stop state. In the S503 stop state, if the viewer wants to watch again, S507 is triggered and S514 is executed to play the video.
In the S502 play state, if the user wants to rewind quickly, S509 is triggered; on receiving S509 in the play state, the S516 fast-rewind instruction is executed and the video moves to the S504 fast state. If the viewer wants to fast forward in the S502 state, S510 is triggered; when S510 is identified from the state and the trigger, the S516 fast-forward instruction is executed and the state is adjusted to S504. In the S504 state, once the video has reached the position the viewer wants, S507 is triggered and S514 resumes normal play. In the above embodiment the instruction window uses the mode defined by P104 in Fig. 2, so the gap is indicated by "S"; if mode P105 of Fig. 2 is used instead, the "S" need not be specially marked. The embodiment above uses only one fast-forward speed; a speed of 2 times the current fast-forward speed could be coded "→S→" or "·S→", i.e., 3-bit triggers drawn from the trigger table in which the Table 1 example is combined orthogonally with the Table 2 example. If the P105 instruction window of Fig. 2 is used, they can be expressed as "→→" or "·→". When instructions are coded this way, starting from the scene and the function, the user remembers fast forward as "to the right" and double-speed fast forward as "to the right and then to the right again": the command agrees intuitively with the goal the user wants to achieve, so the user need not memorize the instruction but can derive the double-speed instruction from the previous one. This is a logic that conventional codes such as Morse or binary do not have. Of course, the foregoing embodiment may also use a front instruction-window application mode, so that "→" and "→S→" can be differentiated, for example with "→" representing fast forward and "→S→" representing double-speed fast forward.
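The S501-S516 walkthrough above is a plain state machine. A minimal sketch follows; the state names are taken from the text, "·S"/"→S"/"←S" are the codes given for S507/S505/S506, while the codes for S508-S510 are not spelled out in the text, so "--S", "←←S" and "→→S" below are hypothetical placeholders:

```python
# Transitions keyed by (state, trigger code); values are (instruction, next state).
TRANSITIONS = {
    ("pause", "·S"):  ("S514 play",           "play"),
    ("play",  "·S"):  ("S511 pause",          "pause"),
    ("play",  "→S"):  ("S512 next video",     "play"),
    ("play",  "←S"):  ("S513 previous video", "play"),
    ("play",  "--S"): ("S515 stop",           "stop"),   # hypothetical code
    ("play",  "←←S"): ("S516 fast rewind",    "fast"),   # hypothetical code
    ("play",  "→→S"): ("S516 fast forward",   "fast"),   # hypothetical code
    ("fast",  "·S"):  ("S514 play",           "play"),
    ("stop",  "·S"):  ("S514 play",           "play"),
}

def step(state, trigger):
    """Return (instruction, next_state); unknown combinations change nothing."""
    return TRANSITIONS.get((state, trigger), (None, state))

state = "pause"
instr, state = step(state, "·S")
print(instr, state)  # S514 play play
instr, state = step(state, "→S")
print(instr, state)  # S512 next video play
```

The same trigger "·S" maps to different instructions depending on the current state, which is exactly the orthogonality of trigger codes and device state that the method relies on.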
The above embodiments take controlling video on a treadmill as an example and describe how to control video with a sensor or sensor group that senses duration and directionality. In a motion state a runner can very easily control video using directional triggers or combinations of long and short triggers, and no longer has to passively sit through advertisements, intros and credits. In the video embodiment, the prompt information is not voice but the change of the video itself. The combination of direction and duration can likewise let intelligent electronic devices control electromechanical equipment, where the operator only needs to watch the working state of the controlled machine rather than listen to sound, voice, TTS or vibration; that is, the method lets the user observe the state of the controlled device, with the visual signal serving as feedback.
For sensors of relatively high accuracy that sense both directionality and trigger duration, a speed dimension can also be added; see Table 3. Corresponding to the touch scene, we define speed against a fixed value, above it fast and below it slow, which is convenient for trigger control rather than monitoring continuous speed: for example, at or above 2 m/s is fast and below 2 m/s is slow. The directional trigger is matched with the trigger speed to form combined trigger coding, obviously adding one more dimension to the trigger coding system. The method thereby avoids the problems of the gesture radar, which offers few gestures, costs more, and is too large to install in tiny electronic equipment. For volume adjustment, for example, the volume can grow or shrink with a slow displacement in the corresponding direction, while a fast trigger in the same direction switches the song; two consecutive fast direction triggers then become the music fast-forward trigger. This can be used today to control intelligent sound systems: whether vehicle-mounted or household, they are usually controlled by keys, and with a sensor that senses external directionality, especially one triggered in a non-contact way, the method allows controlling the sound system without eyes, or even fingers and gestures. In particular, a driver need not take their eyes off the road, which reduces driving risk.
[Table 3] Trigger code table combining a directional sensor with the speed attribute
After fast, slow and direction triggers are combined, one bit of direction trigger carries two pieces of information in one state, for example slow forward and fast forward. In an audiobook APP, jumping to the next paragraph could be slow forward and jumping to the next chapter fast forward: both are forward triggers to the user, but the effect is completely different. Today an audiobook APP can only be listened to passively while running, without any rich capacity to be operated and controlled. The method suits this field very well, letting the user freely and conveniently operate the corresponding APP or system in non-static scenes.
The speed is typically measured by dividing the distance from point A at the beginning of contact to point B at the end of the trigger by the time taken from A to B. After Table 1 and Table 3 are combined orthogonally and matched with the state and function of the controlled object, the problem of operating intelligent terminals or intelligent electronic equipment in non-static or quasi-static scenes can be solved very conveniently.
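A sketch of this measurement, using the 2 m/s example threshold from the text (the coordinates and timing values in the usage line are illustrative assumptions):

```python
# Trigger speed = distance from the contact start point A to the release
# point B, divided by the elapsed time from A to B.
def trigger_speed(ax, ay, bx, by, dt):
    distance = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
    return distance / dt

# Classify against a fixed threshold instead of tracking continuous speed.
def classify(speed, threshold=2.0):
    return "fast" if speed >= threshold else "slow"

v = trigger_speed(0.0, 0.0, 0.03, 0.04, 0.02)  # a 5 cm swipe in 20 ms
print(round(v, 6), classify(v))  # 2.5 fast
```

Reducing the continuous speed to a binary fast/slow attribute is what keeps the combined code table small enough to memorize.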
The flexibility and multi-dimensionality of the method's trigger codes are orthogonal to the functions and states of the target controlled intelligent electronic equipment, and 1- or 2-bit trigger codes can generally control sufficient functions. Triggering works in non-static states such as motion and driving, without even needing fingers (in non-contact sensor scenes) or eyes to participate, which greatly increases the scenes in which intelligent electronic equipment can be used.
The foregoing describes embodiments of the method for different purposes in different scenes using sensors that sense trigger duration, trigger direction, trigger speed and so on. The following describes working embodiments of the method with a widely used pressure sensor.
In an alpine environment, a user typically uses a trekking pole, single or paired, to relieve lower-limb stress, so at least one hand is occupied. Mountaineers also need to frequently use walkie-talkies to communicate with teammates or check a GPS to confirm track and orientation, and on bad sections they often grab rocks and trees to stabilize their center of gravity. Electronic equipment for mountaineering should therefore be controllable without occupying the hands; yet current intelligent electronic equipment such as smartphones, GPS units and intelligent walkie-talkies is controlled by screen and keys, requiring the climber's eyes on the screen and a fingertip on keys and menus. Such control inevitably slows the advance of the party, and watching the path only with peripheral vision is dangerous. Rain, snow, fog and cold are common mountain weather: wet hands or a wet touch screen hinder operation in rain, and in cold weather gloves must be repeatedly removed and put back on to control the equipment by finger.
For the mountaineering scene there is currently no dedicated solution. Voice recognition outdoors, and especially while climbing, cannot recognize accurately because of wind noise, so it cannot meet the need. In a mountaineering environment a climber also cannot keep using the eyes to hunt for the various keys of electronic equipment, which invites accidents; and in areas with complex routes, the climber must repeatedly check whether the track on the GPS matches the planned route, and so must either use the device while moving, watching the path with peripheral vision or breaking the moving queue, or stop using the electronic equipment.
The method can effectively solve this problem. The trekking pole is needed in this scene anyway; if the pole itself is used to control the intelligent electronic equipment, such as announcing path, altitude, temperature and humidity, or controlling music, communication and intercom on a smartphone, then a climber can, by triggering the sensor on the pole while moving, accomplish what previously required stopping or slowing down.
In this intelligent trekking-pole embodiment, since the direction and force with which the user grips the pole vary with the terrain, a sensor sensing only trigger duration, trigger speed or trigger direction is unsuitable for this scene (it is likely to be falsely triggered). Generally, of the hand using the pole only the thumb and index finger are free to operate, so a concave area suited to thumb or index-finger triggering is arranged at the top of the pole grip, and a sensor or sensor group that senses both trigger duration and pressure is arranged in the recess, so that the intelligent electronic equipment is controlled through pressure triggers and duration triggers.
The force with which the thumb or index finger triggers the pressure-and-duration sensor or sensor group is divided into a light attribute and a heavy attribute, say light below 5 kg and heavy above 5 kg. These attributes are obtained by threshold monitoring with the pressure sensor, rather than by using the sensor's linear characteristic to measure exact pressure values.
Corresponding to the trigger, digital circuits all have clocks, hence trigger start times, trigger end times and the time relation between triggers, i.e., the trigger codes of Table 1 can be used; in the pressure-sensor scene, after the two dimensions light and heavy are added, Table 4 is formed. From Table 4 it can be seen that there are enough trigger codes to complete the operation and control of the intelligent electronic device. The basic triggers are light short, heavy short, light long, heavy long, light longer-duration and heavy longer-duration triggers, and these combine into two-bit trigger codes; with instructions at most two bits long, the whole code table has 6 one-bit triggers and 16 two-bit triggers, 22 trigger combinations in total, which, combined with state and function, is enough for normal operation and control of the intelligent electronic device. Of course, Table 1 can also be combined into light triggers, heavy triggers and light-heavy triggers, variants a person of ordinary skill can analogize naturally from these codes without inventive effort.
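The count of 22 can be reproduced under one reading of the text, which is stated here as an assumption: the two longer-duration triggers are reserved for single use (menu/function switching, as in the next paragraph), so only the four short/long triggers combine pairwise, giving 6 + 4 × 4 = 22.

```python
from itertools import product

# 1-bit triggers: force attribute × duration attribute.
one_bit = [(f, d) for d in ("short", "long", "longer") for f in ("light", "heavy")]

# Assumption: "longer" triggers are reserved for menu switching and do not
# appear inside 2-bit codes, leaving four combinable basic triggers.
combinable = [t for t in one_bit if t[1] != "longer"]
two_bit = list(product(combinable, repeat=2))

print(len(one_bit), len(two_bit), len(one_bit) + len(two_bit))  # 6 16 22
```

If the longer triggers were allowed in 2-bit codes as well, the table would instead have 6 + 36 = 42 entries, so the stated total of 22 only holds under the reservation above.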
In the trekking-pole scene, suppose the longer-duration heavy trigger is set as menu or function switching. When the climber holds a trigger heavier than 5 kg long enough, i.e., t >= t2 as defined in Fig. 2, the TTS prompt of the reporting function starts, for example announcing "altitude"; if the climber's thumb then releases, i.e., t_end is less than t3, it indicates the climber wants to know the current altitude, and the intelligent electronic device wirelessly connected to the pole speaks the current GPS altitude via TTS. By analogy, the longer-duration heavy trigger can be used to select a function or menu. The long light trigger can be defined as the talk key: while it is held, talk-back transmits, and when the trigger ends the user only listens, so the mountaineer need not press the PTT (PUSH TO TALK) key of a walkie-talkie with one hand. The method thus very easily resolves the many inconveniences of using electronic equipment while mountaineering today.
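The release-time menu selection described above (and claimed later: the function of the selection interval in which the release falls is executed, each interval consisting of a prompt duration plus a wait duration) can be sketched as follows. All durations and menu entries here are hypothetical values, not from the patent:

```python
# Hypothetical menu announced in turn by TTS while a "longer" heavy
# trigger is held past t2; releasing during an entry's selection
# interval (prompt + wait) selects that entry.
MENU = ["altitude", "intercom", "music"]
PROMPT = 0.8   # assumed TTS prompt duration, seconds
WAIT = 1.2     # assumed wait-for-release duration, seconds
T2 = 2.0       # assumed start of the "longer" trigger region, seconds

def select_function(release_time):
    """Map the trigger release time to the menu entry whose interval it hits."""
    if release_time < T2:
        return None  # released too early: not a menu interaction
    slot = int((release_time - T2) // (PROMPT + WAIT))
    return MENU[slot % len(MENU)]  # the menu cycles while the trigger is held

print(select_function(2.5))  # altitude
print(select_function(4.7))  # intercom
```

Because selection is driven purely by when the trigger is released relative to the audible prompts, the climber never needs to look at a screen.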
For another example, the climber's trekking pole connects wirelessly with a smartphone: in the music playing state, switching to the next song can use a light trigger and switching to the previous song a heavy trigger. According to Table 4 and the functions to be controlled, rich enough trigger instructions can be defined, so that in wind, rain, snow and on dangerous sections a climber can control intelligent electronic devices such as an intelligent walkie-talkie or a smartphone without the difficulties and problems encountered today.
[Table 4] Trigger code table combining a pressure sensor with trigger duration
From the above, a skilled person will naturally ask whether, on a capacitive touch screen with 3D-touch capability, duration triggers, pressure triggers, direction triggers and speed triggers can be combined into a multi-dimensional trigger control table, with operation and control then performed by combining the functions and states of the controlled device. The answer is certainly yes. On a 3D capacitive touch screen, the two instructions of heavy directional trigger and light directional trigger, once the speed dimension is added, become light-fast, light-slow, heavy-fast and heavy-slow: four codes appear in one directional trigger, so four instructions can be executed. A technician can analogize 2-bit instruction codes and further combine long triggers, and so can very conveniently build a rich trigger table; the combinations in this table, from 1 bit up to multiple triggers and combined again with state and function, can satisfy all controlled functions and states of intelligent electronic equipment.
This specification also considers how an individual soldier can keep aiming and shooting during operations while still operating electronic equipment, rather than abandoning one of the two as today; how to realize this is described below. Firearms have a trigger surrounded by a ring called the trigger guard, whose purpose is to prevent the trigger from being pulled by mistake; firearms training usually strictly requires that the index finger not rest on the trigger or enter the trigger guard except when aiming. Thus when holding a firearm in combat, whether rifle or pistol, both hands are occupied by the grip, while the most sensitive fingertip, the trigger index finger, is idle outside the trigger guard. If an area reachable by that flexible index finger is arranged at the front of the trigger position, and a touch device using the method is wirelessly interconnected with the soldier's intelligent electronic equipment, the touch technology can easily control that equipment, whether for voice communication or for data and instructions, and the soldier's information reports can be completed through the soldier's earphone and the touch device. In individual combat, a combatant can then keep holding the firearm (other than a pistol) tightly with both hands without delaying either combat or equipment control, whereas today a combatant who communicates or uses electronic equipment must control devices such as a walkie-talkie with one hand and may lose a fleeting combat opportunity.
Therefore, after adopting the technology of the method, the informationized interaction capability of individual combat can be effectively improved while avoiding the negative effects possible today: at all times other than the moment of shooting, the soldier can operate the intelligent electronic equipment to interact with a commander while keeping both hands on the gun.
The implementation of the method in different scenes has been described with several embodiments and figures; the scenes in these embodiments have no effective solution in reality today. The method can thus, in different scenes and with sensors of different attributes, well control intelligent electronic equipment, intelligent terminals, smartphones and the like, making their users safer and freer in various scenes.
The method is also a very effective solution for the blind. Today, the accessibility system on an intelligent terminal such as an Apple device uses on-screen multi-touch technology, forming trigger instructions from complex gestures such as single tap, double tap, two-finger and three-finger triggers; when a blind person goes out on a rainy day and the screen is wet, that technology fails completely. While walking, a blind person usually needs both hands to trigger the screen and so must stop. With the method, use of the electronic equipment is affected neither by rain nor by being on the move.
In the method, the sensor can be split off from, or located on, the intelligent electronic equipment; a split device is interconnected with the controlled target intelligent electronic equipment in wired, wireless or other form, and the intelligent electronic equipment connected to the split sensor or sensor device is controlled through it.
The method can also be used to read, send and receive instant messages while driving, exercising or walking. For example, in a chat group, suppose "·" is defined as reading the next message and "-" as reading the previous one (text messages can be spoken by TTS, voice messages played directly, and picture messages described in text by a background AI and then read by TTS), and an ultra-long trigger as leaving the group. In this example, as shown in Fig. 2, the ultra-long trigger is longer than the end time t2 of a long trigger, so no trigger interference arises; when a trigger longer than t2 occurs, the user can, under the prompt, leave the group. The chat group list is usually sorted by the time the latest message was received, so after exiting, the pointer is at the first position of the list. To chat with another group, "·" is triggered for the next group and "-" for the previous group; when the pointer reaches a group, TTS speaks its name, for example "high school classmates", and the group can then be entered with "·". This trigger combination can be realized with the proximity sensor on today's smartphones, and the method can free many bowed heads. The method therefore makes the user of intelligent electronic equipment safer and freer in various scenes.
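A hedged sketch of the message-reading triggers just described (the class, message texts and trigger symbols are assumptions for illustration; real output would go to TTS rather than print):

```python
# "·" reads the next message in the group, "-" re-reads the previous one.
class GroupReader:
    def __init__(self, messages):
        self.messages = messages
        self.pos = -1  # pointer sits before the first unread message

    def on_trigger(self, code):
        if code == "·":
            self.pos = min(self.pos + 1, len(self.messages) - 1)
        elif code == "-":
            self.pos = max(self.pos - 1, 0)
        return self.messages[self.pos]  # this text would be spoken via TTS

r = GroupReader(["msg 1", "msg 2", "msg 3"])
print(r.on_trigger("·"))  # msg 1
print(r.on_trigger("·"))  # msg 2
print(r.on_trigger("-"))  # msg 1
```

The same two symbols navigate the group list once the user has exited a group, which is why only a tiny trigger alphabet is needed for the whole messaging flow.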
The method has rich usable scenes and controllable functions. It can be applied to any CPU-controlled electronic equipment, CPU-controlled mechanical equipment, wearable equipment and intelligent electronic equipment, and is convenient for operating intelligent electronic equipment, APPs and even instant-messaging APPs with relatively many or complex functions in situations such as motion, driving and other occasions where eyes and hands are not free. The triggering of the method uses a sensor's direction, speed, pressure and trigger duration, so it is multi-dimensional triggering. Tables 1 to 4 list trigger codes only for the simplest cases and do not represent all trigger codes using the method; the trigger codes are orthogonal to the states and functions of the controlled APP or intelligent electronic equipment, so the method is a multi-dimensional trigger instruction system. Tables 1 to 4 can only list the basic trigger codes under the method, and more complex trigger codes formed on top of the basic triggers are merely specific embodiments, only one implementation of the method.
The invention relates to a method for operating and controlling an intelligent terminal or intelligent electronic device. On the basis of a trigger code table formed from short, long and longer durations, the directional triggers, pressure triggers, direction-plus-speed triggers, or direction-plus-speed-plus-pressure triggers of a sensor or sensor group are combined to form a 2- to multi-dimensional trigger coding table; the codes are combined with the functions and states of the controlled intelligent electronic device or APP to form trigger instructions, and operation and control of the intelligent electronic device are realized by triggering the sensor under the corresponding functions and states. The user generally performs instruction triggering guided by perceivable prompt information such as voice, sound, TTS, vibration and visual information.

Claims (13)

1. A method of controlling an intelligent electronic device, comprising:
identifying triggering by monitoring a sensor or a sensor group capable of sensing the triggering duration of an external object, and identifying and executing a corresponding instruction and a corresponding function according to the running state and the function of controlled equipment, a system or APP and an identified triggering code;
wherein:
the triggering includes short, long and longer triggering;
triggering to form a time length triggering code;
combining the trigger code with the state and function of the target controlled equipment to form an instruction;
identifying the input of a variable length trigger or a multi-bit trigger by adopting an instruction window;
in addition, longer-duration triggering is also used for function switching, and when the triggering duration is longer than the long-duration triggering duration, the functions are selected by releasing triggering under prompting;
the function corresponding to the selection interval in which the trigger release time falls is selected or executed;
the selection interval consists of a prompt duration and a wait user reaction duration.
2. The method of claim 1, wherein the sensor or sensor group is contact or contactless.
3. The method of claim 1, wherein the sensor or sensor group comprises a sensor and a clock, forming a circuit and system for monitoring the trigger pulse width.
4. The method of claim 1, wherein the instruction window further comprises an instruction window application.
5. The method of claim 1, wherein the sensor or sensor group may be arranged on a split electronic device or on the intelligent electronic device itself; a split electronic device is connected to the controlled target intelligent electronic device in a wired or wireless manner; the intelligent electronic device connected to the split electronic device containing the sensor or sensor group is controlled by that split device.
6. The method of claim 1, wherein the prompt comprises voice, sound, TTS, and vibration; the user of the intelligent electronic device judges the function and state of the device according to the prompt.
7. The method of claim 1, wherein the encoding uses count encoding when there is no need to distinguish long triggers from short triggers.
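Claim 7's count encoding can be sketched as follows: when long and short triggers need not be distinguished, only the number of triggers that fall inside an instruction window matters. The window length is an assumed value for illustration.

```python
# Hypothetical sketch of claim 7's count encoding. The window length is an
# illustrative assumption, not a value from the patent.

WINDOW = 1.0  # seconds after the first trigger during which presses are counted

def count_code(timestamps):
    """Count the triggers that fall inside the instruction window opened by
    the first trigger; the count itself is the code."""
    if not timestamps:
        return 0
    start = timestamps[0]
    return sum(1 for t in timestamps if t - start <= WINDOW)

print(count_code([0.0, 0.3, 0.6]))  # three presses inside the window -> 3
```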
8. The method of claim 1, wherein the intelligent electronic device comprises a CPU and a program control function.
9. An intelligent electronic device, comprising:
a CPU and a sensor or sensor group capable of sensing trigger duration, the device implementing the method of controlling an intelligent electronic device as claimed in any one of claims 1 to 7.
10. The intelligent electronic device of claim 9, wherein a call-for-help function is entered, under prompting, via a longer-duration trigger.
11. An earphone, comprising:
a CPU and a sensor or sensor group capable of sensing trigger duration, the earphone implementing the method of controlling an intelligent electronic device as claimed in any one of claims 1 to 7.
12. The headset of claim 11, wherein a call-for-help function is entered, under prompting, via a longer-duration trigger.
13. The headset of claim 11, wherein graded calls for help are triggered using codes that contain the number of triggers.
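A minimal sketch of claim 13's graded help-seeking: the trigger count selects the urgency grade. The grade names and the count-to-grade mapping are assumptions for illustration only.

```python
# Hypothetical mapping of a trigger count to a help-seeking grade (claim 13).
# Grade names and thresholds are illustrative assumptions.

HELP_GRADES = {1: "notify_contact", 2: "send_location", 3: "call_emergency"}

def help_action(trigger_count):
    """Pick the help grade for a given number of triggers; counts above the
    highest defined grade fall back to the most urgent action."""
    return HELP_GRADES.get(trigger_count, "call_emergency")

print(help_action(2))  # -> send_location
```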
CN202011427340.4A 2017-11-02 2018-10-25 Method for controlling intelligent electronic equipment, intelligent electronic equipment and intelligent earphone Active CN112367431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011427340.4A CN112367431B (en) 2017-11-02 2018-10-25 Method for controlling intelligent electronic equipment, intelligent electronic equipment and intelligent earphone

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201711065398 2017-11-02
CN2017110653987 2017-11-02
CN201811253657.3A CN109462690B (en) 2017-11-02 2018-10-25 Method for controlling intelligent terminal or intelligent electronic equipment
CN202011427340.4A CN112367431B (en) 2017-11-02 2018-10-25 Method for controlling intelligent electronic equipment, intelligent electronic equipment and intelligent earphone

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811253657.3A Division CN109462690B (en) 2017-11-02 2018-10-25 Method for controlling intelligent terminal or intelligent electronic equipment

Publications (2)

Publication Number Publication Date
CN112367431A true CN112367431A (en) 2021-02-12
CN112367431B CN112367431B (en) 2023-04-14

Family

ID=65608482

Family Applications (7)

Application Number Title Priority Date Filing Date
CN201811253657.3A Active CN109462690B (en) 2017-11-02 2018-10-25 Method for controlling intelligent terminal or intelligent electronic equipment
CN202011425531.7A Active CN112367430B (en) 2017-11-02 2018-10-25 APP touch method, instant message APP and electronic device
CN202011462704.2A Active CN112653786B (en) 2017-11-02 2018-10-25 Intelligent electronic equipment hidden help-seeking method, intelligent electronic equipment and earphone
CN202011425457.9A Active CN112351140B (en) 2017-11-02 2018-10-25 Video control method and intelligent electronic equipment
CN202011427340.4A Active CN112367431B (en) 2017-11-02 2018-10-25 Method for controlling intelligent electronic equipment, intelligent electronic equipment and intelligent earphone
CN202011425532.1A Active CN112351141B (en) 2017-11-02 2018-10-25 Intelligent electronic equipment, alpenstock and touch device
CN202011433927.6A Withdrawn CN112261217A (en) 2017-11-02 2018-10-25 Method for controlling intelligent electronic equipment and intelligent electronic equipment

Family Applications Before (4)

Application Number Title Priority Date Filing Date
CN201811253657.3A Active CN109462690B (en) 2017-11-02 2018-10-25 Method for controlling intelligent terminal or intelligent electronic equipment
CN202011425531.7A Active CN112367430B (en) 2017-11-02 2018-10-25 APP touch method, instant message APP and electronic device
CN202011462704.2A Active CN112653786B (en) 2017-11-02 2018-10-25 Intelligent electronic equipment hidden help-seeking method, intelligent electronic equipment and earphone
CN202011425457.9A Active CN112351140B (en) 2017-11-02 2018-10-25 Video control method and intelligent electronic equipment

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202011425532.1A Active CN112351141B (en) 2017-11-02 2018-10-25 Intelligent electronic equipment, alpenstock and touch device
CN202011433927.6A Withdrawn CN112261217A (en) 2017-11-02 2018-10-25 Method for controlling intelligent electronic equipment and intelligent electronic equipment

Country Status (1)

Country Link
CN (7) CN109462690B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110971255A (en) * 2019-11-29 2020-04-07 四川科道芯国智能技术股份有限公司 Wrist wearing equipment
CN113038285B (en) * 2021-03-12 2022-09-06 拉扎斯网络科技(上海)有限公司 Resource information playing control method and device and electronic equipment
CN114040286A (en) * 2021-10-28 2022-02-11 歌尔科技有限公司 True wireless earphone and true wireless earphone system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104991645A (en) * 2015-06-24 2015-10-21 宇龙计算机通信科技(深圳)有限公司 Cursor control method and apparatus
CN105245688A (en) * 2015-08-27 2016-01-13 广东欧珀移动通信有限公司 Communication event processing method and mobile terminal
CN105302432A (en) * 2014-06-09 2016-02-03 宏达国际电子股份有限公司 Portable device and operation method thereof
US9612741B2 (en) * 2012-05-09 2017-04-04 Apple Inc. Device, method, and graphical user interface for displaying additional information in response to a user contact
CN106686202A (en) * 2015-06-04 2017-05-17 单正建 Control method of intelligent terminal/mobile phone
CN107171945A (en) * 2017-06-29 2017-09-15 珠海市魅族科技有限公司 Image information processing method and device, computer installation and readable storage medium storing program for executing

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080293374A1 (en) * 2007-05-25 2008-11-27 At&T Knowledge Ventures, L.P. Method and apparatus for transmitting emergency alert messages
CN100589069C (en) * 2008-09-23 2010-02-10 杨杰 Computer control method and apparatus
KR101545582B1 (en) * 2008-10-29 2015-08-19 엘지전자 주식회사 Terminal and method for controlling the same
KR101092592B1 (en) * 2009-10-14 2011-12-13 주식회사 팬택 Mobile communication terminal and method for providing touch interface thereof
US20110221684A1 (en) * 2010-03-11 2011-09-15 Sony Ericsson Mobile Communications Ab Touch-sensitive input device, mobile device and method for operating a touch-sensitive input device
CN102043486A (en) * 2010-08-31 2011-05-04 苏州佳世达电通有限公司 Operation method of hand-held electronic device
WO2013169842A2 (en) * 2012-05-09 2013-11-14 Yknots Industries Llc Device, method, and graphical user interface for selecting object within a group of objects
JP6103970B2 (en) * 2013-02-08 2017-03-29 キヤノン株式会社 Information processing apparatus and information processing method
GB2516820A (en) * 2013-07-01 2015-02-11 Nokia Corp An apparatus
CN104598067B (en) * 2014-12-24 2017-12-29 联想(北京)有限公司 Information processing method and electronic equipment
US9904409B2 (en) * 2015-04-15 2018-02-27 Samsung Electronics Co., Ltd. Touch input processing method that adjusts touch sensitivity based on the state of a touch object and electronic device for supporting the same
KR102508147B1 (en) * 2015-07-01 2023-03-09 엘지전자 주식회사 Display apparatus and controlling method thereof
KR20170016752A (en) * 2015-08-04 2017-02-14 엘지전자 주식회사 Mobile terminal and method for controlling the same
CN105141761A (en) * 2015-08-10 2015-12-09 努比亚技术有限公司 State switching device of mobile terminal, mobile terminal and state switching method
CN106817475B (en) * 2015-11-27 2019-02-19 单正建 It is a kind of based on intelligent terminal and its attached or associate device hidden method for seeking help
CN105511784B (en) * 2015-12-02 2019-05-21 北京新美互通科技有限公司 A kind of data inputting method based on pressure detecting, device and mobile terminal
CN105446540A (en) * 2015-12-31 2016-03-30 宇龙计算机通信科技(深圳)有限公司 Character input method and device
CN105719426A (en) * 2016-01-25 2016-06-29 广东小天才科技有限公司 Method and device for sorted calling for help
CN107544295A (en) * 2016-06-29 2018-01-05 单正建 A kind of control method of automobile equipment
CN107025019B (en) * 2017-01-12 2020-06-16 瑞声科技(新加坡)有限公司 Virtual key interaction method and terminal equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9612741B2 (en) * 2012-05-09 2017-04-04 Apple Inc. Device, method, and graphical user interface for displaying additional information in response to a user contact
CN105302432A (en) * 2014-06-09 2016-02-03 宏达国际电子股份有限公司 Portable device and operation method thereof
CN106686202A (en) * 2015-06-04 2017-05-17 单正建 Control method of intelligent terminal/mobile phone
CN104991645A (en) * 2015-06-24 2015-10-21 宇龙计算机通信科技(深圳)有限公司 Cursor control method and apparatus
CN105245688A (en) * 2015-08-27 2016-01-13 广东欧珀移动通信有限公司 Communication event processing method and mobile terminal
CN107171945A (en) * 2017-06-29 2017-09-15 珠海市魅族科技有限公司 Image information processing method and device, computer installation and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN112367430B (en) 2023-04-14
CN112351140A (en) 2021-02-09
CN112351140B (en) 2023-04-14
CN112653786B (en) 2023-03-14
CN112351141B (en) 2023-04-14
CN112367431B (en) 2023-04-14
CN112367430A (en) 2021-02-12
CN112261217A (en) 2021-01-22
CN109462690A (en) 2019-03-12
CN109462690B (en) 2021-01-05
CN112351141A (en) 2021-02-09
CN112653786A (en) 2021-04-13

Similar Documents

Publication Publication Date Title
CN109462690B (en) Method for controlling intelligent terminal or intelligent electronic equipment
US20180063308A1 (en) System and Method for Voice Recognition
CN105900042B (en) Redirect the method and apparatus of audio input and output
CN105204846B Display method, device and terminal device for video pictures in multi-person videos
CN104902081B (en) Control method of flight mode and mobile terminal
CN110164469A (en) A kind of separation method and device of multi-person speech
CN110164440A (en) Electronic equipment, method and medium are waken up based on the interactive voice for sealing mouth action recognition
WO2016192622A1 (en) Control method for smart terminal/mobile phone
CN105988768A (en) Intelligent equipment control method, signal acquisition method and related equipment
CN108670260A (en) A kind of human fatigue detection method and mobile terminal based on mobile terminal
CN106325479A (en) Touch response method and mobile terminal
US20220217308A1 (en) Camera Glasses for Law Enforcement Accountability
CN105049802B (en) A kind of speech recognition law-enforcing recorder and its recognition methods
CN105718771A (en) Private mode control method and system
CN109155098A (en) Method and apparatus for controlling urgency communication
CN109240639A (en) Acquisition methods, device, storage medium and the terminal of audio data
KR20160132408A (en) Devices and methods for facilitating wireless communications based on implicit user cues
CN106200909A (en) Event-prompting method, device and mobile terminal
CN102752454B (en) Mobile phone warning method based on voice recognition
CN103257703B (en) A kind of augmented reality device and method
CN106027801A (en) Method and device for processing communication message and mobile device
CN109857282B (en) Touch device, intelligent terminal and individual soldier system
CN104821985A (en) Calling control method and device
CN111698600A (en) Processing execution method and device and readable medium
US11662804B2 (en) Voice blanking muscle movement controlled systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant