CN112351141B - Intelligent electronic equipment, alpenstock and touch device - Google Patents

Intelligent electronic equipment, alpenstock and touch device

Info

Publication number
CN112351141B
CN112351141B (granted publication of application CN202011425532.1A)
Authority
CN
China
Prior art keywords
trigger
electronic equipment
intelligent electronic
sensor
controlled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011425532.1A
Other languages
Chinese (zh)
Other versions
CN112351141A (en)
Inventor
单正建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202011425532.1A
Publication of CN112351141A
Application granted
Publication of CN112351141B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72406 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by software upgrading or downloading
    • H04M 1/72409 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Telephone Function (AREA)

Abstract

The invention relates to an intelligent electronic device that uses a sensor or sensor group to build a two- to multi-dimensional trigger coding table: on the basis of a coding table formed from short, long and longer trigger durations, pressure-type triggering of the sensor or sensor group is added. The codes are combined with the functions and states of the controlled intelligent electronic device, or of an APP running on it, to form trigger instructions, so that triggering the sensor under the corresponding function and state operates and controls the device. The user is generally guided to trigger instructions by perceptible prompts such as voice, sound, TTS, vibration and visual information.

Description

Intelligent electronic equipment, alpenstock and touch device
Technical Field
The invention relates to a sensor or sensor group that can sense the duration of triggering by an external object, combined with sensing of trigger direction, trigger speed or trigger pressure. By monitoring and identifying external triggers, a multi-dimensional trigger coding table is formed in which trigger duration is orthogonal to one or two of direction-type, speed-type and pressure-type triggering. This multi-dimensional trigger coding table is then combined with the functions and states of the target controlled electronic device, or with the functions, states and usage scenarios of an APP running on it, to form instruction codes; when the corresponding trigger is detected under the corresponding state and function, the corresponding instruction or function is executed. A user of an intelligent terminal or electronic device can thus operate and control it through contact or non-contact triggering of a sensor or sensor group, guided by prompts the user can perceive, such as voice, sound, vibration and TTS (the aim is not only to control certain functions of the intelligent electronic device but to operate it effectively). The method is particularly suited to non-static or quasi-static scenarios such as driving or sport, in which intelligent terminals and smartphones cannot be controlled with eyes and hands, and it offers an effective solution to the fact that existing intelligent electronic devices cannot be used safely and conveniently during movement and driving.
Background
People usually control electronic devices by "keys", which may be physical keys or "keys" defined on a touch screen. As intelligent electronic devices, intelligent terminals and smartphones have grown more capable, their functions have become increasingly complex, so users must locate the "key" with their eyes and press it with their fingers, controlling the device in the manner they are accustomed to. Traffic-accident statistics from major countries indicate that using a mobile phone while driving is one of the leading causes of accidents. In many sports scenarios people cannot do without intelligent electronic devices, yet controlling them still requires coordinating eyes and fingers, forcing the user to slow down, change or interrupt the original motion. For people who work outdoors, such as couriers, rain wets the screen of the smartphone or terminal in their hands and makes these devices impossible or inconvenient to operate. In cold weather, a user wearing gloves must take them off and pull out the device to operate it. The industry has no practical solution for controlling intelligent electronic devices in these scenarios. Voice recognition can in principle be used, but it cannot meet normal control requirements under outdoor wind noise or vehicle noise while driving, and an individual-soldier system cannot rely on voice recognition in combat. Voice recognition is limited by signal-to-noise ratio and usage scenario, and noise during motion and driving cannot be avoided, so a more practical technology and method is needed. The present method uses a device containing a sensor or sensor group connected to the intelligent electronic device by wire or wirelessly, or a sensor or sensor group integrated into the device (a built-in sensor or sensor group), to monitor triggering of the sensor; it forms a trigger code from the trigger direction, trigger speed, trigger strength and trigger duration, combines it (orthogonally) with the state and function of the intelligent electronic device to form trigger instructions, and the user triggers the sensor under prompts of voice, sound, vibration or other recognizable signals. When the corresponding trigger is detected together with the corresponding state and function, the corresponding instruction or function is executed. A simple touch technology suited to motion and driving can thus enter daily life and make intelligent electronic devices safer, freer and more convenient to use across many scenarios.
An intelligent electronic device usually contains a central processing unit (CPU) for control, and the user interacts with it through a human-computer interface to operate and control it. Intelligent electronic devices include smartphones, intelligent terminals, smart intercoms, smart watches, smart headsets, in-car control panels, and other electronic devices whose control systems contain a CPU or similar program-controlled components. The present method is an optimization and supplement, for more comprehensive scenarios, of the inventor's earlier invention WO2016192622A1 and granted patent CN2016103632799: it adds pressure-type and speed-type triggering and makes trigger speed or trigger pressure orthogonal to the coding method based on duration, or on duration and direction, so that the method fits more usage scenarios. For example, in the alpenstock (trekking pole) scenario of WO2016192622A1, adding the pressure dimension makes the pole markedly more usable than when it serves only for operation and control; in a military scenario, pressure-type trigger instructions would let a soldier control communications and other electronic equipment during individual combat without interfering with fighting, removing a restriction that exists in actual combat today.
The industry has explored voice recognition, lip-reading recognition and gesture radar. In scenarios involving motion, driving, all-weather use, or medium- and high-speed movement, voice recognition fails to reach a recognition rate that meets basic control requirements because of environmental noise and wind noise (caused by natural wind or by the speed of motion). Lip-reading requires good illumination and equipment positioned to capture lip movement, which is clearly infeasible while moving or running, and at night while driving illumination is lacking; lip-reading therefore suits the driving scenario better than voice recognition, but it still depends on in-vehicle lighting, and if the in-vehicle illumination at night is too strong it directly impairs observation of objects outside the vehicle and can cause accidents. Gesture radar is clearly unsuitable during running and other sports, because it is hard to keep the hands in the radar's detection area to make continuous gestures while moving, and in winter gloves would have to be removed first, so this technology cannot be adopted in many practical scenarios. Touch technology has been overlooked by the industry. The essence of the present method is to make several dimensions of sensor triggering orthogonal to the state and function of the controlled device, and to control specific functions or state switches by triggering the sensor under prompts of voice, sound and vibration, so that intelligent electronic devices can still serve people in scenarios where they cannot be used well today. In special environments such as military combat, technologies like voice recognition cannot be used in at least the individual-soldier link, whereas the present method lets combat personnel control individual-soldier electronic equipment such as communications without affecting combat; today, operating such equipment in actual combat occupies the hand that holds the weapon, which can cost a favourable opportunity or create unnecessary risk, a problem that has not yet been effectively solved.
Disclosure of Invention
To overcome the problem that current intelligent electronic devices force the user to rely on hands and eyes, the method uses a sensor or sensor group that can sense the direction, speed, strength and duration of triggering by an external object. By monitoring and identifying external triggers, it forms trigger codes from trigger duration, direction, speed and strength, and combines them (orthogonally) with the functions and states of intelligent terminals, smartphones and other intelligent electronic devices, or of APPs running on them, to form instruction codes. The method enables the user to control the intelligent electronic device through the sensor or sensor group under prompts of voice, sound, vibration and TTS, completing operations that today require eyes and fingers without the participation of eyes, fingers or even hands.
Various sensors can sense the direction of an external trigger, for example radar, gesture sensors, two or more proximity sensors separated by a distance (e.g. 5 cm), conductive-fibre fabrics, and touch screens. They can be divided into contact-triggered and non-contact-triggered sensors: a conductive-fibre fabric must be touched, whereas radar and gesture radar are triggered without contact.
Many sensors can sense speed. For example, with two high-precision proximity sensors spaced 5 cm apart, if the first is triggered at time t1 and the second at time t2, the speed is 5 cm/(t2-t1). For a radar sensor of sufficient precision, two or more receivers of the reflected wave allow the speed of the triggering object to be calculated. Approach speed can likewise be measured, for example by a high-precision proximity sensor or a laser-pulse reflection sensor. On a touch pad or screen, moving a finger from point A (x1, y1) to point B (x2, y2) and dividing the distance by the elapsed time also yields a speed, so fast and slow can be distinguished; in music control, for example, a fast swipe can mean fast-forward and a slow swipe the next song, and combined with direction this yields fast-forward/fast-backward or next/previous song. This is a very useful touch method with contact or non-contact sensors, for example while cycling, or in a sports helmet for downhill skiing where trigger duration, direction and speed are orthogonal: the user can control a communication or entertainment player while wearing sports gloves, without needing non-contact triggering, whereas today a skier cannot operate an intelligent electronic device in motion.
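As an illustrative sketch only (the patent states the 5 cm/(t2-t1) principle but gives no code), the speed estimate from two proximity sensors spaced 5 cm apart might be computed as below; the function names and the fast/slow threshold are assumptions, not values from the specification.

SENSOR_SPACING_CM = 5.0
FAST_THRESHOLD_CM_PER_S = 25.0  # assumed cut-off between "fast" and "slow" swipes

def swipe_speed(t1: float, t2: float) -> float:
    """t1 and t2 are the trigger times (seconds) of the first and second sensor."""
    dt = t2 - t1
    if dt <= 0:
        raise ValueError("second sensor must trigger after the first")
    return SENSOR_SPACING_CM / dt  # cm per second

def classify_swipe(t1: float, t2: float) -> str:
    speed = swipe_speed(t1, t2)
    direction = "forward"  # sensor 1 hit first; the reverse order would give "backward"
    pace = "fast" if speed >= FAST_THRESHOLD_CM_PER_S else "slow"
    return pace + "-" + direction  # e.g. "fast-forward" could map to seek, "slow-forward" to next track

In the music-control example above, a fast forward swipe could map to seeking within a track and a slow one to skipping to the next track.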
Pressure sensors are normally used to measure changes in pressure. In the present method, light and heavy pressure values, together with the trigger duration, the trigger time and the relationship between triggers, can also be used to form a trigger code rather than merely to test pressure.
All sensors or sensor groups with such properties can measure trigger duration, so the method can form multi-dimensional trigger codes by making two or more of the trigger attributes (duration, direction, speed, pressure, and so on) orthogonal. For the method, any sensor based on acoustic, optical, electrical or magnetic fields, or any combination of them, can be used to control an intelligent electronic device, as long as it can accurately and sensitively provide the trigger duration, direction, pressure and speed, two or more of which are combined orthogonally into a trigger code that is then combined with the state and function of the controlled object.
Drawings
The method is further explained below with reference to the accompanying drawings.
FIG. 1: schematic diagram of the method.
Table 1.
FIG. 2: trigger-duration pulse patterns corresponding to the proximity-sensor example of Table 1.
FIG. 3: embodiment of controlling telephone calls with a proximity sensor on a fully wireless headset.
FIG. 4: embodiment of controlling music with a proximity sensor on a fully wireless headset.
FIG. 5: embodiment of hidden SOS using a proximity sensor on a fully wireless headset.
Table 2.
FIG. 6: embodiment of video control on a treadmill based on a direction-sensing sensor.
Table 3.
Table 4.
Detailed Description
The embodiments and specific parameters such as times and codes described below do not represent all embodiments consistent with the present method; they are merely examples consistent with the method as claimed. Either an independent sensing device connected to the intelligent electronic device by wire or wirelessly, or a sensor or sensor group integrated into the device, monitors triggering and forms a trigger code from the trigger direction, speed, strength and duration, or from two or more of these made orthogonal. Combined with the state and function of the intelligent electronic device, or with the function, state and usage scenario of an APP running on it, this forms trigger instructions. The user triggers the sensor under prompts of voice, sound, TTS or vibration, and when the corresponding trigger code is detected together with the corresponding state and function, the corresponding instruction or function is executed, allowing the intelligent electronic device to be used in more scenarios than it is limited to today.
As shown in fig. 1, S101 initializes the sensor. The purpose of initialization is to make the sensor work appropriately for the target scenario, for example by setting the sampling frequency, trigger distance and trigger threshold. Initialization is generally performed when the device starts or when the sensor is first invoked; once it has succeeded, this step does not need to be repeated.
S102 monitors the sensor, i.e. watches for its triggering. When the sensor is integrated with the intelligent electronic device, monitoring is usually done by the operating system or by the device's circuits and programs. If the arrangement is split, with a separate sensor-containing device controlling the main intelligent electronic device over a wireless or wired link, then the circuits or programs in that device must do the monitoring, and the monitored data are fed back to the intelligent electronic device over the link for the next processing step.
Step S103 identifies the trigger. When the user wants to control the intelligent electronic device through the sensor, S102 detects the trigger and S103 must analyse whether it matches a previously defined trigger code: the trigger itself, its duration, the orthogonal relationship of each dimension, whether it falls within the corresponding instruction window, and so on, and it must discard false triggers. In this method S103 may need several parallel timers or clocks to analyse and identify the trigger, and the instruction corresponding to the trigger is executed either during the trigger's instruction window or after the window ends.
Step S104 identifies the instruction. Once the trigger code has been identified, S104 must determine, from the state and function of the intelligent electronic device at that moment combined with the trigger code identified in S103, whether it corresponds to a predefined legal instruction code; in other words, in this method the state and function are orthogonal to the trigger code. S104 therefore needs two inputs, one from S107 and one from S103, and when it confirms an instruction code, step S105 executes the instruction or the corresponding function.
Step S105 executes the instruction or the function corresponding to it. After execution the state and function of the intelligent electronic device usually change: for example, the device may be in the music function in the playing state, but if the trigger instruction received is "pause", then after S105 the music function's state changes from playing to paused.
Step S106 represents the running state and function of the controlled device, system or APP, for example the telephone function in the calling/dialling state. In an intelligent electronic device the operating system generally manages all its functions and states and links them when they change: for example, if music is playing before a call comes in, the music pauses automatically when the phone rings and resumes automatically after the call is hung up. A smartphone operating system defines many state flags for system functions or for APPs to query. For states and functions defined by an APP itself, the APP must manage and monitor them and react when they change. For example, while the system plays music, S106 indicates the music function in the playing state, and when playback is paused it indicates the music function in the paused state. The hidden SOS function, by contrast, is not currently provided by intelligent terminals or smartphones; only a memory-resident APP can manage it. When the hidden SOS function is entered from the music state, music pauses by default, the change of function state is monitored through S107, and the person seeking help is informed through S108 that the hidden SOS function, or its current state, has been entered.
Step S107 covers the case where the state to be monitored is not a function, state or function-state combination defined by the system itself; it supplies the monitoring result to S104 and the corresponding function and state data to S108. When all the functions and states involved are standard ones of the intelligent electronic device, S106 and S107 are usually handled together by the operating system (typically by different functional modules); when a required function or state is not covered by the operating system, it must be managed and monitored by a program outside it.
Step S108 prompts the user of the intelligent electronic device with sound, voice, vibration or other signals the user can recognise. For example, while music is playing, the music itself is the state prompt; during a call, the ring tone is the state prompt; when menu functions are switched, the menu item is announced through TTS (text to speech); and in the hidden SOS function, the person seeking help is informed by vibration, so that a voice prompt does not reveal the call for help and worsen the danger. Some prompts in S108 therefore follow system rules and others must be specially defined, based on the data and scenario provided by S107, using TTS, sound, vibration or a combination of them. Because the human-computer interaction of this method is oriented to non-static scenarios, whereas traditional GUI interaction is oriented to static ones, there are many more variable factors and more prompting methods are needed: for example, the TTS of a phone system supports only a limited set of languages, so a pre-recorded voice may have to be played, or menu text may be sent over the Internet to a server providing TTS service and the returned sound file played back. Combining prompts with the scenario in this way is a characteristic of the method that the human-computer interaction of multi-touch (a 2004 invention) or of mouse-keyboard-plus-GUI (a 1963 invention) does not have, because those rest on the basic premise that the user can rely on eyes and fingers, which does not hold in the scenarios addressed here.
In step S109, the user of the intelligent electronic device learns its current function and state from the prompt in S108 and triggers the sensor according to what the user wants, switching the device to the requested function and state.
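Read as a whole, steps S101 to S109 amount to an event loop. The sketch below is only one possible reading of fig. 1, not the patent's implementation: the object names (sensor, device, prompter), the classify callback and the command_table lookup are assumptions introduced for illustration.

def run_touch_controller(sensor, device, prompter, classify, command_table):
    """Minimal sketch of the fig. 1 flow; `classify` maps a raw trigger to a trigger code."""
    sensor.initialize()                            # S101: sampling rate, trigger distance, thresholds
    while True:
        trigger = sensor.wait_for_trigger()        # S102: monitor the sensor (locally or over a link)
        code = classify(trigger)                   # S103: raw trigger(s) -> trigger code, drop false triggers
        if code is None:
            continue
        state = device.current_state()             # S106/S107: state and function of the controlled device or APP
        command = command_table.get((state, code)) # S104: trigger code made orthogonal to state and function
        if command is None:
            continue                               # not a legal instruction in this state
        device.execute(command)                    # S105: run the instruction; the state usually changes
        prompter.announce(device.current_state())  # S108: sound, voice, TTS or vibration feedback
        # S109: the user reacts to the prompt with the next trigger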
It should be noted that in step S103, when a trigger occurs it may itself be prompted through S108: for example, when a short trigger is recognised, a short "drop" sound is emitted, and when a long trigger is recognised, S103 notifies S108 to emit a long "dash" sound. Whether such prompts are used depends on the scenario and on what aids the human-computer interaction. Short triggers, long triggers and longer-duration triggers are listed in Table 1.
Table 1: trigger code table exemplified by a proximity sensor
Table 1 is a duration-based trigger code table exemplified by a proximity sensor. Such a table can be produced by any circuit or system that can monitor trigger pulse width with a sensor plus a clock, for example a capacitive screen combined with a clock circuit, or by equivalent means such as a key switch plus a clock; in the field of intelligent electronics, however, physical keys often bring a degraded experience, such as mechanical noise and pressing discomfort in smart headsets. In a smartphone, the proximity sensor is used during a call to switch the screen on and off, preventing the face or ear from falsely triggering the touch screen and saving power.
In a smartphone the proximity sensor typically reports approach as 1 and departure as 0, with a typical detection distance of 5 cm. Let the start time of a trigger be t1 and its end time t2. We define t2-t1 <= 450 ms as a short trigger, i.e. a trigger pulse width t2-t1 <= Δt1 in P101 of fig. 2, with Δt1 = 450 ms; and 450 ms < t2-t1 <= 1200 ms as a long trigger, i.e. the P102 pulse in fig. 2, Δt1 < t2-t1 <= Δt2, with Δt2 = 1200 ms. Fig. 2 shows the corresponding pulse waveforms P101 and P102. If we write the short trigger as "·" (dot) and the long trigger as "-" (dash), this is the coding basis of Morse code, so the trigger code can be expressed in Morse form, as in Table 1; it can equally be expressed in binary, for example 0 for a short trigger and 1 for a long trigger, as the binary form in Table 1 shows. Both the Morse and the binary forms derive from the same short/long duration code table, and because short and long triggers are used over multiple bits, a two-dimensional code table arises naturally; whether it is written in binary or Morse form is determined by the scenario. Note that when very few instructions need to be controlled, or when under a certain state or function of the controlled device there is no need to distinguish long from short triggers, the trigger code can simply be a count of triggers; the hidden-SOS embodiment later in this specification explains why counting rather than short and long triggers is used there. The longer-duration trigger P103 in Table 1 is normally a trigger exceeding the upper limit of the long trigger, i.e. longer than Δt2 (1200 ms in this example). It is also a one-bit instruction, but it is usually used for state switching or for specific functions such as voice input on a walkie-talkie or recording: for example, recording starts once the trigger exceeds the longer-duration threshold of 1200 ms (the long-trigger upper limit) and ends when the trigger is released (T_end in P103). If, in a given state or function, there are no short or long trigger instructions and only the longer-duration trigger is needed, the 1200 ms threshold can be dropped. The threshold exists to avoid confusion between long and longer triggers: for example, a 900 ms hold is a long trigger, and without the threshold it might be taken as a longer-duration trigger and the wrong instruction executed; when no such distinction is needed, i.e. when the state has no short or long triggers besides the longer-duration one, a no-threshold policy can be used, the special case noted as t1 = t2 in P103.
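A minimal sketch of the Table 1 / fig. 2 duration coding follows, assuming the 450 ms and 1200 ms thresholds given above; the function names are illustrative and not taken from the patent.

DELTA_T1 = 0.45   # short-trigger upper bound, 450 ms
DELTA_T2 = 1.20   # long-trigger upper bound, 1200 ms

def classify_pulse(t_start: float, t_end: float) -> str:
    """Classify one trigger pulse by its width."""
    width = t_end - t_start
    if width <= DELTA_T1:
        return "."        # short trigger ("drop"), binary 0
    if width <= DELTA_T2:
        return "-"        # long trigger ("dash"), binary 1
    return "LONGER"       # longer-duration trigger, e.g. state switch or recording

def pulses_to_code(pulses) -> str:
    """Concatenate successive short/long pulses into a Morse-like trigger code."""
    return "".join(classify_pulse(t1, t2) for t1, t2 in pulses)

For example, pulses_to_code([(0.0, 0.3), (1.0, 1.8)]) yields ".-", the same code as binary "01".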
If "·" is regarded as 0 and "-" is regarded as 1, it is a multi-bit binary code, and if we don't care whether the trigger is "·" or "-", i.e. do not distinguish between long and short triggers, especially in a simple control function scenario, it can use times to form the trigger code. However, a more ingenious coding mode is adopted, namely the relation between the triggering time and the triggering duration and the adjacent triggering is formed to trigger the codes, and the codes are selected to be used for triggering according to the scenes.
If the duration t2-t1 is given other Δt values for a specific scenario, multi-level codes can be formed from Δt1, Δt2, Δt3, and so on. Table 1 does not list such multi-level codes, but in practice the method can organise the code table flexibly according to the scenario; Table 1 lists only the most basic elements of the trigger codes.
From "·" and "-" alone, the four two-bit codes "··", "·-", "-·" and "--" are obtained very simply; adding the one-bit codes "·" and "-" and the longer-duration trigger gives at most seven trigger codes within two bits. When variable-length trigger codes are used in a given state or function, i.e. when 1-bit, 2-bit or even 3-bit instructions coexist in one state, a trigger-driven instruction window must be set, and the trigger code is entered within that window. The most intuitive example: in state A, the first trigger opens an instruction window of 3 seconds; if 0 and 0 are entered within the 3 seconds, the instruction "00" is executed, and if only 0 is entered, the instruction "0" is executed. Without an instruction window, the input intended as "00" would instead execute the "0" instruction once and then execute "0" again, not the instruction corresponding to "00". Since the instruction window is usually tied to the state and function, the corresponding operation is executed during or after the window period. If the user enters "00" within, say, 2 seconds of the 3-second window and the current function accepts at most two bits, the system need not wait for the 3 seconds to elapse: it executes the instruction as soon as "00" is received and closes the window. Conversely, if a single "0" is entered and nothing follows, the instruction is executed when the window closes (or slightly earlier, after a value subtracted from the 3 seconds) rather than after a full 3-second wait. The instruction window may also be a parallel window opened after each trigger: for example, a window of 800 ms is opened after a trigger; if another trigger occurs within it, input continues, and if not, the instruction input is finished. If there is a next trigger and the state allows more bits, another 800 ms window is opened after the second trigger to decide whether the instruction has ended; if the function allows only two bits, the second trigger need not open a further window. These are just two examples of instruction windows, and they are identical in essence: whether the instruction has ended is judged by examining triggers within a trigger-driven time limit. Another mode of the same nature is also evident: open a window from t1 or t2 of the first trigger, sized either according to the longest instruction or according to the maximum inter-trigger gap plus the longest trigger duration.
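One way to read the variable-length instruction window described above is sketched below, assuming a 3-second window opened by the first trigger and early execution once the longest code defined for the current state has been received; the numbers, names and the read_pulse interface are assumptions for illustration only.

import time

WINDOW_S = 3.0  # assumed instruction-window length

def collect_code(read_pulse, codes_for_state) -> str:
    """Collect a variable-length trigger code inside one instruction window.
    read_pulse(timeout) blocks until a pulse symbol ('.' or '-') or returns None on timeout;
    codes_for_state is the set of codes that are legal in the current state."""
    max_len = max(len(c) for c in codes_for_state)
    deadline = None
    code = ""
    while True:
        timeout = None if deadline is None else max(0.0, deadline - time.monotonic())
        pulse = read_pulse(timeout)
        if pulse is None:               # window expired: execute whatever was entered
            return code
        code += pulse
        if deadline is None:            # first trigger opens the 3-second window
            deadline = time.monotonic() + WINDOW_S
        if len(code) == max_len:        # longest legal code reached: close the window early
            return code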
Fixed-length instructions (for example, always 3-bit) are far harder to use in a touch scenario than variable-length instructions (trigger codes of 1 bit, 2 bits or more within one function or state). When variable-length trigger codes are used, the instruction window makes operation and control much more convenient; without it, multi-bit variable-length instructions could not be identified.
Fig. 2 further defines a longer-duration trigger P103, where T1 is the trigger start time and T2 is the threshold for judging that the trigger is a longer-duration one (in fact greater than Δt2 in fig. 2): if the trigger is still not released after this threshold, it can be judged to be a longer-duration trigger. When the trigger exceeds T2 the user is prompted to enter the "recording function", i.e. after T2 the user is informed of "recording" by sound, voice, TTS or vibration; if the user releases the trigger after hearing this, i.e. T_end falls between T2 and T3, the instruction to enter the recording function is executed and recording begins. In fig. 2, prompting starts at T2, and T3 marks the end of a further interval after the prompt, for example 2 seconds of human reaction time in which the user may select the function. If the user does not select it, i.e. does not release the trigger, the next function such as "SOS" is prompted once T3 is exceeded; releasing between T3 and T4 selects that function, and if the user still does not release, further functions continue to be offered, and so on. A longer-duration trigger can thus select among many functions and states, for example entering a "voice recognition" function so that, where the scenario permits, the intelligent electronic device is operated by voice. T_end in fig. 2 is the falling edge of the release: occurring between T2 and T3 it selects the function prompted after T2, and between T3 and T4 it selects the function or state prompted at T3. The prompt is sound, voice, TTS or vibration, and the prompt duration plus the user's reaction time together form the selection interval for each function: for example, at T2 playing the "recording" prompt takes 2 seconds plus 2 seconds of reaction, so T2 to T3 is 4 seconds; after T3 the "SOS" prompt takes 1 second, but since the function is important 4 seconds of reaction time are reserved, so T3 to T4 is 5 seconds; the interval into which T_end falls determines which function is selected and executed. It is worth mentioning that if only one function or state needs to be switched, the prompt (sound, voice, TTS, vibration) may be used or omitted, because the user knows that a longer-duration trigger unambiguously executes one instruction or reaches one state or function. In that case the instruction can be executed as soon as T in P103 exceeds T2, without waiting for T_end (the release): the longer-duration trigger is then simply any trigger exceeding the threshold T2. For example, the longer-duration trigger may switch the intelligent electronic device on or off when the hold exceeds T2, say 3 seconds.
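A sketch of this "hold and release to select" behaviour of P103 follows, with the T2/T3/T4 prompt points treated as a list of (function, reaction-time) pairs; the menu entries, timings and function names below are assumptions chosen to mirror the example, not fixed by the patent.

import time

MENU = [("recording", 4.0), ("SOS", 5.0), ("voice recognition", 4.0)]  # assumed prompt slots
LONGER_THRESHOLD_S = 1.2   # Δt2: only holds longer than this reach the menu

def hold_to_select(is_still_pressed, announce):
    """is_still_pressed() polls the sensor; announce(name) issues the TTS/sound/vibration prompt."""
    start = time.monotonic()
    while is_still_pressed():
        if time.monotonic() - start >= LONGER_THRESHOLD_S:
            break
        time.sleep(0.02)
    else:
        return None                     # released too early: a short or long trigger, not P103
    for name, slot in MENU:             # T2, T3, T4 ... prompt points
        announce(name)                  # prompt duration is assumed to be inside `slot`
        slot_end = time.monotonic() + slot
        while time.monotonic() < slot_end:
            if not is_still_pressed():  # releasing inside this slot selects the function
                return name
            time.sleep(0.02)
    return None                         # menu exhausted without release; a real device might wrap around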
Our earlier invention WO2016192622A1 makes extensive use of instruction windows; this specification applies them more deeply. As P104 in fig. 2 illustrates, another trigger occurs within the trigger-driven parallel window period Δt3 and both triggers are short, with Δt3 = 800 ms as in the parallel-window example above: P104 defines "·S·", where S denotes a short inter-trigger interval, i.e. the time between the two triggers is less than 800 ms. P105, by contrast, defines two short triggers within Δt4 = 3 seconds whose interval is longer than Δt3: since one short trigger is at most 450 ms, two short triggers occupy at most 900 ms, so up to about 2100 ms of the 3-second window remains available as the interval, and the interval between the two pulses in the 3-second window satisfies Δt3 < t3-t2 <= 2100 ms. When a state or function uses the two instruction windows in parallel, with P104 nested inside P105, the behaviour is as follows: when only one trigger is received there is no ambiguity and the instruction is executed when the window period ends; when two triggers occur, "·S·" (fast rhythm) is executed as soon as the nested P104 window identifies it, while "··" (normal rhythm) is executed after the P104 window period is exceeded. In practice the user simply triggers twice either in quick succession (a very short interval) or at a normal pace, and the two rhythms execute different instructions. Taking two-bit codes as an example, this turns the four codes "··", "·-", "-·" and "--" into eight: "··", "·-", "-·", "--", "·S·", "·S-", "-S-" and "-S·". Applying two instruction windows in this way effectively shortens the number of trigger-instruction bits, or increases the number of controllable instructions, under a given function or state. Patent CN2016103632799 uses only the plain instruction window, not the extended trigger codes formed by this application of windows; the present method adds this triggering capability so that shorter instructions can serve more complex functions and scenarios.
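The rhythm distinction of P104/P105 amounts to classifying the gap between consecutive pulses against Δt3 = 800 ms. The small sketch below is only an illustration of that idea; the tuple layout is an assumption.

DELTA_T3 = 0.8  # gaps shorter than this are the fast "S" rhythm

def rhythm_code(p1, p2) -> str:
    """p1 and p2 are (t_start, t_end, symbol) tuples where symbol is '.' or '-'."""
    (s1, e1, sym1), (s2, e2, sym2) = p1, p2
    gap = s2 - e1
    return sym1 + ("S" if gap <= DELTA_T3 else "") + sym2

Thus rhythm_code((0.0, 0.3, "."), (0.5, 0.8, ".")) gives "·S·" (written ".S."), while the same two short triggers a second apart give "··", doubling the available two-bit codes.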
Fig. 2 thus defines short and long triggers, the longer-duration trigger, and the logical relationship between the given instruction-window time and the time between adjacent triggers, so that trigger duration and trigger correlation can better serve users of intelligent electronics in non-static scenarios; this is the foundation of the method. The embodiment uses the proximity sensor built into a smartphone, but other sensors or sensor groups that can sense trigger duration, such as a pressure sensor, can provide it equally well. Their pulses are not necessarily standard square waves (a pressure sensor may produce sawtooth or irregular pulses), but as long as the trigger start and end times can be monitored, an electronic device that uses this method and monitors trigger duration with a sensor, or with a key and a clock, to realise operation and control is simply another specific embodiment of the method, not a degraded function.
It must be emphasised that a trigger code formed from trigger durations is not yet a trigger instruction. The trigger code is combined with the state and function of the target controlled device, and with the scenario, to form the trigger instruction; the instruction is therefore formed by combining and making orthogonal several tables, such as a trigger code table with a function-and-state table, and by compiling the instructions that are most convenient to trigger under the corresponding scenario (so that the trigger instruction is specific to that scenario).
Since 2016 Apple has sold the fully wireless AirPods earphones, which are controlled by voice recognition; in practice, however, fully wireless earphones cannot be controlled well outdoors in wind, at higher speeds of movement, in noisy environments, or where the user cannot speak, so the product does not cover all scenarios. Other earphone manufacturers later followed with similar products and, seeing this weakness of the AirPods, added control keys to compensate. But fully wireless earphones are light and small, so the keys are necessarily tiny and essentially impossible to operate during exercise; using them means changing, interrupting or stopping the motion. These products also split the control scheme across the left and right ears, so part of the functionality is lost when only one ear is used, which is clearly a flawed design, and such techniques will not be adequate when future earphones act as a controller and feedback device for operating the smartphone in reverse. The present method can effectively cure these defects of fully wireless earphones typified by Apple's: it works in all scenarios and lets the earphone serve as controller and feedback device to operate the smartphone without occupying the user's hands and eyes.
Again taking a proximity sensor as the example, a proximity sensor or touch-sensitive surface (such as is contained in Apple AirPods) is installed in a fully wireless headset, and triggering it controls answering, rejecting and hanging up calls.
Fig. 3 shows an embodiment of the method for answering, rejecting and hanging up calls with a wireless headset. A201, the left-hand frame, holds the call function and basic states of the smartphone, i.e. steps S106 and S107 of the method. Since the call function is managed by the operating system, in this example S106 and S107 are both handled by it, and step S104 and the others simply read the corresponding state and function from the operating-system interfaces.
The call function usually has the following states: S201, the in-call state, i.e. a call is in progress; S202, the caller ring-back state, i.e. the local phone has dialled the other party, the call has gone through and ring-back is heard, but the other party has not picked up; S203, the called-ringing state, i.e. the local phone is being called and has not answered; and S204, the idle (non-call) state. These correspond to the telephone's functions and states in this simple example; the detailed communication specifications can be found in the relevant ITU-T documents, which are followed worldwide.
The large middle frame is the instruction-identification step S104. S205 to S208 are four groups of identified trigger codes that use nested instruction windows. S104 identifies the specific instruction from the current function and state together with the identified trigger, and then S105 (S209 to S211 in the large right-hand frame) is executed: the call is answered, rejected or hung up, and after the instruction executes, the state in the left-hand A201 frame changes accordingly.
S201 is the in-call state. If the smartphone user wants to hang up, he or she enters the code of S205 or S206, i.e. triggers the proximity sensor or the touch surface on the fully wireless headset. S103 recognises the trigger and passes it to S104; S104 identifies the instruction from the preset mapping of state plus trigger code and executes the S210 hang-up instruction; once S210 hangs up, the state changes to S204, the idle state.
S202 is the caller ring-back state, i.e. the local phone has dialled a number and ring-back is heard. If the caller no longer wants to complete the call, he or she triggers S205 or S206; S104 receives the identified trigger code and, combined with the state, executes the S210 instruction.
S203 is the called-ringing state, i.e. the local phone is being called. If the user wants to answer, he or she triggers S205 or S206; S104 identifies the state and the trigger as the defined instruction and executes the S209 answer instruction, the call is established, and the state changes from S203 to the in-call state S201. If the called user does not want to answer, he or she triggers S207 or S208; S104 rejects the call according to the instruction defined for this state and trigger code, and after execution the state changes from S203 to S204.
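The fig. 3 behaviour amounts to a lookup from (call state, trigger code) to an action. The table below restates the embodiment as a sketch; the code strings are placeholders chosen only for illustration, since the exact codes assigned to S205-S208 are not reproduced here.

PHONE_COMMANDS = {
    ("in_call",          "."): "hang_up",   # S201 -> S210, state becomes idle (S204)
    ("calling_ringback", "."): "hang_up",   # S202: the caller gives up -> S210
    ("called_ringing",   "."): "answer",    # S203 -> S209, state becomes in_call (S201)
    ("called_ringing",   "-"): "reject",    # S203 -> S211, state becomes idle (S204)
}

def phone_command(state: str, code: str):
    """Return the command for this state and trigger code, or None if it is not a legal instruction."""
    return PHONE_COMMANDS.get((state, code))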
This embodiment uses four groups of trigger codes, S205 to S208, but in fact the two groups S205 and S207 alone suffice to answer, reject and hang up. The combinations with and without the fast interval "S" are used here to illustrate the flexibility of the trigger codes of Table 1: when the trigger instruction is unambiguous in a given state, the codes can be assigned relatively freely.
In this answer/reject/hang-up embodiment, when the called phone rings, S108 looks up the caller in the phone book from the incoming number and announces the caller's name by TTS; if the caller is not a contact in the phone book, the number itself is announced, so the called user can decide whether to answer or reject. Smartphones do not offer this function at present, but in this method it is simply a specific prompt function of S108 on the called side.
The embodiment of fig. 3 does not actually need the nested instruction windows to distinguish its functions, but they extend it. Suppose a call is in progress and a second incoming call arrives; the called user then has several options: 1, hang up the original call and answer the new one; 2, reject the new call and continue the original one; 3, hold the original call and open the new one. Because the goals differ, three groups of instructions are normally needed within this one state. Using the nested-window approach, for example, "·S-" executes option 1, "·-" executes option 2, and "--" or "-S-" executes option 3. The instructions are then easy to remember: a short trigger means disconnect and a long trigger means connect; "·S-" means disconnect the original call and take the new one, entered in a fast rhythm because the user is anxious about missing something important, while "·-" means reject the new call and continue the original one and can be entered slowly, since there is no urgency, matching the user's actual state of mind in case 2; and option 3 connects both calls, so two long triggers work under either window mode. The user therefore does not have to memorise many fixed instructions but can act with the scenario and state of mind. In the telephone-control scenario of fig. 3 this combination of scenarios, not only the usage scenario but also the user's likely emotional scenario, is another significant feature of the method. Such functions are clearly not considered in today's human-computer-interaction field, because the prior art is confined to mouse-keyboard-plus-GUI or GUI-plus-multi-touch interaction and does not consider the problems a user may face in non-static and special scenarios.
We generally define the three simplest states of the music function, as shown in fig. 4: S301 music paused, S302 music playing, and S303 music stopped. The embodiment of fig. 4 uses only the P104 instruction window of fig. 2.
If the user of the fully wireless headset wants to start listening to music from the paused state S301, he or she triggers S306; when the trigger is identified as the "·S" code defined for S306 and the device is in the music-paused state, the instruction is identified as play, i.e. S311 is executed to play music, and after S311 the state changes to the playing state S302.
S302 is the music-playing state. To switch to the next song, the code of S304 is entered: a single short trigger with no other trigger within the S interval after it ends (no other trigger within the instruction window). When this is satisfied and the device is in the playing state, the instruction is identified as switching to the next song, i.e. S309 is executed, and after the switch the state remains S302. To switch to the previous song, the code of S305 is entered: a single long trigger with no other trigger within the S interval after it (no other trigger within the instruction window); when this is satisfied while still playing, the instruction is identified as switching to the previous song, i.e. S310 is executed. To pause during playback, S306 is triggered; when this trigger is identified in the playing state, the S308 instruction is executed, the music pauses, and the state changes to the paused state S301. To stop playback from S302, S307 is triggered; when the identified trigger is S307, the S312 music-stop instruction is executed and the music state becomes S303, stopped. To play music again from the stopped state S303, S306 is triggered; when it is recognised, the S311 play instruction executes, the state switches to S302 and music plays.
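Fig. 4 can likewise be written as a small state table. The mapping below is a sketch: "." is a short trigger, "-" a long trigger and ".S" the S306 code read from the text, while the "--" code for S307 (stop) is an assumption, since its exact form is not stated.

MUSIC_COMMANDS = {
    ("paused",  ".S"): "play",        # S301 + S306 -> S311, state becomes playing
    ("playing", "." ): "next_track",  # S302 + S304 -> S309
    ("playing", "-" ): "prev_track",  # S302 + S305 -> S310
    ("playing", ".S"): "pause",       # S302 + S306 -> S308, state becomes paused
    ("playing", "--"): "stop",        # S302 + S307 -> S312, state becomes stopped (code assumed)
    ("stopped", ".S"): "play",        # S303 + S306 -> S311, state becomes playing
}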
In this embodiment, 1-bit and 2-bit triggers are used in the same state, using the instruction-window definition of P104 in fig. 2, with "S" distinguishing the 1-bit from the 2-bit triggers; the states and functions are then orthogonal to these codes, forming a multi-dimensional trigger code, so the method can control the intelligent electronic device with simple 1-bit or 2-bit triggers. In the fully wireless earphone scenario, music control under noise and during motion is easily achieved through a sensor, whereas Apple's existing AirPods cannot be controlled in such conditions (they rely on voice recognition, and adjusting the volume during noise or conversation is an obvious product defect), and similar products in the industry are likewise inconvenient to control during motion and in noise. Of course, the instruction window defined by P105 could also be used in fig. 4 with the same result if the interval from t1 to t4 of P105 is shortened; when it is reduced to a certain value, P104 becomes fully equivalent to P105, namely when t3-t2 <= Δt3 in fig. 2. As stated earlier in this specification, such instruction windows serve the same purpose and differ only in where the window starts and ends. When P104 and P105 are defined in fig. 2, t3-t2 > Δt3 is specified for P105 in order to explain the nested application of the two windows; when two windows are not applied together, P105 need not be defined that way.
Fig. 5, still based on the all-wireless-headset scenario, implements a hidden SOS function. Smartphones in the industry currently have no hidden SOS function, so a victim who is attacked or robbed cannot use the smartphone and its accessories to call for help covertly and thereby reduce injury and loss. A hidden SOS is needed because in many incidents the victim has no chance to unlock the screen, dial a number and state the location clearly, and in such scenes the victim may directly aggravate the injury if found using the phone. With a hidden help-seeking function, the positioning and voice-acquisition capabilities of the intelligent electronic device are used to send the location and the on-site sound to the rescuer in real time, so the rescuer can act effectively while others do not know that help has been requested, winning enough time to avoid or reduce possible harm. The method of concealed rescue is disclosed in the inventor's prior patent CN201510835747.3.
Fig. 5 takes the wireless headset as an example to implement the hidden help-seeking function, which does not exist in any headset today. After entering the help-seeking function, the functions and states of hidden help-seeking shown in the large frame on the left of Fig. 5 apply. Since hidden help-seeking does not exist in the systems of today's smart electronic devices such as smartphones, an APP resident in memory, or an APP providing this function, must manage and monitor the state in real time, so both S106 and S107 require corresponding APPs (obviously, the SOS function may become a standard function of intelligent electronic devices in the future). After entering the function there are four states: S401 distress triggering, S402 distress sending, S403 distress sent, and S404 distress responded.
In state S401 the user has entered the hidden help-seeking function. If the help-seeker triggers the S405 or S406 code, step S104 identifies whether the instruction is to execute S407 first-level distress or S408 second-level distress.
State S401 is usually entered with a longer trigger: when the user hears or feels (by vibration) the SOS prompt, releasing the trigger enters state S401. Any APP, intelligent system or operating system can therefore add an SOS function, with the trigger coming from a built-in or external sensor.
In this embodiment, S405 and S406 use count-based trigger codes, i.e. the number of triggers within a given instruction window is identified; for example S405 is 2 triggers and S406 is 3 triggers. In an emergency only triggering is required, but not everyone can control the triggers precisely in an emergency, so it is enough to trigger a sufficient number of times. This is also an example of the flexible trigger coding of the method, meeting users' needs in various scenes.
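The count-based recognition can be expressed as a short function: count the triggers that fall inside one instruction window and map the count to a distress level. This is a minimal sketch; the window length and the return labels are assumed example values, not figures from the specification.

```python
def classify_distress(trigger_times, window_s=2.0):
    """Count triggers inside one instruction window and map the count to a
    distress level, as in S405/S406: 2 triggers -> level 1, 3 or more -> level 2.

    `trigger_times` are monotonic timestamps (seconds) of recognized triggers.
    """
    if not trigger_times:
        return None
    start = trigger_times[0]
    count = sum(1 for t in trigger_times if t - start <= window_s)
    if count >= 3:
        return "S408 second-level distress"
    if count == 2:
        return "S407 first-level distress"
    return None

print(classify_distress([0.0, 0.4]))        # -> first-level distress
print(classify_distress([0.0, 0.3, 0.7]))   # -> second-level distress
```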
The reason distress levels are needed in S407 and S408 is that the help-seeker calls a preset rescuer according to the circumstances: a minor problem may be handled by the preset rescuer, while for a severe problem the second-level distress is used, calling for help directly from a judicial authority such as the police. After the corresponding distress instruction is executed, i.e. the first-level or second-level distress in S105, the state switches to S402, distress sending. Some foreign distress APPs send the distress by SMS, but in fact they cannot detect whether the information was actually delivered; the help-seeker needs to know which stage of the complete distress process has been reached in order to decide how to act. If the distress is sent successfully, a response indicates that the distress information has reached the preset rescuer; if the rescuer starts to respond to the rescue, the state switches to S404, and at that point the help-seeker can wait for rescue with much more reassurance.
In states S402 to S404, if the help-seeker is still able to trigger, the codes of S405 and S406 can be triggered again, so the rescuer also knows that the help-seeker is, for the moment, still able to send distress information and that the situation is urgent.
In this embodiment S108, the prompt message, is changed to a short vibration (on the smartphone) rather than speech or sound, so that the help-seeker stays concealed instead of being exposed by an audible message; on the headset it may be a vibration or a voice prompt.
Through the above embodiments it is clear how the method implements operation and control of telephone, music and hidden help-seeking in the wireless headset; of course, the method can also be implemented directly in an intelligent electronic device such as a mobile phone. Either the smartphone or the all-wireless headset can have the functions of the above embodiments.
The method is described below taking a sensor capable of sensing directionality as an example. Such sensors are of contact and non-contact types. The non-contact type usually identifies the trigger direction through optical, acoustic or magnetic reflection, or through changes of an electromagnetic field; the contact type usually judges the contact position through the change of voltage, current or resistance at a specific (x, y) point on the sensor, or through the change of voltage, resistance, capacitance and so on at that point after the trigger, and the track of (x, y) over time gives the direction of movement. It is therefore emphasized that the method is not tied to any specific sensor: two proximity sensors can sense a trigger in the A-to-B or B-to-A direction without any coordinate system, whereas in touch screens, conductive fibres and capacitive screens, coordinates are needed to define trigger points and trajectories. In any case, all that the method requires is that one of the identified triggers be a "directional" trigger.
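For a contact-type sensor, the trigger direction can be estimated from the start and end points of the (x, y) track, as described above. The sketch below is illustrative only; the movement threshold and the direction labels are assumptions for the example, not values from the specification.

```python
def classify_direction(track, min_move=10.0):
    """Classify a contact trigger from its (x, y) track.

    Returns "left"/"right"/"up"/"down" for a displacement trigger, or
    "tap" when the track barely moves. `min_move` (in sensor units) is an
    assumed threshold.
    """
    (x0, y0), (x1, y1) = track[0], track[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_move:
        return "tap"                       # no meaningful displacement
    if abs(dx) >= abs(dy):                 # dominant axis decides the direction
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

print(classify_direction([(0, 0), (3, 2)]))      # -> tap
print(classify_direction([(0, 0), (40, 5)]))     # -> right
```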
Directional triggers are triggers that represent a spatial displacement such as up, down, left, right, forward and backward, and of course also include clockwise and counterclockwise movements, which are not all listed here. A single such trigger is a one-dimensional trigger code, but if directional triggers are combined, a 2-dimensional trigger code table is formed. If one forward trigger represents the instruction "next song" and two consecutive forward triggers within the time window represent "music fast forward", then the directional trigger table is orthogonal to Table 1, producing a much richer trigger code table.
Table 2 illustrates a directional trigger code table, taking as an example a sensor that can sense a trigger in the A-to-B or B-to-A direction. Because directions are also orthogonal to one another, the directions themselves can form a 2-dimensional code table; which codes are used depends on the functions and scenes considered. In fact a gesture sensor, a gesture radar, or a sensor group of two or more proximity sensors arranged in a triangle or diamond can sense triggers in many directions (two sensors can only distinguish A-to-B from B-to-A). Table 2 is only an example to illustrate the method rather than a list of all trigger directions, so in reality the two-dimensional table formed by directional triggers is far larger than the two-dimensional table formed by duration triggers. It is used here to illustrate the characteristics of the method's trigger coding and does not represent all combinations of directional trigger codes; on the contrary, it is just one of the simplest embodiments. From Table 2 it can be seen that when only a 1-bit direction trigger is used, a 1-dimensional trigger code table is formed, such as left trigger, right trigger, forward trigger and backward trigger, but when 2-bit triggers are used the table becomes a two-dimensional trigger table, as shown in Table 2. When controlling, the codes in the trigger table are combined with functions, states and usage scenes, so codes suited to the function, state and scene are chosen; for example music fast forward can use two right triggers and music fast backward can use two left triggers. Under certain states and functions the direction of the directional trigger is not needed and only the number of triggers has to be monitored, i.e. the count of triggers without distinguishing their direction.
Table 2: Trigger code table taking a sensor capable of sensing directionality as an example (table image not reproduced)
For convenience of description, the trigger in the B-to-A direction is written "→" and the trigger in the A-to-B direction "←", so the directional trigger code table of Table 2 can be formed in the same way as Table 1, except that the triggers in Table 1 are in duration form while the triggers in Table 2 are in direction form.
In fact Table 1 and Table 2 can be combined into a trigger code table of at most 4 dimensions, but since trigger instructions do not need to reach 4 bits, the 4-dimensional table formed by combining duration and direction is unnecessary: 1, 2 and 3 bits already satisfy common usage scenes. Moreover, a sensor capable of sensing directionality can generally also sense non-directional triggers, for example a trigger that is not a displacement but an approach-and-cover trigger followed by the end of the trigger; such triggers form the trigger codes of Table 1. Therefore Table 1 and Table 2, combined and orthogonal, form the basic trigger codes of a sensor or sensor group that senses both trigger direction and trigger duration, and together with the aforementioned instruction window and the nested use of instruction windows, very simple and effective operation and control of the intelligent electronic device can be achieved.
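The orthogonal combination of duration codes and direction codes can be enumerated mechanically. The sketch below only illustrates the counting; the two duration codes and two direction codes are the simplest cases from Tables 1 and 2, and treating a 2-bit code as an ordered pair within one instruction window is an assumption made for the example.

```python
from itertools import product

# Basic codes in the style of Table 1 (duration) and Table 2 (direction).
DURATION_CODES  = [".", "-"]            # short, long
DIRECTION_CODES = ["→", "←"]            # B-to-A, A-to-B

# 1-bit codes: every basic trigger on its own.
one_bit = DURATION_CODES + DIRECTION_CODES

# 2-bit codes: any ordered pair of basic triggers inside one instruction window.
two_bit = [" ".join(p) for p in product(one_bit, repeat=2)]

print(len(one_bit), "one-bit codes:", one_bit)
print(len(two_bit), "two-bit codes, e.g.", two_bit[:6])
```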
At present many people watch phone video on a treadmill, but once the treadmill reaches a certain speed it becomes difficult for the exerciser to control the video: the runner must keep pace with the belt, and when trying to control the video with a finger, the body's movement while running makes it impossible to tap a control key on the screen accurately; at the same time sweaty or damp fingers make the touch screen hard to operate, so multi-point touch technology fails. The method can achieve fine video control in this scene.
Video, especially network video, usually has an opening, an ending, advertisements, and content the viewer does not want to see or wants to see again, yet no one has studied how to adapt video control to a runner in the treadmill scene.
Fig. 6 shows S501 video pause, S502 video play, S503 video stop and S504 fast forward/fast backward, the most common states of a video function. S505 to S510 are trigger codes, which adopt the combined, orthogonal trigger codes of Table 1 and Table 2, and S511 to S516 are the instructions executed to control the video.
S501 is the video pause state. When the viewer wants to play the video, S507 is triggered, i.e. "· S", coded as in Table 1 but using the P104 instruction window of Fig. 2; when the S507 trigger is recognized and the video is paused, the play instruction S514 is executed, after which the state of the video function changes from S501 to the play state S502. In the play state S502, if the viewer wants to switch to the next video, S505 is triggered, i.e. the directional trigger "→ S": a direction trigger with no other trigger within S seconds after it (again a P104-type instruction window from Fig. 2). When this trigger is recognized in state S502, S512 is executed to switch to the next video, and the play state S502 continues after switching. If the viewer wants to switch to the previous video, S506 is triggered, the directional trigger "← S"; when it is recognized in state S502, S513 is executed to switch to the previous video, and the play state continues after switching. To pause playback, S507 is triggered; once the play state recognizes the S507 trigger, the pause instruction S511 is executed and the video state becomes the pause state S501. In state S502, if the viewer wants to stop the video, S508 is triggered; when the play state receives S508, the stop instruction S515 is executed and the state of the video function becomes S503, video stopped. In the stop state S503, if the viewer wants to watch again, S507 is triggered and S514 is executed to play the video. In the play state S502, if the viewer wants to rewind quickly, S509 is triggered; after the play state receives the S509 trigger, the S516 fast-backward instruction is executed and the video state becomes the fast-forward/backward state S504. If the viewer wants to fast forward in state S502, S510 is triggered; when the state and trigger are identified as S510, the S516 fast-forward instruction is executed and the state becomes S504. In state S504, once the video has been moved to the position the viewer wants, S507 is triggered and normal playback continues via S514. In the above embodiment the instruction window is the P104 mode defined in Fig. 2, so the middle of a code is marked with "S"; if the P105 mode of Fig. 2 is used, the "S" need not be written. The embodiment describes fast forward and fast backward at a single speed; if a speed of twice the current fast-forward speed is wanted, the code can be "→ S →" or "· S →", i.e. a 3-bit trigger, and this 3-bit trigger comes from the trigger table obtained by orthogonally combining the examples of Table 1 and Table 2. If the P105 instruction window of Fig. 2 is used, it can be written "→ →" or "· →". This is consistent with the principle of combining scene and function when coding with the method: the user remembers "fast forward is right, faster fast forward is right and then right", consistent with the goal of the control, so the user does not need to memorize commands by rote but can derive them from earlier ones, a logic that conventional codes such as Morse or binary do not have. Of course the aforementioned instruction window can also be used in this embodiment so that "→" and "→ S →" can be distinguished, for example the first "→" representing "fast forward" and the following "S →" representing "double speed".
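The fast-forward derivation above, where the double-speed code extends the single-speed code instead of being an unrelated symbol, can be expressed as a lookup keyed on the full code collected in one instruction window. A minimal sketch; the mapping follows the example in the text, while the labels are illustrative rather than the figure's exact instruction numbers.

```python
# Sketch of deriving related video commands from related trigger codes:
# repeating the same direction inside the window raises the speed.

VIDEO_CODES = {
    "→":   "fast forward",
    "→ →": "fast forward at double speed",   # derived: repeat the direction
    "←":   "fast backward",
    "← ←": "fast backward at double speed",
    "·":   "resume normal playback",
}

def dispatch(code):
    return VIDEO_CODES.get(code, "ignored")

for code in ["→", "→ →", "·"]:
    print(repr(code), "->", dispatch(code))
```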
The above embodiment takes controlling video on a treadmill as an example and describes how video is controlled with a sensor or sensor group capable of sensing duration and direction. While in motion, a runner can very easily control the video with directional triggers or combinations of long and short triggers, instead of having to sit through an opening, an ending or an advertisement. In this video embodiment the prompt information is not voice but the change of the video itself, and the combination of direction and duration can also be used for an intelligent electronic device to control electromechanical equipment, where the operator only needs to see the working state of the equipment rather than listen to sound, voice, TTS or vibration; that is, the user observes the state of the controlled device and a visual signal serves as feedback.
For a high-precision sensor that can sense both directionality and trigger duration, a speed dimension can be added, see Table 3. For a touch scene, speed is defined against a threshold, above which the trigger is fast and below which it is slow, so that the control is easy to trigger and a continuously varying speed does not have to be tracked; for example, a speed of 2 m/s or more is fast and below 2 m/s is slow. Directional triggering combined with the trigger speed forms a combined trigger code, obviously adding another dimension to the trigger coding system; in this way the method avoids the problems of gesture radar, namely few recognizable gestures, high cost, and a size too large to fit into small electronic devices. Take volume adjustment as an example: when the gesture radar enters the volume-adjustment state, a slow displacement trigger increases or decreases the volume following the direction of the displacement, a fast displacement trigger becomes the trigger for switching songs, and two consecutive fast directional triggers become the trigger for music fast forward. This capability can already be used to control intelligent audio systems: whether vehicle-mounted or household, they are currently controlled mostly by keys, but with a sensor that senses external directionality, especially with non-contact triggering, the method can control the audio system without eyes, or even without fingers or gestures; in particular, a driver does not need to take the eyes off the road, reducing driving risk.
Table 3: Trigger code table combining a directional sensor with the speed attribute (table image not reproduced)
After fast, slow and direction triggers are combined, a one-bit directional trigger carries two pieces of information in one state, for example slow forward and fast forward. In an audiobook APP, for instance, jumping to the next paragraph can be slow forward and jumping to the next chapter fast forward; both use a forward trigger but the effect is completely different. Today audiobook APPs can only be listened to passively and offer no rich operation and control while the listener is running. The method suits this field very well, so the user can operate the corresponding APP or system freely and conveniently in non-static scenes.
The speed is typically measured by dividing the distance from point A, where contact begins, to point B, where the trigger ends, by the time taken from A to B. After Table 1 and Table 3 are combined and made orthogonal, and then combined with the state and function of the controlled object, the problem of operating intelligent terminals or intelligent electronic equipment in non-static or quasi-static scenes can be solved very conveniently.
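The speed measurement just described, distance from A to B divided by the elapsed time, and the two-valued fast/slow attribute can be written directly. A minimal sketch; the 2 m/s threshold is the example figure used above, the rest is illustrative.

```python
import math

def trigger_speed(ax, ay, bx, by, t_start, t_end):
    """Average trigger speed: distance from A to B over the elapsed time.
    Coordinates in metres, times in seconds."""
    distance = math.hypot(bx - ax, by - ay)
    return distance / (t_end - t_start)

def speed_class(speed_mps, threshold=2.0):
    """Two-valued speed attribute: at or above the threshold is 'fast',
    below it is 'slow'. 2 m/s is the example threshold from the text."""
    return "fast" if speed_mps >= threshold else "slow"

v = trigger_speed(0.0, 0.0, 0.06, 0.0, 0.00, 0.02)   # 6 cm in 20 ms = 3 m/s
print(round(v, 2), speed_class(v))                    # -> 3.0 fast
```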
The flexibility and multi-dimensionality of the method's trigger codes, orthogonal to the functions and states of the target controlled intelligent electronic device, mean that 1-bit or 2-bit trigger codes are usually enough to control sufficient functions. Triggering can take place in non-static situations such as motion and driving, and may not even require fingers (in non-contact sensor scenes) or eyes, which greatly increases the scenes in which the intelligent electronic device can be used.
The foregoing describes embodiments of the method for different purposes in different scenes using sensors that sense trigger duration, trigger direction, trigger speed and so on; the following describes working embodiments of the method with a widely used pressure sensor.
In a mountaineering environment the user typically uses a trekking pole to relieve the load on the lower limbs, with either a single pole or a pair, so at least one hand is occupied. Mountaineers need to use walkie-talkies frequently to communicate with teammates or look at a GPS to confirm the track and bearing, and on difficult sections they often grab rocks and trees to keep their balance. Electronic equipment for mountaineering should therefore be controllable without occupying the hands, yet current intelligent electronic devices such as smartphones, GPS units and smart walkie-talkies are controlled through screens and keys: the climber's eyes must look at the screen while a finger presses keys and navigates menus, which inevitably disturbs the progress of the group, and watching the path only out of the corner of the eye is dangerous. Rain, snow, fog and cold are common mountain weather; wet hands or a wet touch screen hinder operation, and in cold weather gloves have to be taken off and put back on whenever the equipment is used so that it can be controlled by finger.
For the mountaineering scene there is currently no dedicated solution. Outdoors, and especially in the mountains, voice recognition cannot work accurately because of wind noise, so it cannot meet the need. In a mountaineering environment a climber cannot keep searching for the various keys of the electronic equipment with the eyes, as this can cause accidents, and where the route is complex the climber has to check repeatedly whether the track on the GPS matches the planned route; this means either using the device while moving and watching the path out of the corner of the eye, breaking the moving queue, or stopping to use the electronic equipment.
The method can solve this effectively. A trekking pole is needed in this scene anyway, so the pole itself is used to control the intelligent electronic equipment, for example announcing the route, altitude, temperature and humidity, or controlling music, calls, intercom and so on of a smartphone; that is, by triggering the sensor on the trekking pole while moving, the climber can accomplish what previously required stopping or slowing down.
In this intelligent trekking-pole embodiment, since the direction and force with which the pole grip is held vary with the terrain during use, a sensor sensing only trigger duration, trigger speed or trigger direction is not suitable for this scene (it is prone to false triggering). For the hand holding the pole, generally only the thumb and index finger are free for control, so a recess suitable as a thumb or index-finger trigger area is provided at the top of the pole grip, a sensor or sensor group capable of sensing trigger duration and pressure is arranged in the recess, and the intelligent electronic equipment is controlled through pressure triggers and duration triggers.
The force with which the thumb or index finger triggers the pressure-and-duration sensor or sensor group is divided into a light attribute and a heavy attribute, for example light below 5 kg and heavy above 5 kg, so the light and heavy attributes can be obtained simply by monitoring the pressure sensor; the generally linear characteristic of the pressure sensor is not used to measure an exact pressure value.
Since digital circuits all have clocks, every trigger has a start time, an end time and a time relationship to other triggers, so the trigger codes of Table 1 can be used; in the pressure-sensor scene, after the two dimensions light and heavy are added, Table 4 is formed. Table 4 provides enough trigger codes to complete the operation and control of the intelligent electronic device. The basic triggers are: light short trigger, heavy short trigger, light long trigger, heavy long trigger, light longer-duration trigger and heavy longer-duration trigger. Combining these triggers gives two-bit trigger codes, so with instructions of at most two bits the whole code table has 6 one-bit triggers and 16 two-bit triggers, 22 trigger combinations in total, which is enough for ordinary operation and control of the intelligent electronic device once states and functions are combined. Of course, Table 1 can also be combined into light triggers, heavy triggers, and light-heavy triggers; such combinations are straightforward analogies on the basis of these codes that a person of ordinary skill can make without inventive effort.
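The 6 + 16 = 22 count can be reproduced by enumeration. The sketch below is illustrative; in particular, excluding the two longer-duration triggers from the two-bit combinations is one reading of the stated count (the text reserves the longer-duration trigger for menu or function switching), not an explicit rule in the specification.

```python
from itertools import product

# Basic pressure+duration triggers from the trekking-pole embodiment:
# two pressure attributes (light/heavy, split at the example 5 kg threshold)
# times three durations (short/long/longer).
PRESSURE = ["light", "heavy"]
DURATION = ["short", "long", "longer"]
one_bit = [f"{p}-{d}" for p in PRESSURE for d in DURATION]      # 6 basic triggers

# Interpretation of the "6 + 16 = 22" count: two-bit codes are built only
# from the short/long triggers, since the longer-duration triggers are
# reserved for menu/function switching.
short_long = [c for c in one_bit if not c.endswith("longer")]   # 4 triggers
two_bit = [" + ".join(p) for p in product(short_long, repeat=2)]  # 16 codes

print(len(one_bit), "one-bit codes")
print(len(two_bit), "two-bit codes")
print("total:", len(one_bit) + len(two_bit))                    # 22
```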
In the trekking-pole scene, if the longer-duration heavy trigger is set as menu or function switching, then when the climber applies a longer-duration trigger greater than 5 kg, the reporting function defined for the longer-duration trigger in Fig. 2 begins, for example TTS of the altitude starts at time t >= t2; if the climber's thumb stops triggering at that point, i.e. t_end is less than t3, it means the climber wants to know the current altitude, and the intelligent electronic equipment wirelessly connected with the trekking pole can speak, by TTS, the current altitude from the GPS. By analogy, a longer-duration heavy trigger can be used to select a function or menu. A light trigger can be defined as the talk key: when the light trigger starts, talkback begins, and when the light trigger ends, talkback ends and only listening continues, so the climber does not need to press the PTT (PUSH TO TALK) key of a walkie-talkie with one hand. In this way the many inconveniences of using electronic equipment in mountaineering today can be solved very easily by the method.
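The function-selection behaviour of the longer-duration trigger, where the function depends on when the trigger is released during the sequence of prompts, can be sketched as below. The interval length and the menu entries are assumptions for the example; the idea of a selection interval made of prompt time plus reaction time follows the claims.

```python
def select_function(release_time, prompts, slot_s=1.5):
    """Pick the announced function based on when the trigger is released.

    `prompts` are announced one after another while the trigger is held;
    each occupies a selection interval of `slot_s` seconds (prompt length
    plus user reaction time -- 1.5 s is an assumed example value).
    `release_time` is measured from the moment the first prompt starts.
    """
    index = int(release_time // slot_s)
    if 0 <= index < len(prompts):
        return prompts[index]
    return None   # released after the last interval: nothing selected

menu = ["report altitude", "report track", "start intercom"]
print(select_function(0.8, menu))   # released during the first prompt -> altitude
print(select_function(2.1, menu))   # released during the second prompt -> track
```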
As another example, the climber's trekking pole is wirelessly connected with the smartphone; to switch to the next song in the music playing state a light trigger can be used, and to switch to the previous song a heavy trigger can be used. According to Table 4 and the functions to be controlled, sufficiently rich trigger instructions can be defined, so that in wind, rain, snow and on dangerous sections the climber can control intelligent electronic devices such as the smart walkie-talkie and the smartphone without the difficulties and problems encountered today.
Table 4: Trigger code table combining a pressure sensor with trigger duration (table image not reproduced)
From the above description, a skilled person will naturally ask whether, on a capacitive touch screen with 3D-touch capability, duration triggers, pressure triggers, direction triggers and speed triggers can all be used to form a multi-dimensional trigger control table, which is then combined with the functions and states of the controlled device to operate and control the intelligent electronic device. The answer is certainly yes. For example, on a 3D capacitive touch screen, a heavy directional trigger and a light directional trigger already give two instructions; adding the speed dimension gives light-fast, light-slow, heavy-fast and heavy-slow, so one directional trigger carries four codes and four instructions can be executed. The skilled person can analogize to 2-bit instruction codes and combine long triggers as well, so a very rich trigger table can be assembled conveniently; the combinations in this trigger table, from 1 bit to multiple triggers, combined again with states and functions, can satisfy all the controlled functions and states of the intelligent electronic equipment.
This specification also describes how an individual soldier can keep aiming and shooting during an operation while still operating individual-soldier electronic equipment, instead of having to give up one of the two as today. Firearms have a trigger surrounded by a ring, the trigger guard, whose purpose is to prevent accidental discharge; firearms training normally imposes the strict requirement that the index finger must not rest on the trigger or enter the trigger guard unless aiming. Thus when holding a firearm in combat, whether rifle or pistol, both hands are occupied as long as the firearm is held, yet the most sensitive fingertip, the index finger that pulls the trigger, is idle whenever it is outside the trigger guard. If a touch device using the method is arranged outside the trigger guard, in the area the index finger can reach flexibly, and is wirelessly interconnected with the soldier's individual electronic equipment, then during individual combat the soldier can keep both hands on the firearm (except with a pistol) without delaying either combat or the control of the electronic equipment, whereas today a soldier must use one hand to operate individual electronic equipment such as a walkie-talkie. After adopting the technology of the method, the information interaction capability of individual combat can therefore be improved effectively, the negative effects possible today are avoided, and at any time other than the moment of firing the soldier can operate the intelligent electronic equipment to interact with the commander while keeping both hands on the weapon.
The implementation of the method in different scenes has been described with several embodiments and figures; the scenes in these embodiments have no effective solution in reality today, which shows that the method, using sensors with different attributes in different scenes, can control intelligent electronic equipment, intelligent terminals, smartphones and the like well, making users of intelligent electronic equipment safer and freer in all kinds of scenes.
The method is also a very effective solution for the blind. Today the accessibility system on an intelligent terminal such as an Apple device uses multi-point touch on the screen, i.e. trigger instructions composed of relatively complex gestures such as single tap, double tap, two-finger and three-finger triggers; when a blind user goes out on a rainy day and the screen is wet, that technology fails completely. While walking, the blind user usually needs both hands to interact with the screen, forcing them to stop. With the method, the use of the electronic equipment is not affected by rain or by being on the move.
In the method the sensor can be a separate (split) unit or can be on the intelligent electronic device itself; a split unit can be interconnected with the controlled target intelligent electronic equipment in wired, wireless or other forms, and the intelligent electronic equipment connected with the split sensor or sensor electronic device is then controlled.
The method can also be used for reading and sending instant messages while driving, moving or walking. For example, in a chat group, suppose "·" is defined as reading the next message and "-" as reading the previous message (text messages can be read by TTS, voice messages can be played directly, and picture messages can be described in text by a background AI and then read by TTS), and an extra-long trigger covers leaving a voice message. In this example, as shown in Fig. 2, t2 is later than the end time of the long trigger, so the triggers do not interfere with one another; when a trigger longer than t2 is applied, the user can, under a prompt, speak a message and then leave the group. In the chat group list the groups are usually sorted by time, i.e. the group just exited is first in the list; if the user wants to chat with another group, "·" moves to the next group and "-" to the previous group, and when "·" points at a group, TTS reads the group's name and the user can enter it. In this way, combining TTS with the intelligent electronic equipment lets the user handle instant messages without looking at the screen, so the method keeps the user of the intelligent electronic equipment safer and freer in various scenes.
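The instant-message example can also be written as a small dispatch table. The sketch below is illustrative only; the code symbols follow the example above, while the action descriptions are paraphrases and the function name is an assumption.

```python
# Sketch of the instant-message example: trigger codes mapped to message
# navigation, with an extra-long trigger reserved for leaving a voice reply.

IM_ACTIONS = {
    ".":          "read next message (TTS for text, play voice directly)",
    "-":          "read previous message",
    "extra-long": "record and send a voice reply under prompt",
}

def on_trigger(code):
    return IM_ACTIONS.get(code, "no action")

print(on_trigger("."))   # -> read next message (TTS for text, play voice directly)
```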
The method has rich usable scenes and controllable functions. It can be applied to any electronic equipment controlled by a CPU, to CPU-controlled mechanical equipment, to wearable equipment and to intelligent electronic equipment, and it is particularly convenient for controlling intelligent electronic equipment, APPs and even instant-message APPs with relatively many or complicated functions on occasions, such as motion and driving, where eyes and hands are not free. The triggering in the method uses the direction, speed, pressure and trigger duration sensed by a sensor, so it is multidimensional triggering. Tables 1 to 4 list trigger codes only for the simplest cases and do not represent all trigger codes usable with the method; the trigger codes are orthogonal to the states and functions of the controlled APP or intelligent electronic equipment, so the method is a multidimensional trigger instruction system. Tables 1 to 4 can only list the basic trigger codes under the method, and the more complex trigger codes built on these basic triggers are merely specific embodiments, each just one implementation of the method.
The invention relates to a method for operating and controlling an intelligent terminal or intelligent electronic device. On the basis of a trigger code table formed from short, long and longer-duration triggers, a sensor or sensor group adds directional triggers, or pressure triggers, or direction and speed triggers, or direction, speed and pressure triggers, forming a trigger code table of two to many dimensions. The codes are combined with the functions and states of the controlled intelligent electronic device or APP to form trigger instructions, and operation and control of the intelligent electronic device are achieved by triggering the sensor under the corresponding function and state; the user generally performs instruction triggering guided by perceivable prompt information such as voice, sound, TTS, vibration or visual information.

Claims (17)

1. An intelligent electronic device, comprising:
a CPU and a sensor or sensor group capable of sensing pressure and trigger duration; a trigger is identified by monitoring the sensor or sensor group;
the intelligent electronic equipment executes an instruction according to the state or function of the controlled electronic equipment, its system, or the APP running on the controlled electronic equipment, together with the identified trigger code;
wherein: the basic triggers forming the trigger codes comprise a light short trigger, a heavy short trigger, a light long trigger, a heavy long trigger, a light longer-duration trigger and a heavy longer-duration trigger;
the trigger code is combined with the state or function of the controlled electronic equipment, the controlled electronic system, or the APP running on the controlled electronic equipment or system, to form an instruction; an instruction window is adopted to identify the input of a variable-length trigger or a multi-bit trigger;
in addition, the longer-duration trigger is used for selecting and switching among a plurality of functions; when the trigger duration exceeds the long-trigger duration, a function is selected by releasing the trigger under a prompt;
the selection interval in which the release time falls determines the function selected or executed;
the selection interval consists of a prompt time length and a user reaction time length;
the prompt comprises at least one of speech, sound, TTS, vibration or vision;
the controlled electronic equipment comprises the intelligent electronic equipment itself or electronic equipment interconnected with the intelligent electronic equipment by wire or wirelessly.
2. The intelligent electronic device of claim 1, wherein the instruction window further comprises the nested application of instruction windows.
3. The intelligent electronic device of claim 1, wherein the user judges the function and state of the controlled electronic equipment, the controlled electronic system, or the APP running on them according to the prompt.
4. The intelligent electronic device of claim 1, wherein the trigger code adopts a count-based code when long and short triggers need not be distinguished.
5. The intelligent electronic device of claim 1, wherein the sensor or sensor group may be arranged on a separate (split) electronic device connected with the intelligent electronic equipment in a wired or wireless manner.
6. An alpenstock, comprising:
a CPU and a sensor or sensor group capable of sensing pressure and trigger duration; the alpenstock is wirelessly interconnected with the controlled intelligent electronic equipment; triggers are identified by monitoring the sensor or sensor group;
the controlled intelligent electronic equipment executes an instruction according to the state or function of the equipment, its system, or the APP running on it, together with the identified trigger code;
wherein:
the basic triggers forming the trigger codes comprise a light short trigger, a heavy short trigger, a light long trigger, a heavy long trigger, a light longer-duration trigger and a heavy longer-duration trigger;
the trigger code is combined with the functions and states of the controlled intelligent electronic equipment and system or the APP running on the controlled intelligent electronic equipment and system to form an instruction;
identifying the input of a variable length trigger or a multi-bit trigger by adopting an instruction window;
in addition, the longer-duration trigger is used for selecting and switching among a plurality of functions; when the trigger duration exceeds the long-trigger duration, a function is selected by releasing the trigger under a prompt;
the selection interval in which the release time falls determines the function selected or executed;
the selection interval consists of a prompt time length and a user reaction time length;
the prompt comprises at least one of speech, sound, TTS, vibration, or vision.
7. An alpenstock according to claim 6, comprising:
the alpenstock is characterized in that a sensor or sensor group capable of sensing trigger duration and pressure is arranged in a recessed area at the top, suitable for triggering by the thumb or index finger.
8. The alpenstock according to claim 6, wherein the user judges the function and state of the controlled intelligent electronic equipment, its system, or the APP running on them according to the prompt.
9. The alpenstock according to claim 6, wherein the instruction window further comprises the nested application of instruction windows.
10. The alpenstock according to claim 6, wherein the trigger code adopts a count-based code when long and short triggers need not be distinguished.
11. The alpenstock according to claim 6, wherein a light or heavy longer-duration trigger of the alpenstock is used for switching menus and functions of the controlled equipment or for intercom communication.
12. The alpenstock according to claim 6, wherein the alpenstock is wirelessly connected with the intelligent electronic equipment; the intelligent electronic equipment comprises an intelligent interphone and an intelligent mobile phone.
13. A touch device, comprising:
a CPU and a sensor or sensor group capable of sensing pressure and trigger duration; the touch device is wirelessly interconnected with the controlled intelligent electronic equipment, and triggers are identified by monitoring the sensor or sensor group; the controlled intelligent electronic equipment executes an instruction according to the state or function of the equipment, its system, or the APP running on it, together with the identified trigger code;
wherein:
the basic triggers forming the trigger codes comprise a light short trigger, a heavy short trigger, a light long trigger, a heavy long trigger, a light longer-duration trigger and a heavy longer-duration trigger;
the trigger code is combined with the functions and states of the controlled intelligent electronic equipment and system or the APP running on the controlled intelligent electronic equipment and system to form an instruction;
identifying the input of a variable length trigger or a multi-bit trigger by adopting an instruction window;
in addition, the longer-duration trigger is used for selecting and switching among a plurality of functions; when the trigger duration exceeds the long-trigger duration, a function is selected by releasing the trigger under a prompt;
the selection interval in which the release time falls determines the function selected or executed;
the selection interval consists of a prompt duration and a user reaction duration;
the prompt comprises at least one of speech, sound, TTS, vibration, or vision.
14. The touch device of claim 13, wherein the user judges the function and state of the controlled intelligent electronic equipment, its system, or the APP running on them according to the prompt.
15. The touch device of claim 13, wherein the instruction window further comprises the nested application of instruction windows.
16. The touch device of claim 13, wherein the code adopts a count-based code when long and short triggers need not be distinguished.
17. The touch device of claim 13, wherein the touch device is arranged outside the trigger guard of a firearm, at the front of the trigger position, in the area touched by the index finger, and controls, by triggering, the individual-soldier electronic equipment wirelessly interconnected with the touch device.
CN202011425532.1A 2017-11-02 2018-10-25 Intelligent electronic equipment, alpenstock and touch device Active CN112351141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011425532.1A CN112351141B (en) 2017-11-02 2018-10-25 Intelligent electronic equipment, alpenstock and touch device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2017110653987 2017-11-02
CN201711065398 2017-11-02
CN202011425532.1A CN112351141B (en) 2017-11-02 2018-10-25 Intelligent electronic equipment, alpenstock and touch device
CN201811253657.3A CN109462690B (en) 2017-11-02 2018-10-25 Method for controlling intelligent terminal or intelligent electronic equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811253657.3A Division CN109462690B (en) 2017-11-02 2018-10-25 Method for controlling intelligent terminal or intelligent electronic equipment

Publications (2)

Publication Number Publication Date
CN112351141A CN112351141A (en) 2021-02-09
CN112351141B true CN112351141B (en) 2023-04-14

Family

ID=65608482

Family Applications (7)

Application Number Title Priority Date Filing Date
CN202011425532.1A Active CN112351141B (en) 2017-11-02 2018-10-25 Intelligent electronic equipment, alpenstock and touch device
CN202011427340.4A Active CN112367431B (en) 2017-11-02 2018-10-25 Method for controlling intelligent electronic equipment, intelligent electronic equipment and intelligent earphone
CN202011425457.9A Active CN112351140B (en) 2017-11-02 2018-10-25 Video control method and intelligent electronic equipment
CN202011462704.2A Active CN112653786B (en) 2017-11-02 2018-10-25 Intelligent electronic equipment hidden help-seeking method, intelligent electronic equipment and earphone
CN201811253657.3A Active CN109462690B (en) 2017-11-02 2018-10-25 Method for controlling intelligent terminal or intelligent electronic equipment
CN202011425531.7A Active CN112367430B (en) 2017-11-02 2018-10-25 APP touch method, instant message APP and electronic device
CN202011433927.6A Withdrawn CN112261217A (en) 2017-11-02 2018-10-25 Method for controlling intelligent electronic equipment and intelligent electronic equipment

Family Applications After (6)

Application Number Title Priority Date Filing Date
CN202011427340.4A Active CN112367431B (en) 2017-11-02 2018-10-25 Method for controlling intelligent electronic equipment, intelligent electronic equipment and intelligent earphone
CN202011425457.9A Active CN112351140B (en) 2017-11-02 2018-10-25 Video control method and intelligent electronic equipment
CN202011462704.2A Active CN112653786B (en) 2017-11-02 2018-10-25 Intelligent electronic equipment hidden help-seeking method, intelligent electronic equipment and earphone
CN201811253657.3A Active CN109462690B (en) 2017-11-02 2018-10-25 Method for controlling intelligent terminal or intelligent electronic equipment
CN202011425531.7A Active CN112367430B (en) 2017-11-02 2018-10-25 APP touch method, instant message APP and electronic device
CN202011433927.6A Withdrawn CN112261217A (en) 2017-11-02 2018-10-25 Method for controlling intelligent electronic equipment and intelligent electronic equipment

Country Status (1)

Country Link
CN (7) CN112351141B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110971255A (en) * 2019-11-29 2020-04-07 四川科道芯国智能技术股份有限公司 Wrist wearing equipment
CN113038285B (en) * 2021-03-12 2022-09-06 拉扎斯网络科技(上海)有限公司 Resource information playing control method and device and electronic equipment
CN114040286A (en) * 2021-10-28 2022-02-11 歌尔科技有限公司 True wireless earphone and true wireless earphone system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105511784A (en) * 2015-12-02 2016-04-20 北京新美互通科技有限公司 Information input method, device and mobile terminal based on pressure detection
CN107025019A (en) * 2017-01-12 2017-08-08 瑞声科技(新加坡)有限公司 The exchange method and terminal device of virtual key

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080293374A1 (en) * 2007-05-25 2008-11-27 At&T Knowledge Ventures, L.P. Method and apparatus for transmitting emergency alert messages
CN100589069C (en) * 2008-09-23 2010-02-10 杨杰 Computer control method and apparatus
KR101545582B1 (en) * 2008-10-29 2015-08-19 엘지전자 주식회사 Terminal and method for controlling the same
KR101092592B1 (en) * 2009-10-14 2011-12-13 주식회사 팬택 Mobile communication terminal and method for providing touch interface thereof
US20110221684A1 (en) * 2010-03-11 2011-09-15 Sony Ericsson Mobile Communications Ab Touch-sensitive input device, mobile device and method for operating a touch-sensitive input device
CN102043486A (en) * 2010-08-31 2011-05-04 苏州佳世达电通有限公司 Operation method of hand-held electronic device
WO2013169842A2 (en) * 2012-05-09 2013-11-14 Yknots Industries Llc Device, method, and graphical user interface for selecting object within a group of objects
WO2013169846A1 (en) * 2012-05-09 2013-11-14 Yknots Industries Llc Device, method, and graphical user interface for displaying additional information in response to a user contact
JP6103970B2 (en) * 2013-02-08 2017-03-29 キヤノン株式会社 Information processing apparatus and information processing method
GB2516820A (en) * 2013-07-01 2015-02-11 Nokia Corp An apparatus
TWI568186B (en) * 2014-06-09 2017-01-21 宏達國際電子股份有限公司 Portable device, manipulation method and non-transitory computer readable storage medium
CN104598067B (en) * 2014-12-24 2017-12-29 联想(北京)有限公司 Information processing method and electronic equipment
US9904409B2 (en) * 2015-04-15 2018-02-27 Samsung Electronics Co., Ltd. Touch input processing method that adjusts touch sensitivity based on the state of a touch object and electronic device for supporting the same
CN106686202B (en) * 2015-06-04 2018-05-04 单正建 A kind of control method of intelligent terminal/mobile phone
CN104991645A (en) * 2015-06-24 2015-10-21 宇龙计算机通信科技(深圳)有限公司 Cursor control method and apparatus
KR102508147B1 (en) * 2015-07-01 2023-03-09 엘지전자 주식회사 Display apparatus and controlling method thereof
KR20170016752A (en) * 2015-08-04 2017-02-14 엘지전자 주식회사 Mobile terminal and method for controlling the same
CN105141761A (en) * 2015-08-10 2015-12-09 努比亚技术有限公司 State switching device of mobile terminal, mobile terminal and state switching method
CN105245688B (en) * 2015-08-27 2018-09-04 广东欧珀移动通信有限公司 A kind of processing method and mobile terminal of communication event
CN106817475B (en) * 2015-11-27 2019-02-19 单正建 It is a kind of based on intelligent terminal and its attached or associate device hidden method for seeking help
CN105446540A (en) * 2015-12-31 2016-03-30 宇龙计算机通信科技(深圳)有限公司 Character input method and device
CN105719426A (en) * 2016-01-25 2016-06-29 广东小天才科技有限公司 Method and device for sorted calling for help
CN107544295A (en) * 2016-06-29 2018-01-05 单正建 A kind of control method of automobile equipment
CN107171945A (en) * 2017-06-29 2017-09-15 珠海市魅族科技有限公司 Image information processing method and device, computer installation and readable storage medium storing program for executing


Also Published As

Publication number Publication date
CN112367430B (en) 2023-04-14
CN112367431A (en) 2021-02-12
CN112351141A (en) 2021-02-09
CN112367431B (en) 2023-04-14
CN109462690B (en) 2021-01-05
CN112351140A (en) 2021-02-09
CN112653786B (en) 2023-03-14
CN112351140B (en) 2023-04-14
CN112653786A (en) 2021-04-13
CN109462690A (en) 2019-03-12
CN112261217A (en) 2021-01-22
CN112367430A (en) 2021-02-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant