WO2022143110A1 - Neck motion data collection and processing method and apparatus - Google Patents

Neck motion data collection and processing method and apparatus

Publication number
WO2022143110A1
Authority
WO
WIPO (PCT)
Prior art keywords
data, motion, user, origin, neck
Prior art date
Application number
PCT/CN2021/137201
Other languages
English (en)
French (fr)
Inventor
王艳召
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2022143110A1


Classifications

    • A63B 23/025 — Exercising apparatus specially adapted for the head or the neck
    • G06F 3/0482 — Interaction with lists of selectable items, e.g. menus (GUI interaction techniques)
    • G06F 18/20 — Pattern recognition; analysing
    • G06F 18/24 — Pattern recognition; classification techniques
    • G06Q 50/22 — Social work or social welfare, e.g. community support activities or counselling services
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/003 — Navigation within 3D models or images
    • G06V 10/764 — Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 20/64 — Scenes; scene-specific elements; three-dimensional objects
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/20 — Movements or behaviour, e.g. gesture recognition
    • G16H 20/30 — ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G16H 50/30 — ICT specially adapted for calculating health indices; for individual health risk assessment
    • G06T 2200/04 — Indexing scheme involving 3D image data
    • G06T 2200/08 — Indexing scheme involving all processing steps from image acquisition to 3D model generation

Definitions

  • the present application relates to the field of human motion data processing, and in particular, to a method and device for collecting and processing neck motion data.
  • neck movement data can help people better understand their neck movement: they can make targeted adjustments according to the data, and can also use neck movement to achieve flexible human-machine interaction.
  • at present, people can only assess their own neck movements according to common sense or methods summarized on the Internet; there is no method to accurately collect and analyze neck movement data, so people cannot clearly understand their own neck movements or make targeted neck-health adjustments.
  • Embodiments of the present application provide a method and device for collecting and processing neck movement data, which can solve the problem that people cannot accurately grasp neck movement data.
  • a method for collecting and processing neck movement data includes: collecting initial data of a user's head movement within a unit time, where the initial head-movement data represents the user's neck movement. Then, the initial head-movement data is preprocessed to obtain the preprocessed head-movement data per unit time when the user is stationary or relatively stationary. After that, the preprocessed data per unit time is drawn into a curve to obtain the head motion curve. Finally, the head motion curves are classified, and the frequency of occurrence of each type of head motion curve per unit time is obtained.
  • the user's neck movement can be obtained by collecting initial data of the user's head movement within a unit time.
  • through preprocessing, the head-movement data per unit time when the user is stationary or relatively stationary can be obtained, and with it the movement of the user's neck relative to the torso;
  • the data is then drawn into a curve to obtain the head movement curve, the curves are classified, and the frequency of each type of curve is obtained. This realizes the processing of the head movement data and, through the head data, the analysis of the neck: the type of movement and the number of times each movement occurs over a period of time, which helps users accurately grasp their own neck movement data.
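The four-step flow above (collect, preprocess, plot, classify and count) might be sketched as follows. This is an illustration only: the mean-subtraction preprocessing and the dominant-axis classification are stand-ins, since the patent's actual preset conditions are not reproduced in this extract.

```python
import numpy as np

def classify_window(samples):
    """Classify one unit-time window of head positions, shape (n, 3)."""
    # "Preprocess": subtract the window mean so whole-body translation drops
    # out and only motion relative to the torso remains (an approximation).
    relative = samples - samples.mean(axis=0)
    # The per-axis trajectories of `relative` are the "motion curves".
    # "Classify": label the window by its dominant axis of motion -- a
    # stand-in for the patent's first/second/third preset conditions.
    spans = np.ptp(relative, axis=0)   # peak-to-peak excursion per axis
    return "xyz"[int(np.argmax(spans))]

def tally(windows):
    """Frequency of each curve class per unit time, as in the final step."""
    counts = {}
    for w in windows:
        label = classify_window(np.asarray(w, float))
        counts[label] = counts.get(label, 0) + 1
    return counts
```

A caller would feed one list of head-position samples per unit-time window and read off the per-class occurrence counts.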
  • the collected initial data of the head movement includes: initial coordinate data located in the first three-dimensional coordinate system, acceleration data of the user, and facial orientation data of the user.
  • the initial coordinate data and acceleration data in the first three-dimensional coordinate system are beneficial to help analyze the user's motion state and corresponding state data, and the state data combined with the facial orientation data is helpful for judging the user's movement route and the like.
  • preprocessing the initial data includes: acquiring the user's state data based on the initial coordinate data, acceleration data and orientation data, and determining the user's state. Then, based on the initial coordinate data and the state data, the preprocessed data is obtained.
  • the preprocessed data is the first coordinate data with the first three-dimensional coordinate system as the reference system.
  • in this way, the user's motion state and corresponding state data can be obtained, and the motion data of the user's head relative to the torso can be derived from the initial coordinate data and state data, thereby obtaining the movement of the neck relative to the body, which makes it convenient for users to understand their own neck movement.
  • preprocessing the initial data further includes: converting the first coordinate data into second coordinate data with a second three-dimensional coordinate system as a reference system, and the origin of the second three-dimensional coordinate system is the user's The center point of the line connecting the origin of the left ear and the origin of the right ear.
  • by converting the first coordinate data into second coordinate data with the second three-dimensional coordinate system as the reference system, the movement curve of the head can be reflected intuitively, and the movement of the head, and hence the user's neck movement, can be better analyzed.
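The conversion into the head-centred second frame might look like the sketch below. The origin (the midpoint of the two ear origins) is taken from the patent; the exact axis construction is an assumption, built from the stated conventions (x toward the left-ear origin, y vertically up, z perpendicular to both and pointing toward the face, so the sign of z may need flipping for a given device).

```python
import numpy as np

def to_head_frame(points, left_ear, right_ear, up=(0.0, 1.0, 0.0)):
    """Re-express first-frame points in the head-centred second frame.

    Origin: midpoint of the left-ear and right-ear origins.
    Axes (assumed construction): x toward the left ear, y vertical up,
    z = x cross y, perpendicular to both.
    """
    left_ear = np.asarray(left_ear, float)
    right_ear = np.asarray(right_ear, float)
    origin = (left_ear + right_ear) / 2.0
    x = left_ear - origin
    x /= np.linalg.norm(x)
    y = np.asarray(up, float)
    y = y - x * (y @ x)            # make y orthogonal to x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    R = np.stack([x, y, z])         # rows are the new basis vectors
    return (np.asarray(points, float) - origin) @ R.T
```

With the ear origins symmetric about the world origin, the transform reduces to the identity, which is a convenient sanity check.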
  • the head motion curves are classified into: a left-right turning motion curve, where the head motion curve satisfies a first preset condition; a left-right roll motion curve, where the head motion curve satisfies a second preset condition; and a pitch motion curve, where the head motion curve satisfies a third preset condition.
  • in this way, the classification of the head movement is realized and, accordingly, the classification of the neck movement, so that the user has a more intuitive understanding of the analysis results of the neck movement.
  • the first preset condition may include: within a unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
  • the second preset condition may include: within a unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
  • the third preset condition may include: within a unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
  • where a is the distance between the origin of the second coordinate system and the origin of the left ear or the origin of the right ear, b is the distance between the origin of the second coordinate system and the origin of the neck, and i and k are positive numbers.
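The inequalities behind the three preset conditions are not reproduced in this extract, so the sketch below uses invented stand-in conditions: it only illustrates the shape of such a rule, with a and b as the stated geometric distances and i, k as positive tolerance factors. The specific thresholds and the yaw/roll/pitch heuristics are hypothetical.

```python
def classify_curve(ys, zs, a, b, i=0.05, k=0.05):
    """Illustrative stand-in for the patent's three preset conditions.

    ys, zs: y- and z-coordinate sequences of the preprocessed data over one
    unit time, in the second coordinate system.
    a: origin-to-ear distance; b: origin-to-neck-origin distance;
    i, k: positive tolerance factors (all hypothetical choices here).
    """
    def nearly_constant(vs, tol):
        return max(vs) - min(vs) <= tol

    # Hypothetical rule: a yaw keeps the tracked point near its original
    # height; a roll keeps it near its original depth; otherwise pitch.
    if nearly_constant(ys, i * a) and not nearly_constant(zs, k * a):
        return "left-right turn"
    if nearly_constant(zs, i * a) and not nearly_constant(ys, k * b):
        return "left-right roll"
    return "pitch"
```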
  • the user's state may include: a resting state and a moving state.
  • when the user is in a moving state, the user's motion curve graph is drawn based on the state data per unit time, and the graph is classified and saved.
  • a neck motion data collection and processing device is provided.
  • the neck motion data collection and processing device is used to execute the neck motion data collection and processing method provided in the first aspect.
  • the present application can divide the functional modules of the neck motion data collection and processing device.
  • each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the present application may divide the neck motion data acquisition and processing device into a data acquisition module, a preprocessing module, a curve drawing module, an information acquisition module, and the like according to functions.
  • the neck motion data acquisition and processing device includes: a memory and one or more processors, the memory and the processor are coupled.
  • the memory is used for storing computer instructions
  • the processor is used for invoking the computer instructions to perform any one of the methods provided by the first aspect and any possible design manners thereof.
  • the present application provides a computer-readable storage medium, such as a non-transitory computer-readable storage medium.
  • a computer program (or instructions) is stored thereon, and when the computer program (or instructions) runs on the neck-movement data collection and processing device, the device is caused to perform any method provided by any implementation of the first aspect.
  • the present application provides a computer program product that, when run on a computer, enables any one of the methods provided by any one of the possible implementations of the first aspect to be executed.
  • the present application provides a chip system, including a processor, where the processor is configured to call from a memory, and run, a computer program stored in the memory, to execute any one of the methods provided in the implementations of the first aspect.
  • the present application provides a neck motion data collection and processing system, including: a first terminal and a second terminal.
  • the first terminal is used to collect the initial data of the head movement
  • the second terminal processes the collected initial data to realize the collection and processing of the neck movement data.
  • the neck motion data collection and processing system includes a third terminal, and the third terminal is configured to execute any one of the methods provided by the implementation manner in the first aspect.
  • any of the above-mentioned neck movement data collection and processing devices, computer storage media, computer program products or neck movement data collection and processing systems can be applied to the corresponding methods provided above. Therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects in the corresponding method, which will not be repeated here.
  • FIG. 1 is one of the schematic diagrams of the structure of a neck motion data acquisition and processing system provided by an embodiment of the present application;
  • FIG. 2 is the second schematic diagram of the structure of the neck motion data collection and processing system provided by the embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 4 is one of the schematic diagrams of the second three-dimensional coordinate system provided by the embodiment of the present application.
  • FIG. 5 is the second schematic diagram of the second three-dimensional coordinate system provided by the embodiment of the present application.
  • FIG. 6 is one of the schematic flowcharts of the neck motion data collection and processing method provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of a left-right turning motion curve of the head provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a threshold interval of a left-right turning motion of the head according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a left-right roll motion curve of the head provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the threshold interval of the left and right roll motion of the head according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a curve of a head pitching motion provided by an embodiment of the present application.
  • FIG. 12 is the second schematic flowchart of the method for collecting and processing neck motion data provided by the embodiment of the present application.
  • FIG. 13 is a schematic diagram of a neck motion data collection and processing device provided by an embodiment of the present application.
  • FIG. 14 is one of the system architecture diagrams applied by the embodiment of the present application.
  • FIG. 15 is the second system architecture diagram applied by the embodiment of the present application.
  • FIG. 16 is the third system architecture diagram applied by the embodiment of the present application.
  • FIG. 17 is a schematic flowchart applied to the system architecture shown in FIG. 16 according to an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a chip system provided by an embodiment of the present application.
  • Figure 19 is a conceptual partial view of a computer program product provided by embodiments of the present application.
  • words such as “exemplary” or “for example” are used to represent examples, illustrations, or explanations. Any embodiment or design described in the embodiments of the present application as “exemplary” or “for example” should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present the related concepts in a specific manner.
  • first and second are only used for description purposes, and cannot be understood as indicating or implying relative importance or implying the number of indicated technical features.
  • a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • the size of the sequence number of each process does not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the present application.
  • determining B according to A does not mean that B is only determined according to A, and B may also be determined according to A and/or other information.
  • the term “if” may be interpreted to mean “when” or “upon” or “in response to determining” or “in response to detecting.”
  • the phrases “if it is determined…” or “if a [statement or event] is detected” can be interpreted to mean “when determining…”, “in response to determining…”, “on detection of the [recited condition or event]”, or “in response to detection of the [recited condition or event]”.
  • references throughout the specification to “one embodiment,” “an embodiment,” and “one possible implementation” mean that a particular feature, structure, or characteristic related to the embodiment or implementation is included in at least one embodiment of the present application.
  • appearances of “in one embodiment,” “in an embodiment,” or “one possible implementation” in various places throughout this specification are not necessarily referring to the same embodiment.
  • the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
  • FIG. 1 is one of the schematic structural diagrams of a neck motion data collection and processing system provided by an embodiment of the present application. As shown in FIG. 1 , the neck motion data collection and processing system includes a first terminal 11 and a second terminal 12 .
  • the first terminal 11 is used to collect the initial data of the neck movement of the user in a unit time, and then send the collected initial data of the neck movement to the second terminal 12 .
  • before collecting the initial data of the user's neck movement, the first terminal 11 needs to determine the specific collection points. Neck movement data is difficult to collect directly, and directly collected data also poorly reflects the real movement of the neck; for example, during head pitching, data collected at the neck surface changes little, resulting in large error. However, the movement of the neck drives the head to move correspondingly, so the movement of the neck can be characterized by collecting the movement of one or more points on the head.
  • the motion data of the left ear and the right ear are collected, and the origin of the left ear and the origin of the right ear are taken as the collection points, and the motion of the neck is analyzed by analyzing the motion of the origin of the left ear and the origin of the right ear.
  • the origin of the left ear is the center of the left ear data acquisition device
  • the origin of the right ear is the center of the right ear data acquisition device.
  • the first terminal 11 in this embodiment may be an earphone capable of data acquisition, configured with motion sensors including an acceleration sensor, a gyroscope, an electronic compass sensor, an atmospheric pressure sensor, and the like.
  • the left earphone is worn on the left ear and the right earphone is worn on the right ear, wherein the center of the left earphone is the origin of the left ear, and the center of the right earphone is the origin of the right ear.
  • the initial data of the neck movement may include: the three-dimensional coordinate data of the left ear origin and the right ear origin movement in unit time, the acceleration of the left ear origin and the right ear origin, and the orientation data collected by the left earphone and the right earphone.
  • the three-dimensional coordinate data of the left ear origin and the right ear origin movement is the coordinate data located in the first three-dimensional coordinate system, and the first three-dimensional coordinate system refers to the preset three-dimensional coordinate system in the first terminal 11 .
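A per-tick sample of this initial data might be held in a record like the following; the field names and types are illustrative, not from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class HeadSample:
    """One tick of initial neck-motion data (illustrative field names)."""
    t: float              # timestamp within the unit-time window, seconds
    left_ear: Vec3        # left-ear origin, first coordinate system
    right_ear: Vec3       # right-ear origin, first coordinate system
    accel_left: Vec3      # acceleration at the left-ear origin, m/s^2
    accel_right: Vec3     # acceleration at the right-ear origin, m/s^2
    heading_deg: float    # facial orientation from the electronic compass
```

The second terminal would then receive a sequence of such records per unit time and run the preprocessing over them.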
  • the second terminal 12 is configured to receive the initial data of the neck movement collected by the first terminal 11, and preprocess the initial data to obtain the preprocessed data of the neck movement per unit time when the user is stationary or relatively stationary. Then the obtained preprocessing data is drawn into a curve to obtain the neck motion curve. Then, the neck motion curves are classified to obtain the frequency of occurrence of various types of neck motion curves per unit time.
  • the second terminal 12 in this embodiment may be an intelligent terminal with corresponding data processing capabilities, such as a mobile phone, a tablet, a computer, or a vehicle-mounted computer.
  • the second terminal 12 is provided with a second three-dimensional coordinate system: the origin of the second three-dimensional coordinate system is the midpoint of the line connecting the origin of the left ear and the origin of the right ear, the positive direction of its x-axis is the direction from the coordinate origin to the left-ear origin, the positive direction of its y-axis is vertically upward, and its z-axis is perpendicular to both the x-axis and the y-axis and points toward the face.
  • the data preprocessed by the second terminal 12 takes the second three-dimensional coordinate system as the reference system; the result is second three-dimensional coordinate data, and the neck motion curve is drawn based on this data.
  • a basic information database is preset in the second terminal 12.
  • the user is required to fill in relevant basic information, including height, the distance between the origin of the left ear and the origin of the right ear, the distance between the origin of the left ear (or the origin of the right ear) and the ground (or the top of the head), and the distance from the origin of the neck to the ground (or the top of the head).
  • the origin of the neck is the center of the rotation of the neck.
  • FIG. 2 is the second schematic diagram of the structure of the neck motion data collection and processing system provided by the embodiment of the present application. As shown in FIG. 2 , the neck motion data collection and processing system includes a third terminal 13 .
  • the third terminal 13 is used to collect the initial data of the user's neck movement in a unit time, and then preprocess the initial data to obtain the preprocessed data of the neck movement per unit time when the user is stationary or relatively stationary. Then the obtained preprocessing data is drawn into a curve to obtain the neck motion curve. Then, the neck motion curves are classified to obtain the frequency of occurrence of various types of neck motion curves per unit time.
  • the third terminal 13 in this embodiment may be an intelligent terminal device such as a headset having an information acquisition function and a data processing capability.
  • FIG. 3 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 30 may be the first terminal 11 or the second terminal 12 in FIG. 1 , or may be the third terminal 13 in FIG. 2 .
  • the terminal device 30 may include a processor 31 , a memory 32 , a communication interface 33 and a bus 34 .
  • the processor 31 , the memory 32 and the communication interface 33 may be connected through a bus 34 .
  • the processor 31 is the control center of the terminal device 30, and can be a general-purpose central processing unit (central processing unit, CPU), or can be other general-purpose processors.
  • the general-purpose processor may be a microprocessor or any conventional processor.
  • processor 31 may include one or more CPUs, such as CPU 0 and CPU 1 shown in FIG. 3.
  • the memory 32 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 32 may exist independently of the processor 31 .
  • the memory 32 may be connected to the processor 31 through a bus 34 for storing data, instructions or program codes.
  • the processor 31 calls and executes the instructions or program codes stored in the memory 32, the prediction method provided by the embodiments of the present application can be implemented.
  • the memory 32 may also be integrated with the processor 31 .
  • the communication interface 33 is used for connecting the terminal device 30 with other devices (such as servers) through a communication network, which can be Ethernet, a radio access network (RAN), a wireless local area network (WLAN), or the like.
  • the communication interface 33 may include a receiving unit for receiving data, and a transmitting unit for transmitting data.
  • the bus 34 can be an industry standard architecture (Industry Standard Architecture, ISA) bus, a peripheral device interconnect (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus and the like.
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of presentation, only one thick line is used in FIG. 3, but it does not mean that there is only one bus or one type of bus.
  • the structure shown in FIG. 3 does not constitute a limitation on the terminal device 30; the terminal device 30 may include more or fewer components than shown, combine certain components, or use a different component arrangement.
  • An embodiment of the present application provides a method for collecting and processing neck motion data. The method can be applied to the first terminal 11 and the second terminal 12 shown in FIG. 1, or to the third terminal 13 shown in FIG. 2. Specifically, the method can be applied to the terminal device 30 shown in FIG. 3.
  • the processor 31 can execute the program instructions in the memory 32 to realize the neck motion data collection and processing method of the embodiments of the present application. By executing this method, the key features of the neck motion data can be extracted, so that the data can be fully utilized.
  • the neck motion data collection and processing method adopted in this embodiment may be applied to the neck motion data collection and processing system shown in FIG. 1 .
  • the collection and processing flow of the embodiment of the present application is described with the first terminal 11 collecting the user's neck motion data and the second terminal 12 analyzing and processing the data.
  • the first terminal 11 may be a headset with a data collection function
  • the second terminal 12 may be a smart phone.
  • FIG. 6 shows one of the schematic flowcharts of the method for collecting and processing neck motion data according to an embodiment of the present application.
  • the method may include the following steps:
  • the first terminal 11 collects initial data of the user's head movement within a unit time, and sends the initial data to the second terminal 12 .
  • the initial data of the head movement can represent the user's neck movement.
  • the origin of the neck refers to the center of the interface between the neck and the torso.
  • when the user wears the headset on both ears, the headset collects the user's head movement data within a unit time.
  • the headset can be provided with a button for controlling whether to collect data, or a smart device can be used to control whether the headset collects the relevant data.
  • the unit time refers to a period of time set artificially, and the length of the unit time may be set according to the needs of the user, for example, the unit time may be 1 minute, 5 minutes or 10 minutes.
  • the initial data of the head movement includes: initial coordinate data of the user in the first three-dimensional coordinate system, acceleration data of the user, and facial orientation data of the user.
  • the initial coordinate data of the user in the first three-dimensional coordinate system refers to the three-dimensional coordinate data of the left-ear origin and the right-ear origin when the user puts on the headset, and the dynamic three-dimensional coordinate data of the left-ear origin and the right-ear origin while the user moves within the unit time.
  • the user's acceleration data refers to the user's acceleration detected by the acceleration sensor, and the user's motion state in a unit time can be determined through the acceleration data.
  • when the user's acceleration is 0, it can be judged that the user is in a static state or a state of uniform motion; when the acceleration data is not 0, the user must be in a state of non-uniform motion, and the specific motion is calculated by combining the acceleration data with the changes in the coordinate data of the left-ear origin and the right-ear origin.
  • the user's face orientation data refers to data indicating the direction in which the user's face is oriented.
  • the user's movement path and other information can be determined by the user's face orientation data, combined with the dynamic three-dimensional coordinate data and acceleration data of the origin of the left ear and the origin of the right ear.
  • the earphone sends the collected initial data of the head movement to the mobile phone
  • the data transmission method may adopt Bluetooth transmission, network transmission or other transmission methods, and the specific transmission method is not limited in this embodiment.
  • the second terminal 12 preprocesses the initial data to obtain preprocessed data of the head movement per unit time when the user is stationary or relatively stationary.
  • after the mobile phone receives the initial data collected by the headset, it preprocesses the initial data.
  • the three-dimensional coordinate data in the initial data is the three-dimensional coordinate data obtained by using the first three-dimensional coordinate system as a reference system.
  • a second three-dimensional coordinate system is established. FIG. 4 and FIG. 5 are two schematic diagrams of the second three-dimensional coordinate system provided by the embodiments of the present application. As shown in FIG. 4 and FIG. 5,
  • the second three-dimensional coordinate system takes the midpoint of the line connecting the left-ear origin and the right-ear origin as its origin, with the direction from the origin through the left-ear origin as the positive x-axis direction,
  • the vertically upward direction through the origin as the positive y-axis direction,
  • and the direction from the origin toward the face as the positive z-axis direction.
  • three-dimensional coordinate data determined with the second three-dimensional coordinate system as the reference system can intuitively reflect the head motion curve, making it easier to analyze the movement of the head and, in turn, the movement of the user's neck.
  • Preprocessing the initial data includes converting the first coordinate data of the left-ear origin and the right-ear origin, obtained with the first three-dimensional coordinate system as the reference system, into second coordinate data with the second three-dimensional coordinate system as the reference system.
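As a non-limiting sketch of this conversion step: the code below builds the second coordinate system from the two ear origins and a face-direction vector, then re-expresses the ear origins in that frame. The function and variable names are illustrative only and are not part of the embodiment; the face-direction vector is assumed to come from the headset's orientation sensor.

```python
import numpy as np

def to_second_frame(left_ear, right_ear, face_dir):
    """Re-express the ear origins in the head-centered second frame.

    The frame follows the description above: origin at the midpoint of the
    ear-to-ear line, +x through the left-ear origin, +y vertically upward,
    +z toward the face.
    """
    left_ear = np.asarray(left_ear, dtype=float)
    right_ear = np.asarray(right_ear, dtype=float)
    origin = (left_ear + right_ear) / 2.0          # midpoint of the ear-to-ear line
    x_axis = left_ear - origin                     # +x points through the left-ear origin
    x_axis /= np.linalg.norm(x_axis)
    z_axis = np.asarray(face_dir, dtype=float)     # +z points toward the face
    z_axis = z_axis - z_axis.dot(x_axis) * x_axis  # orthogonalize against x
    z_axis /= np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)              # +y points vertically upward
    R = np.vstack([x_axis, y_axis, z_axis])        # rows are the new basis vectors
    return R @ (left_ear - origin), R @ (right_ear - origin)
```

In this frame the two ear origins always land at mirror-image positions on the x-axis, which matches the symmetry property used later in the analysis.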
  • when the initial data of the user's head motion is collected, the initial data is motion data relative to the ground; when the user is stationary, the head motion data is also the motion data relative to the body's center of gravity.
  • the initial data includes the motion data of the head relative to the user's center of gravity and the motion data of the user's center of gravity relative to the ground.
  • This application aims to analyze the motion data of the user's head relative to the body's center of gravity, that is, the motion data of the head when the user is stationary or relatively stationary, so that the head motion data can be put to use.
  • the motion data of the user's center of gravity relative to the ground is derived from the motion data of the left-ear origin and the right-ear origin within a unit time, together with the user's acceleration data; this data is called the background state data.
  • the method for obtaining the background state data belongs to the prior art, for example obtaining the user's motion status through a wristband or a mobile phone, and is not described in detail in this embodiment. Since the collected initial data is the sum of the motion data of the head relative to the user's center of gravity and the motion data of the user's center of gravity relative to the ground, the background state data is subtracted from the initial data of the head motion to obtain the motion data of the head relative to the user's center of gravity, which is the preprocessed data of the head movement per unit time when the user is stationary or relatively stationary.
  • the preprocessed data is the second coordinate data with the second three-dimensional coordinate system as the reference system.
  • the background state data of the user's center of gravity relative to the ground motion can be obtained, and the user's background state data may not be exactly the same in different units of time.
  • when the user is stationary, the background state data is 0; when the user moves at a constant speed, the background state data changes at a constant rate, with the value related to the speed and direction of the uniform motion; when the user moves at a non-uniform speed, the background state data also changes, with the specific changes related to the speed, acceleration, and direction of the non-uniform motion.
  • a background state database can be set up on the mobile phone to store the background state data obtained during preprocessing. For the different background states, typical background state data corresponding to each state can be established, which facilitates the preprocessing of initial data collected in different background states.
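The subtraction step above can be sketched as a per-sample difference between two time-aligned streams of (x, y, z) displacements. This is an illustrative reading of the description, assuming the initial data and the background state data are sampled at the same instants; the function name is hypothetical.

```python
def subtract_background(initial_samples, background_samples):
    """Remove centre-of-gravity motion from head-motion samples.

    Both arguments are equal-length sequences of (x, y, z) tuples; the
    result is the motion of the head relative to the user's centre of
    gravity, i.e. the preprocessed data described above.
    """
    if len(initial_samples) != len(background_samples):
        raise ValueError("sample streams must be aligned in time")
    return [
        tuple(i - b for i, b in zip(init, bg))
        for init, bg in zip(initial_samples, background_samples)
    ]
```

When the user is stationary the background samples are all zero, and the preprocessed data equals the initial data, consistent with the stationary case described above.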
  • the second terminal 12 draws the preprocessed data per unit time into a curve to obtain a head motion curve.
  • the second coordinate data in the preprocessed data is the coordinate data of the origin of the left ear in a unit time and the coordinate data of the origin of the right ear in a unit time.
  • the motion trajectory curve of the left ear origin and the motion trajectory curve of the right ear origin are symmetrical with respect to the origin of the second three-dimensional coordinate system.
  • the motion of the head is represented by the motion of the origin of the left ear and the origin of the right ear
  • the motion of the user's neck is represented by the motion of the head. Therefore, the motion trajectory curve of the origin of the left ear or the motion trajectory curve of the origin of the right ear can represent the motion curve of the neck.
  • the neck motion curve is obtained by proportionally scaling down the motion trajectory curve of the left-ear origin or the right-ear origin; the scaling ratio is determined by the distances of the neck origin and the ear origins from the origin of the coordinate system.
  • the second terminal 12 classifies the head movement curves, and acquires frequency information of occurrences of various types of head movement curves per unit time.
  • after the mobile phone draws the preprocessed data into a curve, it obtains the motion trajectory curve of the left-ear origin or the right-ear origin. Since the neck motion curve is a scaled-down version of either of these trajectory curves, the movement of the neck can be analyzed by analyzing them. However, because the user's head movement within a unit time may be complicated, the trajectory curve of the left-ear origin or the right-ear origin may be a continuous, complex curve over that time, which is inconvenient for analyzing the movement of the neck.
  • the neck movements are classified.
  • Each type of head movement has a corresponding characteristic curve.
  • the user's head motion category is determined from the head motion curve and the frequency with which each type of motion curve appears; the user's neck motion category is then derived from the head motion category, so that the neck movement within the unit time can be analyzed.
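Once each segment of the unit-time curve has been assigned a category, obtaining the frequency information reduces to counting labels. A minimal sketch, assuming the classified segments arrive as a sequence of category names (the labels themselves are illustrative):

```python
from collections import Counter

def frequency_info(curve_labels):
    """Count how often each head-motion curve type appears in one unit time.

    `curve_labels` is a sequence such as ["left_turn", "head_down", ...],
    one label per classified curve segment.
    """
    return Counter(curve_labels)
```

A `Counter` conveniently reports 0 for motion types that did not occur, which is useful when building the per-unit-time charts described later.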
  • the second terminal 12 classifies the head motion curve, which may be implemented in the following manner.
  • the head and the neck form an integrated whole: the head is supported by the neck, and the neck drives its movement. When the head moves, the neck also moves accordingly. Therefore, in this embodiment of the present application, the corresponding movement of the neck can be analyzed by analyzing the movement of the head.
  • the second terminal 12 judges whether the head motion curve satisfies the first preset condition; if the first preset condition is met, it judges that the head motion is a left-right turning motion, and the neck motion corresponding to the head motion is also a left-right turning motion.
  • the head movement and the neck movement are consistent, and the head movement curve can represent the neck movement curve.
  • the motion trajectory curve of the left ear origin or the motion trajectory curve of the right ear origin can represent the head motion curve. Therefore, it is analyzed whether the motion trajectory curve of the left ear origin or the motion trajectory curve of the right ear origin satisfies the first preset condition. According to the corresponding preset conditions, it can be determined whether the neck movement is a left-right steering movement.
  • FIG. 7 is a schematic diagram of a left-right turning motion curve of the head according to an embodiment of the present application.
  • the first preset condition may be: whether, within a unit time, the coordinate data of the left-ear origin or the right-ear origin in the second three-dimensional coordinate system simultaneously satisfies the following conditions:
  • a is the distance between the origin of the second three-dimensional coordinate system and the origin of the left ear or the origin of the right ear
  • b is the distance between the origin of the second three-dimensional coordinate system and the origin of the neck
  • i is a positive number indicating the distance by which the left-ear origin or the right-ear origin may be offset to the left or right.
  • when the user moves the head, the movement paths cannot completely overlap each time; there is a certain offset. Correspondingly, the neck movement paths will not completely overlap each time either.
  • therefore, the type of neck movement can be judged by determining whether the head motion curve corresponding to a given neck movement is located within a certain threshold space.
  • the motion trajectory curve of the left-ear origin and the motion trajectory curve of the right-ear origin are symmetrical about the origin of the second three-dimensional coordinate system. Therefore, the motion curve of the entire head can be analyzed by analyzing the motion curve of either the left-ear origin or the right-ear origin.
  • the following description takes the motion trajectory of the origin of the left ear as an example.
  • FIG. 8 is a schematic diagram of a threshold interval for a left-right turning motion of the head according to an embodiment of the present application.
  • A in the figure represents the rightmost position of the left-ear origin within the threshold interval when the neck is not moving, corresponding to the position indicated by a−i on the x-axis in FIG. 7; B represents the leftmost position of the left-ear origin within the threshold interval when the neck is not moving, corresponding to the position indicated by a+i on the x-axis in FIG. 7.
  • the ABCD area represents the threshold interval within which the left-ear origin can move when the user performs a left-turning neck movement; a head motion curve located in this area is a left-turn motion curve.
  • the ABFE area represents the threshold interval within which the left-ear origin can move when the user performs a right-turning neck motion; a head motion curve located in this area is a right-turn motion curve.
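The exact inequalities of the first preset condition appear only as formulas in the patent figures and are not reproduced in the text, so the following is an assumed illustration rather than the claimed condition: the left-ear origin is required to stay within a band of width i around its rest distance a from the coordinate origin (measured in the horizontal x–z plane), and the sign of z picks the turning direction. All names and thresholds are hypothetical.

```python
import math

def classify_turn(points, a, i):
    """Region test for left/right turning (assumed form of the condition).

    `points` are (x, y, z) samples of the left-ear origin in the second
    frame; `a` is the rest distance from the coordinate origin and `i` the
    permitted left/right offset.
    """
    # All samples must stay inside the a ± i band (the threshold interval).
    if not all(abs(math.hypot(x, z) - a) <= i for x, y, z in points):
        return None                      # outside the threshold interval
    # During a left turn the left-ear origin swings backward (z < 0 here,
    # an assumed sign convention); during a right turn it swings forward.
    mean_z = sum(z for _, _, z in points) / len(points)
    return "left_turn" if mean_z < 0 else "right_turn"
```

A curve whose samples leave the band is rejected (`None`), mirroring the idea above that only curves inside the threshold interval are classified as turning motions.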
  • the second terminal 12 judges whether the head motion curve satisfies the second preset condition; if the second preset condition is met, it judges that the head motion is a left-right roll (side-tilt) motion, and the neck motion corresponding to the head motion is also a left-right tilt motion.
  • the head motion curve can represent the user's neck motion curve.
  • the motion trajectory curve of the left ear origin is the same as the motion trajectory curve of the right ear origin.
  • the center point of the left-ear origin and the right-ear origin is used as the reference point to analyze the user's head movement.
  • FIG. 9 is a schematic diagram of a left-right roll motion curve of the head provided by an embodiment of the present application
  • FIG. 10 is a schematic diagram of a threshold interval of the head-to-side roll motion provided by an embodiment of the present application.
  • the second preset condition may be: whether, within a unit time, the coordinate data of the center point of the left-ear origin and the right-ear origin in the second three-dimensional coordinate system simultaneously satisfies the following conditions:
  • b is the distance between the origin of the second coordinate system and the origin of the neck
  • k is a positive number indicating the distance by which the center point of the left-ear origin and the right-ear origin can shift downward.
  • the second preset condition is a region judgment: it is judged whether the head movement is the one corresponding to the second preset condition, and then whether the neck movement belongs to the corresponding type. This is because, when the user moves the head, the movement paths cannot completely overlap each time and have a certain offset; accordingly, the neck movement paths cannot completely overlap each time either.
  • through the region determination conditions, a given neck movement can be effectively classified into the type of neck movement corresponding to those conditions.
  • AOB in FIG. 10 represents the motion curve of the center point of the left-ear origin and the right-ear origin when the user's head tilts left and right with the neck origin at the position shown in FIG. 9; it can represent the user's head motion curve at this time. Here, the center point of the left-ear origin and the right-ear origin is exactly at the origin of the second three-dimensional coordinate system.
  • CO₁D represents the motion curve of the center point of the left-ear origin and the right-ear origin when that center point is shifted downward by k.
  • the ACDB area represents the threshold interval in which the center point of the left ear origin and the right ear origin on the head can move when the user performs the left and right roll motion of the neck.
  • the OACO₁ area represents the threshold interval within which the center point of the left-ear origin and the right-ear origin on the head can move when the user performs a right-tilting neck motion; a head motion curve located in this area is a right-tilt motion curve.
  • the OBDO₁ area represents the threshold interval within which the center point of the left-ear origin and the right-ear origin on the head can move when the user performs a left-tilting neck motion; a head motion curve located in this area is a left-tilt motion curve.
  • when the user performs a neck movement, if the coordinate values (x, y, z) of the center point of the left-ear origin and the right-ear origin in the second three-dimensional coordinate system satisfy the above-mentioned second preset conditions, and x ≤ 0, y ≤ 0, then the motion curve of the center point is located in the OACO₁ area, and it can be judged that the user's head motion curve is a right-tilt motion curve. Correspondingly, the user's neck motion curve is the right-tilt motion curve of the neck, and the user's neck motion is the right-tilt motion of the neck.
  • if the coordinate values (x, y, z) of the center point of the left-ear origin and the right-ear origin in the second three-dimensional coordinate system satisfy the above-mentioned second preset conditions, and x > 0, y ≤ 0, then the motion curve of the center point is located in the OBDO₁ area, and it can be judged that the user's head motion curve is a left-tilt motion curve. Correspondingly, the user's neck motion curve is the left-tilt motion curve of the neck, and the user's neck motion is the left-tilt motion of the neck.
  • the second terminal 12 determines whether the head motion curve satisfies the third preset condition, and if the third preset condition is met, judges that the head motion is a pitch motion, and the neck motion corresponding to the head motion is also a pitch motion.
  • the third preset condition may be: within a unit time, whether the coordinate data of the origin of the left ear or the origin of the right ear in the second three-dimensional coordinate system satisfies the following conditions:
  • b is the distance between the origin of the second coordinate system and the origin of the neck.
  • when the user's neck performs a pitching motion, the neck and head pitch about the neck origin as the center. At this time, the motion trajectory curve of the left-ear origin is the same as that of the right-ear origin; therefore, the motion curve of the entire head can be analyzed from the motion curve of either origin, and the motion of the neck can be analyzed in turn. The following takes the motion curve of the right-ear origin as an example.
  • FIG. 11 is a schematic diagram of a curve of head pitching motion provided by an embodiment of the present application.
  • point O in Figure 11 is the origin of the right ear
  • point C is the position that the origin of the right ear can reach when the head is raised
  • point D is the position that the origin of the right ear can reach when the head is lowered.
  • the curve OC in the figure is the standard motion curve of the origin of the right ear when the head is raised
  • the curve OD in the figure is the standard motion curve of the origin of the right ear when the head is lowered.
  • a threshold interval can be set.
  • the OBC area in the figure represents the threshold interval within which the right-ear origin can move when the user performs a head-up motion; a head motion curve located in this area is a head-up motion curve, and correspondingly the neck motion curve is also a head-up motion curve.
  • the OAD area in the figure represents the threshold interval within which the right-ear origin can move when the user performs a head-down (bowing) motion; a head motion curve located in this area is a head-down motion curve, and correspondingly the neck motion curve is also a bowing motion curve.
  • if, within the unit time, the coordinate data of the right-ear origin satisfies the third preset condition and the motion curve of the right-ear origin is located in the OBC area, it can be judged that the user's head motion curve is the head-up motion curve; correspondingly, the user's neck motion curve is a head-up motion curve, and the user's neck motion is a head-up motion.
  • if the motion curve of the right-ear origin is located in the OAD area, it can be judged that the user's head motion curve is the bowing motion curve; correspondingly, the user's neck motion curve is the bowing motion curve, and the user's neck motion is the bowing motion.
  • the method for collecting and processing neck movement data in the embodiment of the present application can be applied to sports health. Specifically, after collecting the initial data of neck movement, the first terminal 11 sends the initial data to the second terminal 12, and the second terminal 12 processes the initial data to obtain the corresponding neck motion information.
  • the neck motion information may be the neck motion curve and neck motion category obtained in the embodiment of the present application.
  • the second terminal 12 can also perform data visualization on the initial data of the neck movement and the processed data. Data visualization refers to presenting the collected initial data and the data generated by the analysis to the user in an intuitive form such as a table, line graph, pie chart, or bar chart.
  • the second terminal 12 generates the user's motion chart from the collected initial data, generates a neck motion chart from the preprocessed data, or generates a chart of the user's neck movement over a period of time from the obtained neck motion information, including the frequency with which each neck motion occurs, the types of neck movement, and other information. This chart can help the user understand the types and frequency of the neck movements made during that time.
  • after learning about his or her movement status from the generated neck movement chart, the user can adopt corresponding coping strategies.
  • for example, a neck movement chart may show that the proportion of time the user's head was bowed over a period exceeded 70%. Prolonged bowing of the head is likely to cause various cervical vertebra diseases; based on this information, the user can improve his or her sitting posture, move the neck appropriately, and perform other neck exercises to improve the state of the neck.
  • a corresponding neck health knowledge base can also be set up in the mobile phone.
  • after the mobile phone analyzes the collected initial data of neck movement, it combines the analysis results with the neck health knowledge base and outputs the neck health status obtained from the analysis.
  • the prompt information may include neck health status and suggestions to the user.
  • the mobile phone can display the prompt information directly on the screen, or output the prompt information by voice broadcast. When the prompt information is output by voice broadcast, the prompt information can be broadcast to the user through the earphone worn by the user.
  • the embodiment of the present application does not limit the manner of outputting the prompt information.
  • FIG. 14 is one of the system architecture diagrams applied by the embodiments of the present application.
  • the preprocessing module in the figure completes the preliminary processing of the data, filtering out interference such as running and walking; the execution module transmits the data to the model-analysis stage, where the data is matched against the model database.
  • the execution module receives the results fed back by the model analysis and processes the data accordingly, and the output module completes the classification, counting, and output of the neck motion parameters.
  • the visualization module can present the motion data in various ways in the sports-health interface according to the classification of the neck motion; the strategy analysis further analyzes the neck exercise data to visualize the movement of the neck muscles and, on the other hand, analyzes exercise intensity and emphasis according to the user's health knowledge base.
  • the health-steward broadcast can summarize the user's exercise status and, in the manner of a health steward, deliver exercise broadcasts or reminders through the headphones and the like.
  • the above-mentioned neck muscles generally include: the superficial cervical muscle (platysma); the lateral cervical muscle (sternocleidomastoid); and the deep cervical muscles, comprising a medial (prevertebral) group and a lateral (lateral vertebral) group.
  • FIG. 15 is the second system architecture diagram applied by the embodiment of the present application.
  • the method for collecting and processing neck motion data in the embodiment of the present application can also be applied to scenarios such as sports entertainment interaction.
  • the relevant information obtained by this method can be applied to contactless (in-air) operation of an APP.
  • after the mobile phone processes the initial data of neck movement, it can obtain different types of neck motion curves, including the left-turn motion curve of the neck, the right-turn motion curve of the neck, the left-tilt motion curve of the neck, the right-tilt motion curve of the neck, the head-up motion curve, the head-down motion curve, and so on.
  • Each neck motion curve can be used as a separate command to operate the APP in the mobile phone.
  • for example, the music APP can be started when the user turns the neck to the left, and exited when the user turns the neck to the right; the head-up motion curve can be set as the play instruction, so the music APP plays music when the user raises the head; the head-down motion curve can be set as the pause instruction, so playback pauses when the user bows the head; the left-tilt motion curve can be set as the previous-track instruction, so the music APP switches to the previous song when the user tilts the neck to the left; and the right-tilt motion curve can be set as the next-track instruction, so the music APP switches to the next song when the user tilts the neck to the right.
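The binding described above amounts to a lookup table from recognized curve types to APP commands. A minimal sketch, with all curve-type labels and command names chosen for illustration:

```python
# Mapping from recognized neck-motion curve types to music-APP commands,
# mirroring the example bindings described above.
NECK_COMMANDS = {
    "left_turn": "start_music_app",
    "right_turn": "exit_music_app",
    "head_up": "play",
    "head_down": "pause",
    "left_tilt": "previous_track",
    "right_tilt": "next_track",
}

def dispatch(curve_type):
    """Return the command bound to a curve type, or None if unbound."""
    return NECK_COMMANDS.get(curve_type)
```

Because the mapping is just data, the same dispatch mechanism supports the other scenarios mentioned below (opening or closing different apps, camera control, custom curves) by swapping in a different table.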
  • different neck motion curves can also be set as different commands for starting or closing different apps, or the neck motion curves can be set as corresponding commands for taking pictures with the camera and other scenes.
  • in this way, contactless interaction with mobile APPs can be realized through neck movement, which is not only easy to operate but also frees the hands and is highly engaging.
  • the above method can also be applied to operating the mobile phone in the air, for example, using head and neck motions to turn pages, select, pull down, unlock, take pictures, operate input methods, and the like.
  • the neck movement curve can also be customized by the method provided in the embodiments of the present application, and the user-defined application is triggered by identifying the custom neck movement curve, for example, opening the APK application, locking the screen, unlocking, and the like.
  • the above method can also be applied to headset navigation guidance: after the destination is set on the mobile phone, navigation is delivered through the headset, so that no navigation screen is needed and the user is freed from visual interaction with the phone.
  • the embodiments of the present application do not limit the specific application of the method for collecting and processing neck motion data.
  • FIG. 16 is the third system architecture diagram applied by the embodiment of the present application
  • FIG. 17 is a schematic flowchart of the application to the system architecture shown in FIG. 16 provided by the embodiment of the present application.
  • the neck motion data collection and processing method adopted in the embodiment of the present application can be applied with reference to the system shown in FIG. 16; for the specific application method, scenario, and flow, refer to FIG. 17.
  • the above data processing model can provide a variety of interfaces for different application modules to use.
  • Motion data interface: (1) a counting interface for the different motion curves; (2) an interface for the speed of the neck movement curve; (3) an interface for the neck movement type.
  • Switch data interface: (1) an effective neck movement curve per unit time can serve as a switch-quantity data interface; (2) an effective neck movement curve has direction attributes and scene attributes and can serve as a switch-type data interface for selection; (3) the frequency of effective neck movement curves per unit time can serve as a custom switch data interface; (4) a static neck posture can serve as a switch data interface; (5) the spatial orientation of the head and neck can serve as a pointing data interface.
  • Personalized neck movement curve interface: the user inputs a custom personalized neck movement curve, which serves as a personalized operation-command interface for the user's applications.
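The interface list above can be sketched as a small facade over the data-processing model. The patent defines these interfaces abstractly, so the class, method names, and storage below are all assumptions made for illustration:

```python
class NeckDataModel:
    """Illustrative facade exposing a subset of the interfaces listed above."""

    def __init__(self):
        self._counts = {}          # motion-data interface (1): curve counts
        self._custom_curves = {}   # personalized-curve interface

    def record(self, curve_type):
        """Register one occurrence of a recognized curve type."""
        self._counts[curve_type] = self._counts.get(curve_type, 0) + 1

    def curve_count(self, curve_type):
        """Counting interface for the different motion curves."""
        return self._counts.get(curve_type, 0)

    def register_custom(self, name, curve_points):
        """Store a user-defined curve for the personalized command interface."""
        self._custom_curves[name] = list(curve_points)

    def has_custom(self, name):
        return name in self._custom_curves
```

Application modules (charting, switch-data consumers, custom-command triggers) would each consume only the subset of methods corresponding to the interface they need.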
  • the neck movement data collection and processing method adopted in this embodiment may be applied to the neck movement data collection and processing system shown in FIG. 2 .
  • the embodiments of the present application are described by taking as an example that the third terminal 13 is an earphone capable of collecting neck motion data and capable of analyzing and processing the data.
  • FIG. 12 shows the second schematic flowchart of the method for collecting and processing neck motion data provided by the embodiments of the present application.
  • The method may include the following steps:
  • S201. The third terminal 13 collects initial data of the user's head motion per unit time.
  • S202. The third terminal 13 preprocesses the initial data to obtain preprocessed head-motion data per unit time while the user is stationary or relatively stationary.
  • S203. The third terminal 13 plots the preprocessed data for the unit time as curves to obtain head motion curves.
  • S204. The third terminal 13 classifies the head motion curves and acquires frequency information on how often each type of head motion curve occurs per unit time.
  • The functions and roles of the third terminal 13 in this embodiment are equivalent to the combination of those of the first terminal 11 and those of the second terminal 12 in the first embodiment.
  • For descriptions of the technical solutions and beneficial effects of each step in this embodiment, refer to the corresponding steps in the first embodiment; details are not repeated here.
  • In application, the initial data collected by the third terminal 13 and the processing results of that data need to be sent to the smart terminal in the corresponding scenario; for the remaining application steps and methods, refer to the description in the first embodiment, which is not repeated here.
  • Each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • The above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiments of the present application is schematic and is only a logical functional division; there may be other division manners in actual implementation.
  • FIG. 13 is a schematic diagram of an apparatus for collecting and processing neck motion data according to an embodiment of the present application.
  • The neck motion data collection and processing apparatus is used to perform the above method for collecting and processing neck motion data, for example the method shown in FIG. 6.
  • The apparatus for collecting and processing neck motion data may include: a data collection module 1, a preprocessing module 2, a curve drawing module 3, and an information acquisition module 4.
  • The data collection module 1 is used to collect initial data of the user's head motion per unit time, where the initial head-motion data represents the user's neck motion.
  • The preprocessing module 2 is used to preprocess the initial head-motion data to obtain preprocessed head-motion data per unit time while the user is stationary or relatively stationary.
  • The curve drawing module 3 is used to plot the preprocessed data for the unit time as curves to obtain head motion curves.
  • The information acquisition module 4 is used to classify the head motion curves and acquire frequency information on how often each type of head motion curve occurs per unit time.
  • With reference to FIG. 6, the data collection module 1 may perform S101, the preprocessing module 2 may perform S102, the curve drawing module 3 may perform S103, and the information acquisition module 4 may perform S104.
  • The initial data collected by the data collection module 1 includes: initial coordinate data in the first three-dimensional coordinate system, the user's acceleration data, and the user's facial orientation data.
  • The preprocessing module 2 is specifically configured to obtain the user's state data based on the initial coordinate data, the acceleration data, and the orientation data, and determine the user's state; and to obtain the preprocessed data based on the initial coordinate data and the state data, where the preprocessed data is first coordinate data referenced to the first three-dimensional coordinate system.
  • The preprocessing module 2 is further specifically configured to convert the first coordinate data into second coordinate data referenced to the second three-dimensional coordinate system, whose origin is the midpoint of the line connecting the user's left-ear origin and right-ear origin.
  • The curve drawing module 3 classifies the head motion curves into: left/right-turn motion curves, where the head motion curve satisfies the first preset condition; left/right-tilt motion curves, where the head motion curve satisfies the second preset condition; and pitch motion curves, where the head motion curve satisfies the third preset condition.
  • The curve drawing module 3 classifies the head motion curves based on the first, second, and third preset conditions.
  • The first preset condition includes: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies: x² + z² ≤ (a + i)², and x² + z² ≥ (a − i)².
  • The second preset condition includes: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies: x² + y² ≤ 2b|y|, and x² + y² + 2bk ≥ k² + 2b|y|.
  • The third preset condition includes: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies: x² + y² − 2by ≤ 0, and |y| ≤ b².
  • Here a is the distance from the origin of the second coordinate system to the left-ear origin or the right-ear origin, b is the distance from the origin of the second coordinate system to the neck origin, and i and k are positive numbers.
  • The preprocessing module 2 is specifically configured to determine the user's state, which may include a stationary state and a moving state.
  • When the user is in a moving state, a motion curve graph of the user is drawn based on the state data for the unit time, and the graph is classified and saved.
  • Some or all of the functions of the above modules can be implemented by the processor 31 in FIG. 3 executing the program code stored in the memory 32 in FIG. 3.
  • The chip system 100 includes at least one processor 110 and at least one interface circuit 120.
  • When the chip system 100 includes one processor and one interface circuit, the processor may be the processor 110 shown in the solid-line box in FIG. 18 (or the processor 110 shown in the dashed-line box), and the interface circuit may be the interface circuit 120 shown in the solid-line box in FIG. 18 (or the interface circuit 120 shown in the dashed-line box).
  • When the chip system 100 includes two processors and two interface circuits, the two processors include the processor 110 shown in the solid-line box and the processor 110 shown in the dashed-line box in FIG. 18, and the two interface circuits include the interface circuit 120 shown in the solid-line box and the interface circuit 120 shown in the dashed-line box in FIG. 18. This is not limited.
  • the processor 110 and the interface circuit 120 may be interconnected by wires.
  • The interface circuit 120 may be used to receive signals (e.g., from a vehicle speed sensor or an edge service unit).
  • The interface circuit 120 may be used to send signals to other devices (e.g., the processor 110).
  • The interface circuit 120 may read instructions stored in the memory and send the instructions to the processor 110.
  • When the instructions are executed by the processor 110, the neck motion data collection and processing apparatus can be caused to perform the steps in the above embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
  • Another embodiment of the present application further provides a computer-readable storage medium storing instructions.
  • When the instructions run on the neck motion data collection and processing apparatus, the apparatus performs the steps that it performs in the method flow shown in the above method embodiments.
  • the disclosed methods may be implemented as computer program instructions encoded in a machine-readable format on a computer-readable storage medium or on other non-transitory media or articles of manufacture.
  • FIG. 19 schematically shows a conceptual partial view of a computer program product provided by an embodiment of the present application, where the computer program product includes a computer program for executing a computer process on a computing device.
  • the computer program product is provided using the signal bearing medium 130 .
  • The signal bearing medium 130 may include one or more program instructions that, when executed by one or more processors, may provide the functions, or portions of the functions, described above with respect to FIG. 6.
  • Thus, one or more features of S101–S104 in FIG. 6 may be undertaken by one or more instructions associated with the signal bearing medium 130.
  • The program instructions depicted in FIG. 19 likewise describe example instructions.
  • The signal bearing medium 130 may include a computer-readable medium 131, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a read-only memory (ROM), or a random-access memory (RAM).
  • The signal bearing medium 130 may include a computer-recordable medium 132, such as, but not limited to, a memory, a read/write (R/W) CD, or an R/W DVD.
  • The signal bearing medium 130 may include a communication medium 133, such as, but not limited to, digital and/or analog communication media (e.g., fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
  • Signal bearing medium 130 may be conveyed by a wireless form of communication medium 133 (eg, a wireless communication medium that conforms to the IEEE 802.11 standard or other transmission protocol).
  • the one or more program instructions may be, for example, computer-executable instructions or logic-implemented instructions.
  • The neck motion data collection and processing apparatus, such as that described with respect to FIG. 19, may be configured to provide various operations, functions, or actions in response to program instructions conveyed by one or more of the computer-readable medium 131, the computer-recordable medium 132, and/or the communication medium 133.
  • The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented using a software program, they can be realized in whole or in part in the form of a computer program product.
  • The computer program product includes one or more computer instructions.
  • When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part.
  • The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media.
  • The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state drives (SSDs)), and the like.


Abstract

A method and apparatus for collecting and processing neck motion data, relating to the field of human motion data processing, capable of solving the problem that people cannot accurately track their neck motion data, and applicable in a neck motion data collection and processing system. The method includes: collecting initial data of a user's head motion per unit time, where the initial head-motion data represents the user's neck motion; preprocessing the initial head-motion data to obtain preprocessed data of the head motion per unit time while the user is stationary or relatively stationary; plotting the preprocessed data for the unit time as a curve to obtain a head motion curve; and classifying the head motion curves and acquiring frequency information on how often each type of head motion curve occurs per unit time.

Description

Method and apparatus for collecting and processing neck motion data

This application claims priority to Chinese Patent Application No. 202011616244.4, entitled "Method and apparatus for collecting and processing neck motion data", filed with the China National Intellectual Property Administration on December 30, 2020, which is incorporated herein by reference in its entirety.

Technical Field

This application relates to the field of human motion data processing, and in particular to a method and apparatus for collecting and processing neck motion data.

Background

As society develops and the pace of life quickens, most office workers sit in front of a computer for long periods, which places a considerable burden on the neck; if the neck stays inactive for a long time, conditions such as cervical spondylosis and frozen shoulder can easily develop. More and more people therefore choose to exercise their necks. Neck motion data can help people better understand their neck activity, make targeted adjustments based on that data, and even use neck motion for flexible human-computer interaction. In the prior art, however, people can only summarize their own neck activity from common sense or from methods collected online; there is no method for accurately collecting and analyzing neck motion data, so people can neither clearly understand their own neck motion nor make targeted adjustments to their neck health.

Summary

Embodiments of this application provide a method and apparatus for collecting and processing neck motion data, which can solve the problem that people cannot accurately track their neck motion data.

To achieve the above objective, this application adopts the following technical solutions:

According to a first aspect, a method for collecting and processing neck motion data is provided. The method includes: collecting initial data of a user's head motion per unit time, where the initial head-motion data represents the user's neck motion; preprocessing the initial head-motion data to obtain preprocessed data of the head motion per unit time while the user is stationary or relatively stationary; plotting the preprocessed data for the unit time as a curve to obtain a head motion curve; and classifying the head motion curves and acquiring frequency information on how often each type of head motion curve occurs per unit time.
Based on the collection and processing method provided in the first aspect, collecting the initial head-motion data per unit time captures the user's neck motion. Preprocessing the initial data yields head-motion data for the unit time while the user is stationary or relatively stationary, which reveals how the neck moves relative to the torso. Plotting the preprocessed data as head motion curves, classifying those curves, and counting how often each type occurs processes the head motion data fully: the head data reveal the types of neck motion over a period of time and the number of times each occurred, producing an analysis of the neck motion that helps the user accurately track their own neck motion data.

In one possible design, the collected initial head-motion data includes: initial coordinate data in a first three-dimensional coordinate system, the user's acceleration data, and the user's facial orientation data.

In this case, the initial coordinate data in the first three-dimensional coordinate system and the acceleration data help analyze the user's motion state and the corresponding state data; combining the state data with the facial orientation data helps determine the user's movement route and the like. Collecting multiple kinds of initial data supports a comprehensive analysis of the user's motion and therefore a more accurate picture of the neck motion.

In one possible design, preprocessing the initial data includes: obtaining the user's state data based on the initial coordinate data, the acceleration data, and the orientation data, and determining the user's state; then obtaining the preprocessed data based on the initial coordinate data and the state data. The preprocessed data is first coordinate data referenced to the first three-dimensional coordinate system.

In this case, preprocessing yields the user's motion state and the corresponding state data, from which the motion of the head relative to the torso, and hence of the neck relative to the body, can be derived from the initial coordinate data and the state data, making it easy for the user to understand their own neck motion.

In one possible design, preprocessing the initial data further includes: converting the first coordinate data into second coordinate data referenced to a second three-dimensional coordinate system, whose origin is the midpoint of the line connecting the user's left-ear origin and right-ear origin.

In this case, converting the first coordinate data into second coordinate data referenced to the second three-dimensional coordinate system gives an intuitive picture of the head motion curve, allowing the head motion, and hence the user's neck motion, to be analyzed more effectively.

In one possible design, the head motion curves are classified into: left/right-turn motion curves, where the head motion curve satisfies a first preset condition; left/right-tilt motion curves, where the head motion curve satisfies a second preset condition; and pitch motion curves, where the head motion curve satisfies a third preset condition.

In this case, classifying the head curves classifies the head motion and, correspondingly, the neck motion, giving the user a more intuitive view of the neck motion analysis results.
In one possible design, the first preset condition may include: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:

x² + z² ≤ (a + i)², and x² + z² ≥ (a − i)².

The second preset condition may include: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:

x² + y² ≤ 2b|y|, and x² + y² + 2bk ≥ k² + 2b|y|.

The third preset condition may include: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:

x² + y² − 2by ≤ 0, and |y| ≤ b².

Here a is the distance from the origin of the second coordinate system to the left-ear origin or the right-ear origin, b is the distance from the origin of the second coordinate system to the neck origin, and i and k are positive numbers.

In one possible design, the user's state may include a stationary state and a moving state. When the user is in a moving state, a motion curve graph of the user is drawn based on the state data for the unit time, and the graph is classified and saved.

In this case, classifying the user's state and storing the corresponding data provides a reference for preprocessing initial data collected under different background states, improving the efficiency and accuracy of processing the initial data.
According to a second aspect, an apparatus for collecting and processing neck motion data is provided.

In one possible design, the apparatus is configured to perform the method for collecting and processing neck motion data provided in the first aspect. In this application, the apparatus may be divided into functional modules according to the method provided in the first aspect: for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. Illustratively, the apparatus may be divided by function into a data collection module, a preprocessing module, a curve drawing module, an information acquisition module, and so on. For descriptions of the possible technical solutions performed by these functional modules and their beneficial effects, refer to the technical solutions provided in the first aspect or its corresponding possible designs; details are not repeated here.

In another possible design, the apparatus includes a memory and one or more processors coupled to the memory. The memory stores computer instructions, and the processor invokes the computer instructions to perform any method provided in the first aspect and any of its possible designs.

According to a third aspect, this application provides a computer-readable storage medium, such as a non-transitory computer-readable storage medium, storing a computer program (or instructions) that, when run on the neck motion data collection and processing apparatus, causes the apparatus to perform any method provided by any possible implementation of the first aspect.

According to a fourth aspect, this application provides a computer program product that, when run on a computer, causes any method provided by any possible implementation of the first aspect to be performed.

According to a fifth aspect, this application provides a chip system including a processor configured to call and run a computer program stored in a memory, so as to perform any method provided by the implementations of the first aspect.

According to a sixth aspect, this application provides a neck motion data collection and processing system including a first terminal and a second terminal. The first terminal collects initial head-motion data, and the second terminal processes the collected initial data to implement the collection and processing of neck motion data. Alternatively, the system includes a third terminal configured to perform any method provided by the implementations of the first aspect.

It can be understood that any of the neck motion data collection and processing apparatuses, computer storage media, computer program products, or neck motion data collection and processing systems provided above can be applied to the corresponding methods provided above; for their beneficial effects, refer to the beneficial effects of the corresponding methods, which are not repeated here.

In this application, the names of the above neck motion data collection and processing apparatus do not limit the devices or functional modules themselves; in actual implementations, these devices or functional modules may appear under other names. As long as the functions of each device or functional module are similar to those in this application, they fall within the scope of the claims of this application and their equivalents.

These and other aspects of this application will be more readily apparent from the following description.
Brief Description of the Drawings

FIG. 1 is a first schematic architectural diagram of a neck motion data collection and processing system according to an embodiment of this application;

FIG. 2 is a second schematic architectural diagram of the neck motion data collection and processing system according to an embodiment of this application;

FIG. 3 is a schematic structural diagram of a terminal device according to an embodiment of this application;

FIG. 4 is a first schematic diagram of a second three-dimensional coordinate system according to an embodiment of this application;

FIG. 5 is a second schematic diagram of the second three-dimensional coordinate system according to an embodiment of this application;

FIG. 6 is a first schematic flowchart of a method for collecting and processing neck motion data according to an embodiment of this application;

FIG. 7 is a schematic diagram of left/right-turn head motion curves according to an embodiment of this application;

FIG. 8 is a schematic diagram of the threshold intervals for left/right-turn head motion according to an embodiment of this application;

FIG. 9 is a schematic diagram of left/right-tilt head motion curves according to an embodiment of this application;

FIG. 10 is a schematic diagram of the threshold intervals for left/right-tilt head motion according to an embodiment of this application;

FIG. 11 is a schematic diagram of head pitch motion curves according to an embodiment of this application;

FIG. 12 is a second schematic flowchart of the method for collecting and processing neck motion data according to an embodiment of this application;

FIG. 13 is a schematic diagram of an apparatus for collecting and processing neck motion data according to an embodiment of this application;

FIG. 14 is a first system architecture diagram to which an embodiment of this application is applied;

FIG. 15 is a second system architecture diagram to which an embodiment of this application is applied;

FIG. 16 is a third system architecture diagram to which an embodiment of this application is applied;

FIG. 17 is a schematic flowchart applied to the system architecture shown in FIG. 16 according to an embodiment of this application;

FIG. 18 is a schematic structural diagram of a chip system according to an embodiment of this application;

FIG. 19 is a conceptual partial view of a computer program product according to an embodiment of this application.
Detailed Description of Embodiments

The technical solutions in this application are described below with reference to the accompanying drawings.

In the embodiments of this application, words such as "exemplary" or "for example" indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferable to or more advantageous than other embodiments or designs; rather, these words are intended to present related concepts in a concrete manner.

In the embodiments of this application, the terms "first" and "second" are used only for description and should not be understood as indicating or implying relative importance or the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features.

In the description of this application, unless otherwise stated, "multiple" means two or more, and "at least one" means one or more.

It should be understood that the terms used in describing the various examples herein are intended only to describe particular examples and are not limiting. As used in the descriptions of the examples and in the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

It should also be understood that the term "and/or" used herein refers to and covers any and all possible combinations of one or more of the associated listed items. The term "and/or" describes an association between associated objects and indicates that three relationships may exist: for example, "A and/or B" may mean A alone, both A and B, or B alone. The character "/" in this application generally indicates an "or" relationship between the associated objects.

It should also be understood that, in the embodiments of this application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and should not limit the implementation of the embodiments of this application in any way.

It should be understood that determining B based on A does not mean determining B based only on A; B may also be determined based on A and/or other information.

It should also be understood that the term "includes" (also "including", "comprises", and/or "comprising"), when used in this specification, specifies the presence of the stated features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

It should also be understood that the term "if" may be interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined..." or "if [a stated condition or event] is detected" may be interpreted to mean "upon determining..." or "in response to determining..." or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]".

It should be understood that references throughout this specification to "one embodiment", "an embodiment", or "a possible implementation" mean that a particular feature, structure, or characteristic related to the embodiment or implementation is included in at least one embodiment of this application. Thus, appearances of "in one embodiment", "in an embodiment", or "a possible implementation" throughout this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

The method for collecting and processing neck motion data provided by the embodiments of this application is described in detail below with reference to FIG. 1 to FIG. 12.
Referring to FIG. 1, FIG. 1 is a first schematic architectural diagram of a neck motion data collection and processing system according to an embodiment of this application. As shown in FIG. 1, the system includes a first terminal 11 and a second terminal 12.

As shown in FIG. 1, the first terminal 11 collects initial data of the user's neck motion per unit time and then sends the collected initial data to the second terminal 12. Before collecting the initial neck-motion data, specific collection points must be determined. Neck motion data is difficult to collect directly, and directly collected data hardly reflects the true neck motion: for example, when the head pitches, the data collected at the skin of the neck changes little, causing large errors. However, neck motion drives the head to move correspondingly, so neck motion can be characterized by collecting the motion of one or more points on the head. In this embodiment, for convenience, the motion data of the left and right ears is collected, with the left-ear origin and the right-ear origin as collection points; the neck motion is analyzed by analyzing the motion of these two points. The left-ear origin is the center of the left-ear data collection device, and the right-ear origin is the center of the right-ear data collection device.

To conveniently collect the motion data of the left-ear origin and/or the right-ear origin, the first terminal 11 in this embodiment may be an earphone with data collection capability, equipped with motion sensors including an acceleration sensor, a gyroscope, an electronic compass sensor, a barometric pressure sensor, and the like. During collection, the left earphone is worn on the left ear and the right earphone on the right ear, where the center of the left earphone is the left-ear origin and the center of the right earphone is the right-ear origin.

The initial neck-motion data may include: within the unit time, the three-dimensional coordinate data of the motion of the left-ear and right-ear origins, the accelerations of the left-ear and right-ear origins, and the orientation data collected by the left and right earphones.

It should be noted that the three-dimensional coordinate data of the left-ear and right-ear origins is coordinate data in a first three-dimensional coordinate system, which is the coordinate system preset inside the first terminal 11.

The second terminal 12 receives the initial neck-motion data collected by the first terminal 11 and preprocesses it to obtain preprocessed neck-motion data per unit time while the user is stationary or relatively stationary. It then plots the preprocessed data as curves to obtain neck motion curves, classifies the curves, and obtains the frequency with which each type of neck motion curve occurs per unit time.

The second terminal 12 in this embodiment may be a smart terminal with corresponding data processing capability, such as a mobile phone, tablet, computer, or in-vehicle computer.

A second three-dimensional coordinate system is set inside the second terminal 12. Its origin is the midpoint of the line connecting the left-ear and right-ear origins; its positive x axis points from the origin toward the left-ear origin; its positive y axis points vertically upward; and its z axis is perpendicular to both the x and y axes and points toward the face. The data preprocessed by the second terminal 12 is second three-dimensional coordinate data referenced to this second coordinate system, and the neck motion curves are plotted from this data.

Because each user's basic information differs, such as height and head size, a basic information database is preset in the second terminal 12. On first data processing, the user fills in the relevant basic information, including height, the distance between the left-ear and right-ear origins, the distance from the left-ear origin (or right-ear origin) to the ground (or the top of the head), and the distance from the neck origin to the ground (or the top of the head). The neck origin is the center of neck rotation.
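The basic information entered on first use can be kept in a small record from which the quantities a (origin-to-ear distance) and b (origin-to-neck distance) used later are derived. A minimal sketch, assuming illustrative field names, centimetre units, and that the ear and neck distances are both measured from the top of the head; none of these names come from the patent:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Basic user information entered on first use (illustrative fields)."""
    height_cm: float        # user height
    ear_span_cm: float      # distance between left-ear and right-ear origins
    ear_to_top_cm: float    # left-ear (or right-ear) origin to top of head
    neck_to_top_cm: float   # neck origin (center of neck rotation) to top of head

    def half_ear_span(self) -> float:
        # a: distance from the second coordinate system's origin to either ear origin
        return self.ear_span_cm / 2.0

    def ear_to_neck(self) -> float:
        # b: distance from the second coordinate system's origin to the neck origin
        return self.neck_to_top_cm - self.ear_to_top_cm

profile = UserProfile(height_cm=175.0, ear_span_cm=16.0,
                      ear_to_top_cm=12.0, neck_to_top_cm=26.0)
a = profile.half_ear_span()   # 8.0
b = profile.ear_to_neck()     # 14.0
```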
Referring to FIG. 2, FIG. 2 is a second schematic architectural diagram of the neck motion data collection and processing system according to an embodiment of this application. As shown in FIG. 2, the system includes a third terminal 13.

The third terminal 13 collects initial data of the user's neck motion per unit time, preprocesses the initial data to obtain preprocessed neck-motion data per unit time while the user is stationary or relatively stationary, plots the preprocessed data as curves to obtain neck motion curves, and classifies the curves to obtain the frequency with which each type occurs per unit time.

The third terminal 13 in this embodiment may be a smart terminal device, such as an earphone, that has both an information acquisition function and data processing capability.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a terminal device according to an embodiment of this application. The terminal device 30 may be the first terminal 11 or the second terminal 12 in FIG. 1, or the third terminal 13 in FIG. 2.

As shown in FIG. 3, the terminal device 30 may include a processor 31, a memory 32, a communication interface 33, and a bus 34, where the processor 31, the memory 32, and the communication interface 33 may be connected by the bus 34.

The processor 31 is the control center of the terminal device 30 and may be a general-purpose central processing unit (CPU) or another general-purpose processor, where a general-purpose processor may be a microprocessor or any conventional processor.

As an example, the processor 31 may include one or more CPUs, such as CPU 0 and CPU 1 shown in FIG. 3.

The memory 32 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random-access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a magnetic disk storage medium or other magnetic storage device, or any other medium capable of carrying or storing desired program code in the form of instructions or data structures and accessible by a computer, but is not limited thereto.

In one possible implementation, the memory 32 may exist independently of the processor 31. The memory 32 may be connected to the processor 31 through the bus 34 and is used to store data, instructions, or program code. When the processor 31 calls and executes the instructions or program code stored in the memory 32, the method provided by the embodiments of this application can be implemented.

In another possible implementation, the memory 32 may be integrated with the processor 31.

The communication interface 33 is used to connect the terminal device 30 with other devices (such as servers) through a communication network, which may be an Ethernet, a radio access network (RAN), a wireless local area network (WLAN), or the like. The communication interface 33 may include a receiving unit for receiving data and a sending unit for sending data.

The bus 34 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 3, but this does not mean there is only one bus or one type of bus.

It should be noted that the structure shown in FIG. 3 does not limit the terminal device 30; in addition to the components shown in FIG. 3, the terminal device 30 may include more or fewer components, combine certain components, or arrange components differently.
An embodiment of this application provides a method for collecting and processing neck motion data, which may be applied to the first terminal 11 and the second terminal 12 shown in FIG. 1, or to the third terminal 13 shown in FIG. 2. Specifically, the method may be applied to the terminal device 30 shown in FIG. 3; in that case, the method provided by the embodiments of this application can be implemented by the processor 31 executing the program instructions in the memory 32. By performing the method provided by the embodiments of this application, the key features of the neck motion data can be extracted, so that the neck motion data is fully utilized.

The method for collecting and processing neck motion data provided by the embodiments of this application is described below with reference to the accompanying drawings.

Embodiment 1
The method for collecting and processing neck motion data used in this embodiment may be applied to the neck motion data collection and processing system shown in FIG. 1.

This embodiment is described taking the form in which the first terminal 11 collects the user's neck motion data and the second terminal 12 analyzes and processes the data. Specifically, the first terminal 11 may be an earphone with a data collection function, and the second terminal 12 may be a smartphone.

Referring to FIG. 6, FIG. 6 is a first schematic flowchart of the method for collecting and processing neck motion data according to an embodiment of this application. The method may include the following steps:

S101. The first terminal 11 collects initial data of the user's head motion per unit time and sends the initial data to the second terminal 12.

Because the head and neck are connected as one, and the head is driven by the neck and moves about the neck origin, the initial head-motion data can represent the user's neck motion. The neck origin is the center of the plane where the neck joins the torso.

The user wears the earphones on both ears, and the head motion data per unit time is collected. The earphones may provide a button to toggle data collection, collection may be controlled from a smart device, or the earphones may continuously collect the relevant user data. In the embodiments of this application, the unit time is a manually set period whose length can be configured as needed, for example 1 minute, 5 minutes, or 10 minutes.

In the embodiments of this application, the initial head-motion data includes: the user's initial coordinate data in the first three-dimensional coordinate system, the user's acceleration data, and the user's facial orientation data. The initial coordinate data in the first coordinate system refers to the three-dimensional coordinates of the left-ear and right-ear origins when the earphones are worn, and the dynamic three-dimensional coordinates of those origins while the user moves during the unit time. The acceleration data is the user's acceleration detected by the acceleration sensor, from which the user's motion state during the unit time can be judged: when the acceleration is 0, the user is stationary or moving at a constant velocity; when the acceleration is nonzero, the user must be in non-uniform motion, and the specific motion is inferred from the changes in the coordinate data of the ear origins combined with the acceleration data. The facial orientation data indicates which way the user's face points; combined with the dynamic coordinates of the ear origins and the acceleration data, it can be used to determine information such as the user's movement route.
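The acceleration test described above can be sketched as a coarse state check. A minimal sketch, assuming gravity has already been removed from the samples; `eps` is an illustrative noise threshold, not a value from the patent:

```python
def classify_motion_state(accel_samples, eps=1e-3):
    """Coarse motion-state check from accelerometer magnitudes over a unit time."""
    if all(abs(a) <= eps for a in accel_samples):
        # zero acceleration: the user is stationary or moving at constant
        # velocity; the two cases are distinguished later using the
        # coordinate data of the ear origins
        return "stationary_or_uniform"
    return "non_uniform_motion"

state_a = classify_motion_state([0.0, 0.0005, -0.0002])  # "stationary_or_uniform"
state_b = classify_motion_state([0.0, 1.2, 0.4])         # "non_uniform_motion"
```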
In the embodiments of this application, the earphone sends the collected initial head-motion data to the mobile phone. The data may be transmitted via Bluetooth, over a network, or by other means; this embodiment does not limit the specific transmission method.

S102. The second terminal 12 preprocesses the initial data to obtain preprocessed head-motion data per unit time while the user is stationary or relatively stationary.

After receiving the initial data collected by the earphone, the phone preprocesses it. The three-dimensional coordinate data in the initial data is referenced to the first three-dimensional coordinate system; for data processing, a second three-dimensional coordinate system is established. Referring to FIG. 4 and FIG. 5, which are the first and second schematic diagrams of the second three-dimensional coordinate system, the second coordinate system takes the midpoint of the line connecting the left-ear and right-ear origins as its origin, the direction through the origin toward the left-ear origin as the positive x axis, the direction through the origin vertically upward as the positive y axis, and the direction through the origin toward the face as the positive z axis. Coordinate data referenced to this second coordinate system intuitively reflects the head motion curve, allowing the head motion, and hence the user's neck motion, to be analyzed more effectively. Preprocessing the initial data includes converting the first coordinate data of the left-ear and right-ear origins, referenced to the first coordinate system, into second coordinate data referenced to the second coordinate system.
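The conversion from the first (device-preset) coordinate system into the second (head-centred) system amounts to a translation to the midpoint of the ear origins followed by a change of basis. A minimal sketch, assuming the device supplies the ear positions and a gravity-derived `up` vector; the function name, inputs, and the way the z axis is completed are assumptions, not details from the patent:

```python
def to_head_frame(point, left_ear, right_ear, up):
    """Express a first-frame point in the second (head) coordinate system:
    origin at the midpoint of the ear origins, x toward the left-ear origin,
    y vertically up, z toward the face."""
    def sub(u, v): return [u[i] - v[i] for i in range(3)]
    def dot(u, v): return sum(u[i] * v[i] for i in range(3))
    def cross(u, v): return [u[1]*v[2] - u[2]*v[1],
                             u[2]*v[0] - u[0]*v[2],
                             u[0]*v[1] - u[1]*v[0]]
    def norm(u):
        n = dot(u, u) ** 0.5
        return [c / n for c in u]

    origin = [(left_ear[i] + right_ear[i]) / 2.0 for i in range(3)]
    x_axis = norm(sub(left_ear, origin))   # toward the left-ear origin
    y_axis = norm(up)                      # vertically up (e.g. from gravity)
    z_axis = cross(y_axis, x_axis)         # completes the frame toward the face
    d = sub(point, origin)
    return [dot(d, x_axis), dot(d, y_axis), dot(d, z_axis)]

# Ears on the device-frame x axis, vertical along the device-frame y axis:
p = to_head_frame([8.0, 2.0, -3.0],
                  left_ear=[8.0, 0.0, 0.0], right_ear=[-8.0, 0.0, 0.0],
                  up=[0.0, 1.0, 0.0])
```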
When the initial head-motion data is collected, it is motion relative to the ground; when the user is stationary, it is also motion relative to the body's center of gravity. But when the user is moving, the initial data contains both the head's motion relative to the user's center of gravity and the center of gravity's motion relative to the ground. This application aims to analyze the head's motion relative to the center of gravity, that is, the head motion while the user is stationary or relatively stationary, so that the head motion data can be exploited.

During preprocessing, the motion of the user's center of gravity relative to the ground is first derived from the motion data of the left-ear and right-ear origins during the unit time together with the user's acceleration data; this is called the background state data. Methods for obtaining background state data are prior art, such as the existing methods of obtaining a user's motion via a wristband or mobile phone, and are not detailed in this embodiment. Since the collected data is the sum of the head's motion relative to the center of gravity and the center of gravity's motion relative to the ground, subtracting the background state data from the initial head-motion data yields the head's motion relative to the center of gravity, which is exactly the preprocessed head-motion data per unit time while the user is stationary or relatively stationary. The preprocessed data is second coordinate data referenced to the second three-dimensional coordinate system.

In addition, when preprocessing the collected initial data, the phone obtains the background state data of the center of gravity's motion relative to the ground, and in different unit times the user's background state data is not necessarily identical. For example, when the user is stationary, the background state data is 0; during uniform motion it changes uniformly, with the change depending on the speed and direction of the motion; during non-uniform motion it also changes, with the specific change depending on the speed, acceleration, and direction of the motion. A background state database may be set up on the phone to store the background state data obtained during preprocessing; for each kind of background state, typical background state data can be established as a reference when preprocessing initial data collected under different background states.
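The subtraction step (preprocessed head motion = measured motion minus background state data) can be sketched as a per-sample difference over the unit time; the track layout below is illustrative:

```python
def remove_background(initial_track, background_track):
    """Head motion relative to the body's center of gravity:
    measured motion minus the background (torso-relative-to-ground) motion.
    Both tracks are lists of (x, y, z) samples over the same unit time."""
    assert len(initial_track) == len(background_track)
    return [tuple(p[i] - q[i] for i in range(3))
            for p, q in zip(initial_track, background_track)]

# User walking forward at constant speed along z while turning the head along x:
initial = [(0.0, 0.0, 0.0), (1.0, 0.0, 2.0), (2.0, 0.0, 4.0)]
background = [(0.0, 0.0, 0.0), (0.0, 0.0, 2.0), (0.0, 0.0, 4.0)]
head_only = remove_background(initial, background)
```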
S103. The second terminal 12 plots the preprocessed data for the unit time as curves to obtain head motion curves.

After preprocessing the initial data, the phone obtains the preprocessed data, whose second coordinate data consists of the coordinates of the left-ear origin and of the right-ear origin during the unit time. Connecting the successive coordinates of the left-ear origin yields the trajectory curve of the left-ear origin; connecting the successive coordinates of the right-ear origin yields the trajectory curve of the right-ear origin. Because the origin of the second coordinate system is the midpoint between the two ear origins, the two trajectory curves are symmetric about that origin. In the embodiments of this application, the motion of the ear origins characterizes the head motion, and the head motion characterizes the user's neck motion, so the trajectory curve of either ear origin can represent the neck motion curve. The neck motion curve is obtained by scaling down the trajectory curve of the left-ear or right-ear origin; the scale can be determined from the ratio of the distance from a point on the neck to the neck center to the distance from the left-ear origin to the origin of the second coordinate system.

S104. The second terminal 12 classifies the head motion curves and acquires frequency information on how often each type occurs per unit time.

After plotting the preprocessed data, the phone obtains the trajectory curves of the ear origins; since the neck motion curve is a scaled version of these, analyzing either trajectory curve reveals the neck motion. Within the unit time, the user's head motion may be rather complex, so the trajectory curves of the ear origins may form one continuous, complicated curve, which is inconvenient for analyzing the neck motion. To make analysis easier, the neck motions are classified; each class of head motion has a corresponding characteristic curve. The head motion curves within the unit time are sorted into classes and the frequency of each class is counted; the user's head motion class is determined from the head motion curve, and the neck motion class is then inferred from the head motion class, so as to analyze the neck motion within the unit time.
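Once each curve in the unit time has been labelled, the frequency information is a simple tally. A minimal sketch with illustrative class labels:

```python
from collections import Counter

def count_motion_types(classified_curves):
    """Tally how often each head-motion (and hence neck-motion) type
    occurred during one unit time; input is the per-curve output of the
    classification step."""
    return Counter(classified_curves)

counts = count_motion_types(
    ["turn_left", "look_down", "look_down", "tilt_right", "look_down"])
```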
Optionally, the second terminal 12 may classify the head motion curves in the following ways.

Because the head and neck are one, with the head supported and driven by the neck, the neck produces a corresponding motion whenever the head moves. Therefore, in the embodiments of this application, the neck's corresponding motion can be analyzed by analyzing the head's motion.

Way 1

The second terminal 12 judges whether the head motion curve satisfies the first preset condition; if it does, the head motion is judged to be a left/right-turn motion, and the neck motion corresponding to the head motion is also a left/right-turn motion.

In the embodiments of this application, head motion and neck motion are consistent, so the head motion curve can represent the neck motion curve, and the trajectory curve of the left-ear or right-ear origin can represent the head motion curve. Therefore, checking whether the trajectory curve of either ear origin satisfies the condition corresponding to the first preset condition determines whether the neck motion is a left/right-turn motion.

Referring to FIG. 7, FIG. 7 is a schematic diagram of left/right-turn head motion curves according to an embodiment of this application. As shown in FIG. 7, the first preset condition may be: within the unit time, the coordinate data of the left-ear or right-ear origin in the second three-dimensional coordinate system simultaneously satisfies:

x² + z² ≤ (a + i)², and

x² + z² ≥ (a − i)²,

where a is the distance from the origin of the second three-dimensional coordinate system to the left-ear or right-ear origin, b is the distance from the origin of the second three-dimensional coordinate system to the neck origin, and i is a positive number representing the distance by which the ear origin may drift left or right.

When the user performs head motions, the head motion paths cannot coincide exactly each time; there is some drift, and correspondingly the neck motion paths do not coincide exactly either. When judging from the head motion curve whether a given neck motion belongs to a certain type of neck motion, the type can be determined by checking whether the head motion curve corresponding to that neck motion lies within a certain threshold region.

In the embodiments of this application, the trajectory curves of the left-ear and right-ear origins are symmetric about the origin of the second coordinate system, so analyzing the motion curve of either ear origin suffices to analyze the motion curve of the whole head. The trajectory of the left-ear origin is used as an example below.

Referring to FIG. 8, FIG. 8 is a schematic diagram of the threshold intervals for left/right-turn head motion according to an embodiment of this application. In FIG. 8, A denotes the rightmost position of the left-ear origin within the threshold interval when the neck is not moving, corresponding to the position a − i on the x axis; B denotes the leftmost position of the left-ear origin within the threshold interval when the neck is not moving, corresponding to the position a + i on the x axis. Region ABCD denotes the threshold region in which the left-ear origin may move during a left neck turn; a head motion curve within this region is a left-turn curve. Region ABFE denotes the threshold region in which the left-ear origin may move during a right neck turn; a head motion curve within this region is a right-turn curve.

When the user performs a neck motion, if the coordinates (x, y, z) of the left-ear origin in the second coordinate system satisfy the two constraints of the first preset condition and also satisfy x ≥ 0 and z ≤ 0, the motion curve of the left-ear origin lies within region ABCD, and the user's head motion curve can be judged to be a left-turn curve; correspondingly, the user's neck motion curve is a left-turn neck curve, and the neck motion is a left neck turn.

If the coordinates (x, y, z) of the left-ear origin satisfy the two constraints of the first preset condition and also satisfy x ≥ 0 and z ≥ 0, the motion path of the left-ear origin lies within region ABFE, and the user's head motion curve can be judged to be a right-turn curve; correspondingly, the user's neck motion is a right neck turn.
Way 2

The second terminal 12 judges whether the head motion curve satisfies the second preset condition; if it does, the head motion is judged to be a left/right-tilt motion, and the neck motion corresponding to the head motion is also a left/right-tilt motion.

In the embodiments of this application, since the trajectory curve of either ear origin can represent the head motion curve, and the head motion curve can represent the user's neck motion curve, and since during a left/right-tilt motion the trajectory curves of the two ear origins are the same, the midpoint of the left-ear and right-ear origins is taken as the reference point for analyzing the user's head motion, for convenience. Whether the head motion is a left/right-tilt motion, and hence whether the neck motion is a left/right neck tilt, is judged by checking whether the trajectory curve of this midpoint satisfies the second preset condition.

Referring to FIG. 9 and FIG. 10, FIG. 9 is a schematic diagram of left/right-tilt head motion curves and FIG. 10 is a schematic diagram of the corresponding threshold intervals, both according to embodiments of this application. As shown in FIG. 9 and FIG. 10, the second preset condition may be: within the unit time, the coordinate data of the midpoint of the ear origins in the second three-dimensional coordinate system simultaneously satisfies:

x² + y² ≤ 2b|y|, and

x² + y² + 2bk ≥ k² + 2b|y|,

where b is the distance from the origin of the second coordinate system to the neck origin, and k is the distance by which the midpoint of the ear origins may drift downward; k is a positive number.

Setting k turns the second preset condition into a region test: whether the head motion is the head motion corresponding to the second preset condition is judged by whether the midpoint's trajectory lies within the region defined by the condition, and hence whether the neck motion belongs to the corresponding class. This is because the head motion paths cannot coincide exactly each time; there is some drift, so the neck motion paths cannot coincide exactly either. A region criterion effectively assigns a given neck motion to the class of neck motions defined by that criterion.

In FIG. 10, AOB denotes the motion curve of the midpoint of the ear origins during left/right-tilt motion when the neck origin is at the position shown in FIG. 9; it can represent the user's head motion curve at that time, with the midpoint exactly at the origin of the second coordinate system. CO₁D denotes the motion curve of the midpoint when it has drifted downward by k. Region ACDB denotes the threshold region in which the midpoint of the ear origins may move during left/right neck tilts. Correspondingly, region OACO₁ denotes the threshold region in which the midpoint may move during a right neck tilt; a head motion curve within it is a right-tilt curve. Region OBDO₁ denotes the threshold region during a left neck tilt; a head motion curve within it is a left-tilt curve.

When the user performs a neck motion, if the coordinates (x, y, z) of the midpoint of the ear origins in the second coordinate system satisfy the two constraints of the second preset condition and also x < 0 and y < 0, the midpoint's motion curve lies within region OACO₁, and the user's head motion curve can be judged to be a right-tilt curve; correspondingly, the user's neck motion is a right neck tilt.

If the coordinates (x, y, z) of the midpoint satisfy the two constraints of the second preset condition and also x > 0 and y < 0, the midpoint's motion curve lies within region OBDO₁, and the user's head motion curve can be judged to be a left-tilt curve; correspondingly, the user's neck motion is a left neck tilt.
Way 3

The second terminal 12 judges whether the head motion curve satisfies the third preset condition; if it does, the head motion is judged to be a pitch motion, and the neck motion corresponding to the head motion is also a pitch motion.

In the embodiments of this application, the third preset condition may be: within the unit time, the coordinate data of the left-ear or right-ear origin in the second three-dimensional coordinate system satisfies:

x² + y² − 2by ≤ 0,

where b is the distance from the origin of the second coordinate system to the neck origin.

When the user's neck performs a pitch motion, the neck and head pitch about the neck origin. In this case, the trajectory curves of the left-ear and right-ear origins are the same, so analyzing the motion curve of either ear origin suffices to analyze the motion curve of the whole head and hence the neck motion. The motion curve of the right-ear origin is used as an example below.

Referring to FIG. 11, FIG. 11 is a schematic diagram of head pitch motion curves according to an embodiment of this application. In FIG. 11, point O is the right-ear origin, point C is the position the right-ear origin can reach when the head is raised, and point D is the position it can reach when the head is lowered. Curve OC is the standard motion curve of the right-ear origin when raising the head, and curve OD is the standard motion curve when lowering the head. Since the user's head-raising motions cannot be exactly the same every time, nor can the head-lowering motions, a threshold interval can be set for judging whether the user is raising or lowering the head: when the user's head motion curve lies within the interval, the corresponding head-raising or head-lowering motion is recognized. Region OBC denotes the threshold region in which the right-ear origin may move during a head-raising motion; a head motion curve within it is a head-raising curve, and correspondingly the neck motion curve is also a head-raising curve. Region OAD denotes the threshold region in which the right-ear origin may move during a head-lowering motion; a head motion curve within it is a head-lowering curve, and correspondingly the neck motion curve is also a head-lowering curve.

When the user performs a neck motion, if the coordinates (x, y, z) of the right-ear origin in the second coordinate system satisfy the constraint of the third preset condition and also x ≤ 0, y ≤ 0, and |y| ≤ b², the right-ear origin's motion curve lies within region OBC, and the user's head motion curve can be judged to be a head-raising curve; correspondingly, the user's neck motion is a head-raising motion.

If the coordinates (x, y, z) of the right-ear origin satisfy the constraint of the third preset condition and also x ≥ 0, y ≤ 0, and |y| ≤ b², the motion curve lies within region OAD, and the user's head motion curve can be judged to be a head-lowering curve; correspondingly, the user's neck motion is a head-lowering motion.
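The region tests can be written directly from the inequalities above. A minimal sketch for the turn and tilt conditions (the pitch test follows the same pattern with x² + y² − 2by ≤ 0); the values a = 8, i = 1, b = 14, k = 1 and the sample curve are illustrative, and the sign conventions follow the patent text:

```python
def is_turn_sample(x, y, z, a, i):
    """First preset condition: the ear origin stays in an annulus of
    radius a +/- i in the x-z plane."""
    r2 = x * x + z * z
    return (a - i) ** 2 <= r2 <= (a + i) ** 2

def is_tilt_sample(x, y, z, b, k):
    """Second preset condition, tested on the midpoint of the ear origins."""
    s = x * x + y * y
    return s <= 2 * b * abs(y) and s + 2 * b * k >= k * k + 2 * b * abs(y)

def classify_turn(samples, a, i):
    """Label a curve of left-ear samples: x >= 0 and z <= 0 on every sample
    gives a left turn; x >= 0 and z >= 0 gives a right turn."""
    if not all(is_turn_sample(x, y, z, a, i) for x, y, z in samples):
        return None
    if all(x >= 0 and z <= 0 for x, y, z in samples):
        return "turn_left"
    if all(x >= 0 and z >= 0 for x, y, z in samples):
        return "turn_right"
    return None

# A leftward sweep of the left-ear origin with a = 8 and drift i = 1:
curve = [(8.0, 0.0, 0.0), (7.5, 0.0, -2.5), (5.5, 0.0, -5.5)]
label = classify_turn(curve, a=8.0, i=1.0)   # "turn_left"
```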
The method for collecting and processing neck motion data in the embodiments of this application can be applied to fitness and health. Specifically, after collecting the initial neck-motion data, the first terminal 11 sends it to the second terminal 12, which processes the initial data to obtain the corresponding neck motion information, such as the neck motion curves and neck motion classes obtained in the embodiments of this application. The second terminal 12 can also visualize the initial neck-motion data and the processed data; data visualization presents the collected initial data and the data produced by analysis to the user in intuitive forms such as tables, line charts, pie charts, or bar charts. For example, the second terminal 12 can generate the user's motion graph from the collected initial data, generate a neck motion graph from the preprocessed data, or generate a chart of the user's neck motion over a period of time from the obtained neck motion information, including the frequency of each type of neck motion and the neck motion classes. The chart can help the user understand the types and frequencies of neck motions made during that period.

Correspondingly, after understanding their motion status from the generated neck motion chart, the user can adopt appropriate countermeasures. For example, a neck motion chart may show that the user's head was lowered more than 70% of the time over a period; staying in a head-down posture for long periods can easily lead to various cervical spine diseases. Based on this information, the user can improve the corresponding sitting posture, move the neck appropriately, and combine this with other neck exercises to improve the neck's condition.

In addition, a corresponding neck health knowledge base can be set up in the phone. After analyzing the collected initial neck-motion data, the phone combines the resulting neck health status with the knowledge base and outputs corresponding prompt information, which may include the neck health status and suggestions for the user. The phone may display the prompt information directly on the screen or output it by voice broadcast; when using voice broadcast, the prompt can be broadcast through the earphones the user is wearing. The embodiments of this application do not limit the way the prompt information is output.
Referring to FIG. 14, FIG. 14 is a first system architecture diagram to which an embodiment of this application is applied. As shown in FIG. 14, the preprocessing module completes the initial processing of the data and filters out interference such as running or walking; the execution module passes the data to model analysis, which matches a data processing model in the model database based on the data modeling; the execution module receives the results fed back by model analysis and processes the data; the output module completes the classification counting and outputs the neck motion parameter results; the visualization module can present the motion data in the fitness and health interface in multiple ways according to the neck motion classification; strategy analysis further analyzes the neck motion data to visualize the motion of the neck muscles, and also analyzes exercise intensity and emphasis against the user health knowledge base; and the health-butler broadcast can summarize the user's motion status and deliver motion broadcasts or reminders through the earphones in the manner of a health butler. The neck muscles mentioned above generally include: the superficial neck muscle (platysma); the lateral neck muscle (sternocleidomastoid); and the deep neck muscles, comprising the medial (prevertebral) group and the lateral (scalene) group.
Referring to FIG. 15, FIG. 15 is a second system architecture diagram to which an embodiment of this application is applied. As shown in FIG. 15, the method for collecting and processing neck motion data in the embodiments of this application can also be applied to scenarios such as interactive sports and entertainment; for specific applications, refer to the flow shown in FIG. 15. For example, the information obtained by this method can be applied to touch-free operation of apps. After processing the initial neck-motion data, the phone can derive different classes of neck motion curves, including the left-turn, right-turn, left-tilt, right-tilt, head-raising, and head-lowering neck motion curves; each class of neck motion curve can serve as a separate command for operating an app on the phone.

For example, the left-turn curve can be set as the launch command for a music app, so that turning the neck left launches the app; the right-turn curve can be set as the app's exit command, so that turning the neck right exits the app; the head-raising curve can be set as the play command, so that raising the head makes the app play music; the head-lowering curve can be set as the pause command, so that lowering the head pauses the music; the left-tilt curve can be set as the previous-track command, so that tilting the neck left switches to the previous song; and the right-tilt curve can be set as the next-track command, so that tilting the neck right switches to the next song.

In addition, different neck motion curves can be set as different commands to launch or close different apps, or a neck motion curve can be set as a corresponding command for scenarios such as taking photos with the camera. Touch-free interaction with phone apps through neck motion alone is not only convenient to operate but also frees the hands and adds considerable fun.
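The curve-type-to-command mapping described above amounts to a small dispatch table. A minimal sketch in which the handler names and return values are illustrative, not APIs from the patent:

```python
def dispatch(motion_type, handlers):
    """Run the handler registered for a recognized neck-motion type;
    unrecognized types are ignored."""
    action = handlers.get(motion_type)
    return action() if action else None

# Hypothetical command table for the music-app example:
music_handlers = {
    "turn_left":  lambda: "launch_app",
    "turn_right": lambda: "quit_app",
    "look_up":    lambda: "play",
    "look_down":  lambda: "pause",
    "tilt_left":  lambda: "previous_track",
    "tilt_right": lambda: "next_track",
}

result = dispatch("look_down", music_handlers)   # "pause"
```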
In addition, the above method can be applied to touch-free phone operation, for example using head and neck movements to operate the phone for page turning, selection, pull-down, selecting items, unlocking, taking photos, input methods, and so on.

The method provided by the embodiments of this application can also be used to define custom neck-motion curves; recognizing a custom curve triggers a user-defined application action, for example opening an APK application, locking the screen, or unlocking.

The above method can further be applied to earphone navigation guidance: after a destination is set on the phone, navigation is delivered through the earphones, so that the user need not rely on the phone's navigation screen, freeing the user from visual interaction with the phone.

The embodiments of this application do not limit the specific applications of the method for collecting and processing neck motion data.
Referring to FIG. 16 and FIG. 17, FIG. 16 is a third system architecture diagram to which an embodiment of this application is applied, and FIG. 17 is a schematic flowchart applied to the system architecture shown in FIG. 16 according to an embodiment of this application. The method for collecting and processing neck motion data used in the embodiments of this application can be applied with reference to the system shown in FIG. 16; for the specific application methods and scenarios, refer to FIG. 16.

The data processing model described above can provide multiple interfaces for different application modules to use. For example:

1. Motion data interfaces: (1) a counting interface for each type of motion curve; (2) an interface for the speed of the neck motion curve; (3) an interface for the neck motion type.

2. Switch data interfaces: (1) one valid neck motion curve per unit time can serve as a switch-quantity data interface; (2) a valid neck motion curve carries direction and scene attributes and can serve as a selection-type switch data interface; (3) the frequency of valid neck motion curves per unit time can serve as a custom switch data interface; (4) a static neck posture can serve as a switch data interface; (5) the spatial orientation of the head and neck can serve as a pointing data interface.

3. Personalized neck motion curve interface: the user inputs a custom neck motion curve, which serves as a personalized operation-command interface for the user's applications.
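These interfaces could be grouped behind a thin facade for application modules. A minimal sketch in which every attribute and method name is an assumption, not an API defined by the patent:

```python
from types import SimpleNamespace

class NeckMotionInterfaces:
    """Illustrative facade over the data-processing model's interfaces."""
    def __init__(self, model):
        self.model = model
    def motion_counts(self):      # (1) counting interface per curve type
        return self.model.counts
    def motion_speed(self):       # (2) speed of the current neck motion curve
        return self.model.speed
    def motion_type(self):        # (3) current neck motion type
        return self.model.current_type

# A stand-in model object for demonstration:
model = SimpleNamespace(counts={"turn_left": 2}, speed="moderate",
                        current_type="turn_left")
api = NeckMotionInterfaces(model)
```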
Embodiment 2

The method for collecting and processing neck motion data used in this embodiment may be applied to the neck motion data collection and processing system shown in FIG. 2. This embodiment is described taking the third terminal 13 as an earphone capable of both collecting neck motion data and analyzing and processing the data.

Referring to FIG. 12, FIG. 12 is a second schematic flowchart of the method for collecting and processing neck motion data according to an embodiment of this application. The method may include the following steps:

S201. The third terminal 13 collects initial data of the user's head motion per unit time.

S202. The third terminal 13 preprocesses the initial data to obtain preprocessed head-motion data per unit time while the user is stationary or relatively stationary.

S203. The third terminal 13 plots the preprocessed data for the unit time as curves to obtain head motion curves.

S204. The third terminal 13 classifies the head motion curves and acquires frequency information on how often each type occurs per unit time.

The functions and roles of the third terminal 13 in this embodiment are equivalent to the combination of those of the first terminal 11 and those of the second terminal 12 in Embodiment 1. For descriptions of the technical solutions and beneficial effects of each step in this embodiment, refer to the corresponding steps in Embodiment 1; details are not repeated here.

For the application scenarios of the method for collecting and processing neck motion data in this embodiment, refer to the scenarios introduced in Embodiment 1. In application, the initial data collected by the third terminal 13 and the processing results of that data need to be sent to the smart terminal in the corresponding scenario; for the remaining application steps and methods, refer to Embodiment 1, which is not repeated here.
The solutions provided by the embodiments of this application have been described above mainly from the perspective of the method. To implement the above functions, corresponding hardware structures and/or software modules for performing each function are included. Those skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.

The embodiments of this application may divide the neck motion data collection and processing apparatus into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of this application is schematic and is only a logical functional division; there may be other division manners in actual implementation.
As shown in FIG. 13, FIG. 13 is a schematic diagram of the neck-motion data collection and processing apparatus provided by an embodiment of this application.
The neck-motion data collection and processing apparatus is configured to perform the above neck-motion data collection and processing method, for example, the method shown in FIG. 6. By way of example, the apparatus may include: a data collection module 1, a preprocessing module 2, a curve plotting module 3, and an information acquisition module 4.
The data collection module 1 is configured to collect initial data of the user's head motion within a unit time, where the initial data of the head motion represents the neck motion of the user.
The preprocessing module 2 is configured to preprocess the initial data of the head motion to obtain preprocessed data of the head motion within the unit time while the user is stationary or relatively stationary.
The curve plotting module 3 is configured to plot the preprocessed data within the unit time as a curve to obtain a head motion curve.
The information acquisition module 4 is configured to classify the head motion curve and obtain frequency information of each class of head motion curve appearing within the unit time.
With reference to FIG. 6, the data collection module 1 may perform S101 and/or S102, the preprocessing module 2 may perform S103, the curve plotting module 3 may perform S104, and the information acquisition module 4 may perform S105.
Optionally, the initial data collected by the data collection module 1 includes: initial coordinate data in a first three-dimensional coordinate system, acceleration data a of the user, and facial orientation data of the user.
Optionally, the preprocessing module 2 is specifically configured to: obtain state data of the user based on the initial coordinate data, the acceleration data, and the orientation data, and determine the state of the user; and obtain the preprocessed data based on the initial coordinate data and the state data, where the preprocessed data is first coordinate data taking the first three-dimensional coordinate system as the reference frame.
Optionally, the preprocessing module 2 is further specifically configured to: convert the first coordinate data into second coordinate data taking a second three-dimensional coordinate system as the reference frame, where the origin of the second three-dimensional coordinate system is the midpoint of the line connecting the user's left-ear origin and right-ear origin.
Optionally, the classification of head motion curves by the curve plotting module 3 includes: a left-right turning motion curve, where the head motion curve satisfies a first preset condition; a left-right tilting motion curve, where the head motion curve satisfies a second preset condition; and a pitching motion curve, where the head motion curve satisfies a third preset condition.
Optionally, the curve plotting module 3 classifies the head motion curves based on the first preset condition, the second preset condition, and the third preset condition.
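The conversion into the second three-dimensional coordinate system can be pictured as a change of origin. The sketch below performs only the translation to the ear-midpoint origin; a complete conversion would also rotate the axes to align with the head, which is omitted here. The function name and tuple representation are assumptions for illustration:

```python
def to_head_frame(point, left_ear, right_ear):
    """Re-express a point with the origin at the midpoint between the ears.

    All arguments are (x, y, z) tuples in the first coordinate system.
    Only the translation step is sketched; axis rotation is left out.
    """
    origin = tuple((l + r) / 2.0 for l, r in zip(left_ear, right_ear))
    return tuple(p - o for p, o in zip(point, origin))

# Ears at (1, 2, 3) and (3, 2, 3) give a new origin at (2, 2, 3).
print(to_head_frame((2.0, 2.0, 4.0), (1.0, 2.0, 3.0), (3.0, 2.0, 3.0)))
# (0.0, 0.0, 1.0)
```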
The first preset condition includes: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
x² + z² ≤ (a + i)², and x² + z² ≥ (a - i)².
The second preset condition includes: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
x² + y² ≤ 2b|y|, and x² + y² + 2bk ≥ k² + 2b|y|.
The third preset condition includes: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
x² + y² - 2by ≤ 0, x² + y² - 2by ≤ 0, and |y| ≤ b².
Here, a is the distance from the origin of the second coordinate system to the left-ear origin or the right-ear origin, b is the distance between the origin of the second coordinate system and the neck origin, and i and k are positive numbers.
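Read as set-membership tests, the three preset conditions can be checked point by point. The sketch below transcribes the inequalities above into Python; the parameter values in the demonstration call are arbitrary examples, not values from this application:

```python
def is_turning(x, z, a, i):
    # First preset condition: (a - i)^2 <= x^2 + z^2 <= (a + i)^2
    r2 = x * x + z * z
    return (a - i) ** 2 <= r2 <= (a + i) ** 2

def is_tilting(x, y, b, k):
    # Second preset condition:
    # x^2 + y^2 <= 2b|y|  and  x^2 + y^2 + 2bk >= k^2 + 2b|y|
    s = x * x + y * y
    return s <= 2 * b * abs(y) and s + 2 * b * k >= k * k + 2 * b * abs(y)

def is_pitching(x, y, b):
    # Third preset condition: x^2 + y^2 - 2by <= 0  and  |y| <= b^2
    return x * x + y * y - 2 * b * y <= 0 and abs(y) <= b * b

# A point on the ear circle of radius a = 1.0 satisfies the first condition.
print(is_turning(1.0, 0.0, a=1.0, i=0.1))  # True
```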
Optionally, the preprocessing module 2 is specifically configured to determine the state of the user, where the state of the user may include a stationary state and a motion state; when the user is in the motion state, a motion curve graph of the user is plotted based on the state data within the unit time, and the motion curve graph is classified and saved.
For specific descriptions of the above optional implementations, refer to the foregoing method embodiments, which are not repeated here. In addition, for the explanation and the description of the beneficial effects of any of the neck-motion data collection and processing apparatuses provided above, refer to the corresponding method embodiments; details are not repeated.
As an example, with reference to FIG. 4, some or all of the functions implemented by the data collection module 1, the preprocessing module 2, the curve plotting module 3, and the information acquisition module 4 in the neck-motion data collection and processing apparatus can be implemented by the processor 31 in FIG. 3 executing program code in the memory 32 in FIG. 3.
An embodiment of this application further provides a chip system. As shown in FIG. 18, the chip system 100 includes at least one processor 110 and at least one interface circuit 120. As an example, when the chip system 100 includes one processor and one interface circuit, the processor may be the processor 110 shown in the solid-line box in FIG. 18 (or the processor 110 shown in the dashed-line box), and the interface circuit may be the interface circuit 120 shown in the solid-line box in FIG. 18 (or the interface circuit 120 shown in the dashed-line box). When the chip system 100 includes two processors and two interface circuits, the two processors include the processor 110 shown in the solid-line box and the processor 110 shown in the dashed-line box in FIG. 18, and the two interface circuits include the interface circuit 120 shown in the solid-line box and the interface circuit 120 shown in the dashed-line box in FIG. 18. This is not limited herein.
The processor 110 and the interface circuit 120 can be interconnected by lines. For example, the interface circuit 120 can be configured to receive signals (for example, signals from a sensor). For another example, the interface circuit 120 can be configured to send signals to other apparatuses (for example, the processor 110). Illustratively, the interface circuit 120 can read instructions stored in a memory and send the instructions to the processor 110. When the instructions are executed by the processor 110, the neck-motion data collection and processing apparatus can be caused to perform the steps in the above embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the application.
Another embodiment of this application further provides a computer-readable storage medium storing instructions. When the instructions are run on the neck-motion data collection and processing apparatus, the apparatus performs the steps, performed by the apparatus, of the method flow shown in the above method embodiments.
In some embodiments, the disclosed methods can be implemented as computer program instructions encoded in a machine-readable format on a computer-readable storage medium or on other non-transitory media or articles.
FIG. 19 schematically shows a conceptual partial view of a computer program product provided by an embodiment of this application, where the computer program product includes a computer program for executing a computer process on a computing device.
In one embodiment, the computer program product is provided using a signal-bearing medium 130. The signal-bearing medium 130 may include one or more program instructions that, when run by one or more processors, may provide the functions, or part of the functions, described above with respect to FIG. 5. Thus, for example, one or more features of S101 to S104 in FIG. 6 may be undertaken by one or more instructions associated with the signal-bearing medium 130. In addition, the program instructions in FIG. 19 also describe example instructions.
In some examples, the signal-bearing medium 130 may include a computer-readable medium 131, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a read-only memory (ROM), or a random access memory (RAM), and so on.
In some implementations, the signal-bearing medium 130 may include a computer-recordable medium 132, such as, but not limited to, a memory, a read/write (R/W) CD, an R/W DVD, and so on.
In some implementations, the signal-bearing medium 130 may include a communication medium 133, such as, but not limited to, a digital and/or analog communication medium (for example, an optical fiber cable, a waveguide, a wired communication link, a wireless communication link, and so on).
The signal-bearing medium 130 may be conveyed by a wireless form of the communication medium 133 (for example, a wireless communication medium complying with the IEEE 802.11 standard or another transmission protocol). The one or more program instructions may be, for example, computer-executable instructions or logic-implementing instructions.
In some examples, the neck-motion data collection and processing apparatus such as that described with respect to FIG. 19 may be configured to provide various operations, functions, or actions in response to one or more program instructions in the computer-readable medium 131, the computer-recordable medium 132, and/or the communication medium 133.
It should be understood that the arrangements described here are for illustrative purposes only. Therefore, those skilled in the art will understand that other arrangements and other elements (for example, machines, interfaces, functions, orders, and groups of functions, etc.) can be used instead, and some elements may be omitted altogether depending on the desired result. In addition, many of the described elements are functional entities that can be implemented as discrete or distributed components, or implemented in combination with other components in any suitable combination and location.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When implemented using a software program, it may be wholly or partly implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer-executable instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are wholly or partly generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to the computer, or a data storage device, such as a server or data center, integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)), and so on.
The above are only specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (16)

  1. A neck-motion data collection and processing method, characterized by comprising:
    collecting initial data of a user's head motion within a unit time, wherein the initial data of the head motion represents the neck motion of the user;
    preprocessing the initial data of the head motion to obtain preprocessed data of the head motion within the unit time while the user is stationary or relatively stationary;
    plotting the preprocessed data within the unit time as a curve to obtain a head motion curve; and
    classifying the head motion curve, and obtaining frequency information of each class of the head motion curve appearing within the unit time.
  2. The method according to claim 1, characterized in that the initial data comprises:
    initial coordinate data in a first three-dimensional coordinate system, acceleration data of the user, and facial orientation data of the user.
  3. The method according to claim 2, characterized in that the preprocessing of the initial data comprises:
    obtaining state data of the user based on the initial coordinate data, the acceleration data, and the orientation data, and determining a state of the user; and
    obtaining the preprocessed data based on the initial coordinate data and the state data,
    wherein the preprocessed data is first coordinate data taking the first three-dimensional coordinate system as a reference frame.
  4. The method according to claim 3, characterized in that the preprocessing of the initial data further comprises:
    converting the first coordinate data into second coordinate data taking a second three-dimensional coordinate system as a reference frame, wherein an origin of the second three-dimensional coordinate system is a midpoint of a line connecting a left-ear origin and a right-ear origin of the user.
  5. The method according to any one of claims 1 to 4, characterized in that the classifying of the head motion curve comprises:
    a left-right turning motion curve, where the head motion curve satisfies a first preset condition;
    a left-right tilting motion curve, where the head motion curve satisfies a second preset condition; and
    a pitching motion curve, where the head motion curve satisfies a third preset condition.
  6. The method according to claim 5, characterized in that:
    the first preset condition comprises: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
    x² + z² ≤ (a + i)², and x² + z² ≥ (a - i)²;
    the second preset condition comprises: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
    x² + y² ≤ 2b|y|, and x² + y² + 2bk ≥ k² + 2b|y|;
    the third preset condition comprises: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
    x² + y² - 2by ≤ 0, x² + y² - 2by ≤ 0, and |y| ≤ b²;
    wherein a is the distance from the origin of the second coordinate system to the left-ear origin or the right-ear origin, b is the distance between the origin of the second coordinate system and a neck origin, and i and k are positive numbers.
  7. The method according to claim 3, characterized in that the state of the user comprises a stationary state and a motion state; and
    when the user is in the motion state, a motion curve graph of the user is plotted based on the state data within the unit time, and the motion curve graph is classified and saved.
  8. A neck-motion data collection and processing apparatus, characterized in that the apparatus comprises:
    a data collection module, configured to collect initial data of a user's head motion within a unit time, wherein the initial data of the head motion represents the neck motion of the user;
    a preprocessing module, configured to preprocess the initial data of the head motion to obtain preprocessed data of the head motion within the unit time while the user is stationary or relatively stationary;
    a curve plotting module, configured to plot the preprocessed data within the unit time as a curve to obtain a head motion curve; and
    an information acquisition module, configured to classify the head motion curve and obtain frequency information of each class of the head motion curve appearing within the unit time.
  9. The apparatus according to claim 8, characterized in that the initial data collected by the data collection module comprises:
    initial coordinate data in a first three-dimensional coordinate system, acceleration data of the user, and facial orientation data of the user.
  10. The apparatus according to claim 9, characterized in that the preprocessing module is specifically configured to:
    obtain state data of the user based on the initial coordinate data, the acceleration data, and the orientation data, and determine a state of the user; and
    obtain the preprocessed data based on the initial coordinate data and the state data,
    wherein the preprocessed data is first coordinate data taking the first three-dimensional coordinate system as a reference frame.
  11. The apparatus according to claim 10, characterized in that the preprocessing module is further specifically configured to:
    convert the first coordinate data into second coordinate data taking a second three-dimensional coordinate system as a reference frame, wherein an origin of the second three-dimensional coordinate system is a midpoint of a line connecting a left-ear origin and a right-ear origin of the user.
  12. The apparatus according to any one of claims 8 to 11, characterized in that the classification of the head motion curve by the curve plotting module comprises:
    a left-right turning motion curve, where the head motion curve satisfies a first preset condition;
    a left-right tilting motion curve, where the head motion curve satisfies a second preset condition; and
    a pitching motion curve, where the head motion curve satisfies a third preset condition.
  13. The apparatus according to claim 12, characterized in that the curve plotting module classifies the head motion curve based on the first preset condition, the second preset condition, and the third preset condition;
    the first preset condition comprises: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
    x² + z² ≤ (a + i)², and x² + z² ≥ (a - i)²;
    the second preset condition comprises: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
    x² + y² ≤ 2b|y|, and x² + y² + 2bk ≥ k² + 2b|y|;
    the third preset condition comprises: within the unit time, the coordinate data of the preprocessed data in the second three-dimensional coordinate system satisfies:
    x² + y² - 2by ≤ 0, x² + y² - 2by ≤ 0, and |y| ≤ b²;
    wherein a is the distance from the origin of the second coordinate system to the left-ear origin or the right-ear origin, b is the distance between the origin of the second coordinate system and a neck origin, and i and k are positive numbers.
  14. The apparatus according to claim 10, characterized in that the preprocessing module is specifically configured to determine the state of the user;
    the state of the user comprises a stationary state and a motion state; and
    when the user is in the motion state, a motion curve graph of the user is plotted based on the state data within the unit time, and the motion curve graph is classified and saved.
  15. A neck-motion data collection and processing apparatus, characterized by comprising: a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to invoke the computer program to perform the method according to any one of claims 1 to 7.
  16. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 7.
PCT/CN2021/137201 2020-12-30 2021-12-10 Neck-motion data collection and processing method and apparatus WO2022143110A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011616244.4A CN114694244A (zh) 2020-12-30 2020-12-30 Neck-motion data collection and processing method and apparatus
CN202011616244.4 2020-12-30

Publications (1)

Publication Number Publication Date
WO2022143110A1 true WO2022143110A1 (zh) 2022-07-07

Family

ID=82132218

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/137201 WO2022143110A1 (zh) 2020-12-30 2021-12-10 Neck-motion data collection and processing method and apparatus

Country Status (2)

Country Link
CN (1) CN114694244A (zh)
WO (1) WO2022143110A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680456A * 2013-11-29 2015-06-03 平安科技(深圳)有限公司 Method for monitoring the amount of human neck motion
CN105488313A * 2014-09-15 2016-04-13 博雅网络游戏开发(深圳)有限公司 Curve trend recognition method and apparatus
CN106569607A * 2016-11-08 2017-04-19 上海交通大学 Head motion recognition system based on electromyography and motion sensors
CN107126675A * 2017-04-24 2017-09-05 广东乐源数字技术有限公司 Smart wearable device for preventing cervical spondylosis and application method
CN107822645A * 2017-10-23 2018-03-23 上海百芝龙网络科技有限公司 Emotion recognition method based on WiFi signals
CN108805037A * 2018-05-23 2018-11-13 南京大学 Method for matching a human body with a device using image signals and electrical signals
CN109144349A * 2018-08-07 2019-01-04 西交利物浦大学 Method and system for recognizing the motion direction of a virtual, augmented, or mixed reality head-mounted display
CN109549649A * 2018-11-19 2019-04-02 东南大学 Wearable device for detecting neck activity

Also Published As

Publication number Publication date
CN114694244A (zh) 2022-07-01

Similar Documents

Publication Publication Date Title
US11947719B2 (en) Building saccade models for predicting a landing point of a saccade experienced by a player
CN112400202B Eye tracking leveraging prediction and late GPU updates for fast foveated rendering in an HMD environment
US11166104B2 (en) Detecting use of a wearable device
US20170371450A1 (en) Detecting tap-based user input on a mobile device based on motion sensor data
EP2839428B1 (en) Method of displaying multimedia exercise content based on exercise amount and multimedia apparatus applying the same
WO2016188318A1 3D face reconstruction method, apparatus, and server
TWI669681B Electronic computing device, system, and method for providing body-posture health information
CN104834165B Method for determining the position of a laser point on a projection screen
CN106575150A Gesture identification using motion data
CN107850943A Systems and methods for providing continuity between real-world movement and movement in a virtual/augmented reality experience
WO2018161906A1 Motion recognition method, apparatus, system, and storage medium
CN108965954A Terminal using intelligent analysis for reducing playback time of video
US20230162411A1 (en) Augmented reality map curation
US20190310715A1 (en) Apparatus and method of using events for user interface
CN109949438A Method and apparatus for building an abnormal-driving monitoring model, and storage medium
WO2022143110A1 Neck-motion data collection and processing method and apparatus
CN114005511A Rehabilitation training method and system, self-service training device, and storage medium
CN111050266B Method and system for performing function control based on motion detected by an earphone
WO2023045789A9 Method and apparatus for exiting a QR code
JP7196856B2 Information processing apparatus, information processing method, and program
Bi et al. CSEar: Metalearning for Head Gesture Recognition Using Earphones in Internet of Healthcare Things
CN115047966A Interaction method, electronic device, and interaction system
US20240085985A1 (en) Inertial sensing of tongue gestures
WO2023160196A1 Content display method and apparatus, electronic device, and storage medium
US11756274B1 (en) Low-power architecture for augmented reality device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913851

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21913851

Country of ref document: EP

Kind code of ref document: A1