CN116344048A - Causal reasoning device for the care of cognitively impaired patients


Info

Publication number
CN116344048A
Authority
CN
China
Prior art keywords
user
module
content
data
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310209204.5A
Other languages
Chinese (zh)
Inventor
安宁
明鉷
吴瑛
杨矫云
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN202310209204.5A
Publication of CN116344048A
Legal status: Pending


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165: Evaluating the state of mind, e.g. depression, anxiety
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/245: Query processing
    • G06F16/2455: Query execution
    • G06F16/24564: Applying rules; Deductive queries
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/02: Knowledge representation; Symbolic representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/04: Inference or reasoning models
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70: ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30: ICT specially adapted for calculating health indices; for individual health risk assessment
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Hospice & Palliative Care (AREA)
  • Epidemiology (AREA)
  • Social Psychology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Developmental Disabilities (AREA)
  • Databases & Information Systems (AREA)
  • Psychology (AREA)
  • Primary Health Care (AREA)
  • Pathology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Educational Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention relates to a causal reasoning device for the care of patients with cognitive impairment, comprising: a user terminal configured to acquire, through a content acquisition part, the input content a user enters via the input module while interacting with the terminal, whose first Bluetooth module and first communication module can each be used for communication with the evaluation unit; a cognitive knowledge database storing at least physiological index data, care measure data, and health status data, among which corresponding associations are established; and an evaluation unit that transmits the input content acquired from the user terminal, together with the time-series action data related to that content, to the cognitive knowledge database so as to receive at least one piece of intervention target information related to the user. The evaluation unit infers and generates at least one care plan based on the key intervention targets and sends the care plan to the user terminal for display.

Description

Causal reasoning device for the care of cognitively impaired patients
Technical Field
The invention relates to the field of database construction, and in particular to a causal reasoning device, implemented as hardware, for the care of patients with cognitive impairment.
Background
Cognitive functions are the mental processes by which the human brain perceives and represents objective things, including perception, learning and memory, attention, language, and thinking. As people age, cognitive function declines; typical manifestations include memory loss, as well as hearing loss, vision loss, and mobility impairment, all of which can severely affect quality of life. Senile dementia usually manifests as a serious loss of cognitive ability and memory, together with a marked decline in reasoning and judgment. At present, the assessment of an individual's cognitive ability mainly relies on cognitive impairment screening scales administered by trained staff. However, conventional scale-based assessment has the following problems: the interval between follow-up visits is long, and periodic follow-up is difficult to achieve; the assessment takes a long time and places high demands on staff, making wide adoption difficult; and individuals with low education levels find it difficult to complete most of the assessment content.
Biomarkers are biological characteristics that can be objectively measured and evaluated and that are clinically interpretable; they serve as indicators of biological or pathological processes and can objectively reflect the effect of a therapeutic approach. Digital biomarkers, in turn, are quantifiable, clinically interpretable objective criteria, derived by digital means from the behavioral "signals" people emit, that can be used to discover, interpret, or predict the progression of disease. Traditional biomarkers often require specialized personnel and equipment (e.g., medical imaging equipment, scales, gait laboratories) and frequently rely on invasive measurement methods (e.g., blood-draw assays). Digital biomarkers, by contrast, can be combined directly with smart terminals, big data, artificial intelligence, and related technologies to innovate on the measurement and evaluation methods of existing biomarkers, making continuous detection and evaluation much easier.
Digital medicine is an emerging frontier discipline worldwide, and games are an important component of it. Games can exercise mental agility and bodily coordination and have been used to treat senile dementia; they can focus attention and help treat attention deficit hyperactivity disorder; they can also relax mood, and games for treating anxiety disorders in young people are under development. Games classified as "digital medicine" offer more personalized treatment and more precise therapeutic effects, and can form a closed treatment loop on their own. Digital medicine products include clinically evidence-based software or hardware that can be used to measure or intervene in human health.
For example, Chinese patent publication No. CN106327049A discloses a cognitive assessment system comprising an information module, a test module, and an analysis module. The information module obtains medical information matched to the test module according to the subject's data and builds a complete cognitive assessment database. The test module obtains the subject's cognitive test data through five sub-modules: an attention and executive function test module, a memory test module, a mathematical and computational ability test module, a language test module, and an action and behavior control and planning test module. The analysis module determines the subject's cognitive assessment result from the medical information acquired by the information module and the cognitive test data acquired by the test module. However, that system still requires staff to administer the measurements, placing high demands on personnel and making automation difficult. Moreover, the decline of an individual's cognitive ability is a slow and imperceptible process: the difference between two screening-scale scores cannot reflect the change in cognitive ability well, and, as noted above, frequent scale-based assessment is impractical because the screening process is time-consuming, demanding on staff, and difficult for individuals with low education levels to complete. Improvements over the prior art are therefore needed.
Furthermore, differences may arise from the understanding of those skilled in the art. Since the inventors studied numerous documents and patents while making the present invention, the text does not recite all of their details and contents; this by no means implies that the invention lacks these prior-art features. On the contrary, the invention may possess all such prior-art features, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
To address the deficiencies of the prior art, the invention provides a causal reasoning device for the care of patients with cognitive impairment, comprising at least: a user terminal comprising at least a content acquisition part, a first Bluetooth module, a first communication module, and an input module, the user terminal being configured to acquire, through the content acquisition part, the input content a user enters via the input module while interacting with the user terminal, the first Bluetooth module and the first communication module each being usable for communication with the evaluation unit;
a cognitive knowledge database storing at least physiological index data, care measure data, and health status data, among which corresponding associations are established; the cognitive knowledge database establishes a communication connection with the evaluation unit;
an evaluation unit that transmits the input content acquired from the user terminal, together with time-series action data related to that content, to the cognitive knowledge database so as to receive at least one piece of intervention target information related to the user;
the evaluation unit infers and generates at least one care plan based on the key intervention targets;
and the evaluation unit sends the care plan to the user terminal for display.
Preferably, the evaluation unit comprises at least a content evaluation module configured to acquire, via the second communication module from the first communication module, the input content the user enters through the input module and the time-series action data related to that content. Based on the input content and/or the time-series action data, the evaluation unit can, through the content evaluation module, establish the association between the user's physiological index data and health status data and send it to the cognitive knowledge database; the cognitive knowledge database, based on a generalized causal relation algorithm, feeds back a knowledge graph subgraph to the content evaluation module to determine a causal model structure for the user.
The cognitive knowledge database calculates the average causal effect of the health status data, the care measure data, and the physiological data based on at least one intervention method to determine key intervention targets that have a causal relationship with the cognitive health state.
Preferably, the user terminal is further installed or provided with a classification module that classifies the input content of the user acquired by the content acquisition part and the time-series action data of the user acquired by the wearable device; the input content classified by the classification module, together with the corresponding time-series action data, is sent through the first communication module to the content evaluation module of the evaluation unit.
Preferably, the content evaluation module sends the input content acquired from the user terminal, together with the time-series action data related to that content, to the cognitive knowledge database to extract the corresponding knowledge graph subgraph, so that the method for constructing the initial structural causal graph comprises the following steps:
a conditional independence test based on BRT test statistics calculates the Hellinger distance between the user's individual health status data and sign data;
by checking whether the Hellinger distance is zero, the conditional independence between the variables is obtained, to determine whether a correlation or causal relationship exists between the user's various health states and sign data;
and a causal model structure for the user is determined based on the causal relationships, realizing the division of an individual-oriented causal care knowledge graph subgraph and constructing an individual-oriented initial structural causal graph.
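The patent does not spell out the BRT test statistic, but the independence check it describes can be sketched for discrete data: estimate the Hellinger distance between the empirical joint distribution of two variables and the product of their marginals, and declare independence when that distance is near zero. The function names and the tolerance `eps` below are illustrative assumptions, not the patented algorithm.

```python
import math

def hellinger_distance(p, q):
    """Hellinger distance between two discrete distributions given as dicts."""
    keys = set(p) | set(q)
    s = sum((math.sqrt(p.get(k, 0.0)) - math.sqrt(q.get(k, 0.0))) ** 2 for k in keys)
    return math.sqrt(s) / math.sqrt(2)

def empirical_joint(xs, ys):
    """Empirical joint distribution of two paired samples."""
    n = len(xs)
    counts = {}
    for x, y in zip(xs, ys):
        counts[(x, y)] = counts.get((x, y), 0) + 1
    return {k: v / n for k, v in counts.items()}

def empirical_marginal(xs):
    """Empirical marginal distribution of one sample."""
    n = len(xs)
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return {k: v / n for k, v in counts.items()}

def independent(xs, ys, eps=0.05):
    """Declare independence when H(joint, product of marginals) is near zero."""
    joint = empirical_joint(xs, ys)
    px, py = empirical_marginal(xs), empirical_marginal(ys)
    product = {(a, b): px[a] * py[b] for a in px for b in py}
    return hellinger_distance(joint, product) < eps
```

Variable pairs that fail this check are kept as candidate edges of the initial structural causal graph; pairs that pass it are left unconnected.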
Preferably, the user terminal is further installed or provided with an identity recognition module that can assign a unique identity to the user and to the wearable device the user carries, and store that identity in the identity recognition module. The wearable device can emit, within a limited communication range, electromagnetic waves carrying the identity, and the identity recognition module can recognize these waves to determine whether the input content acquired by the user terminal and/or the corresponding time-series action data were generated by the user carrying the wearable device with that identity.
Preferably, the evaluation unit is further installed or provided with an early warning module and a display module. The early warning module compares the user's cognitive ability score for the current period with the score for the previous period; when the period-over-period decline in the score exceeds a preset trigger threshold, the early warning module generates first early-warning information, which can be displayed through the display module.
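As an illustrative sketch of this early warning rule (the 10% threshold value and the function name are assumptions, not taken from the patent):

```python
def check_cognitive_decline(current_score, previous_score, trigger_threshold=0.10):
    """Return first early-warning text when the period-over-period drop in the
    cognitive ability score exceeds the preset trigger threshold, else None."""
    if previous_score <= 0:
        return None  # no valid baseline score to compare against
    drop_ratio = (previous_score - current_score) / previous_score
    if drop_ratio > trigger_threshold:
        return ("first warning: cognitive score fell "
                f"{drop_ratio:.0%} versus the previous period")
    return None
```

In the device, the returned text would be routed to the display module; a `None` result means no warning is raised for the period.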
Preferably, the intervention method by which the cognitive knowledge database calculates the average causal effect comprises at least one of the do-calculus, the back-door adjustment method, the front-door adjustment method, and the instrumental variable method.
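Of the listed intervention methods, back-door adjustment is the simplest to illustrate. The sketch below computes the average causal effect on discrete data, assuming a single observed confounder Z that closes all back-door paths; the record format and function name are illustrative assumptions, not the patented implementation.

```python
from collections import Counter

def average_causal_effect(records, treatment, outcome, confounder):
    """Back-door adjustment on discrete data:
    ACE = sum_z ( E[Y | T=1, Z=z] - E[Y | T=0, Z=z] ) * P(Z=z),
    where conditioning on the confounder Z closes the back-door path."""
    pz = Counter(r[confounder] for r in records)
    n = len(records)
    ace = 0.0
    for z, count in pz.items():
        for t, sign in ((1, 1.0), (0, -1.0)):
            # outcomes in the stratum Z=z with treatment fixed at t
            stratum = [r[outcome] for r in records
                       if r[confounder] == z and r[treatment] == t]
            if stratum:
                ace += sign * (sum(stratum) / len(stratum)) * (count / n)
    return ace
```

In the device, a care measure would play the role of the treatment T, a health-state indicator the role of the outcome Y, and measures with large estimated effects would be flagged as key intervention targets.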
Preferably, the cognitive knowledge database calculates the Hellinger distance as follows: the Hellinger distance between each health state and the sign data is computed based on a Copula density function.
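A Copula-based computation can be sketched with the Gaussian copula, whose density has a closed form; the Hellinger distance to the independence copula (density 1) is zero exactly when the copula parameter is zero, which matches the zero-distance independence check above. The patent does not specify the copula family or the numerical method, so the Gaussian family and the midpoint-grid integration below are illustrative assumptions.

```python
import math
from statistics import NormalDist

def gaussian_copula_density(u, v, rho):
    """Density of the bivariate Gaussian copula at (u, v) in (0, 1)^2."""
    x = NormalDist().inv_cdf(u)
    y = NormalDist().inv_cdf(v)
    r2 = rho * rho
    return (1.0 / math.sqrt(1.0 - r2)
            * math.exp(-(r2 * (x * x + y * y) - 2.0 * rho * x * y)
                       / (2.0 * (1.0 - r2))))

def hellinger_to_independence(rho, grid=40):
    """Hellinger distance between a Gaussian copula with parameter rho and the
    independence copula (density 1), by midpoint integration on (0, 1)^2.
    Since both densities integrate to 1, H^2 = 1 - integral of sqrt(c(u, v))."""
    step = 1.0 / grid
    integral = 0.0
    for i in range(grid):
        for j in range(grid):
            u = (i + 0.5) * step
            v = (j + 0.5) * step
            integral += math.sqrt(gaussian_copula_density(u, v, rho)) * step * step
    return math.sqrt(max(0.0, 1.0 - integral))
```

A distance near zero indicates the health state and the sign variable are independent; a clearly positive distance indicates dependence worth keeping in the causal graph.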
Preferably, the device further comprises a wearable device that establishes a communication connection with the evaluation unit through a second Bluetooth module. The wearable device is configured to collect, in a time-series manner, the user's action data corresponding to the content collected by the content acquisition part, and to send the time-series action data to the first Bluetooth module of the user terminal through the second Bluetooth module.
Preferably, the means for inferring and generating at least one care plan based on the key intervention targets comprises: sending information based on the user's sign data to the cognitive knowledge database, so that the cognitive knowledge database feeds back to the evaluation unit at least one realizable care plan corresponding to the key intervention target, and the evaluation unit confirms the care plan based on the living conditions the user inputs through the user terminal.
Drawings
FIG. 1 is a simplified schematic of a preferred embodiment of the present invention;
FIG. 2 is a flow chart of a preferred embodiment of the cognitive ability assessment performed by the inventive content assessment module;
FIG. 3 is a flow chart of a preferred embodiment of causal reasoning performed by the assessment unit of the present invention;
FIG. 4 is a schematic diagram of a causal relationship network generated by the assessment unit of the present invention, wherein FIG. 4 (a) schematically illustrates predictors of cognitive decline; fig. 4 (b) exemplarily shows predictors of cognitive improvement.
List of reference numerals
100: the user terminal 100a: the content acquisition unit 100b: first Bluetooth module
100c: the first communication module 100d: the input module 100f: classification module
100g: identity recognition module 200: wearing device 200a: second Bluetooth module
300: the evaluation unit 300a: the second communication module 300b: content evaluation module
300c: early warning module 300d: display module
Detailed Description
The following detailed description refers to the accompanying drawings.
Fig. 1 shows a causal reasoning device for the care of cognitively impaired patients. The causal reasoning device comprises at least a user terminal 100, a cognitive knowledge database, and an evaluation unit 300. The user terminal 100 comprises at least a content acquisition part 100a, a first Bluetooth module 100b, a first communication module 100c, and an input module 100d. The user terminal 100 is configured to acquire, through the content acquisition part 100a, the input content the user enters via the input module 100d while interacting with the terminal. The first Bluetooth module 100b and the first communication module 100c can each be used for data communication with other devices.
The wearable device 200 includes at least a second bluetooth module 200a. The wearable device 200 is configured to be able to collect time-series action data of a user corresponding to the content collected by the content collection part 100a in a time-series manner and transmit the time-series action data to the first bluetooth module 100b of the user terminal 100 through the second bluetooth module 200a.
Preferably, the user terminal 100 and the wearable device 200 can also each collect the user's sign data. For example, the content collected at the user terminal includes sign data, and the wearable device 200 collects the user's sign data through physiological sensors. The sign data comprise the user's physiological data and information on changes in that data.
The cognitive knowledge database stores at least the physiological index data, care measure data, and health status data of people, especially the elderly. The hardware of the cognitive knowledge database can be built from processors, application-specific integrated circuits, servers, and other components with storage and information-processing capability. The cognitive knowledge database and the evaluation unit 300 establish a communication connection in a wired or wireless manner; wireless means include, for example, Bluetooth and WIFI communication, and wired means include, for example, optical fiber communication.
The evaluation unit 300 comprises at least a content evaluation module 300b and a second communication module 300a. The content evaluation module 300b is configured to acquire, from the first communication module 100c via the second communication module 300a, the input content the user enters through the input module 100d and the time-series action data related to that content. The input content comprises at least gesture operations, voice commands, and the like.
The evaluation unit 300 can generate a corresponding care plan based on the input content and/or the time-series action data and the intervention target information fed back by the cognitive knowledge database.
Preferably, the input content acquired from the user terminal 100 and the related time-series action data are sent to the cognitive knowledge database to obtain intervention target information related to the user. The cognitive knowledge database calculates the average causal effect of the health status data, the care measure data, and the physiological data based on at least one intervention method to determine key intervention targets causally related to the cognitive health state, then infers and generates at least one care plan based on those key targets. The evaluation unit 300 sends the care plan to the user terminal 100 for display.
The user may be a human individual whose cognitive level matches his or her age, or a patient with cognitive impairment.
Preferably, the user terminal 100 includes, but is not limited to, a workstation, a personal computer, a general-purpose computer, an internet appliance, a notebook, a desktop computer, a multiprocessor system, a set-top box, a network PC, a wireless device, a portable device, a wearable computer, a cellular or mobile phone, a portable digital assistant (PDA), a smartphone, a tablet, an ultrabook, a netbook, a microprocessor-based or programmable consumer electronics device, a minicomputer, a gaming terminal, and the like. Preferably, the user terminal 100 may connect to the network via a wired and/or wireless connection. Preferably, the user terminal 100 may provide various applications to the user, including but not limited to maps for navigation and global positioning, weather, reminders, clocks, user-selected publications, email, text messaging, and internet browsers. Preferably, one or more portions of the network may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a wireless wide area network (WWAN), a metropolitan area network (MAN), a portion of the internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, a wireless network, a WIFI network, a WiMax network, any other type of network, or a combination of two or more such networks.
Preferably, the content collection part 100a is configured to be able to collect input content input by a user through the user terminal 100. Preferably, the input content may be voice information, gestures, text information, video image information, and the like.
Furthermore, because the behavioral "signals" a person emits, when captured by digital means, can be turned into quantifiable, clinically interpretable objective criteria, they can be used to discover, interpret, or predict trends in related diseases; such signals are referred to as digital biomarkers. With this configuration, the digital biomarkers generated while the user interacts with the user terminal 100 can be collected through the content acquisition part 100a, so that in subsequent steps the change in the user's cognitive ability can be evaluated from the collected digital biomarkers and medical staff can conveniently track the user's recent treatment effect. Preferably, the first Bluetooth module 100b is configured to establish a data connection between the user terminal 100 and the wearable device 200.
Preferably, the first communication module 100c is configured to establish a communication connection between the user terminal 100 and the evaluation unit 300. Preferably, the first communication module 100c may transmit the input content data acquired by the user terminal 100 and the corresponding time-series action data to the evaluation unit 300 within a time threshold, which can be set flexibly according to the actual scenario, for example twenty-four hours. Preferably, where possible, the first communication module 100c may periodically transmit over broadband WIFI or by other means, including 4G LTE transmission provided by the user's data plan. Preferably, the first communication module 100c is also responsible for securing an encrypted channel before transmitting the input content data and the corresponding time-series action data. With this configuration, the data in the system can be effectively protected, preventing the user's private information from being leaked.
Preferably, the input module 100d is used for inputting gesture operations, voice commands, and other input contents to the user terminal 100 by a user. Preferably, the input module 100d may be a keyboard, a touch screen, a microphone, a camera, etc.
Preferably, the input module 100d is also used for collecting physical sign data of the user.
Preferably, the user may wear the wearable device 200 on the body so that the content the user inputs to the user terminal 100 can be collected. Preferably, the wearable device 200 is configured to obtain the time-series action data corresponding to the input content the user enters through the user terminal 100. With this configuration, the digital biomarkers generated while the user interacts with the wearable device 200 can be collected through the wearable device 200, so that in subsequent steps the change in the user's cognitive ability can be evaluated from the collected digital biomarkers and medical staff can track the user's recent treatment effect. That is, the wearable device 200 can also collect the user's sign data.
Preferably, the wearable device 200 may be installed or provided with a specific time series action acquisition module to acquire time series action data related to the input content inputted by the user through the user terminal 100.
Preferably, the time series action data may be an operation action related to the input content inputted by the user through the user terminal 100 and a time corresponding to the operation action. Preferably, the operational actions may include, but are not limited to, keyboard tap actions, touch screen gestures, stylus and mouse swipe gestures, limb actions while exercising, and the like.
Preferably, the wearable device 200 may include a wristband smart device, a head mounted smart device, etc., such as a smart watch and a smart helmet. Preferably, the wearable device 200 may be carried with a user twenty-four hours a day. Preferably, the data connection between the user terminal 100 and the wearable device 200 can be established through the first bluetooth module 100b installed or disposed on the user terminal 100 and the second bluetooth module 200a installed or disposed on the wearable device 200.
Preferably, when the distance between the second bluetooth module 200a of the wearable device 200 and the first bluetooth module 100b of the user terminal 100 exceeds a threshold distance, the wearable device 200 may store the content input through the user terminal 100 in time-sequential order in a storage module within the wearable device 200. Preferably, when the distance between the second bluetooth module 200a of the wearable device 200 and the first bluetooth module 100b of the user terminal 100 is within the threshold distance, data can be transmitted in real time and in both directions between the two bluetooth modules, and the data stored in time-sequential order in the storage module of the wearable device 200 can also be transmitted to the first bluetooth module 100b of the user terminal 100. Preferably, the threshold distance may be set manually according to actual scene requirements; for example, the threshold distance may be twenty meters.
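The store-and-forward behaviour described above can be sketched as follows. The class name, record shape, and the rule that the buffer flushes on the first in-range sample are illustrative assumptions, not the patent's implementation.

```python
from collections import deque

class WearableBuffer:
    """Hypothetical sketch: buffer records in time order while the
    Bluetooth link is out of range; flush them once the peer is back
    within the threshold distance."""

    def __init__(self, threshold_m: float = 20.0):
        self.threshold_m = threshold_m   # e.g. twenty meters, set per scene
        self._store = deque()            # time-sequential storage module

    def record(self, timestamp: float, payload: dict, distance_m: float):
        """Store the record; return the records to transmit (possibly
        including backlog) when the link is within range, else []."""
        self._store.append((timestamp, payload))
        if distance_m <= self.threshold_m:
            # link available: flush everything in time-sequential order
            flushed = sorted(self._store, key=lambda r: r[0])
            self._store.clear()
            return flushed
        return []                        # keep buffering until reconnected
```

A record made out of range is retained and only delivered, in time order, together with the first record made back in range.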
According to a preferred embodiment, the user terminal 100 is further installed or provided with a classification module 100f, the classification module 100f is used for classifying the input content of the user acquired by the content acquisition part 100a, and for classifying the time-series action data of the user acquired by the wearable device 200, and the input content classified by the classification module 100f and the time-series action data corresponding to the input content are transmitted to the content evaluation module 300b of the evaluation unit 300 through the first communication module 100 c.
The classification module 100f may be software or hardware. The classification module 100f classifies the input content of the user acquired by the content acquisition unit 100a according to the type of the input content. For example, in the case where a user who needs to perform cognitive ability assessment performs daily social communication using the user terminal 100 and the wearable device 200 corresponding to the user terminal 100, the classification module 100f installed in the user terminal 100 obtains content input by the user through the touch screen of the user terminal 100, and divides the content input through the touch screen of the user terminal 100 into first input content according to the type of the user terminal 100.
In the case of responding to a wearable device 200 instruction, the first input content replying to the wearable device 200 instruction is confirmation or negation content entered by tapping the touch screen. Preferably, the wearable device 200 instructions may be instructions reminding the user to perform a certain activity; for example, they may be data instructions for taking medicine on time, exercising on time, sleeping on time, and the like. Preferably, the wearable device 200 instructions may be instructions that require the user to confirm his daily activities; for example, they may be instructions confirming whether the user has paid the water and electricity fees for the current month.
In the case of responding to an instant messaging message of the user terminal 100, the first input content for the instant messaging message of the user terminal 100 is communication content that replies to the instant messaging message by sliding contact with the touch screen, where the communication content may be words and/or sentences capable of expressing a certain semantic meaning, an emoticon capable of expressing the psychological state of the user, a picture collected by the user, or a video recorded by the user in advance. Instant messaging may include, but is not limited to, the WeChat platform.
In the case of responding to a ringtone of the user terminal 100, the first input content for the ringtone of the user terminal 100 is reply content selected on the touch screen by tapping. The ringtone may be a doorbell sound emitted by a doorbell installed at the user's home, or may be the incoming call ringtone of the smart phone of the user terminal 100. Preferably, the reply content may be a set of character combinations having a specific meaning, input through the touch screen of the user terminal 100. Preferably, the reply content may be content related to the incoming call that the user inputs through the touch screen after answering the incoming call on the user terminal 100.
In the case of responding to the vibration of the user terminal 100, the first input content for the vibration of the user terminal 100 may be reply content made in response to the vibration by tapping and/or sliding on the touch screen. Preferably, the vibration of the user terminal 100 may be the vibration of a smart phone. Preferably, the vibration of the user terminal 100 may also be vibration generated by a game terminal used by the user. Preferably, the vibration of the user terminal 100 may be another form of vibration. Preferably, the reply content may be reply content made upon receipt of an email or a message on the smart phone.
In the case of responding to a change in the posture of the user terminal 100, the first input content for the posture change of the user terminal 100 may be feedback content made in response to the posture change by tapping or sliding on the touch screen. Preferably, the user terminal 100 may be a game terminal with a touch screen, such as a smart phone or another portable game terminal. Preferably, the feedback content may be feedback input made by the user continuously tapping or sliding on the touch screen of the game terminal in response to the posture change of the game terminal, so as to adjust a game character or object in the screen content displayed by the game terminal.
The classification module 100f installed at the user terminal 100 obtains the voice content input by the user through the smart phone of the user terminal 100, and divides the voice information input by the smart phone of the user terminal 100 into the second input content according to the type of the user terminal 100. The second input content may be used to characterize a user's level of social engagement.
In the case of responding to the wearable device 200 instruction, the second input content replying to the wearable device 200 instruction is the reply content to the wearable device 200 instruction made in a voice manner. The answer content may be voice answer content made by the user for a query issued by the wearable device 200 for certain daily activities of the user. The answer content may be a speech segment capable of expressing a certain semantic meaning. The wearable device 200 instructions may be to ask the user if to take a medicine, exercise, rest, etc. within a prescribed time.
In the case of responding to an instant messaging message of the user terminal 100, the second input content for the instant messaging message of the user terminal 100 is communication content that replies to the instant messaging message by voice. The communication content may be a speech segment capable of expressing a certain meaning, or speech without specific meaning; the length of the speech is not limited, i.e., the speech segment may be the pronunciation of a single character or a whole passage formed by a plurality of sentences. Speech without specific semantics may be speech content uttered by the user that cannot be understood by a person of normal cognitive level in the same age range.
In the case of responding to the ringtone of the user terminal 100, the second input content for the ringtone of the user terminal 100 is reply content replying to the ringtone of the user terminal 100 by voice input. The reply content may be telephone communication content between the user and the user's relatives and friends.
In the case of responding to the vibration of the user terminal 100, the second input content for the vibration of the user terminal 100 may be reply content that responds to the vibration of the user terminal 100 by voice input. The vibration of the user terminal 100 may be vibration caused by an incoming call, an email, a message, etc. of the smart phone.
In the case of responding to a change in the posture of the user terminal 100, the second input content for the posture change of the user terminal 100 may be feedback content made by voice input. The user terminal 100 may be a terminal with a touch screen, such as a smart phone, a smart appliance, a game terminal, etc. The feedback content may be feedback input made by the user through voice input to the smart phone in response to the posture change of the game terminal, so as to adjust a game character or object in the screen content displayed by the game terminal. The feedback content may also change the posture of another smart device through voice input to the smart phone, so as to adjust the posture and movement track of that smart device; for example, the smart device may be a smart sweeping robot.
The classification module 100f installed at the user terminal 100 divides text information input through the keypad of the user terminal 100 into third input contents according to the type of the user terminal 100.
In the case of responding to the wearable device 200 instruction, the third input content replying to the wearable device 200 instruction may be reply content to the wearable device 200 instruction by the user in a click or tap manner through the keyboard. The reply content may be reply content that the user makes with respect to a query issued by the wearable device 200 for certain daily activities of the user. The reply content may include one or more of an alphanumeric entry, a backspace entry, a capitalization entry, a format entry, and an edit entry.
In the case of responding to an instant messaging message of the user terminal 100, the third input content for the instant messaging message of the user terminal 100 is the communication content with which the user replies to the instant messaging message by clicking or tapping the keyboard. The communication content may be a character or combination of characters with or without a specific semantic meaning. The number of characters may be one or more.
In the case of responding to the bell sound of the user terminal 100, the third input content for the bell sound of the user terminal 100 is a reply content that the user replies to the bell sound of the user terminal 100 in a click or tap manner through the keyboard. The response content can be text communication content between the user and relatives and friends of the user.
The classification module 100f installed at the user terminal 100 divides the email and the sms received by the smart phone of the user terminal 100 into fourth input contents according to the type of the user terminal 100. The fourth input content may also be used to characterize the user's level of social engagement.
In the case of responding to the wearable device 200 instruction, the fourth input content replying to the wearable device 200 instruction may be input content for the wearable device 200 instruction acquired by the user through the user terminal 100 in a passive reception manner. The input content may be text content of an email, a short message. The input content may also include character input, repeated misspellings, omissions, excessive corrections, irregular delay variations of common words, message length and message consistency, and the like. The input content may also include email addresses of the recipient and the sender, and short message telephone numbers of the recipient and the sender.
Preferably, the classification module 100f installed on the user terminal 100 may further classify other content input by the user through the user terminal 100 according to the actual scene requirement, so as to obtain other behavior content capable of characterizing the human cognitive ability level.
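The channel-based split into the four input-content categories described above can be sketched as a simple mapping. The channel names and category labels below are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical sketch of the split performed by classification module 100f:
# each piece of input content is assigned a category by the channel it
# arrived on; unknown channels fall into an extensible "other" bucket.
INPUT_CATEGORIES = {
    "touch_screen": "first_input_content",    # taps / swipe gestures
    "voice":        "second_input_content",   # speech via the smartphone
    "keyboard":     "third_input_content",    # key presses
    "received_msg": "fourth_input_content",   # passively received email/SMS
}

def classify_input(channel: str, content: str) -> dict:
    """Return a categorised record for onward transmission to the
    content evaluation module."""
    category = INPUT_CATEGORIES.get(channel, "other_input_content")
    return {"category": category, "channel": channel, "content": content}
```

The "other" bucket mirrors the provision above for classifying additional content types according to actual scene requirements.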
The classification module 100f installed on the user terminal 100 acquires the time sequence action data corresponding to the content input by the user through the wearable device 200, and divides the time sequence action data corresponding to the content input by the user through the touch screen into first time sequence action data according to the type of the user terminal 100 corresponding to the wearable device 200.
In the case of responding to the touch screen of the user terminal 100, the first timing action data acquired by the wearable device 200 are data such as the input content trigger time, response start time, response end time, duration, number of pauses, pause time, and number of contacts with the touch screen of the gesture actions the user makes when inputting through the touch screen by tapping. For example, the recording format of the first timing action data may be: trigger time, duration of XX seconds, pause time of XX seconds, and XX contacts with the touch screen. The input content trigger time may be the time at which the user opens an application on the touch screen. The input content trigger time can also be flexibly selected according to actual scene requirements; for example, midnight of the day containing the response start time may be selected as the input content trigger time. The response start time may be the time of the first input made through the touch screen after the user opens the application. The response end time may be the time of the last input made through the touch screen after the user opens the application. The difference between the trigger time and the response start time belonging to the same first input content corresponding to the first timing action data may be used to calculate the user's delay time, so as to characterize the user's cognitive abilities in terms of execution, calculation, and comprehension judgment.
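One first-timing-action record, and the delay time derived from it, can be sketched as follows. The field names follow the recording format described above; the use of seconds as the unit is an assumption.

```python
from dataclasses import dataclass

@dataclass
class TimingActionRecord:
    """Minimal sketch of one first-timing-action record (all times
    expressed in seconds since some common epoch)."""
    trigger_time: float      # e.g. when the application was opened
    response_start: float    # first touch after opening
    response_end: float      # last touch
    pause_count: int
    pause_seconds: float
    touch_count: int

    @property
    def delay_seconds(self) -> float:
        # trigger-to-first-response gap, used to characterise execution,
        # calculation and comprehension-judgment abilities
        return self.response_start - self.trigger_time

    @property
    def duration_seconds(self) -> float:
        return self.response_end - self.response_start
```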
The classification module 100f installed on the user terminal 100 obtains the time sequence action data corresponding to the content input by the user through the wearable device 200, and divides the time sequence action data corresponding to the content input by the user through the wearable device 200 into the second time sequence action data according to the type of the user terminal 100 corresponding to the wearable device 200.
In the case of responding to the smartphone of the user terminal 100, the second time-series action data acquired by the wearable device 200 are data such as the input content trigger time, response start time, response end time, duration, and pause time recorded by the wearable device 200 for the voice input the user makes in response to the smartphone. The input content trigger time may be the time recorded by the wearable device 200 when the user answers a phone call through the smart phone of the user terminal 100. The input content trigger time can also be flexibly selected according to actual scene requirements; for example, midnight of the day containing the response start time may be selected as the current input content trigger time. For example, the recording format of the second time-series action data may be: input content trigger time, response start time, response end time, duration, pause time, etc.
The classification module 100f installed on the user terminal 100 acquires the time sequence action data corresponding to the content input by the user through the wearable device 200, and divides the time sequence action data corresponding to the content input by the user through the wearable device 200 into third time sequence action data according to the type of the user terminal 100 corresponding to the wearable device 200.
In the case of responding to the keyboard of the user terminal 100, the third time-series action data acquired by the wearable device 200 are data such as the input content trigger time, response start time, response end time, duration, delay time, and click frequency recorded by the wearable device 200 for the input operations the user performs by clicking the keyboard of the user terminal 100. The input content trigger time may be the time recorded by the wearable device 200 when the user opens a certain email, short message, or game web page on the user terminal 100. The input content trigger time can also be flexibly selected according to actual scene requirements; for example, midnight of the day containing the response start time may be selected as the current input content trigger time. For example, the recording format of the third time-series action data may be: input content trigger time, response start time, response end time, duration, delay time, click frequency, etc.
The classification module 100f installed on the user terminal 100 divides the time-series action data acquired by the wearable device 200 that corresponds to the user receiving short messages, emails, and the like on the smart phone into fourth time-series action data according to the type of the user terminal 100 corresponding to the wearable device 200.
In the case of receiving information in response to the smart phone of the user terminal 100, the fourth timing action data acquired by the wearable device 200 are data such as the input content trigger time, response start time, response end time, duration, delay time, and click frequency recorded by the wearable device 200 when the user inputs text information through the smart phone of the user terminal 100. The input content trigger time may be the time at which the user's smart phone receives the email or short message. The input content trigger time can also be flexibly selected according to actual scene requirements; for example, midnight of the day containing the response start time may be selected as the input content trigger time. For example, the recording format of the fourth timing action data may be: input content trigger time, response start time, response end time, duration, delay time, click frequency, etc.
Preferably, the classification module 100f installed on the user terminal 100 may further divide the time sequence action data corresponding to the input content input by the user using the user terminal 100 obtained by the wearable device 200 according to the type of the user terminal 100 corresponding to the wearable device 200.
Through the above configuration, the input content acquired through the user terminal 100 and the time-series actions corresponding to that input content are classified by the classification module 100f and used as digital biomarkers of the user. This makes it convenient to evaluate the user's cognitive ability through a neural network and avoids frequently assessing the user with cognitive disorder screening scales, thereby reducing disturbance to the user's daily life and the labor intensity of medical staff.
In the case that the first communication module 100c of the user terminal 100 can communicate with the second communication module 300a of the evaluation unit 300, the first communication module 100c installed or integrated in the user terminal 100 can transmit at least the first input content, the second input content, the third input content, the fourth input content, and the first time series data, the second time series data, the third time series data, and the fourth time series data corresponding to the first input content, the second input content, the third input content, and the fourth input content, which are classified by the classification module 100f, to the content evaluation module 300b installed or integrated in the evaluation unit 300 in real time or at a certain time interval in time sequence.
The certain time interval may be flexibly set according to the actual scene requirement, for example, the certain time interval may be within one hour after the classification module 100f finishes classifying the input content and the time sequence action data corresponding to the input content, or may be within twenty four hours after the classification module 100f finishes classifying the input content and the time sequence action data corresponding to the input content.
The above-mentioned time sequence refers to the time sequence of the response start time recorded by the time sequence action data corresponding to the input content received by the classification module 100 f. Preferably, the input content may be sent to the content evaluation module 300b together with the time sequence action data corresponding to the input content, for example, a first input content and a first time sequence action data in the same time period form a first data packet, a second input content and a second time sequence action data in the same time period form a second data packet, and so on.
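The packet formation described above can be sketched as follows. The record shapes, the `category` key, and the one-minute pairing window are illustrative assumptions; the patent only specifies that input content and timing data from the same time period travel together.

```python
def build_packets(contents, actions, window_s=60.0):
    """Hypothetical sketch: pair each input-content record with the
    time-series action records of the same category whose start time
    falls within the same time period, yielding one data packet each."""
    packets = []
    for content in sorted(contents, key=lambda c: c["start"]):
        timing = [a for a in actions
                  if a["category"] == content["category"]
                  and abs(a["start"] - content["start"]) <= window_s]
        packets.append({"input": content, "timing": timing})
    return packets
```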
Preferably, the first communication module 100c may send the input content subjected to the classification processing by the classification module 100f and the time sequence action corresponding to the input content to the content evaluation module 300b of the evaluation unit 300 through the second communication module 300a in a wired or wireless manner in real time.
Preferably, the data transmission between the first communication module 100c and the second communication module 300a may use a secure channel. Preferably, hypertext transfer protocol secure (HTTPS) may also be used between the first communication module 100c and the second communication module 300a to securely transfer data. Through this configuration, user privacy can be prevented from being compromised between the user terminal 100 and the evaluation unit 300.
According to a preferred embodiment, the evaluation unit 300 acquires the content input by the user and the time-series action data related to the input content from the user terminal 100 through the second communication module 300 a. The content evaluation module 300b of the evaluation unit 300 receives the input content and the time sequence motion data processed by the classification module 100f according to the time sequence, and sends the received input content and time sequence motion data to the cognitive knowledge database to extract the corresponding knowledge graph subgraph, thereby constructing the initial structure causal graph. In the case where the user terminal 100 and the evaluation unit 300 are capable of data transmission, the evaluation unit 300 may acquire the content input by the user through the user terminal 100 and the time-series action data corresponding to the input content in an active acquisition manner or a passive acquisition manner.
Active acquisition means that the content evaluation module 300b actively sends prompt transmission information to the classification module 100f through the first communication module 100c and the second communication module 300a, causing the classification module 100f to send the input content and time-series action data related to the prompt transmission information. The prompt transmission information may specify the particular input content and time-series action data that the classification module 100f is to transmit to the content evaluation module 300b. For example, the prompt transmission information may cause the classification module 100f to transmit the first input content and the first timing action data to the content evaluation module 300b.
The passive acquisition means that the content evaluation module 300b passively receives the input content and the time sequence action data sent by the classification module 100f through the first communication module 100c and the second communication module 300a, that is, if the classification module 100f does not send the input content and the time sequence action content, the content evaluation module 300b cannot acquire the input content and the time sequence action data.
The content evaluation module 300b installed in or integrated with the evaluation unit 300 stores the time series action data and the input content corresponding to the time series action data in the content evaluation module 300b according to the sequence of the trigger input time recorded by the time series action data, so as to construct an initial structural causal graph for the individual user and generate a corresponding care plan through the content evaluation module 300 b.
The cognitive knowledge database can store various physiological index data, nursing measure data, health state data and association relations thereof of people, and can also store association relations of a plurality of intervention targets associated with average causal effects and a plurality of nursing measures associated with the intervention targets.
Preferably, the content evaluation module 300b in the evaluation unit 300 evaluates the average causal effect for the user.
The method by which the content evaluation module 300b evaluates the average causal effect of the user includes at least the following steps.
S1: and establishing the association between the sign data and the health state data of the user based on the individual data of the user, and extracting a corresponding knowledge graph sub-graph. The specific process is as follows:
S1.1: Apply a causal relation algorithm to determine a knowledge graph subgraph suitable for the individual user based on the causal nursing knowledge graph.
For example, a conditional independence test method based on BRT test statistics is used to calculate the Hellinger distance between each health state and the user's sign data. The Hellinger distance is calculated based on a copula density function.
S1.2: Check whether the Hellinger distance is 0 to determine the conditional independence between the variables. If the Hellinger distance is 0, the variables are conditionally independent of each other. If the Hellinger distance is not 0, a causal or correlational relation exists between the variables.
S1.3: Determine the causal model structure for the user according to whether a correlation or causal relation exists between each health state and the user's sign data, realizing individual-oriented division of the causal nursing knowledge graph into subgraphs. An initial structural causal graph specific to the individual user is then constructed from the knowledge graph subgraph.
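The decision rule of step S1.2 can be illustrated with a simplified discrete sketch. Note that the patent computes the Hellinger distance from a copula density function; the stand-in below instead measures the Hellinger distance between the empirical joint distribution of two discrete variables and the product of their marginals, which preserves the property that the distance is 0 exactly when the variables are independent.

```python
import math
from collections import Counter

def hellinger_independence(xs, ys):
    """Simplified discrete stand-in for step S1.2: the Hellinger
    distance between the empirical joint distribution of (X, Y) and
    the product of the marginals.  A distance of 0 indicates
    independence; a nonzero distance indicates a causal or
    correlational relation (per the decision rule in the text)."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    total = 0.0
    for x in px:
        for y in py:
            p_joint = joint.get((x, y), 0) / n
            p_prod = (px[x] / n) * (py[y] / n)
            total += (math.sqrt(p_joint) - math.sqrt(p_prod)) ** 2
    return math.sqrt(total / 2.0)
```

For perfectly independent samples the distance is 0; for a variable paired with itself it is strictly positive, triggering the "causal or correlational relation" branch.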
S2: and calculating the average causal effect among various health states, nursing measures and physical sign data based on an intervention method in causal inference, and judging a key intervention target point with causal relation with the cognitive health state of the user.
The intervention method at least comprises do-calculus carried out through back-door adjustment, front-door adjustment, and instrumental variable methods. Calculation with the intervention method can eliminate the confounding factors existing among the variables in the structural causal graph, and the average causal effect among the various health states, care measures, and sign data is calculated. The key intervention targets related to the average causal effect are then determined.
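The back-door adjustment named above can be sketched for a binary care measure X, a binary health state Y, and one discrete confounding sign-data variable Z. The record format `(x, y, z)` and the restriction to one confounder are illustrative assumptions, not the patent's full pipeline.

```python
from collections import defaultdict

def backdoor_ace(records):
    """Back-door adjustment sketch:
        P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) * P(Z=z)
    and ACE = P(Y=1 | do(X=1)) - P(Y=1 | do(X=0)).
    `records` is an iterable of (x, y, z) tuples with x, y in {0, 1}."""
    n = len(records)
    z_count = defaultdict(int)    # counts of each confounder stratum
    xz_count = defaultdict(int)   # counts of each (treatment, stratum)
    xz_y1 = defaultdict(int)      # Y=1 counts per (treatment, stratum)
    for x, y, z in records:
        z_count[z] += 1
        xz_count[(x, z)] += 1
        xz_y1[(x, z)] += y

    def p_y1_do(x):
        total = 0.0
        for z, nz in z_count.items():
            if xz_count[(x, z)] == 0:
                continue  # stratum unobserved for this treatment arm
            total += (xz_y1[(x, z)] / xz_count[(x, z)]) * (nz / n)
        return total

    return p_y1_do(1) - p_y1_do(0)
```

Averaging the stratum-specific effects over P(Z) rather than over the treated population is what removes the confounding of Z.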
In the invention, a target refers to a physiological index corresponding to an average causal effect. For example, if the target is blood pressure, the intervention target is the blood pressure that should be intervened upon. A given intervention target has multiple corresponding care schemes, and different intervention schemes have requirements on different physiological indexes. Accordingly, the content evaluation module 300b selects at least one appropriate care scheme based on the intervention target and the user's sign data.
Preferably, the content evaluation module 300b screens at least one care plan based on the objective living condition information inputted by the user through the input module 100d and transmits it to the user terminal 100.
Preferably, the content evaluation module 300b does not directly send the care plan to the user terminal 100. The content evaluation module 300b sends at least one care plan to the service terminal for manual review by a person who is medically qualified and has expertise in the treatment of cognitive disorders. After the manual review succeeds, the service terminal sends review-passed information to the content evaluation module 300b; if the manual review does not pass, the content evaluation module 300b again selects an appropriate care plan based on the intervention target and the user's sign data. Preferably, in response to receiving the review-passed information, the content evaluation module 300b transmits the care plan that passed the review to the user terminal 100 for the user to select.
In the case where the content evaluation module 300b is capable of establishing a cognitive ability evaluation benchmark, the content evaluation module 300b installed in or integrated with the evaluation unit 300 may analyze the change in the user's cognitive ability over a period of time and update the user's care plan according to the causal-utility-related factors derived from the user terminal 100, the wearable device 200, and the cognitive knowledge database used by the user.
In order to improve the accuracy of the care plan recommendations of the content evaluation module 300b, benchmark tests can be performed with multiple groups of people of different ages whose cognitive ability is normal for their age, wherein each tested individual is provided with a corresponding user terminal 100 and wearable device 200. The data of each tested individual are then recorded and entered into the evaluation unit 300, so as to obtain, for multiple groups of people of different age ranges with normal/abnormal cognitive ability matched to those ages, the frequently used touch screen gesture types, keyboard tapping area preference and average delay, keyboard tapping frequency, average usage duration of the corresponding applications, average reaction time at each action node of the game terminal, and the like, for each user terminal 100 and wearable device 200. In addition, the cloud server may also use data mining technology to directly retrieve and record the frequently used touch screen gesture types, keyboard tapping area preference and average delay, keyboard tapping frequency, average usage duration of the corresponding applications, average reaction time at each action node of the game terminal, and the like, of the general public on each user terminal 100 and wearable device 200, to serve as an evaluation reference for normal/abnormal cognitive ability.
The average score of data information including, but not limited to, input content data and time series action data and cognitive ability assessment thereof generated from the testee using the user terminal 100 and the wearable device 200 may be used as a benchmark test result.
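The benchmark aggregation described above can be sketched as a per-cohort average. The cohort keys (age range, cognitive status) and score values are illustrative assumptions.

```python
def benchmark_baseline(scores_by_cohort):
    """Sketch of the benchmark result: the average score of each
    (age range, cognitive status) cohort of test subjects, usable as
    the evaluation reference for normal/abnormal cognitive ability."""
    return {cohort: sum(scores) / len(scores)
            for cohort, scores in scores_by_cohort.items()}
```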
Baseline test results may be used to compare the performance of an individual with the performance of his or her "companion," or to compare the performance of an individual under different circumstances (e.g., before and after administration) or over time to measure cognitive decline or improvement relative to the individual itself or its companion.
The selected features defining a companion may include, but are not limited to: gender, year of birth, race, highest education, annual income, health status, activities during idle time (e.g., playing video games, reading, surfing the internet, watching television), average weekly sleep time, the number of languages the individual can read and write, and whether the individual has learned a new language or a new instrument in the last two years.
The evaluation unit 300 may learn a causal analysis of the recorded input content and time-series action data through a neural network capable of learning complex higher-order features, so as to obtain a causal model structure that characterizes the user's cognitive ability from the input content and time-series action data. The benchmark test results can serve as reference samples for the neural network's learning.
For example, the changes in touch-screen gestures recorded in the benchmark results, the gesture types, the gesture durations, and gestures such as excessive scrolling during searching and paging during browsing may be used as inputs for feature extraction, learning, and computation by the neural network. Likewise, the recorded delay in answering a call, the completeness and accuracy of language expression, the number and duration of pauses between sentences, irregular speech phenomena, and reduction or shift of the speech spectrum may be used as such inputs. So may the recorded keyboard tapping frequency, average delay time, preference for tapping specific keyboard areas, and tapping strength, as well as the character input of e-mails/short messages sent and received by the user, repeated spelling errors, omissions, excessive corrections, irregular delay changes for common words, message length, and message consistency.
S41. A mapping is established from the input content acquired from the user terminal 100, and the time-series action data related to that content, to a causal relationship assessment. The content evaluation module 300b may use a neural network capable of learning complex higher-order features to build this mapping, and may be continually and iteratively optimized by feeding the neural network the benchmark results and the preset causal relationship scores corresponding to them.
S42. The mapping is quantified with a loss function. The content evaluation module 300b uses the loss function to quantify the mapping of the input content and the related time-series action data obtained from the user terminal 100 to a causal relationship assessment, so that the neural network learns basic and/or higher-order features that can characterize human cognitive ability. The type of loss function can be chosen flexibly according to the requirements of the actual scenario.
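As an illustration of step S42, a minimal sketch of one possible loss function is given below. Mean squared error is only one of the flexibly selectable loss types the passage mentions, and the helper name `mse_loss` is a hypothetical choice, not part of the device:

```python
import numpy as np

def mse_loss(predicted, target):
    """Mean squared error between the network's predicted causal-relationship
    scores and the preset scores attached to the benchmark results.
    (Illustrative: the device may use any suitable loss function.)"""
    predicted = np.asarray(predicted, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean((predicted - target) ** 2))
```

A different loss (e.g., cross-entropy for categorical causal scores) could be swapped in without changing the surrounding training loop.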
Further, the basic and/or higher-order features that can characterize human cognitive ability may be the changes in gestures, gesture types, gesture durations, excessive finger scrolling during searching, and paging gestures during browsing when a person uses a touch screen; the response delay, completeness and accuracy of language expression, number and duration of pauses between sentences, irregular speech phenomena, and reduction or shift of the speech spectrum when answering a call; the frequency and average delay of keyboard tapping, preference for tapping specific keyboard areas, and tapping strength; or the character input, repeated misspellings, omissions, excessive corrections, irregular changes for common words, and message length when sending and receiving e-mail/text messages.
S43. The optimal weights that minimize the loss function are found, and a new mapping is created using them. The content evaluation module 300b may optimize the neural network by stochastic gradient descent, mini-batch gradient descent, or the like, to find a set of optimal weights that minimizes the loss function used to quantify the quality of the neural network. There may be multiple sets of optimal weights.
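The weight search of step S43 can be sketched as a mini-batch gradient descent loop. The linear model, the learning-rate and batch-size values, and the function name `sgd_fit` below are illustrative assumptions standing in for the neural network's actual training procedure:

```python
import numpy as np

def sgd_fit(X, y, lr=0.01, epochs=200, batch_size=4, seed=0):
    """Mini-batch gradient descent for a linear mapping X @ w ~ y,
    minimising mean squared error as a stand-in for the network's loss.
    Returns the learned weight vector."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)          # shuffle samples each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # Gradient of mean squared error over the mini-batch
            grad = 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w
```

A real implementation would replace the linear map with the module's neural network and typically use a framework optimizer, but the update rule is the same in spirit.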
S44. The new mapping is applied to the input content acquired by the content evaluation module 300b and the time-series action data related to it, so as to obtain a new intervention target for the user. The evaluation unit 300 may analyze whether the care plan is appropriate based on the user's history and the average records of other matched users. The output of this analysis is an assessment of the care plan, from which changes in the user's cognitive ability, and the causal relationships between the various health states and the sign data, are inferred from activities such as social engagement, physical activity, and learning activity.
With this configuration, the content evaluation module 300b can evaluate each sub-class of the user's cognitive ability at fine granularity based on each sub-class of the time-series action data and the input content of the user's digital biomarkers, so as to grasp more precisely the recent improvement or deterioration of each sub-class of cognitive ability, and thus to understand the recent changes in the user's cognitive ability and/or the treatment effect on a patient with cognitive disorder after the care plan is adopted.
According to a preferred embodiment, the user terminal 100 is also provided or equipped with an identity recognition module 100g. The identity recognition module 100g can assign a unique identity to the user and to the wearable device 200 carried by the user, and store the identity in the identity recognition module 100g. The wearable device 200 can transmit electromagnetic waves carrying the identity within a limited communication range. The identity recognition module 100g can recognize these electromagnetic waves, so as to determine whether the input content and/or the corresponding time-series action data acquired by the user terminal 100 were generated by the user carrying the wearable device 200 with that identity.
According to a preferred embodiment, the evaluation unit 300 is also fitted or provided with an early warning module 300c and a display module 300d. The early warning module 300c periodically compares the user's intervention target for the current period with that for the previous period; when the intervention target changes, the early warning module 300c generates first early warning information, which can then be displayed through the display module 300d.
Preferably, the comparison period at which the evaluation unit 300 periodically compares the user's intervention target for the current period with that for the previous period may be set manually. For example, the early warning module 300c of the evaluation unit 300 may use a period of at least one of days, weeks, months, quarters, and years. For example, it may be configured to check daily whether the corresponding user's intervention target has changed. As another example, it may be configured to check on both a daily and a weekly basis; that is, the early warning module 300c compares whether the intervention target has changed between two adjacent days, and also whether it has changed between two adjacent weeks.
Preferably, the early warning module 300c of the evaluation unit 300 may periodically compare the user's intervention target for the current period with that for the previous period using at least two mutually different comparison periods. Different comparison periods may correspond to mutually different changes in care measures.
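The multi-period comparison described above can be sketched as follows. The data layout (`history` mapping each period name to a chronological list of intervention-target sets) and the function name are illustrative assumptions, not the device's actual interface:

```python
def check_intervention_targets(history, periods=("day", "week")):
    """Compare the latest intervention targets with the previous ones for
    each configured comparison period; return the periods whose targets
    changed (i.e., the periods that should trigger first early warning
    information). `history` maps a period name to a list of target sets,
    oldest first."""
    alerts = []
    for period in periods:
        records = history.get(period, [])
        # A change between the two most recent records triggers an alert.
        if len(records) >= 2 and records[-1] != records[-2]:
            alerts.append(period)
    return alerts
```

In this sketch, a daily and a weekly change could each be mapped to a different adjustment of the care measures, matching the preference expressed above.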
According to a preferred embodiment, the content evaluation module 300b derives causally related variable features through the neural network based on the causal results. The content evaluation module 300b extracts, in chronological order, the input content and time-series action data corresponding to the relevant variable features from the cognitive knowledge database, and performs causal relationship reasoning on these data through a Bayesian network.
When the content evaluation module 300b evaluates, periodically or aperiodically, the input content acquired by the user terminal 100 and the corresponding time-series action data generated during the user's interaction with the user terminal 100 over a past period of time, the content evaluation module 300b can search the weights of one or more hidden layers of the neural network close to the output layer, based on the causal relationship, to find one or more weights highly associated with the intervention target, together with the relevant variable features corresponding to those weights.
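For the simplest single-hidden-layer case, the search for features highly associated with the output might be approximated by ranking input features by the magnitude of their total weight path to the output. This is a crude illustrative heuristic under that one-layer assumption, not the module's actual procedure, and the function name is hypothetical:

```python
import numpy as np

def top_related_features(w_hidden_to_out, w_in_to_hidden, names, k=3):
    """Rank input features by the magnitude of their aggregate weight path
    through a single hidden layer to the output, as a rough association
    score. `w_hidden_to_out` has shape (n_hidden,); `w_in_to_hidden` has
    shape (n_hidden, n_inputs); `names` labels the inputs."""
    influence = np.abs(w_hidden_to_out @ w_in_to_hidden)  # shape (n_inputs,)
    order = np.argsort(influence)[::-1][:k]
    return [names[i] for i in order]
```

Deeper networks would need attribution methods (e.g., gradient-based saliency) rather than raw weight products, but the intent, tracing output-adjacent weights back to input features, is the same.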
Embodiment 2
The cognitive knowledge database comprises at least a raw database, a data processing unit, and an inference unit. The raw database stores various raw physiological index data, care measure data, health state data, and the like. The data processing unit constructs the data set. The inference unit calculates the average causal effects among the various health states, care measures, and sign data.
The raw database can collect information related to one or more signs or disorders based on the sign data and/or disorder data and store it by category, so that the data processing unit can obtain the main feature parameters from the information in the raw database and construct the data set from those parameters. The inference unit builds a Bayesian network based on the main feature parameters and the data set to mine the average causal effects between disorders from the data patterns, making it possible to determine from the average causal effects whether disorders constitute complications of one another.
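One conventional estimator of an average causal effect between a binary treatment (e.g., a care measure) and a binary outcome (e.g., a health state) is back-door adjustment over a confounder. The sketch below assumes binary variables and a single observed confounder, which is a simplification of whatever estimator the inference unit actually employs; the field names and function name are illustrative:

```python
from collections import defaultdict

def average_causal_effect(records, treatment, outcome, confounder):
    """Back-door adjustment estimate of E[Y|do(T=1)] - E[Y|do(T=0)],
    stratifying on one confounder Z:
        ACE = sum_z ( E[Y|T=1,Z=z] - E[Y|T=0,Z=z] ) * P(Z=z).
    `records` is a list of dicts with 0/1-valued fields."""
    strata = defaultdict(lambda: {0: [], 1: []})
    counts = defaultdict(int)
    for r in records:
        strata[r[confounder]][r[treatment]].append(r[outcome])
        counts[r[confounder]] += 1
    total = len(records)
    ace = 0.0
    for z, groups in strata.items():
        if not groups[0] or not groups[1]:
            continue  # stratum lacks support for both arms; skipped here
        diff = sum(groups[1]) / len(groups[1]) - sum(groups[0]) / len(groups[0])
        ace += diff * counts[z] / total
    return ace
```

In practice the adjustment set would come from the learned Bayesian network structure rather than being fixed in advance.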
The inference unit is capable of building a causal relationship network model by means of a bayesian neural network.
When the inference unit can obtain, based on the cognitive ability evaluation result, the relevant variable features causing a change in the user's cognitive ability, the content evaluation module 300b extracts, in chronological order, the input content and time-series action data corresponding to those variable features, and the inference unit in the knowledge database performs causal relationship inference on the data. For example, when the variable features most highly associated with the executive sub-class score are the touch-screen gesture type, the delay in answering a call, and the completeness and accuracy of language expression, the inference unit may extract from the time-series action database all data contents of the user's first input content, first time-series action data, second input content, and second time-series action data, and then input them into the cognitive knowledge database to obtain a knowledge-graph subgraph and calculate the average causal effect and the intervention target. One or more relevant variable features may be associated with a change in the user's cognitive ability.
According to a preferred embodiment, the inference unit generating a causal relationship network model involving a plurality of related feature variables based on the generated normalized data comprises:
S101. forming a network fragment library based on the variables through a Bayesian fragment counting process;
S102. forming a whole set of trial networks, each trial network of the whole set being constructed from a different subset of network fragments;
S103. globally optimizing the whole set of trial networks by evolving each trial network through local transformations of simulated annealing, to generate a consistent causal relationship network model.
The steps by which the inference unit generates a causal relationship network model involving a plurality of related feature variables from the generated normalized data, using a Bayesian network algorithm, are explained in more detail below. However, one of ordinary skill in the art will recognize that other means of Bayesian analysis may be used.
S101. A network fragment library is formed based on the variables through a Bayesian fragment counting process. Preferably, the normalized data of the plurality of relevant feature variables may be input into the inference unit as an input data set. The inference unit may form a library of "network fragments" that includes the associations and relationships of the relevant feature variables for each sub-class of cognitive ability. The inference unit selects a subset of network fragments from the library and constructs an initial trial network from the selected subset. The artificial-intelligence-based inference unit can also select a different subset of network fragments from the library to construct another initial trial network. Finally, a whole set of initial trial networks (e.g., 10 networks) is formed from different subsets of network fragments in the library; this process is called parallel whole-set sampling. Each trial network in the whole set is evolved or optimized by adding, subtracting, and/or replacing network fragments from the library. After the optimization/evolution process is completed, the inference unit may describe the whole set of trial networks as a causal network model. The whole set of relationship network models generated by the inference unit can be used to show the association behaviors among the sub-classes of cognitive ability, the relevant feature variables, and other feature variables. A network fragment defines a quantitative relationship among each possible small set (e.g., a set of two to three members, or of two to four members) of the measured relevant feature variables (the input data). The relationships between the feature variables in a fragment may be linear, logistic, polynomial, or other explicit or implicit forms.
Each relationship in a fragment is assigned a Bayesian probability score that reflects how likely the candidate relationship is given the input data, and that also penalizes the relationship for its mathematical complexity. By scoring all possible pairwise and three-way relationships inferred from the input data, the most likely fragments in the library can be identified. Various model types may be used in fragment counting, including but not limited to linear regression, logistic regression, analysis of variance (ANOVA) models, analysis of covariance (ANCOVA) models, nonlinear/polynomial regression models, and even nonparametric regression.
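The complexity-penalized scoring of candidate fragments can be illustrated with a BIC-style score for a two-variable polynomial fragment. The exact Bayesian probability score used by the inference unit is not specified in the text, so this is only an analogous sketch with an assumed penalty form:

```python
import numpy as np

def fragment_score(x, y, degree):
    """Score a candidate two-variable network fragment by fitting a
    polynomial of the given degree and penalising model complexity
    (BIC-style): higher scores favour relationships that explain the
    data with fewer parameters."""
    n = len(x)
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    rss = float(np.sum(residuals ** 2)) + 1e-12  # guard against exact fits
    k = degree + 1                               # number of parameters
    # Log-likelihood term minus a complexity penalty of k*ln(n)
    return float(-n * np.log(rss / n) - k * np.log(n))
```

Scoring every candidate relationship this way and keeping the top-scoring ones mirrors the fragment-identification step described above, with polynomial regression standing in for the wider family of model types listed.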
S102. A whole set of trial networks is formed, each trial network being constructed from a different subset of network fragments. In the causal inference process, the inference unit may construct each network in the whole set of initial trial networks from a subset of fragments in the fragment library, each initial trial network from a different subset. The model is evolved or optimized by determining the most likely factorization and the most likely parameters given the input data. Thereafter, given a training set of the input data, the inference unit may obtain the network that best matches the input data by evaluating a scoring function for each network.
S103. The whole set of trial networks is globally optimized by evolving each trial network through local transformations of simulated annealing, to generate a consistent causal relationship network model. For example, the trial networks may be evolved and optimized according to the Metropolis Monte Carlo sampling algorithm. Simulated annealing of the trial networks can be used to optimize or evolve each trial network in the whole set through local transformations.
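The annealing loop of step S103 can be sketched generically: a trial network is repeatedly modified by a local transformation, and worse candidates are accepted with a probability that shrinks as the temperature cools (the Metropolis criterion). The toy integer search in the test below stands in for edits to a trial network; all names and parameter values are illustrative:

```python
import math
import random

def anneal(initial, score, neighbor, steps=1000, t0=1.0, cooling=0.995, seed=0):
    """Generic simulated-annealing loop. `score` is maximised; `neighbor`
    produces a locally transformed candidate (for a trial network, e.g.
    adding, removing, or swapping a fragment)."""
    rng = random.Random(seed)
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        delta = score(candidate) - score(current)
        # Always accept improvements; accept worse moves with
        # probability exp(delta / t), which vanishes as t cools.
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = candidate
            if score(current) > score(best):
                best = current
        t *= cooling
    return best
```

For the whole-set procedure described above, each trial network in the set would be evolved with such a loop, and the optimized set then read out as the consistent causal relationship network model.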
It should be noted that the above-described embodiments are exemplary, and that a person skilled in the art, in light of the present disclosure, may devise various solutions that, although not explicitly described herein, still fall within the scope of the present disclosure. It should be understood by those skilled in the art that the description and drawings are illustrative and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.
The present specification contains several inventive concepts, such as those introduced by "preferably", "according to a preferred embodiment", or "optionally", each such paragraph disclosing a separate concept; the applicant reserves the right to file a divisional application according to each inventive concept.

Claims (10)

1. A causal reasoning device for care of cognition impaired patients, comprising at least:
the user terminal (100), comprising at least a content acquisition part (100 a), a first Bluetooth module (100 b), a first communication module (100 c), and an input module (100 d), wherein the user terminal (100) is configured to acquire, through the content acquisition part (100 a), the input content entered by a user through the input module (100 d) during the interaction between the user and the user terminal, and the first Bluetooth module (100 b) and the first communication module (100 c) can each be used for communication with the evaluation unit (300);
a cognitive knowledge database storing at least physiological index data, care measure data, and health status data, among which corresponding association relationships are established; the cognitive knowledge database establishes a communication connection with the evaluation unit (300);
an evaluation unit (300) for transmitting, to the cognitive knowledge database, the input content acquired from the user terminal (100) and time-series action data related to the input content, so as to receive at least one piece of intervention target information related to a user;
the evaluation unit (300) infers and generates at least one care plan based on the key intervention targets;
the evaluation unit (300) sends the care plan to the user side (100) for display.
2. The causal inference apparatus according to claim 1, wherein the evaluation unit (300) comprises at least a content evaluation module (300 b), a second communication module (300 a), the content evaluation module (300 b) being configured to be able to obtain from the first communication module (100 c) via the second communication module (300 a) an input content entered by a user via the input module (100 d) and time-sequential action data related to the input content;
wherein the evaluation unit (300) can, through the content evaluation module (300 b), establish the association relationship between the user's physiological index data and health state data based on the input content and/or the time-series action data, and send it to the cognitive knowledge database,
the cognitive knowledge database feeds back knowledge-graph subgraphs to the content rating module (300 b) based on a generalized causal relationship algorithm to determine a causal model structure for a user,
The cognitive knowledge database calculates average causal effects of the health status data, the care measure data and the physiological data based on at least one intervention method to determine key intervention targets having causal relationships with the cognitive health status.
3. The causal reasoning apparatus according to claim 2, wherein the user side is further provided with or equipped with a classification module (100 f), the classification module (100 f) is configured to classify the input content of the user acquired by the content acquisition unit (100 a), and to classify the time-series action data of the user acquired by the wearable device (200), and to transmit the input content classified by the classification module (100 f) and the time-series action data corresponding to the input content to the content evaluation module (300 b) of the evaluation unit (300) through the first communication module (100 c).
4. A causal inference apparatus according to claim 3, wherein the input content acquired from the user side (100) and the time-series action data related to the input content are sent to the cognitive knowledge database to extract a corresponding knowledge-graph subgraph, and the method for constructing an initial structural causal graph comprises:
a conditional independence test method based on BRT test statistics calculates the Hellinger distance between each item of health status data and the sign data of the user,
checking whether the Hellinger distance is 0, so as to obtain the conditional independence among the variables and determine whether correlation or causal relationships exist between the user's various health states and the sign data,
and determining a causal model structure for the user based on the causal relationships, realizing individual-oriented extraction of a causal care knowledge-graph subgraph, and constructing an individual-oriented initial structural causal graph.
5. The causal reasoning device according to claim 4, characterized in that the user side (100) is further equipped or provided with an identity recognition module (100 g), the identity recognition module (100 g) being capable of assigning a unique identity to the user and to a wearable device (200) carried by the user and storing the identity in the identity recognition module, the wearable device (200) being capable of transmitting electromagnetic waves carrying the identity to a limited communication range,
the identity recognition module (100 g) can be used for recognizing electromagnetic waves carrying the identity mark so as to recognize whether input content and/or time sequence action data corresponding to the input content acquired by the user terminal (100) are generated by a user carrying the wearable device (200) with the identity mark.
6. The causal reasoning device according to any one of claims 1-5, characterized in that the evaluation unit (300) is further provided with an early warning module (300 c) and a display module (300 d), the early warning module (300 c) being able to compare the user's cognitive ability score for the current period with that for the previous period, and to generate first early warning information when the period-over-period decrease in the user's cognitive ability score exceeds a preset trigger threshold, the first early warning information being displayable by the display module (300 d).
7. The causal inference device according to any one of claims 1 to 6, wherein the intervention method for calculating the average causal effect in the cognitive knowledge database comprises at least one of the do-calculus, the back-door adjustment method, the front-door adjustment method, and the instrumental variable method.
8. The causal inference apparatus of claim 7, wherein the cognitive knowledge database calculates the Hellinger distance by:
the Hellinger distance between each health status and the sign data is calculated based on a Copula density function.
9. The causal inference apparatus of claim 7, further comprising a wearable device (200), said wearable device (200) establishing a communication connection with the evaluation unit (300) via a second bluetooth module (200 a),
The wearable device (200) is configured to be capable of acquiring time-series action data of a user corresponding to the content acquired by the content acquisition part (100 a) in a time-series manner, and transmitting the time-series action data to a first Bluetooth module (100 b) of the user terminal (100) through the second Bluetooth module (200 a).
10. The causal inference apparatus of claim 7, wherein the means by which the evaluation unit (300) infers and generates at least one care plan based on the key intervention target comprises:
transmitting information to a cognitive knowledge database based on the sign data of the user such that the cognitive knowledge database feeds back to the evaluation unit (300) at least one care plan corresponding to and enabled by the key intervention target,
the evaluation unit (300) confirms the care plan based on the living conditions input by the user through the user side.
CN202310209204.5A 2023-03-01 2023-03-01 Causal reasoning device for care of cognition disorder patient Pending CN116344048A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310209204.5A CN116344048A (en) 2023-03-01 2023-03-01 Causal reasoning device for care of cognition disorder patient

Publications (1)

Publication Number Publication Date
CN116344048A true CN116344048A (en) 2023-06-27

Family

ID=86888669



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination