CN116775996A - Visual training project recommending method and device based on user feedback - Google Patents

Visual training project recommending method and device based on user feedback

Info

Publication number
CN116775996A
CN116775996A (application number CN202310747623.4A)
Authority
CN
China
Prior art keywords
user
training
data
item
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310747623.4A
Other languages
Chinese (zh)
Inventor
谢伟浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shijing Medical Software Co ltd
Original Assignee
Guangzhou Shijing Medical Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shijing Medical Software Co ltd
Priority: CN202310747623.4A
Publication: CN116775996A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a visual training item recommendation method and device based on user feedback. The method comprises: acquiring user data, where the user data includes user information together with scores and preference values for training items; inputting the user data into a training item prediction model and predicting the next training item based on a pre-constructed classification label; and recommending the three highest-scoring training items to the user according to the scores of the prediction results. The encoder-decoder structure of TabNet is trained in an unsupervised manner on the vision examination results and the user information to obtain user features, from which the classification label is constructed; the classification label is set according to search sample data and matching sample data. By comprehensively considering both the objective data and the subjective preferences of the user, the embodiments of the application can recommend items that are objectively more effective for training and that the user is subjectively more willing to engage with, thereby achieving a better recommendation effect.

Description

Visual training project recommending method and device based on user feedback
Technical Field
The present application relates to the field of visual training, and in particular to a visual training item recommendation method and device, a terminal device, and a computer-readable storage medium based on user feedback.
Background
Visual training is personalized training that improves visual function and visual performance. In particular, it can address common visual defects, including abnormal visual information processing, abnormal vision-movement coordination, and rehabilitation of vision after cerebral trauma or stroke, all of which can be improved through visual training.
Typically, before performing visual training, the patient is required to undergo multiple visual function examinations, such as binocular vision tests, binocular refraction tests, gaze quality tests, contrast sensitivity tests, simultaneous vision function tests, fusion function tests, stereoscopic vision function tests, and the like. Based on these examination data, an expert, doctor, or related technician arranges a visual training regimen according to his or her own experience. The formulation of the visual training scheme therefore depends heavily on human experience, and that subjective experience directly influences the training effect. On the other hand, the prior art also includes rule-based approaches that automatically generate a training scheme so as to reduce the dependence on experts or doctors, but such approaches require relatively complicated rules to be formulated and cannot provide a reasonable visual training scheme tailored to the user's situation in a personalized and fine-grained manner. For example, feedback information from the user cannot be taken into account during formulation.
Disclosure of Invention
The present application provides a visual training item recommendation method and device, a terminal device, and a computer-readable storage medium based on user feedback, which are intended to solve the technical problem that the prior art cannot provide a reasonable visual training scheme tailored to the user's situation in a personalized and fine-grained manner.
In order to solve the above technical problems, an embodiment of the present application provides a visual training item recommendation method based on user feedback, including:
acquiring user data; the user data comprises user information, the vision examination results at the current moment, the last training item, the number of completed trainings, the last training score value, and the preference value for the last training item; wherein the preference value is obtained through feedback from the user side;
inputting the user data into a training item prediction model, and predicting the next training item based on a pre-constructed classification label; and selecting the three training items with the highest scores from the prediction results to recommend to the user;
wherein the training item prediction model is constructed based on the TabNet network structure; TabNet comprises an encoder-decoder structure and a prediction structure; the encoder-decoder structure is used to perform unsupervised training on the vision examination results and the user information to obtain user features, and the user features are used to construct the classification label; the classification label is set according to a plurality of user sample data; the user sample data includes search sample data and matching sample data.
As a preferred scheme, the classification label is set according to the next training item of the search sample and the next training items of the screened matching samples; the screened matching samples are obtained by screening according to the similarity between the search sample and the matching samples.
As a preferred scheme, the screened matching samples are obtained by screening according to the similarity between the search sample and the matching samples, specifically:
screening out the matching samples whose cosine similarity is greater than a preset value;
or, screening out a number of matching samples in descending order of cosine similarity.
Preferably, the user information includes age and gender; the vision examination results include visual acuity, mydriasis, refraction, gaze quality, simultaneous vision, fusion function, stereoscopic vision, amblyopia type, eye position, nystagmus condition, and diagnosis results.
Preferably, before the user data is input into the training item prediction model, the method further comprises: normalizing the last training score value and the preference value for the last training item; and performing an embedding operation on the discrete features in the user information and the vision examination results, wherein the loss function of the training item prediction model adopts a cross-entropy function.
Correspondingly, an embodiment of the present application also provides a visual training item recommendation device based on user feedback, comprising a user data acquisition module and a recommendation module; wherein
the user data acquisition module is used to acquire user data; the user data comprises user information, the vision examination results at the current moment, the last training item, the number of completed trainings, the last training score value, and the preference value for the last training item; wherein the preference value is obtained through feedback from the user side;
the recommendation module is used to input the user data into a training item prediction model and predict the next training item based on a pre-constructed classification label, and to select the three training items with the highest scores from the prediction results to recommend to the user;
wherein the training item prediction model is constructed based on the TabNet network structure; TabNet comprises an encoder-decoder structure and a prediction structure; the encoder-decoder structure is used to perform unsupervised training on the vision examination results and the user information to obtain user features, and the user features are used to construct the classification label; the classification label is set according to a plurality of user sample data; the user sample data includes search sample data and matching sample data.
As a preferred scheme, the classification label is set according to the next training item of the search sample and the next training items of the screened matching samples; the screened matching samples are obtained by screening according to the similarity between the search sample and the matching samples.
As a preferred scheme, the screened matching samples are obtained by screening according to the similarity between the search sample and the matching samples, specifically:
screening out the matching samples whose cosine similarity is greater than a preset value;
or, screening out a number of matching samples in descending order of cosine similarity.
Correspondingly, the embodiment of the application also provides a terminal device, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor realizes the visual training project recommending method based on user feedback when executing the computer program.
Correspondingly, the embodiment of the application also provides a computer readable storage medium, which comprises a stored computer program, wherein the equipment where the computer readable storage medium is located is controlled to execute the visual training program recommendation method based on the user feedback when the computer program runs.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
the embodiment of the application provides a visual training item recommending method and device based on user feedback, terminal equipment and a computer readable storage medium, wherein the visual training item recommending method comprises the following steps: acquiring user data; the user data comprises user information, a vision examination result at the current moment, a last training item, trained times, a last training score value and a preference value for the last training item; wherein, the preference value is obtained through feedback of a user side; inputting the user data into a training item prediction model, and predicting the next training item based on a pre-constructed classification label; selecting training items with top three scores to recommend to the user according to the scores of the predicted results; the training project prediction model is constructed based on a TabNet network structure; the TabNet comprises a coding and decoding structure and a prediction structure; the encoding and decoding structure is used for performing unsupervised training on the vision inspection result and user information to obtain user characteristics, and the user characteristics are used for constructing the classification labels; the classification labels are set according to a plurality of user sample data; the user sample data includes lookup sample data and matching sample data. By implementing the embodiment of the application, the user information, the vision examination result, the last training item, the trained times, the last training score and the preference value of the last training item are input into the prediction model, and the classification label is set according to the search sample data and the matching sample data.
Drawings
Fig. 1: a schematic flow diagram of an embodiment of the visual training item recommendation method based on user feedback provided by the present application.
Fig. 2: a schematic diagram of an embodiment of the tabular training data provided by the present application.
Fig. 3: a schematic structural diagram of an embodiment of the visual training item recommendation device based on user feedback provided by the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
Example 1
Referring to fig. 1, the visual training item recommendation method based on user feedback according to an embodiment of the present application includes steps S1 and S2; wherein
step S1: obtaining user data; the user data comprises user information, the vision examination results at the current moment, the last training item, the number of completed trainings, the last training score value, and the preference value for the last training item; wherein the preference value is obtained through feedback from the user side.
In this embodiment, referring to fig. 2, when a prediction is required, the user's vision examination results and user information can be obtained from the user side. The user information mainly includes the user's name, age, and gender; the vision examination results include, but are not limited to, visual acuity, mydriasis, refraction, gaze quality, simultaneous vision, fusion function, stereoscopic vision, amblyopia type, eye position, nystagmus condition, and diagnosis results.
In this embodiment, the score value may be obtained through a scoring system built into the training program. For example, some visual training products currently on the market contain several training modules, each comprising at least 20 training items. The training items are usually mini-games that guide the user to perform actions such as eyeball rotation or fixation, thereby achieving the purpose of visual training. Some mini-games calculate the user's score for a training item based on the pass success rate. The training program adopted in this embodiment is mainly a game program developed as a digital therapy for treating strabismus. Alternatively, the preference value may be a subjective score given by the user, which the user may input through an external device at the user terminal; that is, the preference value is fed back to the terminal that executes the visual training item recommendation method based on user feedback.
Preferably, the user data may be selected from historical data according to treatment effect; for example, the data with the top 20% of treatment effect is screened out. The treatment effect can be evaluated according to the degree of vision improvement before and after training.
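As an illustration of the screening step above, the sketch below selects the historical records whose vision improvement (a stand-in measure of treatment effect) is in the top 20%. The record fields (`vision_before`, `vision_after`) and the sample data are assumptions for illustration, not values from the patent:

```python
# Hypothetical sketch: keep only the historical records whose treatment
# effect (vision improvement before vs. after training) is in the top 20%.
# Field names and numbers are illustrative assumptions.

def screen_top_records(records, keep_ratio=0.2):
    """Return the keep_ratio fraction of records with the largest improvement."""
    scored = sorted(
        records,
        key=lambda r: r["vision_after"] - r["vision_before"],
        reverse=True,
    )
    k = max(1, int(len(scored) * keep_ratio))
    return scored[:k]

history = [
    {"user": "a", "vision_before": 0.3, "vision_after": 0.8},   # +0.50
    {"user": "b", "vision_before": 0.5, "vision_after": 0.6},   # +0.10
    {"user": "c", "vision_before": 0.4, "vision_after": 0.85},  # +0.45
    {"user": "d", "vision_before": 0.6, "vision_after": 0.65},  # +0.05
    {"user": "e", "vision_before": 0.2, "vision_after": 0.4},   # +0.20
]

top = screen_top_records(history)  # top 20% of 5 records -> 1 record
```

Any monotonic measure of improvement could replace the simple difference used here.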
Further, the tabular training data may be constructed from the user data (refer to fig. 2). When the first training item is being selected, the current number of completed trainings is 0, there is no previous training item, the score of the previous training item is 0, and the preference for the previous training item is 0.
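The first-session defaults described above can be sketched as follows; all field names (`times_trained`, `last_item`, etc.) are illustrative stand-ins for the columns of fig. 2:

```python
# Illustrative sketch: building one row of the tabular training data.
# For the very first training session there is no history, so the
# history-dependent fields default to 0 / None as described above.

def make_row(user_info, exam_result, history=None):
    """Build one feature row; `history` is the last training record or None."""
    row = dict(user_info)
    row.update(exam_result)
    if history is None:  # first training item being selected
        row.update({"times_trained": 0, "last_item": None,
                    "last_score": 0, "last_preference": 0})
    else:
        row.update({"times_trained": history["times_trained"],
                    "last_item": history["item"],
                    "last_score": history["score"],
                    "last_preference": history["preference"]})
    return row

first = make_row({"age": 8, "gender": "F"}, {"visual_acuity": 0.5})
```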
Meanwhile, the last training score value and the preference value for the last training item are normalized, and an embedding operation is performed on the discrete features in the user information and the vision examination results, while the remaining (continuous) features are left unprocessed. The loss function of the training item prediction model may adopt the cross-entropy loss function.
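A minimal sketch of this preprocessing step, assuming a 0-100 score scale, a 0-5 preference scale, and illustrative category lists; in the real model the embedding vectors would be learned, so only the category-to-index lookup is shown here:

```python
# Hedged sketch of the preprocessing described above. Scales and category
# lists are assumptions; the patent does not specify them.

def normalize(value, lo, hi):
    """Min-max normalisation into [0, 1]."""
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def build_vocab(categories):
    """Map each discrete category to an integer index for an embedding layer."""
    return {c: i for i, c in enumerate(categories)}

score_n = normalize(80, 0, 100)  # last training score, assumed 0-100 scale
pref_n = normalize(4, 0, 5)      # preference value, assumed 0-5 scale

gender_vocab = build_vocab(["M", "F"])
amblyopia_vocab = build_vocab(["none", "refractive", "strabismic"])
```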
Step S2: inputting the user data into a training item prediction model, and predicting the next training item based on a pre-constructed classification label; and selecting the three training items with the highest scores from the prediction results to recommend to the user;
wherein the training item prediction model is constructed based on the TabNet network structure; TabNet comprises an encoder-decoder structure and a prediction structure; the encoder-decoder structure is used to perform unsupervised training on the vision examination results and the user information to obtain user features, and the user features are used to construct the classification label; the classification label is set according to a plurality of user sample data; the user sample data includes search sample data and matching sample data.
In this embodiment, the user data input to the model mainly includes the user information, the vision examination results at the current moment (before the T-th training), the last training item (that of the (T-1)-th training), the number of completed trainings (T-1), the last training score value, and the preference value for the last training item. Each visual training session corresponds to one training item.
Optionally, a deep-learning-based tabular data classification method such as TabNet (TabNet: Attentive Interpretable Tabular Learning) is used to model the user's vision examination results before the T-th training, the training item used at the (T-1)-th training, the number of completed trainings (namely T-1), the score of the (T-1)-th training, and the preference for the (T-1)-th training item, in order to predict the next (T-th) training item.
Considering that training items better than the ones the user actually used may exist, and in order to increase output diversity and prevent overfitting, the label does not directly adopt the next training item actually used by the user to be predicted. Instead, similar users are searched for, and the top n items most frequently adopted by those similar users are combined with the training item actually used by the user to be predicted to form the prediction target of the model.
Specifically, the TabNet network structure includes an encoder-decoder structure and a prediction structure. The encoder-decoder structure performs unsupervised training on the vision examination results before the T-th training together with user information such as age and gender, and the output of the encoder is used as the feature representation of the user to be predicted. Then, the vision examination results of some users before their T-th training, together with the related user information (gender, age, etc.), are taken as search samples, and corresponding matching samples are determined and screened for each search sample. The screening rule may be based on the similarity between the search sample and the matching samples. The prediction structure is used to perform supervised training on the next training items of a plurality of user samples.
Specifically, as one example of this embodiment, when the cosine similarity between a search sample and a matching sample is greater than a preset value (preferably 0.9), the matching sample is determined to correspond to the search sample, and the next training item of each matching sample exceeding the preset value may be used for training. As another example of this embodiment, a number of matching samples with the highest cosine similarity may be screened out in descending order of cosine similarity. Apart from the search samples, the matched objects may be the vision examination results of other users before a certain training together with their related user information (gender, age, etc.).
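Both screening rules can be sketched as follows; the two-dimensional vectors stand in for the TabNet encoder outputs, and the 0.9 threshold matches the preferred value above. Sample IDs and vectors are illustrative:

```python
import math

# Sketch of the two screening rules: keep matching samples whose cosine
# similarity with the search sample exceeds a threshold, or keep the
# top-k most similar ones. Vectors stand in for encoder features.

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def screen_by_threshold(search, candidates, thresh=0.9):
    return [c for c in candidates if cosine(search, c["feat"]) > thresh]

def screen_top_k(search, candidates, k=5):
    return sorted(candidates, key=lambda c: cosine(search, c["feat"]),
                  reverse=True)[:k]

search_vec = [1.0, 0.0]
cands = [{"id": 1, "feat": [0.95, 0.1]},  # similarity close to 1
         {"id": 2, "feat": [0.0, 1.0]}]   # similarity 0

kept = screen_by_threshold(search_vec, cands)
```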
Then, the classification label is set according to the next training item of the search sample and the next training items of the screened matching samples (as a preferred implementation, the next training items of five matching samples plus the next training item of the search sample, 6 training items in total); the matching samples are obtained by screening according to the cosine similarity between the search sample and the matching samples. The probability value of the next training item of the search sample may be set to 0.25, and the probability value of each of the other training items to 0.15, so that the total probability over the 6 training items is 1.
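A sketch of this soft-label construction, using hypothetical training-item names; the probabilities follow the 0.25 / 0.15 split described above:

```python
# Sketch of the soft classification label: the search sample's own next
# item gets probability 0.25 and each of the five items taken from the
# screened matching samples gets 0.15, so the six values sum to 1.
# Item names are illustrative, not from the patent.

def build_label(search_next_item, matched_next_items,
                search_p=0.25, match_p=0.15):
    """Build a {training_item: probability} soft label."""
    label = {search_next_item: search_p}
    for item in matched_next_items:
        label[item] = label.get(item, 0.0) + match_p
    return label

label = build_label("pursuit_game",
                    ["saccade_game", "fusion_game", "stereo_game",
                     "fixation_game", "contrast_game"])

total = sum(label.values())  # 0.25 + 5 * 0.15 = 1.0
```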
After the classification labels are set, the training item prediction model can be trained with them. Combining the (T-1)-th training item and its score, the preference value for the (T-1)-th item, the number of trainings before the T-th one, the user's vision examination results before the T-th training, and the related user information, the TabNet network structure learns the interrelationship between training items in order to predict the next training item. The training items are then sorted by the scores of the prediction results, and the three highest-scoring items are selected and sent to the user terminal, achieving the purpose of recommending them to the user. By implementing this embodiment of the application, a reasonable training scheme can be recommended to the user in a targeted manner based on factors such as user preference and training item scores (the user's immediate performance), without formulating complicated rules as in the prior art.
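The final top-3 selection can be sketched as follows; the score dictionary is illustrative model output, not real predictions:

```python
# Sketch of the final recommendation step: sort the model's predicted
# per-item scores and return the three highest-scoring training items.
# Item names and scores are illustrative assumptions.

def top3(predicted_scores):
    """Return the three training items with the highest predicted scores."""
    ranked = sorted(predicted_scores.items(), key=lambda kv: kv[1],
                    reverse=True)
    return [item for item, _ in ranked[:3]]

scores = {"pursuit_game": 0.31, "saccade_game": 0.22, "fusion_game": 0.18,
          "stereo_game": 0.15, "fixation_game": 0.09, "contrast_game": 0.05}

recommended = top3(scores)
```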
Correspondingly, referring to fig. 3, an embodiment of the present application also provides a visual training item recommendation device based on user feedback, comprising a user data acquisition module 101 and a recommendation module 102; wherein
the user data acquisition module 101 is configured to acquire user data; the user data comprises user information, the vision examination results at the current moment, the last training item, the number of completed trainings, the last training score value, and the preference value for the last training item; wherein the preference value is obtained through feedback from the user side;
the recommendation module 102 is configured to input the user data into a training item prediction model and predict the next training item based on a pre-constructed classification label, and to select the three training items with the highest scores from the prediction results to recommend to the user;
wherein the training item prediction model is constructed based on the TabNet network structure; TabNet comprises an encoder-decoder structure and a prediction structure; the encoder-decoder structure is used to perform unsupervised training on the vision examination results and the user information to obtain user features, and the user features are used to construct the classification label; the classification label is set according to a plurality of user sample data; the user sample data includes search sample data and matching sample data.
As a preferred scheme, the classification label is set according to the next training item of the search sample and the next training items of the screened matching samples; the screened matching samples are obtained by screening according to the similarity between the search sample and the matching samples.
As a preferred scheme, the screened matching samples are obtained by screening according to the similarity between the search sample and the matching samples, specifically:
screening out the matching samples whose cosine similarity is greater than a preset value;
or, screening out a number of matching samples in descending order of cosine similarity.
Correspondingly, the embodiment of the application also provides a terminal device, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor realizes the visual training project recommending method based on user feedback when executing the computer program.
The processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, which is a control center of the terminal, connecting various parts of the entire terminal using various interfaces and lines.
The memory may be used to store the computer program, and the processor implements the various functions of the terminal by running or executing the computer program stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the device (such as audio data, a phonebook, etc.). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one disk storage device, a flash memory device, or other solid-state storage device.
Correspondingly, the embodiment of the application also provides a computer readable storage medium, which comprises a stored computer program, wherein the equipment where the computer readable storage medium is located is controlled to execute the visual training program recommendation method based on the user feedback when the computer program runs.
If the modules integrated in the visual training item recommendation device based on user feedback are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments through a computer program instructing the relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The foregoing embodiments have been provided for the purpose of illustrating the general principles of the present application, and are not to be construed as limiting the scope of the application. It should be noted that any modifications, equivalent substitutions, improvements, etc. made by those skilled in the art without departing from the spirit and principles of the present application are intended to be included in the scope of the present application.

Claims (10)

1. A visual training program recommendation method based on user feedback, comprising:
acquiring user data, wherein the user data comprises user information, a vision examination result at the current moment, a last training item, the number of completed training sessions, a last training score value, and a preference value for the last training item, the preference value being obtained through feedback from a user side;
inputting the user data into a training item prediction model, and predicting a next training item based on pre-constructed classification labels; and selecting, according to the scores of the prediction results, the three training items with the highest scores to recommend to the user;
wherein the training item prediction model is constructed based on a TabNet network structure; the TabNet comprises an encoder-decoder structure and a prediction structure; the encoder-decoder structure is used for performing unsupervised training on the vision examination result and the user information to obtain user features, and the user features are used for constructing the classification labels; the classification labels are set according to a plurality of pieces of user sample data; and the user sample data comprise search sample data and matching sample data.
2. The visual training program recommendation method based on user feedback according to claim 1, wherein the classification labels are set according to the next training item of a search sample and the next training item of screened matching samples, the screened matching samples being obtained by screening according to the similarity between the search sample and the matching samples.
3. The visual training program recommendation method based on user feedback according to claim 2, wherein the screened matching samples are obtained by screening according to the similarity between the search sample and the matching samples, specifically:
screening out matching samples whose cosine similarity is greater than a preset value;
or, selecting a plurality of matching samples in descending order of cosine similarity.
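The two screening strategies above, thresholding on cosine similarity or keeping the most similar samples in descending order, can be sketched as follows. This is a minimal illustration assuming the search and matching samples are represented as plain numeric feature vectors; the vector values are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def screen_by_threshold(search_vec, matching_vecs, threshold):
    """Keep the indices of matching samples whose cosine similarity to the
    search sample exceeds a preset value (claim 3, first branch)."""
    return [i for i, v in enumerate(matching_vecs)
            if cosine_similarity(search_vec, v) > threshold]

def screen_top_k(search_vec, matching_vecs, k):
    """Keep the k matching samples most similar to the search sample,
    in descending order of cosine similarity (claim 3, second branch)."""
    ranked = sorted(range(len(matching_vecs)),
                    key=lambda i: cosine_similarity(search_vec, matching_vecs[i]),
                    reverse=True)
    return ranked[:k]
```

For example, with a search vector `[1.0, 0.0]` and matching vectors `[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]`, a threshold of 0.5 keeps samples 0 and 2, and the two-nearest screening returns the same pair.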
4. The visual training program recommendation method based on user feedback according to claim 3, wherein the user information comprises age and gender, and the vision examination result comprises visual acuity, mydriasis, optometry, fixation properties, simultaneous vision, fusion function, stereopsis, amblyopia type, eye position, nystagmus condition, and diagnostic results.
5. The visual training program recommendation method based on user feedback according to claim 4, further comprising, before the inputting the user data into a training item prediction model: normalizing the last training score value and the preference value for the last training item; and performing an embedding operation on the discrete features in the user information and the vision examination result, wherein the loss function of the training item prediction model is a cross-entropy function.
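The preprocessing in claim 5 could look roughly like the sketch below: min-max normalization of the continuous feedback values, integer indexing of discrete features ahead of the model's embedding lookup, and a cross-entropy loss on the predicted class probabilities. The helper names and values are illustrative assumptions; a real TabNet pipeline would typically handle the embedding and loss internally:

```python
import math

def min_max_normalize(values):
    """Scale continuous feedback values (e.g. the last training score and the
    preference value) into [0, 1] by min-max normalization."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def index_encode(categories):
    """Map discrete features (e.g. gender, amblyopia type) to integer
    indices, ready for an embedding lookup table inside the model."""
    return {c: i for i, c in enumerate(sorted(set(categories)))}

def cross_entropy(predicted_probs, true_index):
    """Cross-entropy loss for one sample: the negative log of the
    probability the model assigns to the true next training item."""
    return -math.log(predicted_probs[true_index])
```

With a uniform two-class prediction `[0.5, 0.5]`, the loss is `log 2 ≈ 0.693`, and it shrinks toward zero as the model concentrates probability on the correct label.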
6. A visual training item recommendation device based on user feedback, characterized by comprising a user data acquisition module and a recommendation module, wherein:
the user data acquisition module is configured to acquire user data, wherein the user data comprises user information, a vision examination result at the current moment, a last training item, the number of completed training sessions, a last training score value, and a preference value for the last training item, the preference value being obtained through feedback from a user side;
the recommendation module is configured to input the user data into a training item prediction model, predict a next training item based on pre-constructed classification labels, and select, according to the scores of the prediction results, the three training items with the highest scores to recommend to the user;
wherein the training item prediction model is constructed based on a TabNet network structure; the TabNet comprises an encoder-decoder structure and a prediction structure; the encoder-decoder structure is used for performing unsupervised training on the vision examination result and the user information to obtain user features, and the user features are used for constructing the classification labels; the classification labels are set according to a plurality of pieces of user sample data; and the user sample data comprise search sample data and matching sample data.
7. The visual training item recommendation device based on user feedback according to claim 6, wherein the classification labels are set according to the next training item of a search sample and the next training item of screened matching samples, the screened matching samples being obtained by screening according to the similarity between the search sample and the matching samples.
8. The visual training item recommendation device based on user feedback according to claim 7, wherein the screened matching samples are obtained by screening according to the similarity between the search sample and the matching samples, specifically:
screening out matching samples whose cosine similarity is greater than a preset value;
or, selecting a plurality of matching samples in descending order of cosine similarity.
9. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the visual training program recommendation method based on user feedback according to any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored computer program, wherein the computer program, when run, controls a device in which the computer-readable storage medium is located to perform the visual training program recommendation method based on user feedback according to any one of claims 1 to 5.
CN202310747623.4A 2023-06-21 2023-06-21 Visual training project recommending method and device based on user feedback Pending CN116775996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310747623.4A CN116775996A (en) 2023-06-21 2023-06-21 Visual training project recommending method and device based on user feedback

Publications (1)

Publication Number Publication Date
CN116775996A true CN116775996A (en) 2023-09-19

Family

ID=87985525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310747623.4A Pending CN116775996A (en) 2023-06-21 2023-06-21 Visual training project recommending method and device based on user feedback

Country Status (1)

Country Link
CN (1) CN116775996A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210158177A1 (en) * 2019-11-21 2021-05-27 Adobe Inc. Method and system for recommending digital content
CN113641791A (en) * 2021-08-12 2021-11-12 卓尔智联(武汉)研究院有限公司 Expert recommendation method, electronic device and storage medium
US20220058489A1 (en) * 2020-08-19 2022-02-24 The Toronto-Dominion Bank Two-headed attention fused autoencoder for context-aware recommendation
CN114581177A (en) * 2022-02-17 2022-06-03 平安科技(深圳)有限公司 Product recommendation method, device, equipment and storage medium
CN114663198A (en) * 2022-04-25 2022-06-24 未鲲(上海)科技服务有限公司 Product recommendation method, device and equipment based on user portrait and storage medium
CN114783563A (en) * 2022-05-10 2022-07-22 浙江工业大学 Recommendation method for visual training
CN115017413A (en) * 2022-06-16 2022-09-06 咪咕文化科技有限公司 Recommendation method and device, computing equipment and computer storage medium
CN115019933A (en) * 2022-06-16 2022-09-06 浙江工业大学 Amblyopia training scheme recommendation method fusing GMF and CDAE
CN115844696A (en) * 2023-02-24 2023-03-28 广州视景医疗软件有限公司 Method and device for generating visual training scheme, terminal equipment and medium
CN116010793A (en) * 2023-01-04 2023-04-25 浙江网商银行股份有限公司 Classification model training method and device and category detection method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117438065A (en) * 2023-11-14 2024-01-23 杭州亮眼赫姿医疗器械有限公司 Data processing method, system and storage medium of VR vision training instrument
CN117809807A (en) * 2024-01-22 2024-04-02 中科网联(武汉)信息技术有限公司 Visual training method, system and storage medium based on interaction platform
CN117809807B (en) * 2024-01-22 2024-05-31 中科网联(武汉)信息技术有限公司 Visual training method, system and storage medium based on interaction platform

Similar Documents

Publication Publication Date Title
US11636601B2 (en) Processing fundus images using machine learning models
CN116775996A (en) Visual training project recommending method and device based on user feedback
Grill-Spector et al. Visual recognition: As soon as you know it is there, you know what it is
Henderson Human gaze control during real-world scene perception
Canayaz Classification of diabetic retinopathy with feature selection over deep features using nature-inspired wrapper methods
Saeed et al. Sense and learn: Self-supervision for omnipresent sensors
EP3850638B1 (en) Processing fundus camera images using machine learning models trained using other modalities
KR100773107B1 (en) Step by Step Fitness Management System of User Object Using on-line
Rai et al. Visual attention, visual salience, and perceived interest in multimedia applications
Ludwig et al. Automatic identification of referral-warranted diabetic retinopathy using deep learning on mobile phone images
US20220309665A1 (en) Processing fundus images using machine learning models to generate blood-related predictions
CN117370535B (en) Training method of medical dialogue model, medical query method, device and equipment
CN112052874A (en) Physiological data classification method and system based on generation countermeasure network
CN115844696A (en) Method and device for generating visual training scheme, terminal equipment and medium
Abirami et al. A novel automated komodo Mlipir optimization-based attention BiLSTM for early detection of diabetic retinopathy
Porcu et al. Towards the prediction of the quality of experience from facial expression and gaze direction
Pachai et al. The bandwidth of diagnostic horizontal structure for face identification
EP4341944A1 (en) Artificial intelligence based systems and methods for analyzing user-specific skin or hair data to predict user-specific skin or hair conditions
CN114822800A (en) Internet medical triage method and system
Vijayakumar et al. AI-derived quality of experience prediction based on physiological signals for immersive multimedia experiences: research proposal
Kannan et al. Predicting autism in children at an early stage using eye tracking
Bittner et al. The impact of symmetry on the efficiency of human face perception
SE1550325A1 (en) Optimizing recommendations in a system for assessing mobility or stability of a person
Iyer et al. Negative affect homogenizes and positive affect diversifies social memory consolidation across people
Jayarathna et al. Rationale and architecture for incorporating human oculomotor plant features in user interest modeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination