CN115407867B - Intelligent interaction system based on multiple sensors - Google Patents


Info

Publication number
CN115407867B
CN115407867B CN202210861778.6A CN202210861778A
Authority
CN
China
Prior art keywords
module
user
sensor
optimization
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210861778.6A
Other languages
Chinese (zh)
Other versions
CN115407867A (en)
Inventor
Qi Hongxin (齐红心)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xia Qianming
Original Assignee
Xia Qianming
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xia Qianming filed Critical Xia Qianming
Priority to CN202210861778.6A priority Critical patent/CN115407867B/en
Publication of CN115407867A publication Critical patent/CN115407867A/en
Application granted granted Critical
Publication of CN115407867B publication Critical patent/CN115407867B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31: User authentication
    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses an intelligent interaction system based on multiple sensors, comprising a sensor assembly module, a user model construction module and an interaction optimization module. The sensor assembly module is used for acquiring optimizable user data from multiple sensors, the user model construction module is used for constructing a model of the user's usage scenario from the acquired data, and the interaction optimization module is used for visually optimizing the user's interaction scene and interaction mode. The user model construction module comprises a manual input module, a sensor data integration module, an identity classification module and a demand analysis module, wherein the sensor data integration module analyzes and integrates the sensor-acquired data, the identity classification module classifies user identity according to the sensor-acquired data and user habits, and the demand analysis module analyzes the user's interaction demands.

Description

Intelligent interaction system based on multiple sensors
Technical Field
The invention relates to the technical field of intelligent interaction, in particular to an intelligent interaction system based on multiple sensors.
Background
With the rapid development of smart homes, smart televisions account for an ever larger share of daily life, and smart televisions bound to the digital television signal services of the major operators have developed especially quickly. Although mobile electronic devices are extremely popular, the smart television still gives users a sense of gathering together. However, the information presented by a smart television is often too cluttered: the information on a single screen is hard for users to absorb quickly, and an excess of information dampens users' willingness to explore, wasting resources while degrading the user experience. At the same time, as smart television functions grow richer, the operational complexity of their systems increases accordingly. China's population structure is complex, television manufacturers are numerous, and content copyrights are fragmented, so the experience delivered by smart televisions is far from ideal. It is therefore highly necessary to provide an intelligent interaction system based on multiple sensors that improves the utilization of smart television resources and information and enhances the user experience.
Disclosure of Invention
The invention aims to provide an intelligent interaction system based on multiple sensors, so as to solve the problems raised in the background art above.
In order to solve the above technical problems, the invention provides the following technical solution: an intelligent interaction system based on multiple sensors, comprising a sensor assembly module, a user model construction module and an interaction optimization module. The sensor assembly module is used for acquiring optimizable user data from a plurality of sensors, the user model construction module is used for constructing a model of the user's usage scenario from the acquired data, and the interaction optimization module is used for visually optimizing the user's interaction scene and interaction mode. The user model construction module comprises a manual input module, a sensor data integration module, an identity classification module and a demand analysis module, wherein the manual input module is used for entering the identity information and usage habits of an individual user into the system, the sensor data integration module is used for analyzing and integrating the sensor-acquired data, the identity classification module is used for classifying user identity according to the sensor-acquired data and user habits, and the demand analysis module is used for analyzing the user's interaction demands.
According to the above technical solution, the sensor assembly module comprises a deep-sensing camera module, a body movement recording sensor module, a behavior recording module and a data uploading module, wherein the deep-sensing camera module is used for verifying the user's identity and recording gazing behavior, the body movement recording sensor module is used for identifying and recording the user's body movements, the behavior recording module is used for recording the user's behavioral characteristics while watching television, and the data uploading module is used for transmitting the data to a blockchain for storage.
According to the above technical solution, the interaction optimization module comprises a focusing display module, a search gain calculation module, a classification redrawing module and an information architecture optimization module, wherein the focusing display module is used for changing the focused display layout by interfacing with the television display driver board, the search gain calculation module is used for calculating the user's gain during the search process, the classification redrawing module is used for redrawing the classification menus of the UI in a targeted manner, the information architecture optimization module is used for optimizing the module hierarchy of the core functions, and the focusing display module is electrically connected with the search gain calculation module and the classification redrawing module.
According to the above technical solution, in the sensor assembly module, the deep-sensing camera module and the body movement recording sensor module form a series structure to identify and record the user's state while watching television; the deep-sensing camera module has a gaze-sensing function and the body movement recording sensor module has a high-sensitivity multi-axis sensing function. The specific linkage method by which this series structure identifies and records the user's state while watching television is as follows:
step S1: detecting the angle of the sight of the user and judging the gazing state;
step S2: transmitting a starting electric signal to the body movement recording sensor module according to the monitoring result;
step S3: and judging the user behavior according to the data recorded by the body movement recording sensor.
According to the above technical solution, in step S3, the user behavior and the judgment basis thereof specifically include the following classifications:
classification a: the deep-sensing camera detects that a user is in a gazing state;
classification B: the deep-sensing camera detects that the user is not in a gazing state, and the body movement recording sensor detects activity feedback from the user within a time shorter than t;
classification C: the deep-sensing camera detects that the user is not in a gazing state, and the body movement recording sensor detects activity feedback from the user within a time longer than t but shorter than T;
classification D: the deep-sensing camera detects that the user is not in a gazing state, and the body movement recording sensor detects no activity feedback from the user after a time longer than T;
each of the above classifications represents the following possible activities, respectively:
activity A: the user is in a television watching state;
activity B: the user is in other activities after watching television;
activity C: the user enters a light sleep state;
activity D: the user enters a deep sleep state;
the time t and the time T respectively represent the user's light sleep time threshold and deep sleep time threshold, in minutes, and are obtained by combining the user's age with the historical sleep duration.
According to the above technical solution, in the user model construction module, the method for constructing the user model includes the following steps:
step one: manually inputting user characteristics, wherein the user characteristics comprise age, watching interest type and average watching duration;
step two: integrating the data recorded by the sensor module;
step three: classifying the identity of the user by combining the data;
step four: user demand analysis is carried out according to the classification result and the recorded data;
step five: and carrying out interaction optimization on the analysis result of the user demand.
According to the above technical solution, in the fifth step, the method for performing interactive optimization on the analysis result of the user demand further includes the following steps:
optimizing step 1: calculating a search gain R;
optimizing step 2: recording, according to the user's search behavior, the time T_B spent each time the user switches between classification sections and the time T_w spent searching within the current section;
optimizing step 3: determining the effective value G_i of the user's search target according to the historical search period and big data;
optimizing step 4: counting the time T_J spent by the user on non-targeted search.
According to the above technical solution, in the optimizing step 1, the calculation formula of the search gain R is as follows:
where k is a time conversion coefficient with a value range of (0, 1), and T_B, T_w and T_J are in minutes.
According to the above technical solution, the information architecture optimization module comprises content classification optimization, display focusing optimization and interaction flow optimization.
Compared with the prior art, the invention has the following beneficial effects: by providing the deep-sensing camera module and the body movement recording sensor module, the deep-sensing camera can perform facial identity recognition of the user, serving both encryption and identity-recognition purposes; when the user exhibits gazing behavior, the gaze duration can be recorded, and the body movement recording sensor is used in combination to judge whether the user is watching television; because the user's data are private data, storing them on a blockchain reduces the risk of privacy leakage.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the invention. In the drawings:
FIG. 1 is a schematic diagram of the system module composition of the present invention.
Detailed Description
The embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of protection of the present invention.
Referring to fig. 1, the present invention provides the following technical solution: the intelligent interaction system based on multiple sensors comprises a sensor assembly module, a user model construction module and an interaction optimization module. The sensor assembly module is used for acquiring optimizable user data from multiple sensors, the user model construction module is used for constructing a model of the user's usage scenario from the acquired data, and the interaction optimization module is used for visually optimizing the user's interaction scene and interaction mode. The user model construction module comprises a manual input module, a sensor data integration module, an identity classification module and a demand analysis module; the manual input module is used for entering the identity information and usage habits of an individual user into the system, the sensor data integration module is used for analyzing and integrating the sensor-acquired data, the identity classification module is used for classifying user identity according to the sensor-acquired data and user habits, and the demand analysis module is used for analyzing the user's interaction demands. To accommodate a variety of users, the manual input module is provided for entering user information; at the same time, users are classified by combining the data acquired by the sensors, their interaction demands and habits are further analyzed, and a complete user model is constructed.
The sensor assembly module comprises a deep-sensing camera module, a body movement recording sensor module, a behavior recording module and a data uploading module. The deep-sensing camera module is used for verifying the user's identity and recording gazing behavior, the body movement recording sensor module is used for identifying and recording the user's body movements, the behavior recording module is used for recording the user's behavioral characteristics while watching television, and the data uploading module is used for transmitting the data to a blockchain for storage. The deep-sensing camera can perform facial identity recognition of the user, serving both encryption and identity-recognition purposes; when the user exhibits gazing behavior it can record the gaze duration, and the body movement recording sensor is used in combination to judge whether the user is watching television. Because the user's data are private data, storing them on a blockchain reduces the risk of privacy leakage.
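The chained storage performed by the data uploading module can be illustrated with a minimal sketch. The record fields, the BehaviorRecord structure and the in-memory hash chain below are illustrative assumptions rather than anything specified in the patent; a real deployment would submit the records to an actual blockchain network instead of a local list.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class BehaviorRecord:
    """Hypothetical per-viewing record assembled by the behavior recording module."""
    user_id: str          # identity confirmed by the deep-sensing camera's face recognition
    gaze_seconds: float   # accumulated gaze duration reported by the camera module
    motion_events: int    # activity feedback count from the body movement recording sensor
    timestamp: float

class PrivateLedger:
    """Toy hash-chained ledger standing in for the blockchain used for storage."""
    def __init__(self) -> None:
        self.blocks: list[dict] = []

    def append(self, record: BehaviorRecord) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps(asdict(record), sort_keys=True)
        block = {
            "prev_hash": prev_hash,
            "payload": payload,
            "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
        }
        self.blocks.append(block)
        return block

ledger = PrivateLedger()
ledger.append(BehaviorRecord("user-001", gaze_seconds=1820.0,
                             motion_events=4, timestamp=time.time()))
```

Chaining each block to the hash of the previous one makes after-the-fact tampering with the stored viewing records detectable, which is the privacy benefit the description attributes to blockchain storage.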
The interaction optimization module comprises a focusing display module, a search gain calculation module, a classification redrawing module and an information architecture optimization module. The focusing display module is used for changing the focused display layout by interfacing with the television display driver board, the search gain calculation module is used for calculating the user's gain during the search process, the classification redrawing module is used for redrawing the classification menus of the UI in a targeted manner, the information architecture optimization module is used for optimizing the module hierarchy of the core functions, and the focusing display module is electrically connected with the search gain calculation module and the classification redrawing module. The television display interface is given focused optimization according to each user's demands, ensuring that every user can reach the interface they need more efficiently; the gain of each search is calculated and comprehensive optimization is carried out according to the results; the navigation menu classification is redrawn for different users to improve search efficiency; and in the information architecture optimization, modules directly related to the core function are placed at the highest level, the level of modules that assist the core function is raised, and modules with low correlation to the core function are hidden or de-emphasized.
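The last point can be illustrated with a minimal sketch of assigning modules to levels by their correlation with the core function. The relevance scores, the thresholds and the function name are illustrative assumptions, not values taken from the patent.

```python
def assign_module_levels(relevance: dict[str, float],
                         promote_threshold: float = 0.7,
                         hide_threshold: float = 0.2) -> dict[str, str]:
    """Assign each functional module a level in the information architecture
    according to its correlation with the core function (assumed thresholds)."""
    levels: dict[str, str] = {}
    for module, score in relevance.items():
        if score >= promote_threshold:
            levels[module] = "top"        # directly related to the core function
        elif score >= hide_threshold:
            levels[module] = "secondary"  # assists the core function
        else:
            levels[module] = "hidden"     # low correlation: hide or de-emphasize
    return levels
```

A call such as assign_module_levels({"playback": 0.9, "settings": 0.4, "ads": 0.1}) would promote playback to the top level, keep settings at a secondary level and hide the ads entry.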
In the sensor assembly module, the deep-sensing camera module and the body movement recording sensor module form a series structure to identify and record the user's state while watching television; the deep-sensing camera module has a gaze-sensing function and the body movement recording sensor module has a high-sensitivity multi-axis sensing function. The specific linkage method by which this series structure identifies and records the user's state while watching television is as follows:
step S1: detecting the angle of the user's line of sight and judging the gazing state; when watching television the user gazes at the screen, which is detected by the gaze-sensing function of the deep-sensing camera and a start signal is generated; when the user stops gazing at the television, a stop signal is generated; the start signal is represented by the binary state 1 and the stop signal by the binary state 0;
step S2: transmitting a start electrical signal to the body movement recording sensor module according to the detection result; the body movement recording sensor module stays in a sleep state most of the time, only enters a receiving state after a user has been admitted to the television system through face recognition, and is released from sleep after receiving a gaze-stop signal from the gaze-sensing module, thereby reducing power consumption;
step S3: judging the user's behavior according to the data recorded by the body movement recording sensor.
In step S3, the user behavior and the judgment basis thereof specifically include the following classifications:
classification a: the deep-sensing camera detects that a user is in a gazing state;
classification B: the deep-sensing camera detects that the user is not in a gazing state, and the body movement recording sensor detects activity feedback from the user within a time shorter than t;
classification C: the deep-sensing camera detects that the user is not in a gazing state, and the body movement recording sensor detects activity feedback from the user within a time longer than t but shorter than T;
classification D: the deep-sensing camera detects that the user is not in a gazing state, and the body movement recording sensor detects no activity feedback from the user after a time longer than T;
each of the above classifications represents the following possible activities, respectively:
activity A: the user is in a television watching state;
activity B: the user is in other activities after watching television;
activity C: the user enters a light sleep state;
activity D: the user enters a deep sleep state;
the time t and the time T respectively represent the user's light sleep time threshold and deep sleep time threshold, in minutes, and are obtained by combining the user's age with the historical sleep duration.
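The classification table above maps directly onto a small decision function. The sketch below assumes the gaze state and the minutes elapsed since the last activity feedback are already available from the two sensors; the function and parameter names are illustrative, and deriving t and T from age and sleep history is left to the caller.

```python
def classify_activity(gazing: bool, minutes_since_last_motion: float,
                      t_light: float, t_deep: float) -> str:
    """Map the gaze state and motion-sensor feedback to classifications A-D.

    t_light (t) and t_deep (T) are the per-user light- and deep-sleep thresholds
    in minutes.
    """
    if gazing:
        return "A"  # user is in a television watching state
    if minutes_since_last_motion < t_light:
        return "B"  # activity feedback within t: other activity after watching
    if minutes_since_last_motion < t_deep:
        return "C"  # feedback between t and T: light sleep
    return "D"      # no feedback beyond T: deep sleep
```

For example, with t = 20 and T = 60 minutes, 25 minutes without activity feedback while not gazing would be classified as C (light sleep).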
In the user model construction module, the method for constructing the user model comprises the following steps:
step one: manually inputting user characteristics, wherein the user characteristics comprise age, watching interest type and average watching duration;
step two: integrating the data recorded by the sensor module;
step three: classifying the identity of the user by combining the data;
step four: user demand analysis is carried out according to the classification result and the recorded data;
step five: and carrying out interaction optimization on the analysis result of the user demand.
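A minimal sketch of how steps one to five could be chained is given below. The UserProfile fields follow step one (age, viewing interest types, average watch duration); the identity classes, the demand fields and all function names are assumptions made for illustration, since the patent does not specify them.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Manually entered features (step one) plus integrated sensor data (step two)."""
    age: int
    interest_types: list[str]
    avg_watch_minutes: float
    sensor_summary: dict = field(default_factory=dict)

def classify_identity(profile: UserProfile) -> str:
    # Step three: placeholder rule; the patent does not spell out the classes.
    if profile.age < 18:
        return "minor"
    return "senior" if profile.age >= 60 else "adult"

def analyze_demand(profile: UserProfile, identity: str) -> dict:
    # Step four: derive interaction demands from the class and the recorded data.
    return {
        "identity": identity,
        "preferred_categories": profile.interest_types[:3],
        "simplified_ui": identity in ("minor", "senior"),
    }

def build_user_model(profile: UserProfile) -> dict:
    # Steps three and four chained; the result feeds interaction optimization (step five).
    identity = classify_identity(profile)
    return analyze_demand(profile, identity)
```

The dictionary returned by build_user_model is what the interaction optimization module would consume in step five.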
In the fifth step, the method for performing interactive optimization on the analysis result of the user demand further comprises the following steps:
optimizing step 1: calculating a search gain R; in a smart television usage scenario the user exhibits both non-targeted and targeted search behavior; in non-targeted search, the information the user browses and perceives can itself be gain-type information, while in targeted search the user's goal is clear and the measure of search efficiency is the time spent searching;
optimizing step 2: recording, according to the user's search behavior, the time T_B spent each time the user switches between classification sections and the time T_w spent searching within the current section; a classification section contains several sub-sections, and the total switching time is T_B;
optimizing step 3: determining the effective value G_i of the user's search target according to the historical search period and big data; in an internet television scenario the search results are complex and, whatever the result, the user's search gain goes beyond the target gain alone, but it lacks numerical quantification; combining the historical search period with the target effective value measured from big data allows the user's search gain to be quantified and makes its computational characteristics more explicit;
optimizing step 4: counting the time T_J spent by the user on non-targeted search.
In the optimizing step 1, the calculation formula of the search gain R is as follows:
wherein k is a time conversion coefficient with a value range of (0, 1), and T_B, T_w and T_J are in minutes. The search behavior a user produces during interaction is difficult to digitize; combining the search times with the target effective value to calculate the search gain represents the user's search benefit more distinctly and provides a data reference for interface optimization and for the classification design of the interaction system. Specifically, the search gains R of the several users identified by face verification are ranked, and the display logic of the interface is selected according to the ranking result.
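The formula itself appears only as an image in the published text, so the concrete form used in the sketch below (summed target effective values minus the time cost scaled by k) is an assumption; only the stated constraints, k in (0, 1) and T_B, T_w, T_J measured in minutes, come from the description. The ranking helper mirrors the sentence above about ordering the face-verified users' gains to choose the display logic.

```python
def search_gain(g_values: list[float], t_b: float, t_w: float, t_j: float,
                k: float = 0.5) -> float:
    """Illustrative search gain R under an assumed formula:
    target effectiveness minus the time cost scaled by k."""
    assert 0 < k < 1, "k is a time conversion coefficient in (0, 1)"
    return sum(g_values) - k * (t_b + t_w + t_j)

def pick_display_logic(gains_by_user: dict[str, float]) -> list[str]:
    """Rank the face-verified users by their search gain R (highest first);
    the resulting order is used to select the interface display logic."""
    return sorted(gains_by_user, key=gains_by_user.get, reverse=True)
```

With this assumed form, shorter section-switching and in-section search times raise R, and larger target effective values raise it further.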
The information architecture optimization module comprises content classification optimization, display focusing optimization and interaction flow optimization. Users of different age groups prefer different program types; the user's preferences are judged by combining the manually entered information with big data on the user's everyday watching records and search times, the programs of greatest interest to the user are displayed in a focused sub-screen, and the interaction flow is optimized with voice and gesture to reduce the user's learning cost.
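A minimal sketch of the content classification and display focusing optimization is given below: program categories are scored from the manually entered interests plus watching and search statistics, and the top-ranked category is placed in the focused sub-screen while low-relevance categories are hidden. The weights, the slot count and the function names are illustrative assumptions.

```python
def rank_categories(declared_interests: list[str],
                    watch_minutes: dict[str, float],
                    search_counts: dict[str, int]) -> list[str]:
    """Score each program category from declared interests plus viewing and
    search statistics; the 0.5/0.3/0.2 weights are assumed for illustration."""
    categories = set(declared_interests) | set(watch_minutes) | set(search_counts)
    total_watch = sum(watch_minutes.values()) or 1.0
    total_search = sum(search_counts.values()) or 1

    def score(cat: str) -> float:
        return (0.5 * (cat in declared_interests)
                + 0.3 * watch_minutes.get(cat, 0.0) / total_watch
                + 0.2 * search_counts.get(cat, 0) / total_search)

    return sorted(categories, key=score, reverse=True)

def focus_layout(ranked: list[str], slots: int = 4) -> dict:
    """Place the highest-interest category in the focused sub-screen, the next few
    in secondary slots, and hide or de-emphasize the rest."""
    return {"focused": ranked[0] if ranked else None,
            "secondary": ranked[1:slots],
            "hidden": ranked[slots:]}
```

For a given user, focus_layout(rank_categories(profile_interests, watch_stats, search_stats)) yields the focused, secondary and hidden category lists that the focusing display module could hand to the display driver board.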
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present invention and does not limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in those embodiments or replace some of their technical features with equivalents. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (7)

1. An intelligent interaction system based on multiple sensors, comprising a sensor assembly module, a user model construction module and an interaction optimization module, characterized in that: the sensor assembly module is used for acquiring optimizable user data from a plurality of sensors, the user model construction module is used for constructing a model of the user's usage scenario from the acquired data, and the interaction optimization module is used for visually optimizing the user's interaction scene and interaction mode; the user model construction module comprises a manual input module, a sensor data integration module, an identity classification module and a demand analysis module, wherein the manual input module is used for entering the identity information and usage habits of an individual user into the system, the sensor data integration module is used for analyzing and integrating the sensor-acquired data, the identity classification module is used for classifying user identity according to the sensor-acquired data and user habits, and the demand analysis module is used for analyzing the user's interaction demands;
the sensor assembly module comprises a deep-sensing camera module, a body movement recording sensor module, a behavior recording module and a data uploading module, wherein the deep-sensing camera module is used for verifying the user's identity and recording gazing behavior, the body movement recording sensor module is used for identifying and recording the user's body movements, the behavior recording module is used for recording the user's behavioral characteristics while watching television, and the data uploading module is used for transmitting the data to a blockchain for storage;
the interaction optimization module comprises a focusing display module, a search gain calculation module, a classification redrawing module and an information architecture optimization module, wherein the focusing display module is used for changing the focused display layout by interfacing with the television display driver board, the search gain calculation module is used for calculating the user's gain during the search process, the classification redrawing module is used for redrawing the classification menus of the UI in a targeted manner, the information architecture optimization module is used for optimizing the module hierarchy of the core functions, and the focusing display module is electrically connected with the search gain calculation module and the classification redrawing module.
2. The intelligent multi-sensor based interaction system of claim 1, wherein: in the sensor assembly module, a deep-sensing camera module and a body movement recording sensor module form a serial structure to realize state identification and recording when a user watches television, the deep-sensing camera module has a gaze sensing function, the body movement recording sensor module has a high-sensitivity multi-axis sensing function, and a specific linkage method for the state identification and recording when the user watches television by the serial structure formed by the deep-sensing camera module and the body movement recording sensor module is as follows:
step S1: detecting the angle of the sight of the user and judging the gazing state;
step S2: transmitting a starting electric signal to the body movement recording sensor module according to the monitoring result;
step S3: and judging the user behavior according to the data recorded by the body movement recording sensor.
3. A multi-sensor based intelligent interactive system according to claim 2, characterized in that: in the step S3, the user behavior and the judgment basis thereof specifically include the following classifications:
classification a: the deep-sensing camera detects that a user is in a gazing state;
classification B: the deep-sensing camera detects that the user is not in a gazing state, and the body movement recording sensor detects activity feedback from the user within a time shorter than t;
classification C: the deep-sensing camera detects that the user is not in a gazing state, and the body movement recording sensor detects activity feedback from the user within a time longer than t but shorter than T;
classification D: the deep-sensing camera detects that the user is not in a gazing state, and the body movement recording sensor detects no activity feedback from the user after a time longer than T;
each of the above classifications represents the following possible activities, respectively:
activity A: the user is in a television watching state;
activity B: the user is in other activities after watching television;
activity C: the user enters a light sleep state;
activity D: the user enters a deep sleep state;
the time t and the time T respectively represent the user's light sleep time threshold and deep sleep time threshold, in minutes, and are obtained by combining the user's age with the historical sleep duration.
4. A multi-sensor based intelligent interactive system according to claim 3, characterized in that: in the user model construction module, the method for constructing the user model comprises the following steps:
step one: manually inputting user characteristics, wherein the user characteristics comprise age, watching interest type and average watching duration;
step two: integrating the data recorded by the sensor module;
step three: classifying the identity of the user by combining the data;
step four: user demand analysis is carried out according to the classification result and the recorded data;
step five: and carrying out interaction optimization on the analysis result of the user demand.
5. The intelligent multi-sensor based interaction system of claim 4, wherein: in the fifth step, the method for performing interactive optimization on the analysis result of the user demand further comprises the following steps:
optimizing step 1: calculating a search gain R;
optimizing step 2: recording, according to the user's search behavior, the time T_B spent each time the user switches between classification sections and the time T_w spent searching within the current section;
optimizing step 3: determining the effective value G_i of the user's search target according to the historical search period and big data;
optimizing step 4: counting the time T_J spent by the user on non-targeted search.
6. The intelligent multi-sensor based interaction system of claim 5, wherein: in the optimizing step 1, the calculation formula of the search gain R is as follows:
wherein k is a time conversion coefficient with a value range of (0, 1), and T_B, T_w and T_J are in minutes.
7. The intelligent multi-sensor based interaction system of claim 6, wherein: the information architecture optimization module comprises content classification optimization, display focusing optimization and interaction flow optimization.
CN202210861778.6A 2022-07-20 2022-07-20 Intelligent interaction system based on multiple sensors Active CN115407867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210861778.6A CN115407867B (en) 2022-07-20 2022-07-20 Intelligent interaction system based on multiple sensors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210861778.6A CN115407867B (en) 2022-07-20 2022-07-20 Intelligent interaction system based on multiple sensors

Publications (2)

Publication Number Publication Date
CN115407867A CN115407867A (en) 2022-11-29
CN115407867B (en) 2023-10-24

Family

ID=84158047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210861778.6A Active CN115407867B (en) 2022-07-20 2022-07-20 Intelligent interaction system based on multiple sensors

Country Status (1)

Country Link
CN (1) CN115407867B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796734A (en) * 2015-03-20 2015-07-22 四川长虹电器股份有限公司 Real-time interactive smart television program combined recommendation system and method
CN109068149A (en) * 2018-09-14 2018-12-21 深圳Tcl新技术有限公司 Program commending method, terminal and computer readable storage medium
CN111629254A (en) * 2020-05-18 2020-09-04 南京莱科智能工程研究院有限公司 Scene-based intelligent television program recommending control system
CN114501144A (en) * 2022-01-13 2022-05-13 深圳灏鹏科技有限公司 Image-based television control method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015046089A (en) * 2013-08-29 2015-03-12 ソニー株式会社 Information processor and information processing method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796734A (en) * 2015-03-20 2015-07-22 四川长虹电器股份有限公司 Real-time interactive smart television program combined recommendation system and method
CN109068149A (en) * 2018-09-14 2018-12-21 深圳Tcl新技术有限公司 Program commending method, terminal and computer readable storage medium
CN111629254A (en) * 2020-05-18 2020-09-04 南京莱科智能工程研究院有限公司 Scene-based intelligent television program recommending control system
CN114501144A (en) * 2022-01-13 2022-05-13 深圳灏鹏科技有限公司 Image-based television control method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115407867A (en) 2022-11-29

Similar Documents

Publication Publication Date Title
US11196930B1 (en) Display device content selection through viewer identification and affinity prediction
CN101925915B (en) Equipment accesses and controls
CN102932570A (en) Monitoring robot
CN105825098B (en) Unlocking screen method, image-pickup method and the device of a kind of electric terminal
CN111243742B (en) Intelligent glasses capable of analyzing eye habit of children
CN103942243A (en) Display apparatus and method for providing customer-built information using the same
CN112560649A (en) Behavior action detection method, system, equipment and medium
CN105786711A (en) Data analysis method and device
CN103577662A (en) Method and device for determining electricity consumption condition or environmental condition of household electrical appliances
CN116761049B (en) Household intelligent security monitoring method and system
CN113076903A (en) Target behavior detection method and system, computer equipment and machine readable medium
CN113011399A (en) Video abnormal event detection method and system based on generation cooperative judgment network
CN104699798A (en) Sample data processing method and device
CN102982015A (en) Method of producing electronic courseware by utilizing electronic whiteboard and corresponding display method
CN115407867B (en) Intelligent interaction system based on multiple sensors
CN106022048B (en) Unlocking screen method, image-pickup method and the device of a kind of electric terminal
US9727312B1 (en) Providing subject information regarding upcoming images on a display
US10706601B2 (en) Interface for receiving subject affinity information
CN107743083A (en) A kind of intelligent domestic system
CN116993289A (en) System and method for managing interrogation record
CN116168313A (en) Control method and device of intelligent device, storage medium and electronic device
CN116977256A (en) Training method, device, equipment and storage medium for defect detection model
CN116261009A (en) Video detection method, device, equipment and medium for intelligently converting video audience
Kang et al. Behavior analysis method for indoor environment based on app usage mining
CN112016350A (en) Dining satisfaction evaluation method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230926

Address after: Room 401, Unit 1, Building 2, Anyuan Community, Jianping Town, Langxi County, Xuancheng City, Anhui Province, 242000

Applicant after: Xia Qianming

Address before: No. 427, Xiexin Road, Taicang City, Suzhou City, Jiangsu Province, 215000

Applicant before: Qi Hongxin

GR01 Patent grant