CN113869509A - Multi-device-participated collaborative scene analysis method and system - Google Patents

Multi-device-participated collaborative scene analysis method and system

Info

Publication number
CN113869509A
CN113869509A (application number CN202111037751.7A)
Authority
CN
China
Prior art keywords
actual
sensing
information
database
perception
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111037751.7A
Other languages
Chinese (zh)
Inventor
Li Qiang (李强)
Yuan Yeqian (袁叶倩)
Mai Qianyu (买倩玉)
Tian Haiyang (田海洋)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North Minzu University
Original Assignee
North Minzu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2021-09-06
Publication date: 2021-12-31
Application filed by North Minzu University
Priority to CN202111037751.7A
Publication of CN113869509A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/101Collaborative creation, e.g. joint development of products or services

Abstract

The invention relates to a method and a system for analyzing a collaboration scene in which multiple devices participate. The method first determines information elements according to the current collaboration scene and constructs a perception model from them; it then constructs and trains a neural network model to optimize the computation results. An actual perception information database is acquired through the constructed perception model and input into the trained neural network model to obtain an actual fusion objective function value database, which is analyzed to obtain the team cooperation efficiency. The method extracts effective perception information, reduces information redundancy, and analyzes the team's collaboration level so that it can be adjusted to improve overall collaboration efficiency.

Description

Multi-device-participated collaborative scene analysis method and system
Technical Field
The invention relates to the technical field of cooperative work perception, in particular to a method and a system for analyzing a cooperative scene with participation of multiple devices.
Background
Against the background of rapidly developing communication technology and artificial intelligence, intelligent devices and the applications they carry have proliferated, profoundly affecting how people work and live. Using multiple devices makes communication and information exchange among team members more convenient and improves the efficiency of completing collaborative tasks. However, the devices are not only numerous but also diverse in type, operating system, interaction mode, and information transfer mode, which increases the complexity of the collaborative working mode.
A complex collaboration scene forms between the team members and the various devices, and their interactions in this space generate a large amount of scene information. Because an individual collaborator's capacity is limited, it is generally difficult to fully acquire the information that would help complete the collaborative task efficiently, so the state of the collaboration scene is hard to grasp comprehensively and the team's overall collaboration cannot be coordinated accurately and effectively.
Known perception methods and technologies pay insufficient attention to problems such as judging the complexity of a collaborative work scene in which multiple devices participate and handling the information redundancy of multi-device interaction; they are mostly experience-based and lack a systematic, scientific solution.
Disclosure of Invention
In view of this, the invention provides a method and a system for analyzing a collaboration scene in which multiple devices participate. The method and system use the relevant devices to analyze and identify the collaboration scene, enhance perception, process information effectively, and determine the team's collaboration level so that it can be adjusted to improve team collaboration efficiency.
In order to achieve the purpose, the invention provides the following scheme:
a multi-device participated collaboration scene analysis method comprises the following steps:
determining an information element based on a cooperation scene, and constructing a perception model based on the information element;
constructing a neural network model based on a fusion objective function, and training the neural network model;
obtaining a perception information actual database based on the perception model;
inputting the actual perception information database into the trained neural network model to obtain an actual fusion objective function value database;
and analyzing based on the actual fusion objective function value database to obtain the cooperation efficiency.
Preferably, the information elements include groups, devices, time, environment, and tasks.
Preferably, the activation function of the neural network model is a Tanh function or a ReLU function.
Preferably, the actual perception information database is stored in a cache manner, or stored in a tensor manner after multi-modal data alignment.
Preferably, the obtaining of the actual database of perception information based on the perception model includes:
constructing different sensing combinations based on sensing equipment to obtain a sensing combination set;
sensing the collaboration scene based on the x-th sensing combination and the perception model to obtain the x-th piece of actual perception information data, letting x take different values and repeating the process to obtain the actual perception information database, where x = 1, 2, ..., X and X is the number of sensing combinations in the sensing combination set.
The invention also provides a multi-device participated collaborative scene analysis system, which comprises:
the sensing unit is used for obtaining a sensing information actual database based on the sensing model;
the processing unit is used for determining information elements based on the collaboration scene and constructing a perception model based on the information elements; the processing unit is further used for constructing a neural network model based on the fusion objective function, training the neural network model, further calculating according to the actual sensing information database and the trained neural network model to obtain an actual fusion objective function value database, and analyzing the actual fusion objective function value database to obtain the cooperation efficiency.
Preferably, the system further comprises a storage unit, and the storage unit stores the perception information actual database in a cache manner or stores the perception information actual database in a tensor manner after performing multi-modal data alignment.
Preferably, the sensing unit is one or a combination of a camera, a smart speaker, a computer, a smartphone, a router, a vision sensor, a light sensor, a temperature sensor, and an audio-visual integrated module.
Preferably, the sensing unit is connected with the storage unit through WiFi, Zigbee, or Bluetooth.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention relates to a method and a system for analyzing a multi-device participated collaboration scene, wherein the method comprises the following steps: firstly, determining information elements according to a current cooperation scene, and further constructing a perception model according to the determined information elements; then, a neural network model is constructed and trained to optimize the operation result; and further acquiring a perception information actual database through the constructed perception model, inputting the perception information actual database into the trained neural network model for operation to obtain an actual fusion objective function value database, and analyzing the actual fusion objective function value database to obtain the team cooperation efficiency. The method extracts effective perception information, reduces information redundancy, and analyzes the cooperation level of a team so as to improve the cooperation level and improve the whole cooperation efficiency.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for analyzing a collaborative scene with multiple devices participating in the invention;
fig. 2 is a structural diagram of a multi-device participating collaborative scene analysis system according to the present invention.
Description of the symbols: 1-sensing unit, 2-storage unit, 3-processing unit, 4-visualization unit.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. The described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The invention aims to provide a multi-device-participated collaborative scene analysis method and a multi-device-participated collaborative scene analysis system.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a flowchart of a multi-device-participating collaboration scene analysis method, and as shown in fig. 1, the present invention provides a multi-device-participating collaboration scene analysis method, including:
step S1, determining information elements based on the collaboration scenario, and constructing a perception model based on the information elements. As an optional implementation manner, in this embodiment, the information elements include a group, a device, a time, an environment, and a task. Further, the perceptual model is a ═ G × D × T × E × W ═ ai|ai=<gi,di,ti,ei,wi>}; in the formula: a is a perception information set, G is a group set, D is an equipment set, T is a time set, E is an environment set, and W is a task set; a isi∈A,gi∈G,di∈D,ti∈T,ei∈E,wiE is W; the group in the group set consists of 2 and more members; the equipment set is one or a combination of a desktop computer, a notebook computer, a tablet computer, a smart phone, a smart watch, a smart camera, a smart sound box and a smart screen; time-set representation collaborative work processThe time series of (1); the set of environments includes a virtual mode, a real mode, and a fusion mode.
And step S2, constructing a neural network model based on the fusion objective function, and training the neural network model. Specifically, in this embodiment, the neural network model uses a BP neural network, and the fusion objective function is based on a time sequence, which is specifically as follows:
[The fusion objective function and the entropy terms are given by formula images BDA0003247912660000041 to BDA0003247912660000043 in the original document and are not reproduced here.] In these formulas, the first quantity is the collaborative information entropy of the scene perception information a_i at time t, and Θ_T is the synergy entropy when the time element takes the value T.
Preferably, the weight of the neural network model is the scene perception factor μ, the activation function is either the Tanh function or the ReLU function, and the bias is adjusted according to the complexity. The neural network model is trained repeatedly on a theoretical perception information database, with the weights corrected by BP (error back-propagation), so that a neural network model capable of handling different tasks is obtained; an ideal fusion objective function value is obtained for each task during training, yielding an ideal fusion objective function value database.
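As a minimal sketch, assuming a single hidden layer, a squared-error loss, and synthetic stand-in data (none of which are specified by the patent), a BP-style network with a Tanh activation could be trained as follows; the resulting outputs play the role of the ideal fusion objective function value database.

```python
import numpy as np

# One-hidden-layer BP network sketch: maps encoded perception information to a
# fusion objective value. Layer sizes, learning rate, and training data are
# illustrative assumptions only.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 5, 8, 1             # 5 encoded information elements -> 1 fusion value

W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

X = rng.random((200, n_in))                 # stand-in "theoretical perception information database"
y = X.sum(axis=1, keepdims=True) / n_in     # stand-in ideal target values

lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                # Tanh activation (the patent also allows ReLU)
    out = h @ W2 + b2
    err = out - y                           # gradient of the squared-error loss (up to a constant)
    W2_grad = h.T @ err / len(X)
    b2_grad = err.mean(axis=0)
    h_grad = (err @ W2.T) * (1.0 - h ** 2)  # back-propagate through tanh
    W1_grad = X.T @ h_grad / len(X)
    b1_grad = h_grad.mean(axis=0)
    W2 -= lr * W2_grad
    b2 -= lr * b2_grad
    W1 -= lr * W1_grad
    b1 -= lr * b1_grad

# Stand-in "ideal fusion objective function value database" from the trained model.
ideal_fusion_values = np.tanh(X @ W1 + b1) @ W2 + b2
```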
And step S3, obtaining a perception information actual database based on the perception model. The actual perception information database is the perception information set. The step S3 specifically includes:
step S31, constructing different sensing combinations based on the sensing equipment to obtain a sensing combination set;
step S32, sensing the cooperation scene based on the xth sensing combination and the sensing model to obtain xth sensing information actual data, making x take different values and repeating the process to obtain the sensing information actual database; x belongs to X; and X is the number of sensing combinations in the sensing combination set.
As an optional implementation manner, in this embodiment, the actual sensing information database is stored in a cache manner or stored in a tensor manner after performing multi-modal data alignment.
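The following sketch illustrates steps S31 and S32 above under stated assumptions: the device list is invented, and perceive_scene is a hypothetical placeholder for the real acquisition code that would drive the devices through the perception model.

```python
from itertools import combinations

sensing_devices = ["camera", "light_sensor", "temperature_sensor", "microphone"]

# S31: every non-empty subset of the devices forms one sensing combination.
sensing_combinations = [
    combo
    for r in range(1, len(sensing_devices) + 1)
    for combo in combinations(sensing_devices, r)
]
X_count = len(sensing_combinations)  # X: number of sensing combinations

def perceive_scene(combo):
    """Placeholder for perceiving the collaboration scene with one sensing combination."""
    return {"devices": combo, "records": []}

# S32: perceive the scene with the x-th combination for x = 1..X,
# accumulating the actual perception information database.
actual_perception_db = [perceive_scene(combo) for combo in sensing_combinations]
```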
And step S4, inputting the actual sensing information database into the trained neural network model to obtain an actual fusion objective function value database.
And step S5, analyzing based on the actual fusion objective function value database and the ideal fusion objective function value database to obtain the cooperation efficiency.
Specifically, the analysis yields the cooperation efficiency corresponding to each task. If the actual value matches the ideal value, the configuration of tasks, personnel, and devices meets the cooperation efficiency requirement. If the actual value is larger than the theoretical value, the match is incorrect because the task is too complex; the task should be reviewed again and simplified or optimized to improve cooperative work efficiency. If the actual value is smaller than the theoretical value, the match is incorrect because the team and the device group are too complex; they should be reviewed again and simplified or optimized to improve cooperative work efficiency.
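A minimal sketch of this comparison is given below; the tolerance value and the advice strings are illustrative assumptions, since the patent only states the three qualitative cases.

```python
# Step S5 comparison sketch: actual vs. ideal fusion objective function value per task.
def assess_task(actual: float, ideal: float, tol: float = 0.05) -> str:
    if abs(actual - ideal) <= tol:
        return "matched: task/personnel/device configuration meets the efficiency requirement"
    if actual > ideal:
        return "actual > ideal: task too complex; review and simplify or optimize the task"
    return "actual < ideal: team/device group too complex; review and streamline them"

actual_values = [0.42, 0.71]  # invented example data
ideal_values = [0.40, 0.55]
for task_id, (a_val, i_val) in enumerate(zip(actual_values, ideal_values)):
    print(task_id, assess_task(a_val, i_val))
```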
Optionally, before the collaborative activity starts, the person responsible for it can substitute preset scene data for the relevant elements into the collaborative scene complexity computation function to estimate the complexity of the collaboration, judge which personnel and devices are needed to complete the collaborative task, arrange the tasks, optimize the work content, and deploy personnel and devices more effectively.
Fig. 2 is a structural diagram of a multi-device-participating collaborative scene analysis system according to the present invention, and as shown in fig. 2, the present invention provides a multi-device-participating collaborative scene analysis system, which includes: sensing unit 1, storage unit 2 and processing unit 3.
The processing unit 3 determines the information elements based on the collaboration scene, constructs the perception model based on the information elements, and sends the perception model to the sensing unit 1; the sensing unit 1 collects data based on the perception model to obtain the actual perception information database. In this embodiment, the sensing unit 1 is one or a combination of a camera, a smart speaker, a computer, a smartphone, a router, a vision sensor, a light sensor, a temperature sensor, and an audio-visual integrated module. Through the perception model, the more useful and accurate perception information can be perceived, analyzed, and filtered out of the large amount of scene information generated during collaboration, which facilitates the subsequent calculation of the collaboration scene complexity and the visualization of the element relations.
The sensing unit 1, the storage unit 2, and the processing unit 3 are connected through WiFi, Zigbee, or Bluetooth, and the transmission protocol is TCP/IP.
The storage unit 2 stores the actual perception information database in a cache manner or stores the actual perception information database in a tensor manner after multi-modal data alignment.
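As one possible reading of tensor-style storage after multi-modal data alignment, the sketch below resamples each modality to a common time grid and stacks the results into one tensor; the modalities, sampling rates, and feature sizes are assumptions made for illustration.

```python
import numpy as np

def align_and_stack(modalities: dict, n_steps: int) -> np.ndarray:
    """Resample each modality to a common time grid and stack into one tensor."""
    aligned = []
    for name, series in modalities.items():
        series = np.asarray(series)
        idx = np.linspace(0, len(series) - 1, n_steps).astype(int)
        aligned.append(series[idx])
    return np.stack(aligned)            # shape: (n_modalities, n_steps, feature_dim)

video_feats = np.random.rand(300, 4)    # e.g. visual features sampled at 30 fps
audio_feats = np.random.rand(1000, 4)   # e.g. audio features at a higher rate
tensor_db = align_and_stack({"video": video_feats, "audio": audio_feats}, n_steps=100)
print(tensor_db.shape)                  # (2, 100, 4)
```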
The processing unit 3 builds a neural network model based on a fusion objective function, trains the neural network model, obtains an ideal fusion objective function value database, further calculates according to the actual sensing information database and the trained neural network model to obtain an actual fusion objective function value database, and analyzes the actual fusion objective function value database and the ideal fusion objective function value database to obtain the cooperation efficiency.
As an optional implementation, the system of the present invention further includes: a visualization unit 4.
The visualization unit 4 visually outputs the five elements, that is, groups, devices, time, environment, and tasks, based on a custom network relation diagram.
Specifically, the custom network relation diagram combines a coordinate system and a relation graph: the horizontal axis of the coordinate system represents time, the vertical axis represents the environment, and the relation graph represents the relations among tasks, groups, and devices. The user can select a viewing mode according to actual needs, for example viewing the state of the collaboration scene at a certain time, that is, learning which members were using which devices, in which space, in what mode, and in what tasks they were participating at that time point. A small query sketch is given below.
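The sketch below assumes a flat list of records, one per element combination at a time point; the data values are invented and not taken from the patent.

```python
# Each record ties together the five elements at one time point.
records = [
    {"time": "t1", "member": "member_A", "device": "laptop",
     "environment": "real", "task": "draft_report"},
    {"time": "t1", "member": "member_B", "device": "smartphone",
     "environment": "virtual", "task": "review_design"},
]

def view_at(time_point: str):
    """Return the collaboration-scene state at a given time point."""
    return [r for r in records if r["time"] == time_point]

for r in view_at("t1"):
    print(f'{r["member"]} uses {r["device"]} for {r["task"]} in {r["environment"]} mode')
```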
The invention provides a multi-device-participated collaborative scene analysis method and system. According to the collaborative scene perception method, the element objects involved in recognizing a multi-device collaboration scene are determined, namely members, devices, time, environment, and tasks; a perception model of the multi-device collaboration scene is constructed, and the complexity of the collaboration scene is computed. The collaboration scene is perceived with the perception method in combination with the perception model to acquire real-time collaboration information; the collaborative scene information is fused to obtain perception information, which is stored; and the perception information is extracted, the collaboration scene complexity is analyzed, and the element relations are visualized. The invention thus analyzes and depicts the multi-device collaboration scene more comprehensively, provides the collaborating members with a more concrete collaborative working environment, and can analyze the complexity of the multi-device collaboration scene, thereby improving the team members' awareness of the multi-device collaboration process and, to a certain extent, the team collaboration efficiency.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (9)

1. A multi-device participated collaboration scene analysis method is characterized by comprising the following steps:
determining an information element based on a cooperation scene, and constructing a perception model based on the information element;
constructing a neural network model based on a fusion objective function, and training the neural network model;
obtaining a perception information actual database based on the perception model;
inputting the actual perception information database into the trained neural network model to obtain an actual fusion objective function value database;
and analyzing based on the actual fusion objective function value database to obtain the cooperation efficiency.
2. The multi-device participating collaborative scene analysis method of claim 1, wherein the information elements include groups, devices, time, environment, and tasks.
3. The method of claim 1, wherein the activation function of the neural network model is a Tanh function or a ReLU function.
4. The multi-device collaborative scene analysis method according to claim 1, wherein the actual perception information database is stored in a cache manner, or stored in a tensor manner after multi-modal data alignment.
5. The method for analyzing collaborative scene with participation of multiple devices according to claim 1, wherein the obtaining of the actual database of perceptual information based on the perceptual model includes:
constructing different sensing combinations based on sensing equipment to obtain a sensing combination set;
sensing the collaboration scene based on the x-th sensing combination and the perception model to obtain the x-th piece of actual perception information data, letting x take different values and repeating the process to obtain the actual perception information database, where x = 1, 2, ..., X and X is the number of sensing combinations in the sensing combination set.
6. A multi-device participating collaborative scene analysis system, comprising:
the sensing unit is used for obtaining a sensing information actual database based on the sensing model;
the processing unit is used for determining information elements based on the collaboration scene and constructing a perception model based on the information elements; the processing unit is further used for constructing a neural network model based on the fusion objective function, training the neural network model, further calculating according to the actual sensing information database and the trained neural network model to obtain an actual fusion objective function value database, and analyzing the actual fusion objective function value database to obtain the cooperation efficiency.
7. The multi-device collaborative scene analysis system according to claim 6, further comprising a storage unit that stores the actual database of perceptual information in a cache manner or in a tensor manner after performing multi-modal data alignment.
8. The multi-device collaborative scene analysis system of claim 6, wherein the sensing unit is one or a combination of a camera, a smart speaker, a computer, a smartphone, a router, a vision sensor, a light sensor, a temperature sensor, and an audio-visual integrated module.
9. The system of claim 7, wherein the sensing unit is connected to the storage unit via WiFi, Zigbee, or Bluetooth.
CN202111037751.7A (priority date 2021-09-06, filing date 2021-09-06) Multi-device-participated collaborative scene analysis method and system, published as CN113869509A, status: Pending

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111037751.7A CN113869509A (en) 2021-09-06 2021-09-06 Multi-device-participated collaborative scene analysis method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111037751.7A CN113869509A (en) 2021-09-06 2021-09-06 Multi-device-participated collaborative scene analysis method and system

Publications (1)

Publication Number Publication Date
CN113869509A true CN113869509A (en) 2021-12-31

Family

ID=78989604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111037751.7A Pending CN113869509A (en) 2021-09-06 2021-09-06 Multi-device-participated collaborative scene analysis method and system

Country Status (1)

Country Link
CN (1) CN113869509A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116258470A (en) * 2023-05-15 2023-06-13 北京尽微致广信息技术有限公司 Data processing method, system, storage medium and electronic equipment


Similar Documents

Publication Publication Date Title
Demir et al. A conceptual model of team dynamical behaviors and performance in human-autonomy teaming
Wei et al. Real-time facial expression recognition for affective computing based on Kinect
CN109635644A (en) A kind of evaluation method of user action, device and readable medium
CN111028579A (en) Vision teaching system based on VR reality
CN110531849A (en) A kind of intelligent tutoring system of the augmented reality based on 5G communication
CN113869509A (en) Multi-device-participated collaborative scene analysis method and system
CN103593650A (en) Method for generating artistic images on basis of facial expression recognition system
CN107538492A (en) Intelligent control system, method and the intelligence learning method of mobile robot
CN104881647B (en) Information processing method, information processing system and information processing unit
CN114035678A (en) Auxiliary judgment method based on deep learning and virtual reality
Kästner et al. Integrative object and pose to task detection for an augmented-reality-based human assistance system using neural networks
CN104616336A (en) Animation construction method and device
Man et al. (Retracted) Digital immersive interactive experience design of museum cultural heritage based on virtual reality technology
CN113752264A (en) Mechanical arm intelligent equipment control method and system based on digital twins
CN104408782B (en) Facial visibility attendance system
CN108985667A (en) Home education auxiliary robot and home education auxiliary system
Chejara et al. Multimodal Learning Analytics research in the wild: challenges and their potential solutions
CN113965550B (en) Intelligent interactive remote auxiliary video system
Sun et al. Virtual Training and Ergonomics Evaluation System for Industrial Production Safety Based on Visible Light Communication
CN109902904A (en) The ability of innovation analysis system and method
Poulkov et al. The HOLOTWIN project: Holographic telepresence combining 3D imaging, haptics and AI
CN112270296A (en) Cloud platform based smart city visual management system and method
Ryu et al. Performance Analysis of Applying Deep Learning for Virtual Background of WebRTC-based Video Conferencing System
Wilmsherst et al. Utilizing virtual reality and three-dimensional space, visual space design for digital media art
CN104464001B (en) Facial view degree attendance method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination