CN111144359B - Exhibit evaluation device and method and exhibit pushing method - Google Patents

Exhibit evaluation device and method and exhibit pushing method

Info

Publication number
CN111144359B
CN111144359B (application CN201911406923.6A)
Authority
CN
China
Prior art keywords
recognition module
exhibit
evaluation
alpha
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911406923.6A
Other languages
Chinese (zh)
Other versions
CN111144359A (en
Inventor
余众泽
甘松云
丁志龙
冯笑炎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Zhihengxin Technology Co ltd
Original Assignee
Anhui Zhihengxin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Zhihengxin Technology Co ltd filed Critical Anhui Zhihengxin Technology Co ltd
Priority to CN201911406923.6A priority Critical patent/CN111144359B/en
Publication of CN111144359A publication Critical patent/CN111144359A/en
Application granted granted Critical
Publication of CN111144359B publication Critical patent/CN111144359B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/1822 Parsing for meaning understanding
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225 Feedback of the input speech
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an exhibit evaluation device and method and an exhibit pushing method. The device comprises a plurality of cameras arranged around the exhibit, a voice recognition module, an online data acquisition module, and a big data processing platform. Each camera comprises a face recognition module and a limb language recognition module: the face recognition module recognizes the facial expressions of people facing the exhibit, and the limb language recognition module captures the operation actions of visitors on each region of the exhibit. The voice recognition module captures the voices of audience members and recognizes speech about evaluation. The online data acquisition module collects online data. The voice recognition module, face recognition module, limb language recognition module and online data acquisition module transmit their data to the big data processing platform, which processes the data and outputs the exhibit evaluation result. The invention improves interaction and user experience in exhibitions.

Description

Exhibit evaluation device and method and exhibit pushing method
Technical Field
The invention belongs to the exhibition industry and particularly relates to an exhibit evaluation device and method and an exhibit pushing method.
Background
At the end of the 1970s, with the arrival of the "spring of science" and the popularization of science, the construction of science museums in China developed vigorously. The expansion of exhibition hall areas, the increase in the variety of exhibits, and improvements in management drove many science and technology museums to move from traditional management to automated management based on barcode recognition, the Internet, and software technology. Although many modern technologies are in use, a number of problems still hinder the effective management of science and technology museums.
With the vigorous development of science and technology museums in China, museums being newly built, rebuilt, or enlarged inevitably face the problem of exhibit quality. The quality of the exhibits largely determines the vitality and vigor of a science and technology museum, and determines whether its educational purpose can truly be realized. Evaluating exhibit quality is therefore an important matter with a guiding effect on the healthy development of science and technology museums nationwide. Yet although exhibits are the foundation of a science and technology museum, their evaluation currently lacks unified standards and processes: most evaluation is single-dimensional customer feedback, and multidimensional fully automatic intelligent evaluation has not been realized, so evaluation results are not objective enough.
The Internet of Things is a sensor network, called the third wave of the world information industry after the computer and the Internet. The emergence and application of Internet of Things technology greatly affects a science and technology center's ability to automate management, deliver humanized services, and improve working efficiency. In addition, venues such as science and technology museums have become academic research hotspots in related fields, providing new research ideas and technical support for evaluating science and technology museum exhibits.
However, current evaluation of science and technology museum exhibits has the following problems: it relies on customer evaluation as its main means and result, lacks intelligent evaluation capability, and applies technology relatively weakly, leading to a single evaluation mode, thin evaluation dimensions, long evaluation response cycles (questionnaires), non-objective evaluation results, and low customer participation and satisfaction.
Disclosure of Invention
1. Problems to be solved
Aiming at the problems of low customer participation and non-objective evaluation results in conventional science and technology museum exhibition halls, the invention provides an exhibit evaluation device and method and an exhibit pushing method.
2. Technical proposal
In order to solve the above problems, the technical scheme adopted by the invention is as follows. The exhibit evaluation device comprises a plurality of cameras arranged around the exhibit, a voice recognition module, an online data acquisition module, and a big data processing platform. Each camera comprises a face recognition module and a limb language recognition module: the face recognition module recognizes the facial expressions of people facing the exhibit, and the limb language recognition module captures the operation actions of visitors on each region of the exhibit. The voice recognition module captures the voices of audience members and recognizes speech about evaluation. The online data acquisition module collects online data. The voice recognition module, face recognition module, limb language recognition module and online data acquisition module transmit their data to the big data processing platform, which processes the data and outputs the exhibit evaluation result. Through the collection, analysis and processing of figure, sound and posture data, this scheme realizes multidimensional evaluation of exhibits, improves the practicability of visitor evaluation data and the use value of big data, and improves interaction and user experience in exhibitions.
Further, the voice recognition module also comprises a voiceprint recognition module. The voiceprint recognition module distinguishes speakers when the voices of multiple persons are collected, ensuring that each visitor's voice is recorded independently and is not disturbed by noise.
The invention also provides an exhibit evaluation method, which comprises the following steps:
s1, a big data processing platform receives data of a voice recognition module, a face recognition module, a limb language recognition module and an online data acquisition module;
s2, processing the data of each module in the S1 by using a big data processing platform to obtain an exhibit evaluation result R=alpha 1 *n 12 *n 2 + α 3 *n 34 *n 45 *n 56 *n 67 *n 7 Wherein n is 1 +n 2 +n 3 +n 4 +n 5 +n 6 +n 7 =1,0<n 1 <1,0<n 2 <1,0<n 3 <1, 0<n 4 <1,0<n 5 <1,0<n 6 <1,0<n 7 <1;α 1 =number of visits/total number of entries on the day, α 2 For the frequency of operation, alpha 3 Alpha is the result of sound evaluation 3 =0, 1, 2, 3 or 4, α 4 For subjective message evaluation data, alpha 4 =0, 1, 2, 3 or 4, α 5 Number of clicks on line/number of visits on line; alpha 6 Number of online activity participation/number of online activity participants; alpha 7 Number of offline activity participation/number of offline activity participants.
Further, α4 in step S2 includes the evaluation speech input through the voice recognition module and the evaluation input by the evaluator.
The invention also provides an exhibit evaluation pushing method, which comprises the following steps:
s1, a big data processing platform receives data of a voice recognition module, a face recognition module, a limb language recognition module and an online data acquisition module;
s2, processing the data of each module in the S1 by using a big data processing platform to obtain an exhibit evaluation result R=alpha 1 *n 12 *n 2 + α 3 *n 34 *n 45 *n 56 *n 67 *n 7 Wherein n is 1 +n 2 +n 3 +n 4 +n 5 +n 6 +n 7 =1,0<n 1 <1,0<n 2 <1,0<n 3 <1, 0<n 4 <1,0<n 5 <1,0<n 6 <1,0<n 7 <1;α 1 =number of visits/total number of entries on the day, α 2 For the frequency of operation, alpha 3 Alpha is the result of sound evaluation 3 =0, 1, 2, 3 or 4, α 4 For subjective message evaluation data, alpha 4 =0, 1, 2, 3 or 4, α 5 Number of clicks on line/number of visits on line; alpha 6 Number of online activity participation/number of online activity participants; alpha 7 Number of offline activity participation/number of offline activity participants;
and S3, obtaining the ranking of the exhibits by evaluation result and pushing exhibit information according to the ranking. The specific push format may be a ranking presented on the exhibition hall's official website, a preferential push on a WeChat public account, or similar.
3. Advantageous effects
Compared with the prior art, the invention has the beneficial effects that:
(1) Through the collection, analysis and processing of figure, sound and posture data, the invention realizes multidimensional evaluation of exhibits, improves the practicability of visitor evaluation data and the use value of big data, improves interaction and user experience in exhibitions, and ultimately promotes continuous improvement of venue operations, exhibit quality and exhibition effectiveness;
(2) The invention has simple structure, reasonable design and easy manufacture.
Drawings
Fig. 1 is a logical block diagram of the present invention.
Detailed Description
The invention is further described below in connection with specific embodiments.
As shown in fig. 1, the invention comprises a plurality of cameras arranged around the exhibit, a voice recognition module, an online data acquisition module, and a big data processing platform. Each camera comprises a face recognition module and a limb language recognition module. The face recognition module recognizes the facial expressions of visitors facing the exhibit, such as smiling or laughing; the limb language recognition module captures visitors' operation actions on each region of the exhibit, such as standing and watching or touching; the voice recognition module captures the voices of audience members and recognizes speech about evaluation; the online data acquisition module collects online data. Each module transmits its data to the big data processing platform, which processes the data and outputs the exhibit evaluation result.
During implementation, a camera is arranged near the exhibit and a detection area is defined; people entering the area are counted as visitors to the exhibit, and crowd movement is recorded by the face recognition module and the limb language recognition module in the camera. The camera identifies the flow of people passing the exhibit by detecting heads against the background; a visit lasts from when a visitor enters the monitored area until they leave it. α1 represents the heat value of the exhibit, α1 = number of visits on the day / total number of entries on the day. Heat analysis reveals which exhibits are most popular and how exhibit placement influences visitors; the heat value α1 serves as an important index for evaluating the exhibit and is transmitted back to the big data platform. In addition, the limb language recognition module of the camera automatically recognizes visitor behavior, including where they stop, which areas of the exhibit they touch, and which functions of the exhibit they experience.
The limb language recognition module detects and captures data on visitors touching a certain area of the exhibit and experiencing a certain function (area) of the exhibit, yielding behavior counts such as the number of standing-and-watching events a1, the number of touches a2, and the number of deep uses a3, where deep use means a visitor interacting with an exhibit that has an interactive function. The operation frequency is α2 = a1*k1 + a2*k2 + a3*k3, where k1 + k2 + k3 = 1 and 0 ≤ k1 ≤ 1, 0 ≤ k2 ≤ 1, 0 ≤ k3 ≤ 1. In practice, k1, k2 and k3 can be set according to the exhibition hall's actual requirements. For example, for exhibits that must not be touched, the weights of the touch count a2 and the deep-use count a3 can be set to 0 and the weight k1 of the standing-and-watching count a1 set to 1; for exhibits with interactive functions, the weight k3 of the deep-use count a3 can be increased to, say, 0.7, with the weight k2 of the touch count a2 set to 0.2 and the weight k1 of the standing-and-watching count a1 set to 0.1. The formula α2 = a1*k1 + a2*k2 + a3*k3 then gives the operation frequency α2, which is transmitted to the big data processing platform. By collecting the frequency of visitor operations on exhibits, α2 reflects visitor participation, which helps optimize the exhibition scheme and improve exhibition effectiveness. In this embodiment, "deep use" means a touch lasting longer than 20 s, and "standing and watching" means staying within the exhibit area for 20 s without touching the exhibit.
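The weighted operation-frequency computation above can be sketched as follows; the behavior counts are hypothetical example values:

```python
def operation_frequency(a1, a2, a3, k1, k2, k3):
    """alpha2 = a1*k1 + a2*k2 + a3*k3 with k1 + k2 + k3 = 1.

    a1 -- number of standing-and-watching events
    a2 -- number of touches
    a3 -- number of deep uses (interactions with an interactive exhibit)
    """
    if abs((k1 + k2 + k3) - 1.0) > 1e-9:
        raise ValueError("k1 + k2 + k3 must equal 1")
    if not all(0 <= k <= 1 for k in (k1, k2, k3)):
        raise ValueError("each weight must lie in [0, 1]")
    return a1 * k1 + a2 * k2 + a3 * k3

# Interactive exhibit, using the weights named in the description:
# k3 = 0.7 (deep use), k2 = 0.2 (touch), k1 = 0.1 (standing and watching).
alpha2 = operation_frequency(a1=30, a2=12, a3=5, k1=0.1, k2=0.2, k3=0.7)

# No-touch exhibit: k1 = 1 collapses alpha2 to the watching count alone.
alpha2_no_touch = operation_frequency(a1=10, a2=0, a3=0, k1=1, k2=0, k3=0)
```

The no-touch case shows why the weights are allowed to reach 0 and 1 (closed interval), unlike the n-weights of the overall score.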
Meanwhile, the camera also captures evaluation data α7 on visitors' participation in offline activities. Offline activities are evaluation activities related to the exhibits, such as a visitor evaluating or interacting with an exhibit while touring; α7 = number of offline activity participations / number of offline activity participants.
The voice recognition module collects visitors' voice data and converts their dialogue from speech to text. When the voices of several people are collected, the voiceprint recognition module separates them, ensuring that each visitor's voice is recorded independently and without noise interference. A professional vocabulary library is configured in the management background; the professional vocabulary is given semantic definitions and combined into several semantic recognition models. After speech is converted to text, several candidate semantic recognition results are fed back according to the similarity between the user's language habits and the semantic recognition models, and the final semantic recognition result is confirmed to obtain the sound evaluation result α3.
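One minimal way to map a transcript to the five-level score α3 is keyword matching against a configured vocabulary. This sketch is an assumption about how the grading could work; the keyword lists are invented placeholders, whereas a real deployment would load the professional vocabulary library from the management background:

```python
# Hypothetical keyword lists standing in for the semantic recognition models.
KEYWORDS = {
    4: ["excellent", "amazing"],
    3: ["good", "interesting"],
    2: ["okay", "average"],
    1: ["bad", "boring"],
}

def sound_evaluation(transcript):
    """Map a speech-to-text transcript to alpha3 in {0, 1, 2, 3, 4}.

    Returns 0 when no evaluation keyword is found, i.e. no evaluation
    or an invalid/irrelevant utterance that would be discarded.
    """
    text = transcript.lower()
    for score in (4, 3, 2, 1):  # prefer the strongest matching grade
        if any(word in text for word in KEYWORDS[score]):
            return score
    return 0
```

A production system would use the semantic similarity matching the patent describes rather than raw substring search; this only illustrates the 0-4 output contract.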
The visitor's subjective message evaluation data α4 has two sources. The first is voice data α4' collected by external microphones and recording devices: after the microphone or recording device picks up sound, the voiceprint recognition module within the voice recognition module distinguishes different speakers and splits the sound into multiple tracks; then, according to preset keywords, semantic recognition grabs keywords and judges whether a sentence is valid evaluation speech. Sentences that are insufficiently clear or irrelevant are automatically discarded; valid sentences are further judged as "positive" or "negative". The second is evaluation data α4'' entered on a scoring device beside the exhibit, graded 0, 1, 2, 3 or 4 by visitors. The subjective message evaluation data is then α4 = α4'*m1 + α4''*m2, where m1 + m2 = 1 and 0 < m1 < 1, 0 < m2 < 1. An exhibition hall can set m1 and m2 according to its hardware facilities: if its microphones and recording devices are relatively complete, m1 can be increased and m2 decreased, e.g. m1 = 0.7, m2 = 0.3; if the evaluators beside the exhibits are relatively complete, m2 can be increased and m1 decreased, e.g. m1 = 0.3, m2 = 0.7.
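The two-source blend of α4 is another small weighted sum; as an illustrative sketch with hypothetical scores:

```python
def message_evaluation(voice_score, device_score, m1, m2):
    """alpha4 = alpha4' * m1 + alpha4'' * m2 with m1 + m2 = 1.

    voice_score  -- alpha4', from microphone/recording speech evaluation (0-4)
    device_score -- alpha4'', entered on the scoring device (0-4)
    """
    if abs((m1 + m2) - 1.0) > 1e-9 or not (0 < m1 < 1 and 0 < m2 < 1):
        raise ValueError("m1 and m2 must lie in (0, 1) and sum to 1")
    return voice_score * m1 + device_score * m2

# Hall with well-equipped microphones: weight the voice evaluation higher.
alpha4 = message_evaluation(voice_score=3, device_score=4, m1=0.7, m2=0.3)
```

Swapping to m1 = 0.3, m2 = 0.7 models the opposite case, where the scoring devices are the more complete installation.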
The online data acquisition module collects visitors' online click actions and marks, classifies and counts them. Online click actions are actions such as "liking" an exhibit on the exhibition hall's official website; α5 = number of online clicks on the exhibit / number of online visits. The module also collects visitors' online activity participation evaluation data α6, where online activity participation refers to how often visitors click and "like" on the exhibition hall's official website; α6 = number of online activity participations / number of online activity participants.
The data collected by each module are transmitted back to the big data processing platform, which processes the data from each module in S1 to obtain the exhibit evaluation result R = α1*n1 + α2*n2 + α3*n3 + α4*n4 + α5*n5 + α6*n6 + α7*n7, where n1 + n2 + n3 + n4 + n5 + n6 + n7 = 1 and 0 < n1 < 1, 0 < n2 < 1, 0 < n3 < 1, 0 < n4 < 1, 0 < n5 < 1, 0 < n6 < 1, 0 < n7 < 1. α1 is the heat value of the exhibit, α1 = number of visits on the day / total number of entries, where the number of visits on the day is the number of visits to the exhibit that day. α2 is the operation frequency. α3 is the sound evaluation result, obtained by capturing visitors' speech keywords and comparing the result against the model; α3 = 0, 1, 2, 3 or 4, where 0 indicates no evaluation or an invalid evaluation result, 1 indicates "poor", 2 indicates "medium", 3 indicates "good" and 4 indicates "excellent". α4 is the subjective message evaluation data, α4 = 0, 1, 2, 3 or 4, where 0 indicates no message evaluation or an invalid message evaluation result and 1 to 4 follow the same grading. α5 represents the per-person online click count, α5 = number of online clicks / number of online visits, where the number of online clicks refers to clicks on the exhibit and the number of online visits refers to total visits to the exhibition hall's official website. α6 represents the online activity participation evaluation data, α6 = number of online activity participations / number of online activity participants, where the participations are those for the exhibit and the participants are the total for the online activity. α7 represents the offline activity participation evaluation data, α7 = number of offline activity participations / number of offline activity participants, defined analogously for offline activities.
It should be noted that n1, n2, n3, n4, n5, n6 and n7 can be set according to the type of exhibit. For example, an evaluation model for interactive exhibits can treat the operation frequency α2 as a high-weight indicator and the subjective message evaluation data α4 as a low-weight indicator, increasing n2 and decreasing n4, e.g. n1 = 0.1, n2 = 0.4, n3 = 0.1, n4 = 0.1, n5 = 0.1, n6 = 0.1, n7 = 0.1. An evaluation model for ornamental exhibits can treat α4 as the high-weight indicator and α2 as the low-weight indicator, increasing n4 and decreasing n2, e.g. n1 = 0.1, n2 = 0.1, n3 = 0.1, n4 = 0.4, n5 = 0.1, n6 = 0.1, n7 = 0.1. Exhibits can thus be evaluated objectively according to their specific characteristics. Exhibit information is then pushed according to the ranking of the evaluation results; the push may take the form of a ranking on the exhibition hall's official website, a push on a WeChat public account, or similar.
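The final ranking-and-push step (S3) reduces to sorting exhibits by their R values. A minimal sketch, with invented exhibit names and scores:

```python
def rank_for_push(scores):
    """Rank exhibits by evaluation result R, highest first (step S3).

    scores -- mapping of exhibit name to its computed R value
    Returns the exhibit names in push order.
    """
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical evaluation results for three exhibits.
ranking = rank_for_push({"pendulum": 1.37, "hologram": 2.05, "fossil": 0.90})
# The top-ranked names would then drive the push, e.g. a ranked list on the
# exhibition hall's official website or a WeChat public-account post.
```

Only the ordering matters for the push; how each channel renders it (website ranking vs. WeChat push) is a presentation choice outside this sketch.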

Claims (5)

1. An exhibit evaluation method, characterized by comprising the following steps:
s1, a big data processing platform receives data of a voice recognition module, a face recognition module, a limb language recognition module and an online data acquisition module;
s2, processing the data of each module in the S1 by using a big data processing platform to obtain an exhibit evaluation result R=alpha 1 *n 12 *n 23 *n 34 *n 45 *n 56 *n 67 *n 7 Wherein n is 1 +n 2 +n 3 +n 4 +n 5 +n 6 +n 7 =1, and 0<n 1 <1,0<n 2 <1,0<n 3 <1,0<n 4 <1,0<n 5 <1,0<n 6 <1,0<n 7 <1;α 1 =number of visits/total number of entries on the day, α 2 For the frequency of operation, alpha 2 =a 1 *k 1 +a 2 *k 2 +a 3 *k 3 Wherein a is 1 To stay on the watching times, a 2 For the number of touches, a 3 For the number of deep use, the number of deep use refers to the number of times that tourists interact with certain exhibits with interaction function, k 1 +k 2 +k 3 =1, and 0.ltoreq.k 1 ≤1,0≤k 2 ≤1,0≤k 3 ≤1,k 1 、k 2 、k 3 A is respectively a 1 、a 2 、a 3 Weights, alpha 3 Alpha is the result of sound evaluation 3 =0, 1, 2, 3 or 4, α 4 For subjective message evaluation data, alpha 4 =0, 1, 2, 3 or 4, α 5 Number of clicks on line/number of visits on line; alpha 6 Number of online activity participation/number of online activity participants; alpha 7 The number of offline activity participation/number of offline activity participants, n1, n2, n3, n4, n5, n6, n7, is the weight of α1, α2, α3, α4, α5, α6, α7, respectively.
2. The exhibit evaluation method according to claim 1, wherein α4 in step S2 includes the evaluation input through the voice recognition module and the evaluation input by the evaluator.
3. An exhibit evaluation device implementing the exhibit evaluation method of claim 1, characterized by comprising a plurality of cameras arranged around the exhibit, a voice recognition module, an online data acquisition module, and a big data processing platform, wherein each camera comprises a face recognition module and a limb language recognition module; the face recognition module recognizes the facial expressions of people facing the exhibit; the limb language recognition module captures the operation actions of visitors on each region of the exhibit; the voice recognition module captures the voices of audience members and recognizes speech about evaluation; the online data acquisition module collects online data; and the voice recognition module, face recognition module, limb language recognition module and online data acquisition module transmit their collected data to the big data processing platform, which processes the data transmitted by each module and outputs the exhibit evaluation result.
4. The exhibit evaluation device according to claim 3, wherein: the voice recognition module also comprises a voiceprint recognition module.
5. An exhibit evaluation pushing method, characterized by comprising the following steps:
s1, a big data processing platform receives data of a voice recognition module, a face recognition module, a limb language recognition module and an online data acquisition module;
s2, processing the data of each module in the S1 by using a big data processing platform to obtain an exhibit evaluation result R=alpha 1 *n 12 *n 23 *n 34 *n 45 *n 56 *n 67 *n 7 Wherein n is 1 +n 2 +n 3 +n 4 +n 5 +n 6 +n 7 =1, and 0<n 1 <1,0<n 2 <1,0<n 3 <1,0<n 4 <1,0<n 5 <1,0<n 6 <1,0<n 7 <1;α 1 =number of visits/total number of entries on the day, α 2 For the frequency of operation, alpha 2 =a 1 *k 1 +a 2 *k 2 +a 3 *k 3 Wherein a is 1 To stay on the watching times, a 2 Is the number of times of touching,a 3 For the number of deep use, the number of deep use refers to the number of times that tourists interact with certain exhibits with interaction function, k 1 +k 2 +k 3 =1, and 0.ltoreq.k 1 ≤1,0≤k 2 ≤1,0≤k 3 ≤1,k 1 、k 2 、k 3 A is respectively a 1 、a 2 、a 3 Weights, alpha 3 Alpha is the result of sound evaluation 3 =0, 1, 2, 3 or 4, α 4 For subjective message evaluation data, alpha 4 =0, 1, 2, 3 or 4, α 5 Number of clicks on line/number of visits on line;
α 6 number of online activity participation/number of online activity participants; alpha 7 The number of offline activity participation/number of offline activity participants, n1, n2, n3, n4, n5, n6, n7 is the weight of α1, α2, α3, α4, α5, α6, α7, respectively;
and S3, obtaining the evaluation result ranking of the exhibits, and pushing the exhibit information according to the ranking.
CN201911406923.6A 2019-12-31 2019-12-31 Exhibit evaluation device and method and exhibit pushing method Active CN111144359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911406923.6A CN111144359B (en) 2019-12-31 2019-12-31 Exhibit evaluation device and method and exhibit pushing method

Publications (2)

Publication Number Publication Date
CN111144359A CN111144359A (en) 2020-05-12
CN111144359B true CN111144359B (en) 2023-06-30

Family

ID=70522432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911406923.6A Active CN111144359B (en) 2019-12-31 2019-12-31 Exhibit evaluation device and method and exhibit pushing method

Country Status (1)

Country Link
CN (1) CN111144359B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112750213B (en) * 2020-12-29 2022-06-14 深圳市顺易通信息科技有限公司 Parking service information pushing method, device, equipment and medium
CN113312507B (en) * 2021-05-28 2022-11-04 成都威爱新经济技术研究院有限公司 Digital exhibition hall intelligent management method and system based on Internet of things
CN115190324B (en) * 2022-06-30 2023-08-29 广州市奥威亚电子科技有限公司 Method, device and equipment for determining online and offline interactive live broadcast heat
CN116957633B (en) * 2023-09-19 2023-12-01 武汉创知致合科技有限公司 Product design user experience evaluation method based on intelligent home scene

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010091675A (en) * 2008-10-06 2010-04-22 Mitsubishi Electric Corp Speech recognizing apparatus
CN105046601A (en) * 2015-07-09 2015-11-11 传成文化传媒(上海)有限公司 User data processing method and system
CN106056310A (en) * 2016-06-17 2016-10-26 维朗(北京)网络技术有限公司 Intelligent push assessment system and method for museum exhibit display effect
CN106682637A (en) * 2016-12-30 2017-05-17 深圳先进技术研究院 Display item attraction degree analysis and system
CN107122399A (en) * 2017-03-16 2017-09-01 中国科学院自动化研究所 Combined recommendation system based on Public Culture knowledge mapping platform
CN107832370A (en) * 2017-10-27 2018-03-23 上海享岭网络科技有限公司 A kind of mobile electron shows system and construction method between virtual exhibition
CN107945175A (en) * 2017-12-12 2018-04-20 百度在线网络技术(北京)有限公司 Evaluation method, device, server and the storage medium of image
CN109359628A (en) * 2018-11-28 2019-02-19 上海风语筑展示股份有限公司 A kind of exhibition big data collection analysis platform
CN109658310A (en) * 2018-11-30 2019-04-19 安徽振伟展览展示有限公司 One kind being used for exhibition room museum intelligent identifying system
CN110545297A (en) * 2018-05-28 2019-12-06 上海驿卓通信科技有限公司 multi-information data analysis and display system for exhibition hall

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10762165B2 (en) * 2017-10-09 2020-09-01 Qentinel Oy Predicting quality of an information system using system dynamics modelling and machine learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Y. Kitano et al. Face-recognition based on higher-order local autocorrelation feature for Sound Spot control. 2010 IEEE International Conference on Systems, Man and Cybernetics. 2010, pp. 2312-2315. *
Chang Xiaoyue. Research on an Evaluation Index System for Exhibition Brand Image. China Masters' Theses Full-text Database, Economics and Management Sciences. 2015, Vol. 2015, J157-1. *

Similar Documents

Publication Publication Date Title
CN111144359B (en) Exhibit evaluation device and method and exhibit pushing method
Song et al. Spectral representation of behaviour primitives for depression analysis
CN107030691B (en) Data processing method and device for nursing robot
US11783645B2 (en) Multi-camera, multi-sensor panel data extraction system and method
Scherer et al. A generic framework for the inference of user states in human computer interaction: How patterns of low level behavioral cues support complex user states in HCI
Gatica-Perez et al. Detecting group interest-level in meetings
Gatica-Perez Automatic nonverbal analysis of social interaction in small groups: A review
Mariooryad et al. Exploring cross-modality affective reactions for audiovisual emotion recognition
Zancanaro et al. Automatic detection of group functional roles in face to face interactions
CN102843543B (en) Video conferencing reminding method, device and video conferencing system
JP5433760B2 (en) Conference analysis system
JP2008262046A (en) Conference visualizing system and method, conference summary processing server
CN116484318B (en) Lecture training feedback method, lecture training feedback device and storage medium
Sun et al. Towards visual and vocal mimicry recognition in human-human interactions
CN111602168A (en) Information processing apparatus, information processing method, and recording medium
CN109697556A (en) Evaluate method, system and the intelligent terminal of effect of meeting
Turker et al. Audio-facial laughter detection in naturalistic dyadic conversations
CN114242235A (en) Autism patient portrait method based on multi-level key characteristic behaviors
Gupta et al. REDE-Detecting human emotions using CNN and RASA
JP2013115622A (en) Sound information analyzing apparatus and sound information analysis program
Galvan et al. Audiovisual affect recognition in spontaneous filipino laughter
CN114492579A (en) Emotion recognition method, camera device, emotion recognition device and storage device
Yu et al. Towards smart meeting: Enabling technologies and a real-world application
CN112905811A (en) Teaching audio and video pushing method and system based on student classroom behavior analysis
CN109190556B (en) Method for identifying notarization will authenticity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant