CN111144359A - Exhibit evaluation device and method and exhibit pushing method - Google Patents
- Publication number
- CN111144359A (application CN201911406923.6A)
- Authority
- CN
- China
- Prior art keywords
- exhibit
- recognition module
- evaluation
- module
- processing platform
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1822—Parsing for meaning understanding
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/225—Feedback of the input speech
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses an exhibit evaluation device and method and an exhibit pushing method. The exhibit evaluation device comprises a plurality of cameras, a voice recognition module, an online data acquisition module and a big data processing platform. The cameras are arranged around the exhibit and comprise a face recognition module and a body language recognition module: the face recognition module recognizes the facial expressions of visitors facing the exhibit, and the body language recognition module acquires the visitors' operation actions on each area of the exhibit. The voice recognition module captures and recognizes visitors' speech and picks out evaluation-related utterances, and the online data acquisition module collects online data. All four modules transmit their data to the big data processing platform, which processes the data and outputs an exhibit evaluation result. The invention improves interaction and user experience in exhibitions.
Description
Technical Field
The invention belongs to the field of exhibition and display industry, and particularly relates to an exhibit evaluation device and method and an exhibit pushing method.
Background
In the late 1970s, with the arrival of the "spring of science" and the "spring of science popularization", the construction of science and technology museums in China developed vigorously. The expansion of exhibition hall areas, the increase in the variety of exhibits and the improvement of management practices carried many science and technology museums through a series of evolutions, from traditional management to automated management based on barcode identification, the Internet and software technology. Yet despite the use of many modern technologies, a number of problems still hamper the effective management of science and technology museums.
With the vigorous development of science and technology museums in China, newly built, rebuilt and expanded museums in various regions inevitably face the problem of exhibit quality. The quality of the exhibits largely determines the vigor and vitality of a science and technology museum, and determines whether its educational purpose can truly be realized. Evaluating the quality of science and technology museum exhibits is therefore of great significance and guides the healthy development of science and technology museums in China. Exhibits are the foundation of a science and technology museum's work, yet their evaluation currently lacks unified standards and processes: most evaluation is single-dimensional customer evaluation, fully automatic multi-dimensional intelligent evaluation has not been realized, and the results are not objective enough.
The Internet of Things, i.e. the sensor network, has been called the third wave of the world's information industry, after the computer and the Internet. The emergence and application of Internet of Things technology greatly affects a science and technology museum's ability to improve automated management, realize humanized service and raise working efficiency. Moreover, venues such as science and technology museums have become academic research hotspots in related fields, which provides new research ideas and technical support for the evaluation of science and technology museum exhibits.
However, the evaluation of exhibits in current science and technology museums mainly suffers from the following problems: customer evaluation is the main means and result, intelligent evaluation capability is lacking, and the application of technology is relatively weak. This leads to a single evaluation mode and dimension, a long evaluation response cycle (questionnaires), insufficiently objective results, and low customer participation and satisfaction.
Disclosure of Invention
1. Problems to be solved
The invention provides an exhibit evaluation device and method and an exhibit pushing method, aiming to solve problems such as low customer participation and insufficiently objective evaluation results in existing science and technology museum exhibition halls.
2. Technical scheme
To solve the above problems, the invention adopts the following technical scheme. An exhibit evaluation device comprises a plurality of cameras, a voice recognition module, an online data acquisition module and a big data processing platform. The cameras are arranged around the exhibit and comprise a face recognition module and a body language recognition module: the face recognition module recognizes the facial expressions of visitors facing the exhibit, and the body language recognition module acquires the visitors' operation actions on each area of the exhibit. The voice recognition module captures and recognizes visitors' speech and picks out evaluation-related utterances, and the online data acquisition module collects online data. All four modules transmit their data to the big data processing platform, which processes the data and outputs an exhibit evaluation result. By collecting, analyzing and processing portrait, voice and posture data, this scheme achieves multi-dimensional evaluation of the exhibit, improves the practicality of visitor evaluation data and the use value of big data, and thereby improves interaction and user experience in exhibitions.
Furthermore, the voice recognition module also comprises a voiceprint recognition module, which separates speakers when the voices of multiple people are collected, ensuring that each visitor's speech is recorded independently without noise interference.
The invention also provides an exhibit evaluation method, which comprises the following steps:
S1, the big data processing platform receives the data of the voice recognition module, the face recognition module, the body language recognition module and the online data acquisition module;
S2, the big data processing platform processes the data of each module in S1 to obtain the exhibit evaluation result R = α1*n1 + α2*n2 + α3*n3 + α4*n4 + α5*n5 + α6*n6 + α7*n7, wherein n1 + n2 + n3 + n4 + n5 + n6 + n7 = 1 and 0 < ni < 1 for i = 1…7; α1 = number of visits on the day / total number of entries on the day; α2 is the operation frequency; α3 is the sound evaluation result, taking a value of 0, 1, 2, 3 or 4; α4 is the subjective message evaluation data, taking a value of 0, 1, 2, 3 or 4; α5 = number of online clicks / number of online visitors; α6 = number of online activity participations / number of online activity participants; α7 = number of offline activity participations / number of offline activity participants.
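The weighted scoring in step S2 can be sketched as follows. Only the structure R = Σ αi·ni with weights summing to 1 comes from the method itself; the concrete index values and weights below are invented for illustration.

```python
def evaluate_exhibit(alphas, weights):
    """Return R = sum(alpha_i * n_i) for seven index factors.

    alphas  -- the seven index values alpha_1..alpha_7
    weights -- the seven weights n_1..n_7, which must sum to 1
    """
    assert len(alphas) == len(weights) == 7
    assert abs(sum(weights) - 1.0) < 1e-9  # the method requires n1+...+n7 = 1
    return sum(a * n for a, n in zip(alphas, weights))

# Illustrative values: alpha_1 is a visit ratio, alpha_3/alpha_4 are grades
# in {0..4}, alpha_5..alpha_7 are participation ratios.
alphas = [0.6, 2.1, 3, 4, 0.25, 0.5, 0.4]
weights = [0.2, 0.2, 0.15, 0.15, 0.1, 0.1, 0.1]
R = evaluate_exhibit(alphas, weights)
```

Because the αi live on different scales (ratios versus 0–4 grades), the weights in practice would also have to absorb that scale difference; the patent leaves this open.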
Further, α4 in step S2 includes the evaluation speech input through the speech recognition module and the evaluation input through the evaluator.
The invention also provides an exhibit evaluation and pushing method, which comprises the following steps:
S1, the big data processing platform receives the data of the voice recognition module, the face recognition module, the body language recognition module and the online data acquisition module;
S2, the big data processing platform processes the data of each module in S1 to obtain the exhibit evaluation result R = α1*n1 + α2*n2 + α3*n3 + α4*n4 + α5*n5 + α6*n6 + α7*n7, wherein n1 + n2 + n3 + n4 + n5 + n6 + n7 = 1 and 0 < ni < 1 for i = 1…7; α1 = number of visits on the day / total number of entries on the day; α2 is the operation frequency; α3 is the sound evaluation result, taking a value of 0, 1, 2, 3 or 4; α4 is the subjective message evaluation data, taking a value of 0, 1, 2, 3 or 4; α5 = number of online clicks / number of online visitors; α6 = number of online activity participations / number of online activity participants; α7 = number of offline activity participations / number of offline activity participants;
S3, obtaining the ranking of the exhibit evaluation results, and pushing exhibit information according to the ranking. The push may take the form of a ranking list on the exhibition hall's official website, priority pushing on a WeChat official account, and the like.
3. Advantageous effects
Compared with the prior art, the invention has the beneficial effects that:
(1) by collecting, analyzing and processing data such as portraits, voice and posture, the invention realizes multi-dimensional evaluation of exhibits, improves the practicality of visitor evaluation data and the use value of big data, improves interaction and user experience in exhibitions, and ultimately promotes continuous improvement of venue services, exhibit quality and presentation;
(2) the invention has simple structure, reasonable design and easy manufacture.
Drawings
FIG. 1 is a logic diagram of the present invention.
Detailed Description
The invention is further described with reference to specific examples.
As shown in fig. 1, the system comprises a plurality of cameras, a voice recognition module, an online data acquisition module and a big data processing platform. The cameras are arranged around the exhibit and comprise a face recognition module and a body language recognition module: the face recognition module recognizes the facial expressions of visitors facing the exhibit, such as laughing or smiling, and the body language recognition module acquires the visitors' operation actions on each area of the exhibit, such as standing to watch or touching. The voice recognition module captures and recognizes visitors' speech and picks out evaluation-related utterances, and the online data acquisition module collects online data. All four modules transmit their data to the big data processing platform, which processes the data and outputs an exhibit evaluation result.
In a specific implementation, a camera is arranged near the exhibit and a detection area is defined; the flow of people entering this area is counted as visitors to the exhibit. The face recognition module and the body language recognition module in the camera record the movement of the crowd, and the camera recognizes passing visitors by detecting heads against the background, tracking each visitor from entering the monitored area until leaving it. α1 represents the heat (popularity) value of the exhibit; heat analysis reveals which exhibits are most popular and how exhibit placement influences visitors. The heat value α1 is transmitted back to the big data platform as an important index for evaluating the exhibit. In addition, the camera's body language recognition module automatically recognizes visitor behavior, including standing to watch, touching areas of the exhibit, and experiencing the exhibit's functions.
The body language recognition module detects and captures which area of the exhibit a visitor touches and which function (area) of the exhibit the visitor experiences, and classifies visitor behavior into the standing-watch count a1, the touch count a2 and the deep-use count a3, where "deep use" means the visitor interacts with an exhibit that has an interactive function. The visitor operation frequency is α2 = a1*k1 + a2*k2 + a3*k3, where k1 + k2 + k3 = 1 and 0 ≤ k1 ≤ 1, 0 ≤ k2 ≤ 1, 0 ≤ k3 ≤ 1. In a specific implementation, k1, k2 and k3 can be set according to the actual needs of the exhibition hall. For exhibits that must not be touched, the weights of the touch count a2 and the deep-use count a3 can both be set to 0 and the weight k1 of the standing-watch count a1 set to 1. For exhibits with an interactive function, the weight k3 of the deep-use count a3 can be increased, e.g. to 0.7, with the weight k2 of the touch count a2 set to 0.2 and the weight k1 of the standing-watch count a1 set to 0.1. The operation frequency α2 is then obtained from α2 = a1*k1 + a2*k2 + a3*k3 and transmitted to the big data processing platform. The significance of α2 is that the visitor's degree of participation is derived from the frequency of operations on the exhibit, which facilitates optimizing the exhibit placement scheme and improving the presentation of the exhibition. In this embodiment, "deep use" means touching for longer than 20 s, while "standing to watch" means staying in the exhibit area for no more than 20 s without touching the exhibit. Meanwhile, the camera also captures the evaluation data α7 on visitors' offline activity participation. An offline activity is an evaluation activity related to the exhibit, such as a visitor evaluating a certain exhibit and interacting with it during the visit; α7 = number of offline activity participations / number of offline activity participants.
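The operation-frequency computation α2 = a1·k1 + a2·k2 + a3·k3 can be sketched as below. The weight settings mirror the two examples in the text (no-touch exhibit: k1 = 1; interactive exhibit: k1 = 0.1, k2 = 0.2, k3 = 0.7); the behavior counts are invented.

```python
def operation_frequency(a1, a2, a3, k1, k2, k3):
    """alpha_2 = a1*k1 + a2*k2 + a3*k3, with k1 + k2 + k3 = 1.

    a1 -- standing-watch count, a2 -- touch count, a3 -- deep-use count
    """
    assert abs((k1 + k2 + k3) - 1.0) < 1e-9
    return a1 * k1 + a2 * k2 + a3 * k3

# Interactive exhibit: deep use dominates the score.
interactive = operation_frequency(10, 5, 3, 0.1, 0.2, 0.7)
# No-touch exhibit: only standing-watch counts; touches are weighted to zero.
no_touch = operation_frequency(10, 5, 3, 1.0, 0.0, 0.0)
```

Note that α2 is an unnormalized weighted count, unlike the ratio-valued indices α1, α5, α6 and α7; any comparison across indices therefore happens through the n-weights in R.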
The voice recognition module collects visitors' voice data and converts their dialogue into text. When the voices of multiple people are collected, the voiceprint recognition module distinguishes them, ensuring that each visitor's voice is recorded independently without noise interference. A professional vocabulary library is configured in the management background, and semantic definitions are made for the professional vocabulary. After the speech is converted into text, several candidate semantic recognition results are fed back according to the user's language habits and the similarity scores of the semantic recognition model, and the final semantic recognition result is confirmed, thereby obtaining the sound evaluation result α3.
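A minimal sketch of the last step above: mapping a recognized transcript to the sound evaluation result α3 ∈ {0, 1, 2, 3, 4}. The keyword lists are invented placeholders for the patent's configurable "professional vocabulary library", and real semantic matching would use a similarity model rather than substring search.

```python
# Placeholder vocabulary: grade -> evaluation keywords (invented examples).
GRADE_KEYWORDS = {
    4: ["excellent", "amazing"],
    3: ["good", "nice"],
    2: ["okay", "average"],
    1: ["poor", "boring"],
}

def sound_evaluation(transcript):
    """Return alpha_3: the highest grade whose keyword appears, else 0."""
    text = transcript.lower()
    for grade, words in sorted(GRADE_KEYWORDS.items(), reverse=True):
        if any(w in text for w in words):
            return grade
    return 0  # 0 = no evaluation or invalid evaluation result
```

For example, a transcript containing "excellent" yields grade 4, while speech with no recognizable evaluation keyword is treated as invalid (α3 = 0), matching the grading scale defined in the description.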
The visitor's subjective message evaluation data α4 has two sources. The first is the voice data α4′ collected by an external microphone or recording device. After receiving sound, the device first distinguishes different speakers through the voiceprint recognition module in the voice recognition module and splits the sound into multiple tracks; then, according to preset keywords (such as expressions of impressions and feelings), semantic recognition captures the keywords to judge whether a sentence is valid evaluation speech. Sentences that are unclear or irrelevant are automatically discarded, while valid sentences are further classified as "praising" or "disparaging". The second source is the evaluation data α4″ recorded through graders: the visitor records a grade of 0, 1, 2, 3 or 4 on an evaluator placed beside the exhibit. The subjective message evaluation data is then α4 = α4′*m1 + α4″*m2, where m1 + m2 = 1, 0 < m1 < 1 and 0 < m2 < 1. The exhibition hall can set m1 and m2 according to its hardware facilities: a hall fully equipped with microphones and recording devices can increase m1 and decrease m2, e.g. m1 = 0.7, m2 = 0.3, while a hall fully equipped with evaluators beside the exhibits can increase m2 and decrease m1, e.g. m1 = 0.3, m2 = 0.7.
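The two-source combination α4 = α4′·m1 + α4″·m2 can be sketched directly. The weight pairs follow the two examples in the text (recording-equipped hall: m1 = 0.7; evaluator-equipped hall: m2 = 0.7); the grades are invented.

```python
def subjective_message(grade_voice, grade_evaluator, m1, m2):
    """alpha_4 = alpha_4' * m1 + alpha_4'' * m2, with m1 + m2 = 1.

    grade_voice     -- alpha_4': grade in {0..4} from captured speech
    grade_evaluator -- alpha_4'': grade in {0..4} entered on the evaluator
    """
    assert abs((m1 + m2) - 1.0) < 1e-9
    return grade_voice * m1 + grade_evaluator * m2

# Hall relying mainly on microphones/recorders vs. mainly on evaluators.
recording_hall = subjective_message(3, 4, 0.7, 0.3)
evaluator_hall = subjective_message(3, 4, 0.3, 0.7)
```

With the same underlying grades, the evaluator-heavy weighting pulls α4 toward the evaluator's score, which is exactly the tuning the description recommends per hall.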
The online data acquisition module collects visitors' online click actions and performs marking, classification and data statistics. Online click actions refer to actions such as "liking" an exhibit on the exhibition hall's official website; α5 = number of online clicks on a certain exhibit / number of online visitors. The online data acquisition module also collects the evaluation data α6 on visitors' online activity participation, which refers to the frequency with which visitors click and "like" on the official website; α6 = number of online activity participations / number of online activity participants.
The data collected by each module is returned to the big data processing platform, which processes the data of each module in S1 to obtain the exhibit evaluation result R = α1*n1 + α2*n2 + α3*n3 + α4*n4 + α5*n5 + α6*n6 + α7*n7, where n1 + n2 + n3 + n4 + n5 + n6 + n7 = 1 and 0 < ni < 1 for i = 1…7. Here α1 is the exhibit heat value, α1 = number of visits on the day / total number of entries on the day, where the number of visits refers to visits to the exhibit that day and the total number of entries refers to all entries into the exhibition hall that day. α2 is the operation frequency. α3 is the voice evaluation result, obtained by capturing visitors' spoken keywords and matching them against the model: α3 = 0 means no evaluation or an invalid evaluation result, 1 means "poor", 2 means "medium", 3 means "good" and 4 means "excellent". α4 is the subjective message evaluation data on the same scale: 0 means no message evaluation or an invalid one, 1 means "poor", 2 "medium", 3 "good" and 4 "excellent". α5 is the per-capita online click rate, α5 = number of online clicks on the exhibit / number of online visitors, where the online visitors are the total visitors to the exhibition hall's official website. α6 is the online activity participation evaluation data, α6 = number of online activity participations for the exhibit / total number of online activity participants. α7 is the offline activity participation evaluation data, α7 = number of offline activity participations for the exhibit / total number of offline activity participants. It should be noted that n1–n7 can be set according to the type of exhibition. For example, an evaluation model for interactive exhibits may treat the operation frequency α2 as a high-weight index factor and the subjective message evaluation data α4 as a low-weight factor, increasing n2 and decreasing n4 accordingly, e.g. n1 = 0.1, n2 = 0.4, n3 = 0.1, n4 = 0.1, n5 = 0.1, n6 = 0.1, n7 = 0.1. An evaluation model for ornamental exhibits may instead treat α4 as the high-weight factor and α2 as the low-weight factor, increasing n4 and decreasing n2, e.g. n1 = 0.1, n2 = 0.1, n3 = 0.1, n4 = 0.4, n5 = 0.1, n6 = 0.1, n7 = 0.1. In this way each exhibit can be evaluated objectively according to its specific characteristics. Exhibit information is then pushed according to the ranking of the evaluation results, for example as a ranking list on the exhibition hall's official website or as pushes on a WeChat official account.
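The final ranking-and-push step can be sketched as follows; the exhibit names and scores are invented, and the actual push channel (official website list, WeChat account) is outside the scope of the sketch.

```python
def rank_exhibits(scores):
    """scores: {exhibit_name: R}. Return names sorted by R, highest first.

    The resulting order is the push priority: the top-ranked exhibit is
    listed or pushed first.
    """
    return sorted(scores, key=scores.get, reverse=True)

# Invented evaluation results R for three exhibits.
scores = {"pendulum": 1.7, "hologram": 2.4, "fossil": 0.9}
push_order = rank_exhibits(scores)
```

Here `push_order` begins with the highest-scoring exhibit, so whatever push channel is used simply consumes the list front to back.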
Claims (5)
1. An exhibit evaluation device, characterized in that: the device comprises a plurality of cameras, a voice recognition module, an online data acquisition module and a big data processing platform; the cameras are arranged around an exhibit and comprise a face recognition module and a body language recognition module; the face recognition module is used for recognizing the facial expressions of visitors facing the exhibit; the body language recognition module is used for acquiring the visitors' operation actions on each area of the exhibit; the voice recognition module is used for capturing and recognizing the voices of the audience and recognizing evaluation-related speech; the online data acquisition module acquires online data; the voice recognition module, the face recognition module, the body language recognition module and the online data acquisition module transmit their acquired data to the big data processing platform, and the big data processing platform processes the transmitted data and outputs an exhibit evaluation result.
2. The exhibit evaluation device according to claim 1, characterized in that: the voice recognition module also comprises a voiceprint recognition module.
3. An exhibit evaluation method characterized by: the method comprises the following steps:
S1, the big data processing platform receives the data of the voice recognition module, the face recognition module, the body language recognition module and the online data acquisition module;
S2, the big data processing platform processes the data of each module in S1 to obtain the exhibit evaluation result R = α1*n1 + α2*n2 + α3*n3 + α4*n4 + α5*n5 + α6*n6 + α7*n7, wherein n1 + n2 + n3 + n4 + n5 + n6 + n7 = 1 and 0 < ni < 1 for i = 1…7; α1 = number of visits on the day / total number of entries on the day; α2 is the operation frequency; α3 is the sound evaluation result, taking a value of 0, 1, 2, 3 or 4; α4 is the subjective message evaluation data, taking a value of 0, 1, 2, 3 or 4; α5 = number of online clicks / number of online visitors; α6 = number of online activity participations / number of online activity participants; α7 = number of offline activity participations / number of offline activity participants.
4. The exhibit evaluation method according to claim 3, characterized in that α4 in step S2 includes the evaluation speech input through the speech recognition module and the evaluation input through the evaluator.
5. A method for evaluating and pushing exhibits is characterized in that: the method comprises the following steps:
S1, the big data processing platform receives the data of the voice recognition module, the face recognition module, the body language recognition module and the online data acquisition module;
S2, the big data processing platform processes the data of each module in S1 to obtain the exhibit evaluation result R = α1*n1 + α2*n2 + α3*n3 + α4*n4 + α5*n5 + α6*n6 + α7*n7, wherein n1 + n2 + n3 + n4 + n5 + n6 + n7 = 1 and 0 < ni < 1 for i = 1…7; α1 = number of visits on the day / total number of entries on the day; α2 is the operation frequency; α3 is the sound evaluation result, taking a value of 0, 1, 2, 3 or 4; α4 is the subjective message evaluation data, taking a value of 0, 1, 2, 3 or 4; α5 = number of online clicks / number of online visitors; α6 = number of online activity participations / number of online activity participants; α7 = number of offline activity participations / number of offline activity participants;
S3, obtaining the ranking of the exhibit evaluation results, and pushing exhibit information according to the ranking.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911406923.6A CN111144359B (en) | 2019-12-31 | 2019-12-31 | Exhibit evaluation device and method and exhibit pushing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111144359A (en) | 2020-05-12 |
CN111144359B CN111144359B (en) | 2023-06-30 |
Family
ID=70522432
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911406923.6A Active CN111144359B (en) | 2019-12-31 | 2019-12-31 | Exhibit evaluation device and method and exhibit pushing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111144359B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112750213A (en) * | 2020-12-29 | 2021-05-04 | 深圳市顺易通信息科技有限公司 | Parking service information pushing method, device, equipment and medium |
CN113312507A (en) * | 2021-05-28 | 2021-08-27 | 成都威爱新经济技术研究院有限公司 | Digital exhibition hall intelligent management method and system based on Internet of things |
CN115190324A (en) * | 2022-06-30 | 2022-10-14 | 广州市奥威亚电子科技有限公司 | Method, device and equipment for determining online and offline interactive live broadcast heat |
CN116957633A (en) * | 2023-09-19 | 2023-10-27 | 武汉创知致合科技有限公司 | Product design user experience evaluation method based on intelligent home scene |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010091675A (en) * | 2008-10-06 | 2010-04-22 | Mitsubishi Electric Corp | Speech recognizing apparatus |
CN105046601A (en) * | 2015-07-09 | 2015-11-11 | 传成文化传媒(上海)有限公司 | User data processing method and system |
CN106056310A (en) * | 2016-06-17 | 2016-10-26 | 维朗(北京)网络技术有限公司 | Intelligent push assessment system and method for museum exhibit display effect |
CN106682637A (en) * | 2016-12-30 | 2017-05-17 | 深圳先进技术研究院 | Display item attraction degree analysis and system |
CN107122399A (en) * | 2017-03-16 | 2017-09-01 | 中国科学院自动化研究所 | Combined recommendation system based on Public Culture knowledge mapping platform |
CN107832370A (en) * | 2017-10-27 | 2018-03-23 | 上海享岭网络科技有限公司 | A kind of mobile electron shows system and construction method between virtual exhibition |
CN107945175A (en) * | 2017-12-12 | 2018-04-20 | 百度在线网络技术(北京)有限公司 | Evaluation method, device, server and the storage medium of image |
CN109359628A (en) * | 2018-11-28 | 2019-02-19 | 上海风语筑展示股份有限公司 | A kind of exhibition big data collection analysis platform |
US20190108196A1 (en) * | 2017-10-09 | 2019-04-11 | Qentinel Oy | Predicting quality of an information system using system dynamics modelling and machine learning |
CN109658310A (en) * | 2018-11-30 | 2019-04-19 | 安徽振伟展览展示有限公司 | One kind being used for exhibition room museum intelligent identifying system |
CN110545297A (en) * | 2018-05-28 | 2019-12-06 | 上海驿卓通信科技有限公司 | multi-information data analysis and display system for exhibition hall |
- 2019-12-31: CN application CN201911406923.6A filed; granted as patent CN111144359B (status: Active)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010091675A (en) * | 2008-10-06 | 2010-04-22 | Mitsubishi Electric Corp | Speech recognizing apparatus |
CN105046601A (en) * | 2015-07-09 | 2015-11-11 | 传成文化传媒(上海)有限公司 | User data processing method and system |
CN106056310A (en) * | 2016-06-17 | 2016-10-26 | 维朗(北京)网络技术有限公司 | Intelligent push assessment system and method for museum exhibit display effect |
CN106682637A (en) * | 2016-12-30 | 2017-05-17 | 深圳先进技术研究院 | Display item attraction degree analysis and system |
CN107122399A (en) * | 2017-03-16 | 2017-09-01 | 中国科学院自动化研究所 | Combined recommendation system based on Public Culture knowledge mapping platform |
US20190108196A1 (en) * | 2017-10-09 | 2019-04-11 | Qentinel Oy | Predicting quality of an information system using system dynamics modelling and machine learning |
CN107832370A (en) * | 2017-10-27 | 2018-03-23 | 上海享岭网络科技有限公司 | A kind of mobile electron shows system and construction method between virtual exhibition |
CN107945175A (en) * | 2017-12-12 | 2018-04-20 | 百度在线网络技术(北京)有限公司 | Evaluation method, device, server and the storage medium of image |
CN110545297A (en) * | 2018-05-28 | 2019-12-06 | 上海驿卓通信科技有限公司 | Multi-information data analysis and display system for exhibition halls |
CN109359628A (en) * | 2018-11-28 | 2019-02-19 | 上海风语筑展示股份有限公司 | Exhibition big data collection and analysis platform |
CN109658310A (en) * | 2018-11-30 | 2019-04-19 | 安徽振伟展览展示有限公司 | Intelligent identification system for exhibition halls and museums |
Non-Patent Citations (2)
Title |
---|
Y. Kitano et al.: "Face-recognition based on higher-order local auto correlation feature for Sound Spot control" * |
Chang Xiaoyue: "Research on the Evaluation Index System of Exhibition Brand Image" * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112750213A (en) * | 2020-12-29 | 2021-05-04 | 深圳市顺易通信息科技有限公司 | Parking service information pushing method, device, equipment and medium |
CN112750213B (en) * | 2020-12-29 | 2022-06-14 | 深圳市顺易通信息科技有限公司 | Parking service information pushing method, device, equipment and medium |
CN113312507A (en) * | 2021-05-28 | 2021-08-27 | 成都威爱新经济技术研究院有限公司 | Digital exhibition hall intelligent management method and system based on Internet of things |
CN115190324A (en) * | 2022-06-30 | 2022-10-14 | 广州市奥威亚电子科技有限公司 | Method, device and equipment for determining the popularity of online-offline interactive live broadcasts |
CN115190324B (en) * | 2022-06-30 | 2023-08-29 | 广州市奥威亚电子科技有限公司 | Method, device and equipment for determining the popularity of online-offline interactive live broadcasts |
CN116957633A (en) * | 2023-09-19 | 2023-10-27 | 武汉创知致合科技有限公司 | Product design user experience evaluation method based on intelligent home scene |
CN116957633B (en) * | 2023-09-19 | 2023-12-01 | 武汉创知致合科技有限公司 | Product design user experience evaluation method based on intelligent home scene |
Also Published As
Publication number | Publication date |
---|---|
CN111144359B (en) | 2023-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111144359A (en) | Exhibit evaluation device and method and exhibit pushing method | |
Kossaifi et al. | Sewa db: A rich database for audio-visual emotion and sentiment research in the wild | |
Song et al. | Spectral representation of behaviour primitives for depression analysis | |
US11783645B2 (en) | Multi-camera, multi-sensor panel data extraction system and method | |
Cabrera-Quiros et al. | The MatchNMingle dataset: a novel multi-sensor resource for the analysis of social interactions and group dynamics in-the-wild during free-standing conversations and speed dates | |
Soleymani et al. | Analysis of EEG signals and facial expressions for continuous emotion detection | |
US10909386B2 (en) | Information push method, information push device and information push system | |
Scherer et al. | A generic framework for the inference of user states in human computer interaction: How patterns of low level behavioral cues support complex user states in HCI | |
Mariooryad et al. | Exploring cross-modality affective reactions for audiovisual emotion recognition | |
CN109460737A | Multi-modal speech emotion recognition method based on an enhanced residual neural network | |
JP2008262046A (en) | Conference visualizing system and method, conference summary processing server | |
CN109902912B (en) | Personalized image aesthetic evaluation method based on character features | |
CN116484318B (en) | Lecture training feedback method, lecture training feedback device and storage medium | |
Jin et al. | Attention-block deep learning based features fusion in wearable social sensor for mental wellbeing evaluations | |
CN113822192A (en) | Method, device and medium for identifying emotion of escort personnel based on Transformer multi-modal feature fusion | |
Tzirakis et al. | Real-world automatic continuous affect recognition from audiovisual signals | |
Lin et al. | Looking at the body: Automatic analysis of body gestures and self-adaptors in psychological distress | |
CN114242235A (en) | Autism patient portrait method based on multi-level key characteristic behaviors | |
Galvan et al. | Audiovisual affect recognition in spontaneous filipino laughter | |
Masciadri et al. | Disseminating Synthetic Smart Home Data for Advanced Applications. | |
Cabrera-Quiros et al. | Multimodal self-assessed personality estimation during crowded mingle scenarios using wearables devices and cameras | |
CN112905811A (en) | Teaching audio and video pushing method and system based on student classroom behavior analysis | |
WO2012168798A2 (en) | Systems and methods for pattern and anomaly pattern analysis | |
CN113688685B (en) | Sign language identification method based on interaction scene | |
Sun et al. | Towards mimicry recognition during human interactions: automatic feature selection and representation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||