CN116414287B - Intelligent interaction control method and system for multimedia equipment in digital exhibition hall


Info

Publication number
CN116414287B
Authority
CN
China
Prior art keywords
target
display
display screen
audience
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310048714.9A
Other languages
Chinese (zh)
Other versions
CN116414287A (en)
Inventor
奚学锋
赵政
曹李建
周坤
张科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Jinzishu Intelligent Technology Co ltd
Original Assignee
Suzhou Jinzishu Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Jinzishu Intelligent Technology Co ltd filed Critical Suzhou Jinzishu Intelligent Technology Co ltd
Priority to CN202311048248.0A (CN117193616A)
Priority to CN202310048714.9A (CN116414287B)
Publication of CN116414287A
Application granted
Publication of CN116414287B
Legal status: Active (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention relates to the technical field of multimedia device control for digital exhibition halls, and in particular to an intelligent interaction control method and system for digital exhibition hall multimedia devices. The method comprises the following steps: acquiring authorized identity information and behavior information of a target audience in a digital exhibition hall; acquiring information of each display target in the digital exhibition hall; generating interaction information according to the identity information and behavior information of the target audience and the information of each display target, wherein the interaction information includes the association degree between the target audience and each display target as well as the display content and display mode of each display target; determining at least part of the multimedia devices in the digital exhibition hall as target multimedia devices according to the interaction information, and generating, by a central control data processor, control instructions for controlling the target multimedia devices; and controlling each target multimedia device to interact with the target audience according to the control instructions. The invention reduces the interactive operations required of the user and allows the user to concentrate on the content displayed in the digital exhibition hall.

Description

Intelligent interaction control method and system for multimedia equipment in digital exhibition hall
Technical Field
The invention relates to the technical field of digital exhibition hall multimedia equipment control, in particular to an intelligent interaction control method and system for digital exhibition hall multimedia equipment.
Background
A digital exhibition hall uses multimedia and digital technology as its exhibition medium. It applies the latest film and animation technology, combined with graphics, digital and multimedia technology, to attract visitors with novel presentation techniques and to realize an exhibition form based on human-computer interaction. A digital exhibition hall is usually an integrated display platform combining various multimedia exhibition systems, including digital sand tables, ring/arc/dome screen theaters, welcome floor-screen systems, interactive bars, interactive mirrors, touch screens, and the like. It also integrates various high-tech means to make the exhibition meaningful and attractive, and deeply explores the background and meaning of the exhibits through the combined use of media such as video, sound and animation, bringing a high-tech visual impact to the audience. To enhance the viewing experience, digital exhibition halls often focus on interaction between the audience and the multimedia devices in the hall. At present, digital exhibition halls usually rely on interaction modes such as a mouse, a keyboard, a touch screen, user voice and user gestures to realize interaction between the audience and the multimedia devices. In the prior art, for example, the human-computer interaction subsystem in the patent with publication number CN111462334A recognizes sounds made by the user through speech recognition technology, determines the user's actions through motion capture and recognition, and generates corresponding feedback information from the user's operations; the user can also query exhibition-related information through the touch screen of an intelligent robot. Although the interaction modes are varied, the user (audience) must perform special operations during viewing to input demand information to the exhibition hall control system, for example clicking a mouse, touching a touch screen, or making specific gestures or actions. These additional operations beyond viewing tend to break the continuity of the audience's observation and distract attention from the displayed content, thus reducing the viewing experience.
Disclosure of Invention
In view of the above, embodiments of the present invention provide an intelligent interaction control method and system for digital exhibition hall multimedia devices, which solve the technical problem that current digital exhibition hall interaction modes require frequent operations by the user, thereby interrupting the continuity of the audience's observation and distracting the audience's attention from the displayed content.
In a first aspect, an embodiment of the invention provides an intelligent interaction control method for digital exhibition hall multimedia devices, which includes the following steps:
acquiring authorized identity information and behavior information of a target audience in a digital exhibition hall, wherein the identity information at least comprises age, gender, occupation and hobbies of the target audience;
acquiring information of each display target in a digital exhibition hall;
generating interaction information according to the identity information, the behavior information and the information of each display target of the target audience, wherein the interaction information at least comprises the association degree of the target audience and each display target and the display content and the display mode of the display target;
determining at least one part of multimedia equipment in the digital exhibition hall as target multimedia equipment according to the interaction information, and generating a control instruction for controlling the target multimedia equipment by a central control data processor;
And controlling each target multimedia device to perform interaction with the target audience according to the control instruction.
In a second aspect, the present invention provides an intelligent interaction control system for digital exhibition hall multimedia devices, the system comprising: a central control data processor, multimedia devices and a data acquisition device. The multimedia device types include: spotlights, display screens, floor screens, water-flow screens, ribbon screens and speakers; the data acquisition device comprises a monitoring camera; and the system adopts the intelligent interaction control method of the first aspect.
The beneficial effects are as follows: according to the intelligent interaction control method and system for digital exhibition hall multimedia devices, interaction information is generated from the audience's identity information and behavior information in the exhibition hall, control instructions are then generated from the interaction information, and the display targets are presented through the interaction of each multimedia device with the user as directed by the control instructions. In the process of interaction between the digital exhibition hall and the user through the multimedia devices, the user is not required to deliberately touch a touch screen, move a mouse, click a keyboard or make control gestures outside of normal viewing; the user's natural viewing process is not interrupted and the user can concentrate on the displayed content, which significantly improves the audience's viewing experience. The invention comprehensively judges the user's interaction needs by combining the two dimensions of the audience's identity information and behavior information, and can therefore provide an interactive experience that matches the audience's needs while reducing extra operations by the user.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are briefly described below; other drawings obtained from these drawings without inventive effort by a person skilled in the art remain within the scope of the present invention.
FIG. 1 is a flow chart of the intelligent interactive control method for the digital exhibition hall multimedia equipment;
FIG. 2 is a flow chart of a method for determining a display mode of a display object according to a relevance interval according to the present invention;
FIG. 3 is a flow chart of a method for generating control instructions according to interaction information in a simple display mode of the present invention;
FIG. 4 is a flow chart of a method of generating a boot animation according to the present invention;
FIG. 5 is a flowchart illustrating a method for generating control commands according to interaction information in a detailed display mode of the present invention;
FIG. 6 is a flow chart of a method for adjusting a display image according to illumination conditions of the present invention;
FIG. 7 is a flow chart of a method for generating control instructions based on interaction information in an immersive presentation mode of the present invention;
FIG. 8 is a flow chart of a method for calculating the association of a target audience with a presentation object according to the present invention;
FIG. 9 is a flowchart of a method for updating correction coefficients according to user behavior information according to the present invention;
FIG. 10 is a schematic diagram of a digital exhibition venue of the present invention;
FIG. 11 is a schematic diagram of a planned path of movement according to the present invention;
FIG. 12 is a schematic view of an original guide image of the present invention in an initial position;
FIG. 13 is a schematic illustration of the original guide image of the present invention as it moves to another position along the path of travel;
FIG. 14 is a diagram illustrating the acquisition of the length of the moving path under the viewing angle of a target audience according to the present invention;
FIG. 15 is a schematic illustration of the present invention with a flowing stream as a guide animation;
FIG. 16 is a schematic illustration of the present invention projecting a second intermediate image onto a display screen;
FIG. 17 is a schematic diagram showing the correspondence between the projection of the pixels of the second intermediate image onto the display screen and the pixels on the display screen according to the present invention;
FIG. 18 is a schematic view of a digital exhibition of the present invention in a detailed exhibition mode;
FIG. 19 is a schematic view of a digital exhibition of the present invention in an immersive exhibition mode;
FIG. 20 is a schematic illustration of the present invention for determining a gaze range from a head angle and a gaze angle of a target viewer;
FIG. 21 is a schematic view of the present invention for determining the angle of view of a target audience;
Fig. 22 is a block diagram of the intelligent interactive control system for the digital exhibition hall multimedia device according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It should be noted that, if not conflicting, the embodiments of the present invention and the features of the embodiments may be combined with each other, which are all within the protection scope of the present invention.
Example 1
The embodiment of the invention provides an intelligent interaction control method for digital exhibition hall multimedia devices. After the central control data processor analyzes and processes the identity information and behavior information of the audience, it controls each multimedia device in the exhibition hall, using a variety of technical means, to display the display target 1 according to the user's needs. The multimedia devices include a first display screen 3, a second display screen 4 and a plurality of third display screens 5 in one-to-one correspondence with the display targets 1. The first display screen 3 is laid on the floor of the exhibition hall and is walked on by the target audience 2; the second display screen 4 is laid on an exhibition stand and is used for displaying the display target 1; one end of each third display screen 5 is connected with the first display screen 3 and the opposite end is connected with the second display screen 4 at a position corresponding to the display target 1; and the first display screen 3, the second display screen 4 and the third display screens 5 are each controlled by the central control data processor. The multimedia devices in this embodiment further comprise a fourth display screen 12, which is laid on the wall surfaces and ceiling of the digital exhibition hall. In order to make the details of the display target 1 clearly visible when the target audience 2 views the display target 1 at close range, the multimedia devices of this embodiment further include spotlights 10. To enhance the user's viewing experience in immersive presentations, the multimedia devices of this embodiment also include sound equipment.
The central control data processor is provided with 1 network channel, 2 RS485 channels and 2 RS232 channels, is responsible for distributing serial port signals, and responds quickly with no delay. Serial communication uses user-defined hexadecimal commands, and the serial port parameters can be freely set, which makes it easy to link smoothly with the controlled devices. The product parameters of the central control data processor are as follows: ARM + Linux system; power supply: DC 24 V; communication interfaces: Modbus bus, UDP and TCP communication; installation: DIN standard rail; dimensions: length × width × height = 160 mm × 95 mm × 56 mm. As shown in fig. 1, the control method of the present embodiment includes the following steps:
S1: acquiring authorized identity information and behavior information of a target audience 2 in the digital exhibition hall, wherein the identity information at least comprises the age, sex, occupation and hobbies of the target audience 2. The audience may choose whether to accept the interaction mode of this embodiment before entering the exhibition hall; if so, some identity information of the user can be obtained with the audience's authorization. The audience can input their own identity information to the central control data processor through an input device, and these operations are completed before the user enters the exhibition hall, so the viewing experience after entering the hall is not affected. If an audience member does not accept the interaction method of this embodiment, a conventional interaction method can be used; an audience member who accepts the interaction method of this embodiment is regarded as a target audience 2. The behavior information of the audience refers to natural behaviors during viewing after the audience enters the exhibition hall, such as staying near a display target 1 or gazing at a display target 1. Various sensors arranged in the digital exhibition hall collect data related to the audience's behavior, and the central control data processor then analyzes and processes the data to obtain the user's behavior information.
S2: acquiring information of each display target 1 in the digital exhibition hall. A display target 1 can be a real exhibit or a virtual display object, for example a statue of a historical figure, an item of cultural heritage, or the like, without limitation.
S3: generating interaction information according to the identity information and behavior information of the target audience 2 and the information of each display target 1, wherein the interaction information at least comprises the association degree between the target audience 2 and each display target 1 as well as the display content and display mode of each display target 1. In this step, the identity information and behavior information of the target audience 2 are combined with the information of the display targets 1 and analyzed to obtain interaction information that meets the user's interaction needs. The interaction information determines the display content and display mode that satisfy the audience's requirements. The association degree refers to the probability that the audience is interested in each display target 1; the higher the association degree, the higher the probability that the audience is interested in that display target 1.
S4: determining at least one part of multimedia equipment in the digital exhibition hall as target multimedia equipment according to the interaction information, and generating a control instruction for controlling the target multimedia equipment by a central control data processor;
This step controls the multimedia devices to cooperate according to the display content and display mode determined in the interaction information, and to present the display content to the viewer in that display mode. Since different multimedia devices are often used for different display modes and display contents, this step can select the multimedia devices to be used according to the interaction information, and then determine the actions to be executed by these multimedia devices according to the display content and display mode.
S5: each target multimedia device is controlled to interact with the target audience 2 according to the control instructions. The central control data processor sends the generated control instructions to the corresponding target multimedia devices. Each target multimedia device then performs the interaction according to its control instruction; for example, the display screens display the specified images and videos, the sound equipment plays the specified voice and music, and the lighting devices produce the specified light effects. As shown in fig. 2, in this embodiment, S3: generating interaction information according to the authorized identity information and behavior information of the target audience 2 and the information of each display target 1, wherein the interaction information at least comprises the association degree between the target audience 2 and each display target 1 as well as the display content and display mode of each display target 1, further includes the following steps:
S301: acquiring a first association degree interval, a second association degree interval and a third association degree interval divided according to the association degree value range, together with the association degree of each display target 1. Because the same audience may have different association degrees with different display targets 1, this step divides the association degree into intervals, so that the display content and display mode of each display target 1 can be reasonably selected according to the interval to which its association degree belongs. In this embodiment the association degree is divided into three intervals, where the first association degree interval corresponds to a relatively low association degree and the third to a relatively high one; that is, the association degrees of the first, second and third association degree intervals increase in sequence. The ranges of the first, second and third association degree intervals in this embodiment may be set empirically. For example, the association degree is a real number ranging from 0 to 100, where the first association degree interval is [20, 40], the second association degree interval is (40, 80], and the third association degree interval is (80, 100].
S302: when the association degree of a certain display target 1 is in the first association degree interval, the display mode of that display target 1 is the simple display mode; that is, when the association degree of the target audience 2 with a certain display target 1 is low, the display target 1 is displayed in the first mode. Because of the low association degree, the first mode may display the display target 1 in a more compact form and/or with more concise content, without showing too much detail.
S303: when the association degree of a certain display target 1 is in the second association degree interval, the display mode of that display target 1 is the detailed display mode; that is, when the association degree of the target audience 2 with a certain display target 1 is higher, the display target 1 is displayed in the second mode. Because of the higher association degree, the second mode may present the display target 1 more richly and/or in greater depth, so that the target audience 2 can understand the display target 1 of interest more fully and deeply.
S304: when the association degree of a certain display target 1 is in the third association degree interval, the display mode of that display target 1 is the immersive display mode. When the association degree of the target audience 2 with a certain display target 1 is in the third association degree interval, the target audience 2 is highly interested in the display target 1, so the display target 1 can be displayed in an immersive mode: the multimedia devices in the digital exhibition hall build the whole hall into an environment suited to the display target 1, and the target audience 2 is then immersed in the environment related to the display target 1. In this embodiment, the display mode of a display target is selected automatically according to the association degree interval to which its association degree belongs. When the association degree shows that the target audience has some interest in the display target but the interest is not particularly high, the simple display mode presents concise information about the display target while guiding the target audience's attention to it, so that the user can quickly decide from this brief information whether the display target needs to be explored further. When the association degree shows that the target audience is fairly interested in the display target, the detailed display mode is adopted, so that the target audience can fully understand the display target. When the association degree shows that the target audience is very interested in the display target, the immersive display mode is adopted, so that the user can observe the display target while being placed within a scene related to it. Because different display modes are adopted according to the target audience's association degree with each display target, the various multimedia devices in the digital exhibition hall can provide the most suitable display and interaction mode for the target audience's display needs.
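To make the mode-selection logic in S301–S304 concrete, the following is a minimal sketch; the interval boundaries (20/40/80/100) reuse the example values given above, and the function and enum names are illustrative assumptions rather than part of the patented system.

```python
from enum import Enum

class DisplayMode(Enum):
    NONE = 0        # association below the first interval: no active display
    SIMPLE = 1      # first interval: brief guidance toward the display target
    DETAILED = 2    # second interval: detailed text/image/voice introduction
    IMMERSIVE = 3   # third interval: whole-hall scene rendering with sound

def select_display_mode(association: float) -> DisplayMode:
    """Map an association degree (0-100) to a display mode.

    Interval boundaries follow the example in S301:
    first [20, 40], second (40, 80], third (80, 100].
    """
    if association < 20:
        return DisplayMode.NONE
    if association <= 40:
        return DisplayMode.SIMPLE
    if association <= 80:
        return DisplayMode.DETAILED
    return DisplayMode.IMMERSIVE

# Example: choose a mode for every display target in the hall
associations = {"target_A": 35.0, "target_B": 62.5, "target_C": 91.0}
modes = {name: select_display_mode(a) for name, a in associations.items()}
```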
There are also techniques that determine the location of a viewer in a digital exhibition hall via WiFi and then provide guidance based on that location. However, such techniques only push navigation information to the mobile client used by the audience according to the audience's position; they do not interact with the audience automatically on the basis of the audience's rich behavior information, so the user's interaction needs cannot be acquired accurately, and the extra operations the user must perform on the mobile client are not reduced. As an optional but advantageous implementation, as shown in fig. 10, the multimedia devices in this embodiment include a first display screen 3, a second display screen 4 and a plurality of third display screens 5 in one-to-one correspondence with the display targets 1. The first display screen 3 is laid on the floor of the exhibition hall and is walked on by the target audience 2; the second display screen 4 is laid on the exhibition stand and is used for displaying the display target 1; one end of each third display screen 5 is connected with the first display screen 3 and the opposite end is connected with the second display screen 4 at a position corresponding to the display target 1; and the first display screen 3, the second display screen 4 and the third display screens 5 are each controlled by the central control data processor. In this embodiment, the first display screen 3 covers the floor of the digital exhibition hall, so its display capability can be used to improve the display effect of the hall. The second display screen 4 is configured as an exhibition stand on which the display target 1 can be placed; the second display screen 4 shows related text, images, animations and so on according to the display mode and display content. The third display screen 5 connects the first display screen 3 and the second display screen 4 to form an integrated display surface. Since the audience walks or stays on the first display screen 3 during the exhibition and the display target 1 is placed on the second display screen 4, the audience and the display target 1 can be merged into the same scene through these three display screens. The interaction information includes the position of the target audience 2 in the digital exhibition hall, and the information of the display target 1 includes the position of the display target 1 in the hall; the position of the target audience 2 in the digital exhibition hall is set as the first position, and the position of the display target 1 in the digital exhibition hall is set as the second position. The display content of the display target 1 includes a first display content, which is a rough (concise) display content. When the display mode of the display target 1 is the simple display mode, as shown in fig. 3, the step S4: determining at least part of the multimedia devices in the digital exhibition hall as target multimedia devices according to the interaction information, and generating, by the central control data processor, control instructions for controlling the target multimedia devices, includes the following steps:
S41: acquiring the display targets 1 whose association degree is in the first association degree interval as first target display targets 1. When the audience has just entered the digital exhibition hall, the control system has obtained little information about the user's viewing behavior, and at this point mainly uses the identity information entered by the user before entering the hall to judge which display targets 1 may be of interest. The association degree of each display target 1 is therefore low, and only certain display targets 1 that match the audience's identity information well fall in the first association degree interval; these display targets 1 are taken as the first target display targets 1. S42: acquiring an original guide image 7 for guiding the target audience 2 to pay attention to a first target display target 1, wherein the guide image includes the first display content of that target display target 1. Since a display target 1 whose association degree falls in the first association degree interval is not of particular interest to the target audience 2, the display target 1 is not described and presented in detail in the simple display mode; the first display content is only a brief textual description and a simple image of the target display target 1. The simple display mode focuses on guiding the user's attention to the first target display target 1. S43: obtaining a moving path 6 of the original guide image 7 according to the first position and the second position; as shown in fig. 11, the moving path 6 starts from the first position, sequentially passes over the first display screen 3, the third display screen 5 corresponding to the first target display target 1 and the second display screen 4, and extends to the second position. This embodiment uses a moving animation to guide the audience's eyes toward the first target display target 1, so that after entering the digital exhibition hall the audience can quickly find, among the many display targets 1, the one they are most likely to be interested in. As shown in fig. 12 and 13, the guide animation is an animation in which the original guide image 7 moves from the first position to the second position along the moving path 6, so the audience's gaze can follow the guide animation to the position of the first target display target 1. The guide animation can be a sequence of arrows moving from the position of the target audience 2 to the position of the first target display target 1, a stream slowly flowing from the position of the target audience 2 to the position of the first target display target 1, or a scroll slowly unrolling from the position of the target audience 2 to the position of the first target display target 1.
S44: processing the original guide image 7 according to the first position and the moving path 6 to generate a target guide animation. S45: the central control data processor generates, according to the target guide animation, control instructions for controlling the first display screen 3, the second display screen 4 and the third display screen 5 to display the guide animation. After the guide animation is generated, the central control data processor sends the data of the moving animation and the corresponding control instructions to the first display screen 3, the second display screen 4 and the third display screen 5 respectively, and controls the three display screens to execute the display commands so that the guide animation is shown through their coordinated cooperation. As shown in fig. 4, in this embodiment S44: processing the original guide image 7 according to the first position and the moving path 6 to generate a target guide animation, includes the following steps: S441: acquiring the length of the moving path 6 under the viewing angle of the target audience 2 according to the first position and the moving path 6. Since the viewing angle at which the target audience 2 sees the guide animation differs at different positions, the viewing effect also differs. In order to let the target audience 2 see the best guide animation effect at their position, this embodiment generates the guide animation from the audience's perspective. As shown in fig. 14, this may be done by virtual image projection: an image with the most ideal effect is placed at the audience's optimal viewing position, the ideal image is then projected onto the display screens, and the resulting projected image is used as the image to be shown on the display screens, so that the display effect seen by the target audience 2 is closest to the ideal image. For this purpose, the moving path 6 is projected from the display screens onto the plane at the audience's optimal viewing position, yielding the projection 14 of the moving path; the length of this projection is the length of the moving path 6 under the viewing angle of the target audience 2. The viewing angle of the target audience is obtained from the target audience's position in the digital exhibition hall, which can be acquired from images captured by the monitoring cameras in the hall. The first display screen may also be configured as a capacitive screen: when the target audience stands at different positions on the capacitive screen, different current changes are produced, and the position of the target audience on the capacitive screen can be determined from these current changes.
Then, an image of the target audience is captured by the monitoring camera, the height of the target audience's eyes is determined from the position of the eyes in the image, and the position of the target audience's eyes in the digital exhibition hall is thereby determined; this position is taken as the viewpoint of the target audience, and the human viewing angle at this position is defined as the viewing angle of the target audience. S442: performing extension processing on the original guide image 7 according to the length of the moving path 6 to obtain a first intermediate image containing the original guide image 7. When the relative positions of the target audience 2 and the first target display target 1 differ, the length of the moving path 6 under the viewing angle of the target audience 2 also differs; to facilitate image processing, the original guide image 7 can be extended so that the length of the extended image fills the whole moving path 6. Since the guide animation is composed of frames of images, the original guide image 7 corresponding to each frame of the guide animation can be extended to the same length according to the length of the moving path 6. Specifically, the extension processing adds pixels to the original image along the direction of the moving path 6 so that the length of the enlarged image matches the length of the moving path 6, and the pixel values of these added pixels are set to 0; the extension processing is carried out under the viewing-angle conditions of the target audience 2.
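As a rough illustration of the extension processing in S442, the sketch below pads an original guide image with zero-valued pixels along the movement direction until it matches the projected path length; the assumption that the path direction corresponds to the image-column axis, and the pixel units, are illustrative choices only.

```python
import numpy as np

def extend_guide_image(original: np.ndarray, path_length_px: int) -> np.ndarray:
    """Pad the original guide image with zero pixels along the moving-path axis
    (assumed here to be axis 1, i.e. image columns) so its length equals the
    projected length of the moving path under the viewer's viewing angle."""
    h, w = original.shape[:2]
    if path_length_px <= w:
        return original.copy()          # already long enough, nothing to pad
    pad = path_length_px - w
    pad_spec = ((0, 0), (0, pad)) + ((0, 0),) * (original.ndim - 2)
    return np.pad(original, pad_spec, mode="constant", constant_values=0)

# Example: a 64x128 RGB arrow image extended to fill a 500-pixel-long path
arrow = np.zeros((64, 128, 3), dtype=np.uint8)
first_intermediate = extend_guide_image(arrow, path_length_px=500)
```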
S443: generating a plurality of frames of second intermediate images 8 according to the first intermediate image and the moving path 6, wherein, for any frame of second intermediate image 8, the position of the original guide image 7 in that frame corresponds to the position of the original guide image 7 under the viewing angle of the target audience 2 at the moment that frame is displayed. The guide animation is composed of multiple frames arranged in time order, so each frame can be acquired first, the frames stored in time order, and finally the frames played in time order to produce the guide animation effect. From the perspective of the target audience 2, the original guide image 7 moves continuously along the moving path 6 as the animation plays, so the original guide image 7 is at a different position in each frame of the second intermediate image 8. This step therefore moves the original guide image 7 to its correct position in each frame, on the basis of the first intermediate image, to obtain the second intermediate image 8 of that frame. As shown in fig. 17, for animations such as flowing water or an unrolling scroll, periodic image content may be filled in between the starting position of the second intermediate image 8 and the original guide image for each frame, where the starting position is the position of the second intermediate image 8 closest to the target audience 2.
S444: for any frame of second intermediate image 8, determining, according to the projection relationship between the second intermediate image 8 and the first display screen 3, the second display screen 4 and the third display screen 5 under the viewing angle of the target audience 2, the target pixels corresponding to each pixel in the second intermediate image 8 and the pixel values of those target pixels, wherein a target pixel is a pixel in the first display screen 3 and/or the second display screen 4 and/or the third display screen 5, and all the target pixels form one frame of the target image 9 corresponding to that frame of second intermediate image 8. In this embodiment, the second intermediate image with the ideal effect is projected onto the display screens from the viewing angle of the target audience to obtain the target image to be shown by the display screens; this eliminates abrupt changes in the image effect at the joints between different display screens, so the target audience can watch, at any position, a guide animation closest to the ideal effect across the three display screens. As shown in fig. 16, in implementation the second intermediate image 8 is placed at the audience's optimal viewing position, and a pixel of the second intermediate image 8 may be represented by a rectangle. This embodiment takes the midpoint between the two eyes of the target audience 2 as the origin. As shown in fig. 17, straight lines are drawn from the origin through the edges of a pixel of the second intermediate image 8; the area enclosed by the intersections of these lines with the display screen is the area on the display screen corresponding to that pixel, and the display-screen pixels within this area are the pixels corresponding to that pixel of the second intermediate image 8. One pixel of the second intermediate image 8, when projected onto the display screen, may occupy one pixel position or several pixel positions. If one pixel position is occupied, the pixel value of that position is the pixel value of the projected pixel of the second intermediate image 8. If several pixel positions are occupied, the pixel values of all these positions are the pixel value of the projected pixel of the second intermediate image 8; for example, pixel f1 of the second intermediate image 8 in fig. 17 corresponds to the four pixels f11, f12, f13 and f14 after being projected onto the display screen. If several pixels of the second intermediate image 8 are projected onto the same pixel position of the display screen, the pixel value of that position is the average of the pixel values of all the projected pixels; for example, if n pixels of the second intermediate image 8 are projected onto the same pixel position on the display screen and the pixel value of the i-th of these n pixels is wi, then the pixel value of that pixel position on the display screen is (w1 + w2 + … + wn)/n. When pixels of the second intermediate image 8 are projected onto the same pixel position of the display screen, the pixel value may also be calculated in an area-weighted manner in order to improve the projection accuracy.
Assuming that the area occupied by the i-th pixel at the shared pixel position on the display screen is Si, the pixel value of that pixel position on the display screen is (w1×S1 + w2×S2 + … + wn×Sn)/(S1 + S2 + … + Sn). This way of calculating the pixel value fully accounts for the influence of the different area proportions of the projected pixels of the second intermediate image, and can further improve how closely the target image approaches the ideal image. After determining all the pixel values projected onto the display screens in the above way, these pixels are combined to obtain one frame of the target image 9.
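The averaging rules in S444 can be summarized as follows. This sketch assumes the projection step has already produced, for each display-screen pixel position, the list of (pixel value, overlap area) pairs contributed by the second intermediate image; the function and type names are illustrative assumptions.

```python
from typing import Dict, List, Tuple

Contribution = Tuple[float, float]   # (pixel value w_i, overlap area S_i)

def screen_pixel_value(contribs: List[Contribution], area_weighted: bool = True) -> float:
    """Combine the second-intermediate-image pixels projected onto one
    display-screen pixel position.

    area_weighted=False : simple average  (w1 + ... + wn) / n
    area_weighted=True  : (w1*S1 + ... + wn*Sn) / (S1 + ... + Sn)
    """
    if not contribs:
        return 0.0
    if not area_weighted:
        return sum(w for w, _ in contribs) / len(contribs)
    num = sum(w * s for w, s in contribs)
    den = sum(s for _, s in contribs)
    return num / den if den > 0 else 0.0

def build_target_image(projection: Dict[Tuple[int, int], List[Contribution]],
                       height: int, width: int) -> List[List[float]]:
    """Assemble one frame of the target image from the projected contributions."""
    frame = [[0.0] * width for _ in range(height)]
    for (row, col), contribs in projection.items():
        frame[row][col] = screen_pixel_value(contribs)
    return frame
```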
S445: arranging the target images 9 of all frames according to the time order of the corresponding second intermediate images 8 to obtain the target guide animation. After the target audience 2 has stayed near a display target 1 for a certain period of time, the system determines that the interest of the target audience 2 in that display target 1 is high, and then enters a detailed display mode that can present the display target 1 to the target audience 2 more fully. In this embodiment, the multimedia devices include sound equipment and a plurality of spotlights 10 for illuminating the display targets 1. When the display mode of the display target 1 is the detailed display mode, the information of the display target 1 includes auxiliary lighting effect information of the display target 1, and the display content of the display target 1 further includes a second display content, which is detailed display content comprising the detailed text and images to be shown. As shown in fig. 5, S4: determining at least part of the multimedia devices in the digital exhibition hall as target multimedia devices according to the interaction information, and generating, by the central control data processor, control instructions for controlling the target multimedia devices, includes the following steps:
S47: acquiring the display target 1 whose association degree is in the second association degree interval as the second target display target 1. S48: determining, according to the auxiliary lighting effect information of the second target display target 1, the spotlight 10 used to illuminate the second target display target 1 as the target spotlight 10, and determining the illumination intensity and illumination color of the target spotlight 10.
S49: determining the sound equipment for playing the display content and the display screen for showing the display content according to the second target display target 1, and generating control instructions for controlling the sound equipment to play the display content and the display screen to show the display content. As shown in fig. 18, since the target audience 2 is located close to the second target display target 1 when the second display mode is entered, the spotlight 10 may be turned on to illuminate the second target display target 1, so that the target audience 2 can clearly see its details, while the second target display target 1 is introduced with the detailed text and rich images of the second display content. The second target display target 1 may also be introduced by means of speech. In order to provide sufficient illumination for the second target display target 1, the illumination intensity of the spotlight 10 is often high, and when the spotlight 10 shines on an image shown on a display screen, the display effect of that image is affected. As an optional but advantageous implementation, as shown in fig. 6, the step S49: determining the sound equipment for playing the display content and the display screen for showing the display content according to the second target display target 1, and generating control instructions for controlling the sound equipment to play the display content and the display screen to show the display content, further includes the following steps: S491: acquiring the image to be shown by the display screen as the original image according to the display content. S492: acquiring the illumination intensity and illumination color of the target spotlight 10 falling on the displayed image; in implementation, a light sensor can be arranged at the position where the display screen shows the image to detect the illumination intensity and illumination color. S493: adjusting the pixel values of all pixels of the original image according to the illumination intensity and illumination color to obtain the target image 9; when the spotlight 10 casts strong light of a certain color, the pixel values of the illuminated display area corresponding to that color can be lowered to reduce or eliminate the influence of the colored light on the displayed image.
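A minimal sketch of the per-channel compensation described in S491–S493 is given below (S494 below then wraps the compensated image into a display instruction). It assumes the light sensor reports an RGB estimate of the spotlight's contribution on the screen surface; the subtract-and-clamp strategy is one plausible reading of "adjusting the pixel values down", not the only possible implementation.

```python
import numpy as np

def compensate_for_spotlight(original: np.ndarray,
                             light_rgb: np.ndarray,
                             strength: float = 1.0) -> np.ndarray:
    """Reduce each colour channel of the displayed image in proportion to the
    measured spotlight colour/intensity so the perceived image stays close to
    the original (S493). `light_rgb` is the sensor's estimate of the light
    added on the screen surface, in the same 0-255 scale as the image."""
    img = original.astype(np.float32)
    compensated = img - strength * light_rgb.astype(np.float32)
    return np.clip(compensated, 0, 255).astype(np.uint8)

# Example: a warm (reddish) spotlight measured at roughly (60, 30, 10)
original = np.full((480, 640, 3), 128, dtype=np.uint8)
target_image = compensate_for_spotlight(original, np.array([60, 30, 10]))
```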
S494: generating a control instruction for controlling the display screen to show the target image 9. When the target audience 2 stays near a certain display target 1 for a long time, the target audience 2 is very interested in that display target 1, and the system enters the immersive display mode to immerse the audience in a scene related to the target display target 1. For this purpose, the multimedia devices in this embodiment further include a fourth display screen 12 laid on the wall surfaces and ceiling of the digital exhibition hall. In order to create a scene related to the display target 1 by means of image and video display, the fourth display screen 12 of this embodiment is arranged on the walls and ceiling of the digital exhibition hall; the combination of the fourth display screen 12 with the first display screen 3, the second display screen 4 and the third display screens 5 turns the whole digital exhibition hall into a display surface without blind spots, so that the entire hall can be turned into a scene related to the target display target 1 through image and picture display. The display content of the display target 1 also includes a third display content, which comprises a scene image 11 and scene sound associated with the display target 1. As shown in fig. 7, the step S4: determining at least part of the multimedia devices in the digital exhibition hall as target multimedia devices according to the interaction information, and generating, by the central control data processor, control instructions for controlling the target multimedia devices, includes the following steps:
S401: acquiring a display target 1 with the association degree in a third association degree interval as a third target display target 1;
S402: acquiring the scene image 11 and scene sound of the third target display target 1. As shown in fig. 19, for example, when the target display target 1 is a piece of porcelain used in an imperial palace, an image of the palace interior of the era in which the porcelain was used may serve as the scene image 11, and music played in the palace of that era may serve as the scene sound. Likewise, when the target display target 1 is a weapon used on a battlefield, an image of the battlefield environment of its era may serve as the scene image 11, and the sounds of a battlefield may serve as the scene sound. S403: determining, according to the scene image 11 and scene sound of the third target display target 1, the display screens used to show the scene image 11 as target display screens and the sound equipment used to play the scene sound as target sound equipment, wherein the target display screens at least include the first display screen 3, the second display screen 4, the third display screens 5 and the fourth display screen 12. S404: dividing the scene image 11 into a plurality of sub-scene images corresponding to each target display screen according to the display screens used to show the scene image 11. Since the scene image 11 is displayed cooperatively by several different display screens, this step divides the scene image 11 by display screen, cutting out of the whole scene image 11 the sub-scene image to be shown by each screen. S405: generating control instructions for controlling each target display screen to show its corresponding sub-scene image and a control instruction for controlling the target sound equipment to play the scene sound. With this control approach, besides presenting detailed information about a display target in which the target audience is very interested, the multimedia devices in the digital exhibition hall can also build an environment related to that display target around the target audience, so that the target audience does not merely acquire information about the display target but is immersed in the related environment, which deepens their understanding of the display target. In order to select the display mode according to the target audience 2's interest in the display target 1, the behavior information in this embodiment includes at least the gaze attention range and gaze attention time of the target audience 2, and the information of the display target 1 further includes its interest values for people of different age groups, for people of different genders, for people of different occupation types and for people with different types of hobbies. These interest values may be set empirically. An interest value takes any real number between 0 and 10, and the larger the value, the higher the interest of such an audience in the display target. For example, when the display target 1 is a popular garment suitable for young men, the display target 1 has an interest value of 2 for people between the ages of 0 and 14, an interest value of 6 for people between the ages of 14 and 30, and an interest value of 2 for people above 30.
The interest value of this display target 1 for female audiences is 1 and for male audiences is 6. Its interest value for people engaged in clothing design and manufacture is 6 and for people of other occupation types is 1; its interest value for people interested in fashion clothing is 6 and for people with other hobbies is 1. In order to accurately acquire the gaze attention range of the target audience, the method of this embodiment further includes the following steps. Acquiring an image containing the nose of the target audience captured by an overhead camera, wherein the overhead camera is installed on the ceiling of the digital exhibition hall and photographs the head of the target audience from above. Acquiring the head angle of the target audience from the center line of the nose in the image: the outline of the nose is extracted from the image, the minimum bounding rectangle 20 of the nose outline is then obtained, and the angle of this minimum rectangle in the reference coordinate system is taken as the head angle. For example, the angle of direction e in the reference coordinate system x-y in fig. 20 is the angle of the minimum bounding rectangle in the reference coordinate system, where direction e is parallel to the side of the rectangle that does not intersect the nose tip of the nose outline. Acquiring an image containing the eyes of the target audience captured by a front-facing camera, wherein the front-facing camera is mounted on a wall facing the target audience in the digital exhibition hall. In order to obtain frontal images of the target audience at every position, front-facing cameras can be installed on the walls in every direction of the digital exhibition hall, and the camera facing the front of the target audience is then selected as the front-facing camera according to the target audience's position in the hall. Acquiring the line-of-sight angle of the target audience from the position of the pupil center within the eyes in the image: as shown in fig. 21, this step first obtains the outline of the target audience's eye, then locates the pupil in the image and calculates the center position 16 of the pupil. The long axis of the eye outline is obtained, and the pupil center is projected perpendicularly onto this long axis to obtain a projection point on it; the position of the projection point 17 on the long axis 19 serves as the position of the pupil center within the eye. Let the total length of the long axis be S1, take the left end point of the long axis as the reference point 18, and let the length from the projection point to the reference point be S2; the ratio of S2 to S1 is calculated as the reference ratio and then converted into the line-of-sight angle of the target audience. The correspondence between the reference ratio and the line-of-sight angle can be calibrated in advance through experiments: an experimenter observes marker points of a test image at different set angles, and the experimenter's reference ratio when observing each marker point is recorded.
This embodiment adopts a length-ratio method rather than an absolute-length method, which avoids errors caused by different shooting angles. The gaze attention range of the target audience is then determined from the head angle and the line-of-sight angle: the head angle (the angle of the e direction in the x-y coordinate system in fig. 21) and the line-of-sight angle (the angle between the e direction and the h direction in fig. 21) are added to obtain the gaze direction of the target audience (the h direction in fig. 21), and finally the symmetric center line of the audience's gaze angle range is aligned with the gaze direction to obtain the gaze attention range of the target audience (the range indicated by the angle A1 in fig. 21). Besides the foregoing method, the present embodiment may also use other methods in the prior art to obtain the gaze direction of the target audience.
As shown in fig. 8, step S3, generating interaction information according to the identity information, the behavior information and the information of each display target 1 of the target audience 2, wherein the interaction information at least comprises the association degree of the target audience 2 with each display target 1 and the display content and display mode of the display target 1, further comprises the following steps:
S31, acquiring a first interest factor q1 according to the identity information of the target audience 2 and the information of the display target 1, which specifically comprises the following steps: acquiring a first interest value a1 of the display target 1 for the target audience 2 according to the age group to which the age of the target audience 2 belongs; for example, if the age of the target audience 2 is 20 years, the first interest value of a display target 1 such as popular clothing for the target audience 2 is 6. Acquiring a second interest value a2 of the display target 1 for the target audience 2 according to the sex type of the target audience 2; for example, if the target audience 2 is male, the second interest value of a display target 1 such as fashion clothing for the target audience 2 is 6. Acquiring a third interest value a3 of the display target 1 for the target audience 2 according to the occupation type of the target audience 2; for example, if the occupation of the target audience 2 is other than popular-clothing design and manufacture, the third interest value of a display target 1 such as popular clothing for the target audience 2 is 1. Acquiring a fourth interest value a4 of the display target 1 for the target audience 2 according to the interest type of the target audience 2; for example, if the interests of the target audience 2 are other than popular clothing, the fourth interest value of the display target 1 for the target audience 2 is 1. A first interest factor is then calculated, wherein q1 = a1 + a2 + a3 + a4. In this way the interest of the target audience 2 in the display target 1 is evaluated from multiple dimensions of the identity information, and the interest values of the individual dimensions are combined to obtain a first interest factor determined by the degree of match between the identity information of the target audience 2 and the information of the display target 1.
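A minimal sketch of step S31, assuming hypothetical lookup tables for one display target (the table contents, key names and function name are illustrative only and not taken from the original disclosure):

# Interest values of one display target for different demographic groups.
POPULAR_CLOTHING_INTEREST = {
    "age":        {(0, 14): 2, (14, 30): 6, (30, 200): 2},
    "sex":        {"male": 6, "female": 1},
    "occupation": {"clothing_design": 6, "other": 1},
    "interest":   {"fashion_clothing": 6, "other": 1},
}

def first_interest_factor(age: int, sex: str, occupation: str, interest: str,
                          table: dict = POPULAR_CLOTHING_INTEREST) -> float:
    a1 = next(v for (lo, hi), v in table["age"].items() if lo <= age < hi)
    a2 = table["sex"].get(sex, 1)
    a3 = table["occupation"].get(occupation, table["occupation"]["other"])
    a4 = table["interest"].get(interest, table["interest"]["other"])
    return a1 + a2 + a3 + a4      # q1 = a1 + a2 + a3 + a4

# A 20-year-old male visitor with an unrelated occupation and unrelated interests:
print(first_interest_factor(20, "male", "other", "other"))   # 6 + 6 + 1 + 1 = 14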
S32, acquiring a second interest factor q2 according to the gaze attention range and gaze attention time of the target audience 2 and the position of the display target 1. This step may first determine, according to the position of the display target 1, whether the display target is within the gaze attention range of the target audience 2; if so, the length of time for which the display target 1 remains continuously within the gaze attention range is acquired as the gaze attention time, and the size of the second interest factor is then determined according to this attention time, where the longer the attention time, the larger the second interest factor. The specific correspondence between the length of the attention time and the size of the second interest factor may be set empirically.
S33, acquiring a third interest factor q3 according to the stay range and stay time of the target audience 2 and the position of the display target 1. When the position change of the target audience 2 within a set period remains within a circle of a first preset radius, the target audience 2 may be judged to be in a stay state, and the stay range of the target audience 2 is the circle of a second preset radius centered on the current position of the target audience. It is then judged whether the position of the display target 1 is within the stay range of the target audience 2; if so, the length of time for which the display target 1 remains continuously within the stay range is acquired as the stay time, and the size of the third interest factor is determined according to the stay time, where the longer the stay time, the larger the third interest factor. The first preset radius, the second preset radius and the specific correspondence between the stay time and the size of the third interest factor may be set empirically.
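A minimal sketch of the stay-state test and the dwell-to-factor mapping used in steps S32/S33 is shown below; the radii, the time values and the linear mapping are hypothetical assumptions chosen only to make the example concrete.

import numpy as np

FIRST_PRESET_RADIUS = 0.5      # m, max movement that still counts as "staying"
SECOND_PRESET_RADIUS = 2.0     # m, radius of the stay range around the audience

def is_staying(positions: np.ndarray) -> bool:
    """True if all recent positions fall within a circle of the first preset radius."""
    center = positions.mean(axis=0)
    return bool(np.all(np.linalg.norm(positions - center, axis=1) <= FIRST_PRESET_RADIUS))

def dwell_factor(dwell_seconds: float, scale: float = 0.1, cap: float = 10.0) -> float:
    """Map a continuous dwell time (gaze attention or stay time) to an interest factor."""
    return min(cap, scale * dwell_seconds)

# Third interest factor: the audience stays, and the display target lies in its stay range.
recent_positions = np.array([[3.0, 4.0], [3.1, 4.0], [3.0, 4.1]])
target_position = np.array([4.2, 4.5])
if is_staying(recent_positions):
    in_range = np.linalg.norm(target_position - recent_positions[-1]) <= SECOND_PRESET_RADIUS
    q3 = dwell_factor(45.0) if in_range else 0.0
    print(q3)   # 4.5 for 45 s of continuous stay

The second interest factor q2 can be obtained in the same way, replacing the stay range by the gaze attention range and the stay time by the gaze attention time.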
S34, acquiring a first weight k1(ta), a second weight k2(ta) and a third weight k3(ta); the first weight k1(ta), the second weight k2(ta) and the third weight k3(ta) are the weights of the first interest factor, the second interest factor and the third interest factor respectively, and they vary with the time the target audience 2 has spent in the exhibition hall. S35, acquiring a first correction coefficient C1(ta), a second correction coefficient C2(ta) and a third correction coefficient C3(ta) at the current moment according to the total time ta for which the target audience 2 has stayed in the exhibition hall.
Since the influence of the identity information and of the different kinds of behavior information on the determination of the association degree changes as the behavior information of the target audience 2 in the exhibition hall is continuously enriched, this embodiment sets correction coefficients to correct the three weights. These correction coefficients are updated over time and can therefore be regarded as functions of time. When the target audience 2 has just entered the exhibition hall, the behavior information is still limited and cannot provide enough information for the system to judge the intention of the target audience 2, so the system mainly relies on the identity information provided by the target audience 2, and the second weight and the third weight are kept low. The specific adjustment method is shown in fig. 9 and comprises the following steps:
S38, when the time for which the gaze of the target audience 2 stays within the same gaze range exceeds a first threshold T1, adjusting the current value of the second correction coefficient, wherein C2(ta) = C2(t0) × (1 + m1 × (Ts − T1)), m1 is a first adjustment coefficient, C2(t0) is the initial value of the second correction coefficient, and Ts is the time for which the gaze of the target audience 2 has stayed within the same gaze range. If the audience's gaze stays in the same gaze range for a longer period of time, this indicates that the audience's attention is beginning to concentrate on the display targets 1 within that gaze range. The behavior information represented by the gaze attention range and gaze attention time therefore increases, and in order to make full use of this useful information, the present embodiment increases the weight of the second interest factor by increasing the correction coefficient, so that the association degree of the target audience 2 with the display target 1 is acquired more accurately. When the time for which the gaze of the target audience 2 is outside that gaze range exceeds a third threshold, it indicates that the target audience 2 is no longer focused on the display targets 1 in the previous gaze range, and the current value of the second correction coefficient is reset to C2(t0).
S39, when the time for which the target audience 2 stays within the same area exceeds a second threshold T2, adjusting the current value of the third correction coefficient, wherein C3(ta) = C3(t0) × (1 + m2 × (Tc − T2)), m2 is a second adjustment coefficient, C3(t0) is the initial value of the third correction coefficient, and Tc is the time for which the target audience 2 has stayed within the same area. If the target audience 2 stays in the same area for a longer time, the audience has a stronger interest in the display targets 1 near that area, so the behavior information reflected by the stay range and stay time of the target audience 2 increases; in order to make full use of this useful information, the present embodiment increases the weight of the third interest factor by increasing the correction coefficient, so that the association degree of the target audience 2 with the display target 1 is acquired more accurately.
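A minimal sketch of the correction-coefficient updates in S38/S39, assuming hypothetical threshold values, adjustment coefficients and initial values (none of these numbers appear in the original disclosure):

T1, T3 = 5.0, 3.0          # gaze-dwell threshold and gaze-away threshold (s)
T2, T4 = 20.0, 10.0        # stay-dwell threshold and away-from-area threshold (s)
M1, M2 = 0.05, 0.02        # first and second adjustment coefficients
C2_INIT, C3_INIT = 1.0, 1.0

def second_correction(gaze_dwell: float, gaze_away: float) -> float:
    """C2(ta): grows with gaze dwell beyond T1, resets once the gaze has left for > T3."""
    if gaze_away > T3:
        return C2_INIT
    if gaze_dwell > T1:
        return C2_INIT * (1.0 + M1 * (gaze_dwell - T1))
    return C2_INIT

def third_correction(stay_dwell: float, away_time: float) -> float:
    """C3(ta): grows with stay dwell beyond T2, resets once the audience has left for > T4."""
    if away_time > T4:
        return C3_INIT
    if stay_dwell > T2:
        return C3_INIT * (1.0 + M2 * (stay_dwell - T2))
    return C3_INIT

print(second_correction(gaze_dwell=12.0, gaze_away=0.0))   # 1.0 * (1 + 0.05 * 7) = 1.35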
When the time for which the position of the target audience 2 is outside the same area exceeds a fourth threshold, indicating that the target audience 2 is no longer focused on the display targets 1 near the previous stay range, the current value of the third correction coefficient may be reset to C3(t0).
S36, calculating the association degree R(ta) of the target audience 2 with the display target 1 at the current moment, wherein R(ta) = C1(ta) × k1 × q1 + C2(ta) × k2 × q2 + C3(ta) × k3 × q3.
In this embodiment the correction coefficients are continuously adjusted in the foregoing manner while the target audience is viewing, so that the weights of the three interest factors are dynamically adjusted as the viewing process unfolds. The association degree of the target audience with each display target is then updated in real time using the adjusted weights, so that the identity information and the behavior information are used as reasonably as possible at every stage of the viewing process. With this correction-coefficient adjustment method, when the target audience has just entered the exhibition hall, the degree of interest in each display target is judged preliminarily, relying mainly on the identity information provided by the target audience. When the target audience starts to concentrate on display targets of interest, the weight of the interest factor determined by the gaze attention range and gaze attention time is increased. When the target audience then approaches a display target of strong interest and remains close to it for a longer time, the weight of the interest factor determined by the stay range and stay time is increased. The association degree obtained in this way reflects the degree of interest of the target audience in each display target to the greatest extent, so that the system performs interactive operations that meet the needs of the target audience.
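A minimal sketch of step S36 and of the selection of the display mode from association-degree intervals is given below; the weight values, correction values and interval boundaries are illustrative assumptions, and the interval structure follows the simple/detailed/immersive modes described above.

def association_degree(q1: float, q2: float, q3: float,
                       k1: float, k2: float, k3: float,
                       c1: float, c2: float, c3: float) -> float:
    """R(ta) = C1(ta)*k1*q1 + C2(ta)*k2*q2 + C3(ta)*k3*q3."""
    return c1 * k1 * q1 + c2 * k2 * q2 + c3 * k3 * q3

def display_mode(r: float) -> str:
    """Map the association degree to a display mode via illustrative intervals."""
    if r < 10.0:
        return "simple display mode"       # first association-degree interval
    if r < 25.0:
        return "detailed display mode"     # second association-degree interval
    return "immersive display mode"        # third association-degree interval

r = association_degree(q1=14.0, q2=4.5, q3=4.5,
                       k1=0.5, k2=0.3, k3=0.2,
                       c1=1.0, c2=1.35, c3=1.0)
print(round(r, 2), display_mode(r))        # 9.72 simple display mode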
Example 2
As shown in fig. 22, the present embodiment provides an intelligent interaction control system for digital exhibition hall multimedia equipment, the system comprising: a central control data processor, multimedia equipment and data acquisition equipment. The multimedia equipment includes a spotlight 10, a display screen, a carpet screen, a water flow screen, a ribbon screen and a sound box; the data acquisition equipment includes a monitoring camera 13. The system adopts the intelligent interaction control method for digital exhibition hall multimedia equipment of embodiment 1.
The above is a detailed description of the intelligent interaction control method and system for digital exhibition hall multimedia equipment.

Claims (7)

1. The intelligent interaction control method for the digital exhibition hall multimedia equipment is characterized by comprising the following steps of:
acquiring authorized identity information and behavior information of a target audience in a digital exhibition hall, wherein the identity information at least comprises age, sex, occupation and interest of the target audience, and the behavior information at least comprises a gaze attention range and gaze attention time of the target audience;
obtaining information of each display target in a digital exhibition hall, wherein the information of the display targets comprises interest values of the display targets for people of different age groups, interest values of the display targets for people of different sex types, interest values of the display targets for people of different occupation types and interest values of the display targets for people of different interest types;
Generating interaction information according to the identity information, the behavior information and the information of each display target of the target audience, wherein the interaction information at least comprises the association degree of the target audience and each display target and the display content and the display mode of the display target, and specifically comprises the following steps:
acquiring a first interest factor q1 according to the identity information of the target audience and the information of the display target, which comprises the following steps:
acquiring a first interest value a1 of a display target to a target audience according to an age group to which the age of the target audience belongs;
acquiring a second interest value a2 of the display target to the target audience according to the sex type of the target audience;
acquiring a third interest value a3 of the display target to the target audience according to the occupation type of the target audience;
acquiring a fourth interest value a4 of the display target to the target audience according to the interest type of the target audience;
calculating a first interest factor, wherein the first interest factor q1=a1+a2+a3+a4;
acquiring a second interest factor q2 according to the gaze attention range, gaze attention time and the position of the display target of the target audience;
acquiring a third interest factor q3 according to the stay range, stay time and the position of the display target of the target audience;
acquiring a first weight k1 (ta), a second weight k2 (ta) and a third weight k3 (ta);
acquiring a first correction coefficient C1 (ta), a second correction coefficient C2 (ta) and a third correction coefficient C3 (ta) at the current moment according to the total stay time ta of the target audience in the exhibition hall;
calculating the association degree R (ta) of the target audience and the display target at the current moment, wherein R (ta) = C1 (ta) × k1 × q1 + C2 (ta) × k2 × q2 + C3 (ta) × k3 × q3;
acquiring a first association degree interval, a second association degree interval, a third association degree interval and association degrees of each display object, which are divided according to the association degree size range;
when the association degree of a certain display target is in a first association degree interval, the display mode of the display target is a simple display mode;
when the association degree of a certain display target is in the second association degree interval, the display mode of the display target is a detailed display mode;
when the association degree of a certain display target is in a third association degree interval, the display mode of the display target is an immersive display mode;
determining at least one part of multimedia equipment in the digital exhibition hall as target multimedia equipment according to the interaction information, and generating a control instruction for controlling the target multimedia equipment by a central control data processor;
the multimedia equipment comprises a first display screen, a second display screen and a plurality of third display screens in one-to-one correspondence with the display targets, wherein the first display screen is paved on the ground of the exhibition hall for the target audience to walk on, the second display screen is paved on an exhibition stand and is used for displaying the display targets, one end of each third display screen is connected with the first display screen and the opposite end is connected with the second display screen at a position corresponding to the display target, and the first display screen, the second display screen and the third display screens are respectively controlled by the central control data processor;
the interaction information comprises the position of the target audience in the digital exhibition hall, the information of the display target comprises the position of the display target in the exhibition hall, the position of the target audience in the digital exhibition hall is set to be a first position, and the position of the display target in the exhibition hall is set to be a second position; the display content of the display target comprises first display content, the first display content being rough display content; when the display mode of the display target is the simple display mode, the determining at least one part of multimedia equipment in the digital exhibition hall as target multimedia equipment according to the interaction information and generating, by the central control data processor, a control instruction for controlling the target multimedia equipment further comprises the following steps:
Acquiring a display target with the association degree in a first association degree interval as a first target display target;
acquiring an original guide image for guiding a target audience to pay attention to a first target display target, wherein the guide image comprises first display content of the first target display target;
acquiring a moving path of an original guide image according to the first position and the second position, wherein the moving path sequentially passes through the first display screen, the third display screen and the second display screen which correspond to the first target display object from the first position and extends to the second position;
processing the original guide image according to the first position and the moving path to generate a target guide animation;
the central control data processor generates control instructions for controlling the first display screen, the second display screen and the third display screen to display the guide animation according to the target guide animation;
and controlling each target multimedia device to perform interaction with the target audience according to the control instruction.
2. The intelligent interactive control method for the digital exhibition hall multimedia equipment according to claim 1, wherein the generating the target guide animation after processing the original guide image according to the first position and the moving path comprises the following steps:
Acquiring the length of a moving path under the view angle of a target audience according to the first position and the moving path;
the original guide image is subjected to extension processing according to the length of the moving path, and a first intermediate image containing the original guide image is obtained;
generating a plurality of frames of second intermediate images according to the first intermediate image and the moving path, wherein, for any frame of the second intermediate images, the position of the original guide image in that frame corresponds to the position of the original guide image within the view angle of the target audience at the moment that frame is displayed;
for any frame of the second intermediate images, determining a target pixel corresponding to each pixel in the second intermediate image and the pixel value of the target pixel according to the projection relation, under the view angle of the target audience, between the second intermediate image and the first display screen, the second display screen and the third display screen, wherein the target pixel is a pixel of the first display screen and/or the second display screen and/or the third display screen, and all the target pixels form one frame of target image corresponding to that frame of second intermediate image;
and arranging the target images of each frame according to the time sequence of the second intermediate image of each frame to obtain the target guide animation.
3. The intelligent interactive control method for digital exhibition hall multimedia equipment according to claim 1, wherein the multimedia equipment comprises a sound and a plurality of spot lights for illuminating a display target, when the display mode of the display target is a detailed display mode, the information of the display target comprises auxiliary light effect information of the display target, the display content of the display target also comprises second display content, the second display content is detailed display content, the second display content comprises detailed characters and images to be displayed, and the method for determining at least one part of multimedia equipment in the digital exhibition hall as target multimedia equipment according to the interactive information and generating control instructions for controlling the target multimedia equipment by a central control data processor further comprises the following steps:
Acquiring a display target with the association degree in a second association degree interval as a second target display target;
determining a spotlight for irradiating the second target display target as a target spotlight according to the auxiliary light effect information of the second target display target, and determining the illumination intensity and the illumination color of the target spotlight;
and determining a sound for playing the display content and a display screen for displaying the display content according to a second target display target, and generating a control instruction for controlling the sound to play the display content and the display screen to display the display content.
4. The intelligent interactive control method for a digital exhibition hall multimedia device according to claim 3, wherein the determining the sound for playing the exhibition content and the display screen for displaying the exhibition content according to the second target exhibition target and generating the control instruction for controlling the sound to play the exhibition content and the display screen to display the exhibition content further comprises the following steps:
acquiring an image displayed by a display screen as an original image according to the display content;
acquiring illumination intensity and illumination color of the target spot lamp irradiated on the display image;
The pixel values of all pixels of the original image are adjusted according to the illumination intensity and the illumination color to obtain a target image;
and generating a control instruction for controlling the display screen to display the target image.
5. The intelligent interactive control method for digital exhibition hall multimedia equipment according to claim 1, wherein the multimedia equipment further comprises a fourth display screen arranged on the wall surface and the ceiling of the digital exhibition hall, the display content of the display target further comprises third display content, the third display content comprises a scene image and scene sound related to the display target, and the determining at least one part of multimedia equipment in the digital exhibition hall as target multimedia equipment according to the interaction information and generating a control instruction for controlling the target multimedia equipment by the central control data processor comprises the following steps:
acquiring a display target with the association degree in a third association degree interval as a third target display target;
acquiring a scene image, a scene temperature, a scene humidity and a scene sound related to a third target exhibition target;
determining a display screen for displaying the scene image as a target display screen and sound equipment for playing the scene sound as target sound equipment according to the scene image and the scene sound of the third target display target, wherein the target display screen at least comprises the first display screen, the second display screen, the third display screen and the fourth display screen;
Dividing the scene image into a plurality of sub-scene images corresponding to each target display screen according to the display screen for displaying the scene image;
and generating a control instruction for controlling each target display screen to display the corresponding sub-scene image and a control instruction for controlling the target sound equipment to play the scene sound and controlling the air conditioner to adjust the temperature and humidity of the exhibition hall to the scene temperature and the scene humidity.
6. The intelligent interactive control method of a digital exhibition hall multimedia device according to claim 1, wherein the interactive information is generated according to the identity information, the behavior information and the information of each exhibition target, the interactive information at least comprises the association degree of the target audience and each exhibition target and the exhibition content and the exhibition mode of the exhibition target, and further comprises the following steps:
when the stay time of the target audience's gaze in the same gaze range exceeds a first threshold T1, adjusting the current value of the second correction coefficient, wherein C2 (ta) = C2 (t0) × (1 + m1 × (Ts − T1)), wherein m1 is a first adjustment coefficient, C2 (t0) is the initial value of the second correction coefficient, and Ts is the stay time of the target audience's gaze in the same gaze range; and resetting the current value of the second correction coefficient to C2 (t0) when the time for which the target audience's gaze is outside the same gaze range exceeds a third threshold;
when the stay time of the target audience in the same area exceeds a second threshold T2, adjusting the current value of the third correction coefficient, wherein C3 (ta) = C3 (t0) × (1 + m2 × (Tc − T2)), wherein m2 is a second adjustment coefficient, C3 (t0) is the initial value of the third correction coefficient, and Tc is the stay time of the target audience in the same area; and resetting the current value of the third correction coefficient to C3 (t0) when the time for which the position of the target audience is outside the same area exceeds a fourth threshold.
7. An intelligent interactive control system for digital exhibition hall multimedia equipment, the system comprising: a central control data processor, multimedia equipment and data acquisition equipment; the multimedia equipment comprises: a spotlight, a display screen, a carpet screen, a water flow screen, a ribbon screen and a sound box; the data acquisition equipment comprises a monitoring camera; and the system adopts the intelligent interaction control method for the digital exhibition hall multimedia equipment according to any one of claims 1-6.
CN202310048714.9A 2023-02-01 2023-02-01 Intelligent interaction control method and system for multimedia equipment in digital exhibition hall Active CN116414287B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202311048248.0A CN117193616A (en) 2023-02-01 2023-02-01 Interaction control method and system for digital exhibition hall multimedia equipment in multiple modes
CN202310048714.9A CN116414287B (en) 2023-02-01 2023-02-01 Intelligent interaction control method and system for multimedia equipment in digital exhibition hall

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310048714.9A CN116414287B (en) 2023-02-01 2023-02-01 Intelligent interaction control method and system for multimedia equipment in digital exhibition hall

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311048248.0A Division CN117193616A (en) 2023-02-01 2023-02-01 Interaction control method and system for digital exhibition hall multimedia equipment in multiple modes

Publications (2)

Publication Number Publication Date
CN116414287A CN116414287A (en) 2023-07-11
CN116414287B true CN116414287B (en) 2023-10-17

Family

ID=87048700

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202311048248.0A Pending CN117193616A (en) 2023-02-01 2023-02-01 Interaction control method and system for digital exhibition hall multimedia equipment in multiple modes
CN202310048714.9A Active CN116414287B (en) 2023-02-01 2023-02-01 Intelligent interaction control method and system for multimedia equipment in digital exhibition hall

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202311048248.0A Pending CN117193616A (en) 2023-02-01 2023-02-01 Interaction control method and system for digital exhibition hall multimedia equipment in multiple modes

Country Status (1)

Country Link
CN (2) CN117193616A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206339845U (en) * 2016-11-30 2017-07-18 广州微至科技有限公司 Digital intelligent central control system
CN112198963A (en) * 2020-10-19 2021-01-08 深圳市太和世纪文化创意有限公司 Immersive tunnel type multimedia interactive display method, equipment and storage medium
CN112528139A (en) * 2020-11-30 2021-03-19 宁波市方略博华文化发展有限公司 Multimedia intelligent display system
CN113312507A (en) * 2021-05-28 2021-08-27 成都威爱新经济技术研究院有限公司 Digital exhibition hall intelligent management method and system based on Internet of things
CN216623720U (en) * 2021-11-01 2022-05-27 河南江与城文化传播有限公司 Digital corridor of digital exhibition hall
CN115205443A (en) * 2021-04-08 2022-10-18 云南骏宇国际文化博览股份有限公司 Exhibition hall display method and system based on digital multimedia technology
CN115542766A (en) * 2022-11-25 2022-12-30 苏州金梓树智能科技有限公司 Data processing method and system for intelligent node control based on mode scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200021875A1 (en) * 2018-07-16 2020-01-16 Maris Jacob Ensing Systems and methods for providing media content for an exhibit or display

Also Published As

Publication number Publication date
CN116414287A (en) 2023-07-11
CN117193616A (en) 2023-12-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant