CN111752389A - Interactive system, interactive method and machine-readable storage medium - Google Patents


Info

Publication number: CN111752389A
Application number: CN202010592708.6A
Authority: CN (China)
Prior art keywords: module, user, image, display, target object
Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN111752389B (English)
Inventors: 冯朋朋, 踪家双, 董文储
Current Assignee: BOE Technology Group Co Ltd
Original Assignee: BOE Technology Group Co Ltd

Events

    • Application filed by BOE Technology Group Co Ltd
    • Priority to CN202010592708.6A
    • Publication of CN111752389A
    • Priority to PCT/CN2021/101944 (WO2021259341A1)
    • Application granted; publication of CN111752389B
    • Current legal status: Active

Classifications

    • G06F3/016: Input arrangements with force or tactile feedback as computer generated output to the user
    • G06F3/0412: Digitisers, e.g. for touch screens or touch pads, structurally integrated in a display
    • G06F3/0414: Digitisers using force sensing means to determine a position
    • G06F3/0436: Digitisers using propagating acoustic waves, in which generating and detecting transducers are attached to a single acoustic-wave transmission substrate
    • G06F2203/04102: Flexible digitiser, i.e. constructional details for allowing the whole digitising part of a device to be flexed or rolled like a sheet of paper
    • Y02B30/70: Energy efficient HVAC; efficient control or regulation technologies, e.g. for control of refrigerant flow, motor or heating

Abstract

The invention relates to an interactive system, an interactive method and a machine-readable storage medium. The system comprises an ultrasonic module, a blowing module and a display module. The display module comprises a vertical display screen, in front of which a designated area is arranged; the ultrasonic module is arranged at a first preset position in the designated area, and the blowing module at a second preset position. The vertical display screen is used for displaying at least one preset image. The ultrasonic module is used for emitting ultrasonic waves when a user is present in the designated area, so as to generate, in the space corresponding to the designated area, ultrasonic tactile feedback that simulates the image content and acts on the user. The blowing module is used for outputting an air flow when a user is present in the designated area, so as to generate a wind field environment corresponding to the image content or to provide simulated water-flow tactile feedback. In this way, by adding tactile feedback for the user, this embodiment can enhance the user's interactive experience.

Description

Interactive system, interactive method and machine-readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to an interactive system, an interactive method, and a machine-readable storage medium.
Background
At present, with the development of intelligent recognition technology, human-computer interaction technology is applied to smart devices more and more. For example, a user can make a gesture, and the smart device recognizes the gesture and turns pages or clicks to view information accordingly, or the user can play simple interactive games such as fruit-cutting by sliding an arm, which improves the user's interactive experience. However, the above interaction manner is only applicable to smart devices and some game scenes; it remains very rare in consumption scenes with large individual differences, such as bank product recommendation, insurance product recommendation, catering specialty-food recommendation and gym sports-course recommendation, which degrades the user experience in these consumption scenes.
Disclosure of Invention
The present invention provides an interactive system, an interactive method and a machine-readable storage medium to solve the problems in the related art.
In a first aspect, an embodiment of the present invention provides an interactive system, where the system includes: an ultrasonic module, a blowing module and a display module; the display module comprises a vertical display screen, and a designated area is arranged in front of the vertical display screen; the ultrasonic module is arranged at a first preset position in the designated area, and the blowing module is arranged at a second preset position in the designated area;
the vertical display screen is used for displaying at least one preset image;
the ultrasonic module is used for emitting ultrasonic waves when a user exists in the designated area, so as to generate, in a space corresponding to the designated area, ultrasonic tactile feedback that simulates the image content and acts on the user;
the blowing module is used for outputting an air flow when a user exists in the designated area, so as to generate a wind field environment corresponding to the image content or provide simulated water-flow tactile feedback.
Optionally, the ultrasonic module comprises a speaker group capable of emitting ultrasonic waves;
alternatively,
the display module comprises at least one of the following: an LCD display screen, an LED display screen in which the distance between two adjacent LEDs is smaller than or equal to a preset distance, and a flexible display screen.
Optionally, the system further comprises a server module and an action recognition module; the server module is respectively connected with the action recognition module, the ultrasonic module, the blowing module and the display module; the action recognition module is arranged at the top end of the vertical display screen;
the action identification module is used for identifying the action and the position of the user when the user is detected to exist in the designated area, and sending the action and the position to the server module;
the server module is used for controlling the display module to display the at least one image in turn according to the action; and controlling the ultrasonic module to emit ultrasonic waves and the blowing module to output airflow according to the position.
Optionally, the motion recognition module comprises at least one of: infrared camera, structured light camera, ordinary camera.
Optionally, the system further comprises a face recognition module; the face recognition module is connected with the server module and arranged at the top end of the vertical display screen;
the face recognition module is used for acquiring a face image of a user in a designated area and sending the face image to the server module;
the server module is used for acquiring a target object corresponding to the user according to the face image and controlling the display module to display the target object in a superposition mode on the basis that the image serves as a background image.
Optionally, the face recognition module includes a camera or a camera array.
Optionally, the system further comprises a touch module; the touch control module is connected with the server module and is arranged at the bottom of the vertical display screen or on a horizontal display screen in the display module;
the touch control module is used for detecting the touch control operation of the user and sending the touch control operation to the server module;
the server module is further configured to control the display module to display a target object corresponding to the touch operation in an overlapping manner on the basis that the image serves as a background image according to the touch operation.
Optionally, the touch module includes one of: touch-sensitive screen, radar, pressure sensor.
Optionally, the system further comprises a fragrance module disposed at a third preset location in the designated area; the fragrance module is connected with the server module;
the server module is further used for controlling the fragrance module to release the odor corresponding to the target object according to the currently displayed target object.
Optionally, the system further comprises a humidification module; the humidifying module comprises at least one humidifier and is connected with the server module;
the server module is further used for controlling each humidifier to release water vapor according to the currently displayed target object or image so as to generate a humidity environment corresponding to the image content.
In a second aspect, an embodiment of the present invention provides an interaction method, which is adapted to the interaction system in the first aspect, and includes:
controlling a display module to display at least one preset image;
when a user exists in a designated area in front of the display module, controlling an ultrasonic module to emit ultrasonic waves, so as to generate, in a space corresponding to the designated area, ultrasonic tactile feedback that simulates the image content and acts on the user; and controlling the blowing module to output an air flow, so as to generate a wind field environment corresponding to the image content or provide simulated water-flow tactile feedback.
Optionally, the method further comprises:
acquiring the user action sent by the action identification module;
and controlling the display module to switch to the image corresponding to the action.
Optionally, before controlling the display module to switch to the image corresponding to the action, the method further includes:
controlling the display module to switch from an automatic switching mode to a manual switching mode in response to acquiring the action; in the automatic switching mode, the display module switches the displayed images at a set time interval, and in the manual switching mode, the display module switches to the image corresponding to the user's action.
Optionally, the method further comprises:
acquiring a facial image of the user sent by a face recognition module;
acquiring feature information of the user according to the facial image;
acquiring a target object corresponding to the characteristic information based on the preset corresponding relation between the characteristic information and the object;
and controlling the display module to display the target object in an overlapping manner on the basis of the image as a background image.
Optionally, the method further comprises:
acquiring a facial image of the user sent by a face recognition module;
acquiring feature information of the user according to the facial image;
acquiring user information of the user based on the characteristic information;
acquiring a target object corresponding to user information based on the corresponding relation between the user information and the target object;
and controlling the display module to display the target object in an overlapping manner on the basis of the image as a background image.
Optionally, the method further comprises:
acquiring touch operation sent by a touch module;
and controlling the display module to display at least one preset object or a target object corresponding to the touch operation in an overlapping manner on the basis of using the image as a background image according to the touch operation.
Optionally, a payment channel is arranged at a preset position of each preset object or target object; the method further comprises the following steps:
acquiring a payment request initiated by an external terminal through the payment channel;
processing transaction data corresponding to the payment request to obtain a transaction result of successful transaction or failed transaction;
and controlling a display module to display the transaction result.
Optionally, the method further comprises:
and controlling a fragrance module to release the odor corresponding to the target object according to the currently displayed target object.
Optionally, the method further comprises:
and controlling a humidification module to release water vapor according to the currently displayed target object or image, so as to generate a humidity environment corresponding to the image content.
Optionally, the target object includes at least one of: bank financing products, insurance products, gourmet foods and online education courses.
In a third aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, on which an executable program is stored, and the executable program, when executed, implements the steps of the method of the second aspect.
The interactive system provided by this embodiment is provided with an ultrasonic module, which can emit ultrasonic waves when a user is present in the designated area, so as to generate, in the space corresponding to the designated area, ultrasonic tactile feedback that simulates the image content and acts on the user; the interactive system is also provided with a blowing module, which can output an air flow when a user is present in the designated area, so as to generate a wind field environment corresponding to the image content or provide simulated water-flow tactile feedback. In this way, by adding tactile feedback for the user, this embodiment can enhance the user's interactive experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a block diagram illustrating an interactive system according to an embodiment of the present invention.
Fig. 2 is a schematic view of a scenario shown in the embodiment of the present invention.
Fig. 3 is a schematic diagram of a touch detection area according to an embodiment of the invention.
Fig. 4 is a schematic diagram of another touch detection area according to an embodiment of the present invention.
Fig. 5 illustrates an effect of displaying a waterfall according to an embodiment of the present invention.
Fig. 6 is an effect diagram for displaying a digital waterfall according to an embodiment of the present invention.
Fig. 7 is a diagram illustrating an effect of dividing regions according to an embodiment of the present invention.
Fig. 8 is a flow chart illustrating an interaction method according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Fig. 1 is a block diagram of an interactive system according to an exemplary embodiment of the present invention, and referring to fig. 1, the interactive system may be applied to a place where an interactive area is provided, such as a bank, a food street, and the like, and includes an ultrasonic module 1, a display module 2, and a blowing module 3. The display module 2 may include a vertical display screen, and vertical means that a plane of the display screen is perpendicular to a horizontal plane. Therefore, a designated area can be arranged in front of the vertical display screen, and the designated area is used as an interaction area for a user to interact with the interaction system, namely, the user can interact in the designated area according to the display image of the vertical display screen.
The ultrasonic module 1 is disposed at a first preset position in the designated area. When a user is present in the designated area, the ultrasonic module 1 can transmit ultrasonic waves to the user, so that tactile feedback simulating the image content, such as rocks, spheres or a water surface, and acting on the user can be generated in the space corresponding to the designated area.
The air blowing module 3 is arranged at a second preset position in the designated area. When a user is present in the designated area, the blowing module 3 can output an air flow into the designated area to generate a wind field environment corresponding to the image content or to provide simulated water-flow tactile feedback. That is, a user in the designated area can feel the image content displayed on the vertical display screen interacting with him or her. For example, when the image content is a waterfall, the user can feel the flowing air sweeping over the face or hands as the water falls, or, when inserting a hand into the simulated stream, the tactile feedback of water flowing over the hand.
In one example, the ultrasonic module 1 may be constituted by a device that emits ultrasonic waves, such as a speaker or a speaker group capable of emitting ultrasonic waves. The ultrasonic module 1 can create a force field by adjusting, through a time-division method, the direction and force with which the ultrasonic waves are transmitted. When the ultrasonic waves are focused on the user's skin, the skin can be made to produce tactile feedback; moreover, when the ultrasonic waves vibrate at different frequencies, different sensory feedback can be created, such as various textures, edges and corners. The direction, force and the like with which the ultrasonic module 1 generates the tactile feedback can be obtained from tactile experiments with users; that is, a control model corresponding to the image content is established in advance, and the intensity of the ultrasonic wave in each direction is obtained from the control model so as to generate the corresponding force. Technicians can adjust the control model according to the specific scene, and the corresponding scheme falls within the protection scope of the invention. In this way, when the user moves to the corresponding position, the user experiences the tactile feedback of touching a physical object, which helps improve the user's interactive experience.
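As a concrete illustration of the control-model idea above, the sketch below focuses a hypothetical grid of ultrasonic emitters on a point of the user's skin and looks up per-content modulation parameters. The emitter layout, the profile table and all numeric values are assumptions made for illustration, not taken from the patent:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air
CARRIER_HZ = 40_000     # carrier frequency commonly used for airborne haptics

# Hypothetical 4x4 grid of emitters in the z=0 plane, 1 cm pitch.
EMITTERS = [(x * 0.01, y * 0.01, 0.0) for x in range(4) for y in range(4)]

# Hypothetical per-content haptic profiles (the patent derives such a control
# model from tactile experiments with users; these values are invented).
HAPTIC_PROFILES = {
    "waterfall": {"mod_hz": 120, "strength": 0.8},  # light, fast ripple
    "rock":      {"mod_hz": 40,  "strength": 1.0},  # firm, static edge
}

def focus_delays(target):
    """Per-emitter firing delays (s) that focus the array on one skin point.

    Emitters farther from the target fire earlier, so all wavefronts arrive
    in phase and superpose into a perceptible pressure point.
    """
    dists = [math.dist(e, target) for e in EMITTERS]
    farthest = max(dists)
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

def drive_parameters(content_id, skin_point):
    """Combine the content's haptic profile with the focusing delays."""
    profile = HAPTIC_PROFILES[content_id]
    return {
        "carrier_hz": CARRIER_HZ,
        "modulation_hz": profile["mod_hz"],  # modulation rate sets the texture
        "amplitude": profile["strength"],
        "delays_s": focus_delays(skin_point),
    }

params = drive_parameters("waterfall", (0.02, 0.02, 0.25))
print(params["modulation_hz"], [round(d * 1e6, 1) for d in params["delays_s"][:3]])
```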
The tactile feedback may be the entire structure of the image content of the image displayed on the display module 2 or the target object or a partial structure located near the hand or face of the user. Taking a partial structure as an example, the ultrasonic module 1 may emit ultrasonic waves only to various positions of the user's hand, thereby causing the user's hand to generate tactile feedback. Since the hand area is small, the energy consumption or design difficulty of the ultrasonic module 1 can be reduced.
In an example, the display module 2 may include a display screen and/or a projection device for implementing a flat display or a holographic stereoscopic display. Taking the display screen as an example, the display module 2 includes at least one of the following: an LCD display screen, an LED display screen in which the distance between two adjacent LEDs is smaller than or equal to a preset distance, and a flexible display screen. The display module 2 is used for displaying a preset image, a target object and other content. The preset image may be an image, text or video; an image is used as the example in the schemes described below.
Referring to fig. 2, the display module 2 may include a vertical display screen, which may be arranged in the gray area described above, such as the area within the dashed box; the projection device may cover the designated area, such as the area within the solid box, and the human figure shown in fig. 2 indicates the presence of a user in the designated area. A skilled person can select the specific configuration of the display module 2 according to the specific scene, and the corresponding scheme falls within the protection scope of the present invention.
In this example, when there is no user in the designated area, the display module 2 may operate in an automatic switching mode, that is, the display module 2 may automatically switch the displayed image, such as a waterfall image, at a preset interval (e.g. 5-10 seconds, which may be configured). When a user is present in the designated area, the display module 2 may switch to the manual switching mode, in which the display module 2 displays the image corresponding to the user's motion. For example, when the user's motion is keeping the left arm horizontal, the motion indicates returning to the previous image, and the display module 2 may display the previous image; when the user's motion is keeping the right arm horizontal, the motion indicates continuing to the next image, and the display module 2 may display the next image.
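A minimal sketch of this two-mode behaviour; the image list, interval and gesture names are illustrative assumptions:

```python
import time

IMAGES = ["waterfall_1", "waterfall_2", "waterfall_3"]  # hypothetical playlist
AUTO_INTERVAL_S = 5  # the patent allows e.g. a configurable 5-10 s interval

class DisplayController:
    """Automatic cycling when the area is empty; gesture-driven otherwise."""

    def __init__(self):
        self.index = 0
        self.manual = False
        self.last_switch = time.monotonic()

    def on_user_presence(self, present):
        self.manual = present  # a user in the designated area enables manual mode

    def on_gesture(self, gesture):
        if not self.manual:
            return
        if gesture == "left_arm_horizontal":     # previous image
            self.index = (self.index - 1) % len(IMAGES)
        elif gesture == "right_arm_horizontal":  # next image
            self.index = (self.index + 1) % len(IMAGES)

    def tick(self):
        """Called periodically; auto-advances only in automatic mode."""
        now = time.monotonic()
        if not self.manual and now - self.last_switch >= AUTO_INTERVAL_S:
            self.index = (self.index + 1) % len(IMAGES)
            self.last_switch = now
        return IMAGES[self.index]

ctrl = DisplayController()
ctrl.on_user_presence(True)               # user detected: manual mode
ctrl.on_gesture("right_arm_horizontal")   # next image
print(ctrl.tick())                        # -> waterfall_2
```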
In this example, the blowing module 3 may be disposed at a second preset position in the designated area and may be composed of a plurality of fans. The blowing module 3 may control the rotation speed of each fan according to the currently displayed target object or image, so that air flows of different intensities are output into the designated area to generate a wind field environment corresponding to the image content or to provide simulated water-flow tactile feedback. For example, when the display module 2 displays a waterfall, a user located in front of the display module, which is equivalent to standing at the foot of the waterfall, may feel the flowing air sweeping over the face or hands as the water falls, or, when inserting a hand into the simulated stream, the tactile feedback of water flowing over the hand.
It should be noted that the ultrasonic module 1 and the blowing module 3 may be connected to the display module 2, so that the display module 2 can send an image identifier to the ultrasonic module 1 and the blowing module 3 when displaying each image, and the ultrasonic module 1 and the blowing module 3 can each obtain locally stored preset control parameters, so as to generate tactile feedback simulating the image content and acting on the user. In an example, with continued reference to fig. 1, the interactive system further includes a server module 4, which is respectively connected to the ultrasonic module 1, the display module 2 and the blowing module 3, and may be implemented by a server, a server cluster or the like. In this way, the server module 4 can control the ultrasonic module 1, the display module 2 and the blowing module 3 respectively. For example, the server module 4 may control whether the display mode of the display module 2 is the automatic switching mode or the manual switching mode, and may also control the image or target object displayed by the display module 2. For another example, the server module 4 may control the ultrasonic module 1 to emit ultrasonic waves and generate the corresponding tactile feedback by sending a control instruction, an image identifier, ultrasonic control parameters or the like. As another example, the server module 4 may control the blowing module 3 to output different air flows by controlling the switches and input voltages of the respective fans.
In an example, the interaction system further includes a motion recognition module 5, which may be connected to the server module 4 or to the display module 2; the connection to the server module 4 is taken as the example below. The motion recognition module 5 may be disposed at the top end of the vertical display screen with its field of view covering the designated area, and may be configured to recognize the motion and position of the user when the presence of a user in the designated area is detected, and to send the motion and position to the server module 4. In a specific implementation, the motion recognition module 5 may be implemented by one or more of an infrared camera, a structured-light camera and an ordinary camera. Taking an infrared camera as an example, the infrared camera can acquire consecutive infrared video frames, obtain the contour of the user in each video frame based on an image recognition technology, and then obtain the user's motion according to the contours and their order across the video frames; the user's position information can be determined according to the attenuation of the infrared light intensity. Of course, the motion recognition module 5 may also be implemented by wearable devices worn on the user's wrists or ankles instead of being disposed on the top of the vertical display screen. In that case, the user may wear at least one wearable device that acquires user data such as movement direction and position information, and the motion recognition module 5 may determine the user's motion from the user data of the at least one wearable device. A skilled person can select the specific structure of the motion recognition module 5 according to the specific scene, and provided the user's motions can be recognized, the corresponding scheme falls within the protection scope of the invention.
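Assuming an upstream pose estimator that already yields 2D keypoints per video frame (the patent leaves the recognition pipeline to the implementer), the two arm gestures used for switching could be classified along these lines:

```python
def arm_is_horizontal(shoulder, elbow, wrist, tolerance_px=20):
    """An arm counts as horizontal if shoulder, elbow and wrist share a height."""
    ys = (shoulder[1], elbow[1], wrist[1])
    return max(ys) - min(ys) <= tolerance_px

def classify_gesture(keypoints):
    """Map one frame's keypoints to a gesture used for image switching."""
    if arm_is_horizontal(keypoints["l_shoulder"], keypoints["l_elbow"], keypoints["l_wrist"]):
        return "left_arm_horizontal"   # previous image
    if arm_is_horizontal(keypoints["r_shoulder"], keypoints["r_elbow"], keypoints["r_wrist"]):
        return "right_arm_horizontal"  # next image
    return "none"

# Example frame: right arm roughly level, left arm hanging down (pixel coords).
frame = {
    "l_shoulder": (200, 300), "l_elbow": (190, 380), "l_wrist": (185, 460),
    "r_shoulder": (320, 300), "r_elbow": (400, 305), "r_wrist": (480, 310),
}
print(classify_gesture(frame))  # -> right_arm_horizontal
```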
The server module 4 may control the display module 2 to display at least one image in turn when receiving the motion uploaded by the motion recognition module 5, where the image may be a preset image, text, or video. And when the server module 4 receives the position of the user, the ultrasonic module 1 can be controlled to emit ultrasonic waves and the air blowing module 3 can be controlled to output air flow so as to ensure that the ultrasonic waves act on the skin of the user and the air flow blows over the skin of the user.
It should be noted that in this example, the effective detection range of the motion recognition module 5 may be adjusted to just cover the designated area or the space corresponding to the designated area, so that the motion recognition module 5 obtains the motion and the position of the user, and the server module 4 obtains the motion and the position of the user to determine that the user exists in the designated area.
In an example, referring to fig. 1, the interactive system further comprises a face recognition module 6 connected to the server module 4. In a specific implementation, the face recognition module 6 can be implemented by a camera or a camera array, can be arranged at the top end of the vertical display screen with its shooting range covering the designated area, and is used to collect the face image of a user in the designated area and send it to the server module 4. The server module 4 may obtain the user's feature information by using a face recognition technology in the related art. Since the user may be registered or unregistered, when the user is unregistered the server module 4 may perform the following operations: acquire the target object corresponding to the feature information based on a preset correspondence between feature information and objects, and control the display module 2 to display the target object superimposed on the currently displayed image as a background. When the user is registered with the server module 4, it may perform the following operations: after the feature information is acquired, obtain the user information of the user based on the feature information, acquire the target object corresponding to the user information based on a preset correspondence between user information and target objects, and finally control the display module 2 to display the target object superimposed on the currently displayed image as a background. In this way, during the interaction, personalized target objects can be recommended to both registered and unregistered users, which enriches the recommendation modes and improves user experience and recommendation efficiency.
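The two recommendation paths can be illustrated with a small sketch; the lookup tables stand in for the bank-backend data the patent alludes to and are entirely hypothetical:

```python
# Hypothetical lookup tables standing in for the bank backend.
REGISTERED_USERS = {  # face ID -> account profile
    "face_001": {"risk": "conservative", "assets": "high"},
}
PROFILE_TO_PRODUCT = {  # registered path: user information -> target object
    ("conservative", "high"): "fixed-income wealth product",
}
FEATURE_TO_PRODUCT = {  # unregistered path: coarse features -> target object
    ("30-40", "female"): "education savings insurance",
    ("20-30", "male"): "index fund",
}

def recommend(face_id, features):
    """Registered users get account-based picks; others get feature-based ones."""
    user = REGISTERED_USERS.get(face_id)
    if user is not None:
        return PROFILE_TO_PRODUCT.get((user["risk"], user["assets"]))
    return FEATURE_TO_PRODUCT.get(features)

print(recommend("face_001", None))               # registered: uses account data
print(recommend("face_999", ("20-30", "male")))  # unknown face: uses features
```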
In an example, referring to fig. 1, the interactive system further comprises a touch module 7, and the touch module 7 is connected with the server module 4. In a specific implementation, the touch module 7 may be formed by a touch screen, a radar, or a pressure sensor, and the like, and is configured to detect a touch operation of a user, that is, to locate an operation position of the user in a display area of the display screen.
Referring to fig. 3, taking a radar as an example, the server module 4 or the radar may connect (the head of) a user and each vertex of the display screen, acquire a (virtual, invisible) touch detection area parallel to the display screen in an area between the display screen and the user, and preset and store mapping relationships between positions on the touch detection area and positions on the display screen. The touch detection area is in a space range where the arms of the user can move. Then, the server module 4 may control the radar to transmit microwave signals to all directions within the touch detection area, so that when the user is at the virtual point position P1 in the air, the radar may detect the position P1 and send the position P1 to the server module 4; the server module 4 may determine the position P2 of P1 corresponding to the display screen according to the mapping relationship, and then perform the corresponding operation at the position P2 in combination with the displayed image, the target object or the operation button. Wherein, the target object comprises at least one of the following contents: bank financing products, insurance products, gourmet foods and online education courses.
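Because the touch detection area is parallel to the display screen, the stored mapping from a plane position P1 to a screen position P2 amounts to a rectangle-to-rectangle map. A sketch with assumed dimensions:

```python
# Illustrative dimensions; the patent only requires that the mapping between
# the virtual touch plane and the screen be precomputed and stored.
TOUCH_PLANE = {"x0": -0.4, "y0": 0.9, "w": 0.8, "h": 0.6}  # metres, in mid-air
SCREEN_PX = {"w": 1920, "h": 1080}

def plane_to_screen(p1):
    """Map a radar-detected plane point P1 (metres) to screen pixels P2."""
    u = (p1[0] - TOUCH_PLANE["x0"]) / TOUCH_PLANE["w"]  # normalise to 0..1
    v = (p1[1] - TOUCH_PLANE["y0"]) / TOUCH_PLANE["h"]
    u = min(max(u, 0.0), 1.0)  # clamp to the detection area
    v = min(max(v, 0.0), 1.0)
    # pixel rows grow downward, so the top of the plane maps to row 0
    return round(u * SCREEN_PX["w"]), round((1.0 - v) * SCREEN_PX["h"])

p2 = plane_to_screen((0.1, 1.2))  # a fingertip right of centre, 1.2 m high
print(p2)                         # pixel position used for the hit test -> (1200, 540)
```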
For example, taking the waterfall image as an example, the server module 4 may control the ultrasonic module 1 to adjust the direction and intensity of the emitted ultrasonic waves after determining the position P2, so as to generate a feeling of water flowing through the user's finger.
For another example, taking the target object as an example, after determining the position P2, the server module 4 may control the display module 2 to display attribute information of the target object, such as product introduction, related video, product appearance, and the like, and may further include a payment channel (e.g., a two-dimensional code).
Taking the payment channel as a two-dimensional code as an example, the user may scan the two-dimensional code by using the terminal, and initiate a payment request to the server module 4, and the server module 4 may process transaction data corresponding to the payment request, such as price, account information, fund transfer, and the like of the target object, so as to obtain a transaction result of transaction success or transaction failure. Then, the server module 4 may control the display module 2 to display the transaction result, or the server module 4 may feed back the transaction result to the terminal for displaying on the terminal.
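A minimal sketch of this payment round trip, with invented product and account data and none of the request verification a real payment system would require:

```python
PRODUCTS = {"fund_A": {"price": 100.00}}     # invented catalogue
ACCOUNTS = {"user_42": {"balance": 250.00}}  # invented accounts

def handle_payment_request(req):
    """Return a transaction result for a payment initiated via the QR code."""
    product = PRODUCTS.get(req["product_id"])
    account = ACCOUNTS.get(req["account_id"])
    if product is None or account is None:
        return {"status": "failure", "reason": "unknown product or account"}
    if account["balance"] < product["price"]:
        return {"status": "failure", "reason": "insufficient funds"}
    account["balance"] -= product["price"]   # the fund-transfer step
    return {"status": "success", "paid": product["price"]}

result = handle_payment_request({"product_id": "fund_A", "account_id": "user_42"})
print(result)  # the server module would push this result to the display module
```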
It should be noted that fig. 3 shows a scene in which the touch detection area may be parallel to the vertical display screen. When the display module 2 further includes a horizontal display screen, a touch detection area as shown in fig. 4 may be further provided, so that a touch operation of a user on the horizontal display screen may be detected.
In an example, referring to fig. 1, the interactive system further comprises a fragrance module 8, which is connected to the server module 4 and may be disposed at a third preset position in the designated area, for example in the air above the designated area at a height that may exceed 2 meters, or at the edge of the designated area with the outlet pointing toward the position of the user's head. In a specific implementation, the fragrance module 8 can be implemented by a plurality of aromatherapy machines, each loaded with a liquid of a different smell, such as perfume, essential oil or others. The server module 4 may pre-store the correspondence between target objects and the aromatherapy machines, that is, each target object corresponds to a smell, and the start-up duration of each aromatherapy machine can be controlled so that the sprayed vapours mix into the smell corresponding to the target object. In this example, when the currently displayed target object is acquired, the server module 4 may control the fragrance module 8 to release the smell corresponding to that target object. For example, the server module 4 may control the aromatherapy machines directly, or send a smell code to the fragrance module 8, which then controls the start-up duration of each aromatherapy machine. Of course, when the number of target objects is small, each aromatherapy machine can instead be loaded with a pre-mixed liquid whose smell corresponds to one target object, so that only a single aromatherapy machine needs to be started each time.
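The duration-based scent mixing might be sketched as follows; the recipe table, machine names and durations are assumptions:

```python
# Hypothetical recipes: seconds each aromatherapy machine runs per target object.
SCENT_RECIPES = {
    "tropical_fund_promo": {"machine_citrus": 3.0, "machine_floral": 1.5},
    "forest_insurance": {"machine_pine": 4.0},
}

def release_scent(target_object, activate):
    """Run each machine for its recipe duration; `activate` is the hardware hook."""
    for machine, seconds in SCENT_RECIPES.get(target_object, {}).items():
        activate(machine, seconds)  # the sprayed vapours mix in the air

release_scent("tropical_fund_promo",
              lambda machine, seconds: print(f"run {machine} for {seconds:.1f}s"))
```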
In an example, referring to fig. 1, the interactive system further comprises a humidifying module 9 connected to the server module 4. In a specific implementation, the humidifying module 9 may be implemented by at least one humidifier. In this way, after acquiring the position of the user (uploaded by the motion recognition module 5), the server module 4 may control each humidifier to release water vapor according to the currently displayed target object or image, so as to generate a humidity environment corresponding to the image content. For example, when the display module 2 displays a waterfall, a user located in front of the display module, which is equivalent to standing at the foot of the waterfall, can be given a simulated sense of the waterfall's moisture by the humidifying module 9, creating a feeling of being personally on the scene.
In an example, the interactive system further comprises an audio module connected to the server module 4. In a specific implementation, the audio module may be implemented by a plurality of speakers and is used to play at least one of the following: the audio of the image, the audio of the target object, and the audio played when a target object is opened. Technicians can configure the audio for images, target objects or other operations according to specific scenes, and the audio module plays the corresponding audio at the corresponding time.
It should be noted that, in the specific implementation, in the case of no conflict, a technician may combine 2 or more of the above modules, and implement them in the same entity, for example, adding liquids with different odors in the humidification module 9 may be used as a combination of the fragrance module 8 and the humidification module 9, and in the case of implementing corresponding functions, the corresponding solution falls within the protection scope of the present invention.
Referring to fig. 1, 2 and 3, the operation of the above-mentioned interactive system is described below in connection with the recommendation of financial products by banks:
the motion recognition module 5 may detect whether a user is present within a specified range in real time or periodically.
When there is no user, the server module 4 may control the display module 2 to play a preset image, such as a video of a real waterfall, or multimedia content and effects made according to the waterfall, as shown in fig. 5. In particular, the description of the waterfall can be added to the image. The display module 2 may automatically switch to another image according to a preset interval time, so that the user can enjoy a plurality of preset waterfalls outside the designated area.
When a user is present, the server module 4 controls the display module 2 to switch from the automatic switching mode to the manual switching mode. Optionally, the server module 4 may further control the display module 2 to display an indication image superimposed on the current image, prompting the user to raise the left or right arm to switch images.
The action recognition module 5 can recognize the user's action: when the action is keeping the left arm horizontal, it indicates returning to the previous image, and the display module 2 may display the previous waterfall; when the action is keeping the right arm horizontal, it indicates continuing to the next image, and the display module 2 may display the next waterfall. Meanwhile, the action recognition module 5 can acquire the user's position, and the server module 4 controls the ultrasonic module 1 to generate a 3D virtual object according to the user's position and the real-time state of the waterfall flow, so that the user can feel the tactile sensation produced by the ultrasonic waves, namely something like the impact of air pushed forward by rapidly falling water, giving the user a feeling of being personally on the scene. Accordingly, the server module 4 can also control the humidifying module 9 to release moisture and the blowing module 3 to output an air flow, making the user's sensations richer.
After the user stays for a period of time, the server module 4 can control the display module 2 to display the interactive image, and prompt the user to enter an interactive link.
At this time, the touch module 7 may detect the user's touch operation, and the server module 4 may determine the icon clicked by the user according to the touch operation. After the user clicks the interactive icon, the server module 4 may control the display module 2 to use the current image as a background image, or use a multimedia digital waterfall stream as the background, on which product icons or game icons flow from top to bottom along the waterfall stream; there may be multiple product icons and game icons, with the effect shown in fig. 6. The products may be wealth-management products, funds, insurance and the like sold by the bank, or commodities in the bank's mall.
The touch module 7 may continue to detect the touch operation of the user, and the server module 4 may determine the icon clicked by the user according to the touch operation.
In an example, when the user clicks a product icon, product information corresponding to the product image is displayed on the display module 2, such as a product introduction, related videos and product appearance, and may further include a payment channel (such as a two-dimensional code), so that the user can conveniently purchase the product online.
Of course, after entering the interaction link, the interaction system may start the face recognition module 6. The face recognition module 6 acquires a face image of the user and uploads the face image to the server module 4. The server module 4 determines whether the user is a registered user, and when the user is the registered user, the server module can acquire user information such as asset condition, risk preference and the like of the user from a local or bank background, accurately recommend a corresponding financial product based on the user information, and when the user is a non-registered user, acquire characteristic information (such as age, gender and the like) of the user, and recommend the financial product based on the characteristic information.
The touch module 7 may continue to detect the user's touch operations, and when the user selects the close button to close the product information, the server module 4 may control the display module 2 to close the product information. In practical applications, if the product information frame has not been closed for a long time and the action recognition module 5 detects that no one is in the interaction area, the server module 4 can automatically close the product information and return to the automatic switching mode for waterfall display.
In one example, when the user clicks on a non-product location, server module 4 may control display module 2 to display a halo effect at the clicked location, i.e., an effect that simulates a finger touching the water surface.
In an example, when the user clicks on a game icon, the server module 4 may control the display module 2 to display the game content in full screen. The game content may take a variety of forms, with different games displayed as different icons; the user enters the corresponding game by clicking its icon, while the game's music and sound effects are played through the audio module. For example, a step-by-step introduction to the game may be shown first, and the touch module 7 detects the user clicking "next" to learn the game's controls. For another example, when the touch module 7 detects that the user has selected a music beat game, the system may offer preset music pieces of different difficulty levels for the user to choose from. After the selection is detected, the display module 2 is controlled to be divided into a plurality of regions in the longitudinal (up-down) direction, with the effect shown in fig. 7. Different notes then start to appear continuously, emerging at random in each region and sliding down at different speeds. When a note slides to the bottom end of the display module 2 (the touch click region) and the touch module 7 detects the user tapping the note, the tap is a hit (the note is struck), and the audio module is controlled to play that note's sound; if the beat is missed, the note simply flows on, and the audio module makes no sound or produces a specific sound.
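A hit-test sketch for this falling-notes game; the timing window, region layout and note speeds are invented for illustration:

```python
CLICK_REGION_TOP = 0.9  # normalised screen height where the click region begins

def note_y(note, now):
    """A note's normalised vertical position at time `now` (0 = top, 1 = bottom)."""
    return (now - note["spawn_time"]) * note["speed"]

def judge_tap(notes, tap_region, now):
    """Return the note that was hit, or None for a miss (no/odd sound)."""
    for note in notes:
        y = note_y(note, now)
        if note["region"] == tap_region and CLICK_REGION_TOP <= y <= 1.0:
            return note  # hit: the audio module plays this note's sound
    return None

notes = [{"region": 2, "spawn_time": 0.0, "speed": 0.25}]
print(judge_tap(notes, tap_region=2, now=3.8))  # y = 0.95 -> hit
print(judge_tap(notes, tap_region=2, now=2.0))  # y = 0.50 -> miss
```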
Fig. 8 is a flowchart illustrating an interaction method according to an exemplary embodiment of the present invention, and referring to fig. 8, an interaction method includes steps 81 to 83, which is adapted to the above-mentioned interaction system, wherein:
in step 81, controlling the display module to display at least one preset image;
in step 82, when a user exists in a designated area in front of the display module, controlling an ultrasonic module to emit ultrasonic waves, so as to generate, in a space corresponding to the designated area, ultrasonic tactile feedback that simulates the image content and acts on the user; and controlling the blowing module to output an air flow, so as to generate a wind field environment corresponding to the image content or provide simulated water-flow tactile feedback.
In an embodiment, the method further comprises:
acquiring the user action sent by the action identification module;
and controlling the display module to switch to the image corresponding to the action.
In an embodiment, before controlling the display module to switch to the image corresponding to the action, the method further includes:
controlling the display module to switch from an automatic switching mode to a manual switching mode in response to acquiring the action; in the automatic switching mode, the display module switches the displayed images at a set time interval, and in the manual switching mode, the display module switches to the image corresponding to the user's action.
In an embodiment, the method further comprises:
acquiring a facial image of the user sent by a face recognition module;
acquiring feature information of the user according to the facial image;
acquiring a target object corresponding to the characteristic information based on the preset corresponding relation between the characteristic information and the object;
and controlling the display module to display the target object in an overlapping manner on the basis of the image as a background image.
In an embodiment, the method further comprises:
acquiring a facial image of the user sent by a face recognition module;
acquiring feature information of the user according to the facial image;
acquiring user information of the user based on the characteristic information;
acquiring a target object corresponding to user information based on the corresponding relation between the user information and the target object;
and controlling the display module to display the target object in an overlapping manner on the basis of the image as a background image.
In an embodiment, the method further comprises:
acquiring touch operation sent by a touch module;
and controlling the display module to display at least one preset object or a target object corresponding to the touch operation in an overlapping manner on the basis of using the image as a background image according to the touch operation.
In one embodiment, a payment channel is arranged at a preset position of each preset object or target object; the method further comprises the following steps:
acquiring a payment request initiated by an external terminal through the payment channel;
processing transaction data corresponding to the payment request to obtain a transaction result of successful transaction or failed transaction;
and controlling a display module to display the transaction result.
In an embodiment, the method further comprises:
and controlling a fragrance module to release the odor corresponding to the target object according to the currently displayed target object.
In an embodiment, the method further comprises:
and controlling a humidification module to release water vapor according to the currently displayed target object or image, so as to generate a humidity environment corresponding to the image content.
In one embodiment, the target object includes at least one of: bank financing products, insurance products, gourmet foods and online education courses.
It should be noted that an interactive method provided in this embodiment corresponds to the above interactive system, and for detailed contents, reference is made to the contents of the embodiment shown in the interactive system, which is not described herein again.
Embodiments of the present invention also provide a non-transitory computer-readable storage medium, on which computer instructions are stored, and the computer instructions, when executed by a processor, implement the steps of the above-described interactive method.
In the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise. In the present invention, two components connected by a dotted line are in an electrical connection or contact relationship, and the dotted line is only used for the sake of clarity of the drawings, so that the solution of the present invention can be understood more easily.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (21)

1. An interactive system, characterized in that the system comprises: an ultrasonic module, a blowing module and a display module; the display module comprises a vertical display screen, and a designated area is arranged in front of the vertical display screen; the ultrasonic module is arranged at a first preset position in the designated area, and the blowing module is arranged at a second preset position in the designated area;
the vertical display screen is used for displaying at least one preset image;
the ultrasonic module is used for emitting ultrasonic waves when a user exists in the designated area, so as to generate, in a space corresponding to the designated area, ultrasonic tactile feedback that simulates the image content and acts on the user;
the blowing module is used for outputting an air flow when a user exists in the designated area, so as to generate a wind field environment corresponding to the image content or provide simulated water-flow tactile feedback.
2. The interactive system according to claim 1, wherein the ultrasonic module comprises a speaker set capable of emitting ultrasonic waves;
alternatively,
the display module comprises at least one of the following: an LCD display screen, an LED display screen in which the distance between two adjacent LEDs is smaller than or equal to a preset distance, and a flexible display screen.
3. The interactive system according to claim 1 or 2, characterized in that the system further comprises a server module and an action recognition module; the server module is respectively connected with the action recognition module, the ultrasonic module, the blowing module and the display module; the action recognition module is arranged at the top end of the vertical display screen;
the action identification module is used for identifying the action and the position of the user when the user is detected to exist in the designated area, and sending the action and the position to the server module;
the server module is used for controlling the display module to display the at least one image in turn according to the action; and controlling the ultrasonic module to emit ultrasonic waves and the blowing module to output airflow according to the position.
4. The interactive system of claim 3, wherein the motion recognition module comprises at least one of: infrared camera, structured light camera, ordinary camera.
5. The interactive system of claim 3, wherein the system further comprises a face recognition module; the face recognition module is connected with the server module and arranged at the top end of the vertical display screen;
the face recognition module is used for acquiring a face image of a user in a designated area and sending the face image to the server module;
the server module is used for acquiring a target object corresponding to the user according to the face image and controlling the display module to display the target object in a superposition mode on the basis that the image serves as a background image.
6. The interactive system according to claim 5, wherein the face recognition module comprises a camera or a camera array.
7. The interactive system of claim 3, wherein the system further comprises a touch module; the touch control module is connected with the server module and is arranged at the bottom of the vertical display screen or on a horizontal display screen in the display module;
the touch control module is used for detecting the touch control operation of the user and sending the touch control operation to the server module;
the server module is further configured to control the display module to display a target object corresponding to the touch operation in an overlapping manner on the basis that the image serves as a background image according to the touch operation.
8. The interactive system of claim 7, wherein the touch module comprises one of the following: a touch-sensitive screen, a radar, and a pressure sensor.
9. The interactive system as claimed in claim 3, wherein the system further comprises a fragrance module arranged at a third preset position in the designated area; the fragrance module is connected with the server module;
the server module is further used for controlling the fragrance module to release the odor corresponding to the target object according to the currently displayed target object.
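An illustrative sketch of the claim-9 fragrance control; the object-to-scent table is an assumption for demonstration, not specified in the claims:

```python
# Assumed scent lookup keyed by the currently displayed target object.
SCENT_BY_OBJECT = {
    "gourmet food": "baked-bread scent",
    "forest scene": "pine scent",
}

class FragranceModule:
    def release(self, scent: str) -> None:
        print(f"fragrance: releasing {scent}")

def update_fragrance(current_object: str, module: FragranceModule) -> None:
    scent = SCENT_BY_OBJECT.get(current_object)
    if scent is not None:
        module.release(scent)  # scent matches the displayed target object
```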
10. The interactive system as claimed in claim 3, wherein the system further comprises a humidifying module; the humidifying module comprises at least one humidifier and is connected with the server module;
the server module is further used for controlling each humidifier to release water vapor according to the currently displayed target object or image so as to generate a humidity environment corresponding to the image content.
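Similarly, the claim-10 humidity control might be sketched as follows, with an assumed scene-to-humidity table and a stubbed humidifier driver:

```python
# Assumed humidity levels keyed by the displayed image or target object.
HUMIDITY_BY_SCENE = {"rainforest": 0.8, "desert": 0.2}

class Humidifier:
    def set_level(self, level: float) -> None:
        print(f"humidifier: releasing vapor toward {level:.0%} humidity")

def update_humidity(current_scene: str, humidifiers: list) -> None:
    level = HUMIDITY_BY_SCENE.get(current_scene)
    if level is not None:
        for h in humidifiers:
            h.set_level(level)  # match the humidity feel of the image content
```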
11. An interaction method applicable to the interactive system of any one of claims 1 to 10, the method comprising:
controlling a display module to display at least one preset image;
when a user is present in a designated area in front of the display module, controlling an ultrasonic module to emit ultrasonic waves, so as to generate, in a space corresponding to the designated area, ultrasonic tactile feedback that acts on the user and simulates the image content; and controlling a blowing module to output an airflow, so as to generate a wind field environment corresponding to the image content or to provide simulated water flow tactile feedback.
12. The interactive method of claim 11, wherein the method further comprises:
acquiring the user action sent by the action recognition module;
and controlling the display module to switch to the image corresponding to the action.
13. The interactive method of claim 12, wherein before controlling the display module to switch to the image corresponding to the action, the method further comprises:
controlling the display module to switch from an automatic switching mode to a manual switching mode in response to acquiring the action; wherein the automatic switching mode means that the display module switches the displayed images at a set time interval, and the manual switching mode means that the display module switches to the image corresponding to the user action.
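A minimal sketch of the claim-13 mode switch, under the assumption of a timer-driven automatic mode; all names are illustrative:

```python
import threading

class DisplayController:
    def __init__(self, display, interval_s: float = 5.0):
        self.display = display
        self.mode = "auto"           # start in the automatic switching mode
        self.interval_s = interval_s
        self._schedule()

    def _schedule(self) -> None:
        timer = threading.Timer(self.interval_s, self._auto_tick)
        timer.daemon = True
        timer.start()

    def _auto_tick(self) -> None:
        if self.mode == "auto":
            self.display.next_image()   # timed switching at the set interval
        self._schedule()

    def on_user_action(self, action: str) -> None:
        self.mode = "manual"            # first detected action leaves auto mode
        self.display.switch_to(action)  # image mapped to this action
```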
14. The interactive method of claim 11, wherein the method further comprises:
acquiring a facial image of the user sent by a face recognition module;
acquiring feature information of the user according to the facial image;
acquiring a target object corresponding to the feature information based on a preset correspondence between feature information and objects;
and controlling the display module to display the target object superimposed on the image serving as a background image.
15. The interactive method of claim 11, wherein the method further comprises:
acquiring a facial image of the user sent by a face recognition module;
acquiring feature information of the user according to the facial image;
acquiring user information of the user based on the feature information;
acquiring a target object corresponding to the user information based on a correspondence between user information and target objects;
and controlling the display module to display the target object superimposed on the image serving as a background image.
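The two-step lookup of claim 15 might be sketched as below; both tables are placeholders, since the claims only state that the correspondences exist:

```python
# Assumed lookup tables, purely for illustration.
FEATURES_TO_USER = {"features_001": {"user_id": "u1"}}
USER_TO_OBJECT = {"u1": "insurance product"}

def recommend_for_features(feature_key: str, display) -> None:
    user = FEATURES_TO_USER.get(feature_key)      # feature info -> user info
    if user is None:
        return
    target = USER_TO_OBJECT.get(user["user_id"])  # user info -> target object
    if target is not None:
        display.overlay(target)  # superimposed on the background image
```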
16. The interactive method of claim 11, wherein the method further comprises:
acquiring the touch operation sent by the touch module;
and, according to the touch operation, controlling the display module to display at least one preset object or a target object corresponding to the touch operation superimposed on the image serving as a background image.
17. The interaction method according to claim 16, wherein a payment channel is provided at a preset position of each preset object or target object; the method further comprises:
acquiring a payment request initiated by an external terminal through the payment channel;
processing transaction data corresponding to the payment request to obtain a transaction result indicating a successful or failed transaction;
and controlling the display module to display the transaction result.
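A hedged sketch of the claim-17 payment flow; the request shape and the transaction stub are assumptions, as the claims leave the payment backend open:

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    object_id: str   # object whose payment channel was used (assumed field)
    amount: float

def process_transaction(request: PaymentRequest) -> bool:
    # Placeholder: a real system would call out to a payment gateway here.
    return request.amount > 0

def handle_payment(request: PaymentRequest, display) -> None:
    success = process_transaction(request)
    display.show_text("transaction successful" if success else "transaction failed")
```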
18. The interactive method of claim 11, wherein the method further comprises:
and controlling a fragrance module to release an odor corresponding to the currently displayed target object.
19. The interactive method of claim 11, wherein the method further comprises:
and controlling a humidifying module to release water vapor according to the currently displayed target object or image, so as to generate a humidity environment corresponding to the image content.
20. The interaction method according to any one of claims 14 to 19, wherein the target object comprises at least one of the following: bank financing products, insurance products, gourmet foods and online education courses.
21. A non-transitory computer readable storage medium having stored thereon an executable program, wherein the executable program when executed performs the steps of the method of any of claims 11 to 20.
CN202010592708.6A 2020-06-24 2020-06-24 Interactive system, interactive method and machine-readable storage medium Active CN111752389B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010592708.6A CN111752389B (en) 2020-06-24 2020-06-24 Interactive system, interactive method and machine-readable storage medium
PCT/CN2021/101944 WO2021259341A1 (en) 2020-06-24 2021-06-24 Interaction system, interaction method and machine readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010592708.6A CN111752389B (en) 2020-06-24 2020-06-24 Interactive system, interactive method and machine-readable storage medium

Publications (2)

Publication Number Publication Date
CN111752389A true CN111752389A (en) 2020-10-09
CN111752389B CN111752389B (en) 2023-03-10

Family

ID=72677300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010592708.6A Active CN111752389B (en) 2020-06-24 2020-06-24 Interactive system, interactive method and machine-readable storage medium

Country Status (2)

Country Link
CN (1) CN111752389B (en)
WO (1) WO2021259341A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021259341A1 (en) * 2020-06-24 2021-12-30 京东方科技集团股份有限公司 Interaction system, interaction method and machine readable storage medium
WO2022170650A1 (en) * 2021-02-09 2022-08-18 南京微纳科技研究院有限公司 Contactless human-machine interaction system and method

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN114562803A (en) * 2022-01-20 2022-05-31 青岛海尔空调器有限总公司 Method and device for controlling air conditioner and air conditioner

Citations (1)

Publication number Priority date Publication date Assignee Title
US20110279249A1 (en) * 2009-05-29 2011-11-17 Microsoft Corporation Systems and methods for immersive interaction with virtual objects

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US20090102805A1 (en) * 2007-10-18 2009-04-23 Microsoft Corporation Three-dimensional object simulation using audio, visual, and tactile feedback
US20160364960A1 (en) * 2015-06-09 2016-12-15 Elwha Llc Systems and methods for ultrasonically induced tactile stimuli in an entertainment system
US10168767B2 (en) * 2016-09-30 2019-01-01 Intel Corporation Interaction mode selection based on detected distance between user and machine interface
CN111752389B (en) * 2020-06-24 2023-03-10 京东方科技集团股份有限公司 Interactive system, interactive method and machine-readable storage medium

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
US20110279249A1 (en) * 2009-05-29 2011-11-17 Microsoft Corporation Systems and methods for immersive interaction with virtual objects


Also Published As

Publication number Publication date
CN111752389B (en) 2023-03-10
WO2021259341A1 (en) 2021-12-30

Similar Documents

Publication Publication Date Title
CN111752389B (en) Interactive system, interactive method and machine-readable storage medium
JP5859456B2 (en) Camera navigation for presentations
US10048763B2 (en) Distance scalable no touch computing
CA2767788C (en) Bringing a visual representation to life via learned input from the user
KR101966040B1 (en) Apparatus for dance game and method for dance game using thereof
JP5782440B2 (en) Method and system for automatically generating visual display
JP2009037594A (en) System and method for constructing three-dimensional image using camera-based gesture input
JP2001517344A (en) System and method for admitting three-dimensional navigation through a virtual reality environment using camera-based gesture input
CN112020836A (en) Virtual interactive audience interface
JP6945312B2 (en) Operation control system, character screening system and program
CN109391848B (en) Interactive advertisement system
Zhang et al. KaraKter: An autonomously interacting Karate Kumite character for VR-based training and research
US20200201437A1 (en) Haptically-enabled media
CN111857335A (en) Virtual object driving method and device, display equipment and storage medium
CN109714647B (en) Information processing method and device
US20220300143A1 (en) Light Field Display System for Consumer Devices
TWM412400U (en) Augmented virtual reality system of bio-physical characteristics identification
CN116139471A (en) Interactive movie watching system-dream riding
CN112804546B (en) Interaction method, device, equipment and storage medium based on live broadcast
KR20180085328A (en) Apparatus for dance game and method for dance game using thereof
JPH10214344A (en) Interactive display device
CN112135152B (en) Information processing method and device
CN113407146A (en) Terminal voice interaction method and system and corresponding terminal equipment
CN108156553A (en) A kind of speaker with projecting function
WO2021059642A1 (en) Information processing device, control method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant