CN112198963A - Immersive tunnel type multimedia interactive display method, equipment and storage medium - Google Patents

Immersive tunnel type multimedia interactive display method, equipment and storage medium

Info

Publication number
CN112198963A
CN112198963A
Authority
CN
China
Prior art keywords
target object
information
face
deflection angle
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011119814.9A
Other languages
Chinese (zh)
Inventor
张建强
范碧琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Taihe Century Cultural And Creative Co ltd
Original Assignee
Shenzhen Taihe Century Cultural And Creative Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Taihe Century Cultural And Creative Co ltd filed Critical Shenzhen Taihe Century Cultural And Creative Co ltd
Priority to CN202011119814.9A priority Critical patent/CN112198963A/en
Publication of CN112198963A publication Critical patent/CN112198963A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an immersive tunnel type multimedia interactive display method, equipment and a storage medium. The technical scheme comprises the following steps: acquiring distance information between a plurality of moving targets and a screen; determining a target object among the plurality of moving targets based on the distance information and triggering a change of the virtual scene picture associated with the target object; acquiring image information of the target object; detecting face information associated with the target object in the image information; and determining, based on the face information, whether to issue an interaction request. The method and the device allow the immersive virtual scene to interact flexibly with the target object: the target object interacts with the virtual scene simply by approaching the screen, which triggers a change of the picture in the virtual scene, so that the multimedia interactive display is more engaging.

Description

Immersive tunnel type multimedia interactive display method, equipment and storage medium
Technical Field
The present application relates to the field of multimedia interaction technologies, and in particular, to an immersive tunnel type multimedia interaction presentation method, device, and storage medium.
Background
An immersive system organically combines high-resolution stereoscopic projection technology, three-dimensional computer graphics technology, sound technology and the like to generate a completely immersive virtual environment. A user can interact with the immersive virtual environment through a head-mounted display, gloves, a helmet display and sensors, so that the user interacts well with the virtual environment and has a feeling of being personally on the scene.
In some immersive exhibition halls where multiple persons participate in the interaction, because the interaction between multiple persons and the virtual images is complex, the immersion of the participants in the virtual scene is generally improved by presetting certain virtual image changes, or the participants trigger changes in the virtual scene through interactive equipment. As a result, the interaction between the participants and the virtual scene lacks flexibility, and the immersive experience is not very engaging.
The above prior art solutions have the following drawback: when multiple people participate, the interaction effect is poor.
Disclosure of Invention
The application provides an immersive tunnel type multimedia interactive display method, equipment and storage medium, which offer good interaction flexibility and make the display more engaging.
In a first aspect, the present application provides an immersive tunnel type multimedia interactive display method, which adopts the following technical scheme:
an immersive tunnel type multimedia interactive presentation method comprises the following steps:
acquiring distance information between a plurality of moving targets and a screen;
determining a target object in a plurality of moving targets and triggering a virtual scene picture change associated with the target object based on the distance information;
acquiring image information of a target object;
detecting face information associated with the target object in the image information;
based on the facial information, it is determined whether to issue an interaction request.
By adopting the technical scheme, the distance information between the moving targets and the screen is detected in real time. When a moving target is detected to be located in the target area, the moving target is determined to be a target object and the picture change of the virtual scene is triggered, so that a good interaction effect is achieved between the virtual scene and the target object. When no target object is detected in the target area, the picture of the virtual scene does not change but remains static, which not only makes the interaction more engaging but also saves energy. In addition, when a target object is detected in the target area, the face information of the target object is further detected, and whether an interaction message needs to be sent can be judged according to the face information.
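The control flow described above can be outlined in code. The following is a minimal sketch only, assuming a Python implementation in which distance measurement and face-angle estimation are supplied as callables; the configuration values and function names are illustrative assumptions and not part of the original disclosure.

```python
# Minimal sketch of the interaction loop described above (illustrative assumption,
# not the patent's reference implementation). Sensor access and face analysis are
# passed in as callables so the control flow itself stays self-contained.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Config:
    target_area_m: tuple = (0.5, 3.0)   # assumed trigger range in front of the screen (metres)
    max_view_yaw_deg: float = 30.0      # assumed largest face yaw at which the screen is still viewable


def interaction_step(distances_m: Sequence[float],
                     face_yaw_deg: Callable[[int], float],
                     cfg: Config) -> dict:
    """One pass of the method: pick target objects by distance, then decide
    whether to animate the virtual scene and whether to issue an interaction request."""
    near, far = cfg.target_area_m
    targets = [i for i, d in enumerate(distances_m) if near <= d <= far]
    if not targets:
        # No target object in the target area: the scene stays static.
        return {"animate": False, "interaction_request": False, "targets": []}
    # A target object is in the target area: the virtual scene keeps changing.
    # Request interaction if any target's face is turned too far from the screen.
    request = any(abs(face_yaw_deg(i)) > cfg.max_view_yaw_deg for i in targets)
    return {"animate": True, "interaction_request": request, "targets": targets}


if __name__ == "__main__":
    cfg = Config()
    print(interaction_step([4.2, 1.8, 2.5], lambda i: 45.0 if i == 1 else 10.0, cfg))
```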
The present application may be further configured in a preferred example to: the step of determining a target object in a plurality of moving targets and triggering a virtual scene picture change associated with the target object based on the distance information comprises:
judging, according to the distance information, whether the distance between the moving target and the screen falls within the target area;
and when a target object is located in the target area, triggering the change of the virtual scene picture.
By adopting the technical scheme, after the distance between the moving target and the screen is detected, whether the moving target is located in the target area is further judged, and a moving target located in the target area is taken as a target object to trigger the change of the virtual scene picture; in other words, the change of the virtual scene picture is triggered only when a target object exists in the target area.
The present application may be further configured in a preferred example to: the step of detecting face information associated with a target object in image information includes:
identifying a face position of a target object in an image based on the acquired image information;
extracting facial contour feature information according to the facial position;
and comparing the acquired face contour characteristic information with the reference image to acquire the deflection angle of the face contour characteristic information.
By adopting the technical scheme, when the acquired distance between the target object and the screen is judged to be within the preset range, the picture change of the virtual scene is triggered, and at the same time image information of the target object is further acquired to identify its face information. After the image information of the target object is obtained, the face position in the image information is identified and the face contour feature information at that position is extracted; the deflection angle of the face contour features is obtained by comparing the face contour feature information with the reference image, that is, the deflection angle of the face of the target object relative to the screen is obtained, so that the virtual scene picture can interact better with the target object.
The present application may be further configured in a preferred example to: the step of comparing the acquired facial contour feature information with the reference image to acquire a contour deflection angle includes:
acquiring local binary features corresponding to the facial features of the facial position image, processing the binary features of the face and identifying facial contour feature information;
and comparing the facial contour features with the reference image, and judging the facial contour deflection angle of the target object in the acquired image information.
By adopting the technical scheme, the face position of the target object is first identified in the acquired image information. After the face position is identified, the local binarization features of the face position are acquired and processed, the contour feature information of the face is extracted and compared with the reference image, and the face contour deflection angle of the target object in the acquired image information is judged.
The present application may be further configured in a preferred example to: the step of comparing the acquired facial contour feature information with the reference image to acquire a contour deflection angle includes:
acquiring the deflection angle of the image information with the screen as the reference plane;
and acquiring the actual deflection angle of the face of the target object relative to the screen based on the deflection angle of the image information relative to the screen and the face contour deflection angle in the image information.
By adopting the technical scheme, after the deflection angle of the face of the target object in the image information is obtained, the actual deflection angle of the target object relative to the screen is calculated by combining it with the deflection angle of the image information with the screen as the reference plane; whether the target object is watching the interactive changes on the virtual screen can then be judged from the face deflection angle of the target object.
The present application may be further configured in a preferred example to: the step of determining whether to send out interactive request information based on the face information comprises:
when the face deflection angle of a target object in the target area is detected to be a deflection angle at which the virtual scene in the screen can be observed, the virtual scene picture continues to change;
when the face deflection angle of a target object in a target area is detected to be larger than the deflection angle at which the change of a virtual scene in a screen can be observed, sending out interaction request information;
and when the target object is detected to leave the target area, stopping the picture change of the virtual scene.
By adopting the technical scheme, after a target object is located in the target area and the change of the virtual scene picture in the screen has been triggered, the face deflection angle of the target object is further judged in order to determine whether the target object is watching the interactive changes of the virtual scene picture in the screen. When the face deflection angle of the target object is judged to be larger than the deflection angle at which the change of the virtual scene in the screen can be observed, an interaction request message is triggered to invite the target object to watch the screen and interact. When the target object is detected to have left the target area, that is, when there is no target object in the target area, the picture change of the virtual scene is stopped.
In a second aspect, the present application provides an immersive tunnel type multimedia interactive exhibition system, which adopts the following technical solutions:
an immersive tunnel-type multimedia interactive presentation system comprising:
an acquisition module: used for acquiring distance information between a plurality of moving targets and the screen;
a response module: used for determining a target object among the plurality of moving targets based on the distance information and triggering a change of the virtual scene picture associated with the target object;
an image acquisition module: used for acquiring image information of the target object;
a detection module: used for detecting face information associated with the target object in the image information;
a sending module: used for determining, based on the face information, whether to issue an interaction request.
In a third aspect, the present application provides a computer device, which adopts the following technical solution:
a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above immersive tunnel-type multimedia interactive presentation method when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium, storing a computer program which, when executed by a processor, performs the steps of the immersive tunnel multimedia interactive presentation method described above.
In summary, the present application includes at least one of the following beneficial technical effects:
1. the distance information between the moving targets and the screen is detected in real time; when a moving target is detected to be located in the target area, it is determined to be a target object and the picture change of the virtual scene is triggered, so that a good interaction effect is achieved between the virtual scene and the target object; when no target object is detected in the target area, the picture of the virtual scene does not change but remains static, which both makes the interaction more engaging and saves energy; when a target object is detected in the target area, the face information of the target object is further detected, and whether an interaction message needs to be sent can be judged according to the face information;
2. the distance information between the target object and the screen is detected in real time by the distance detection module, and the distance between the target object and the screen is compared with a preset distance range to judge whether it falls within that range;
3. when the distance between the target object and the screen is judged to be within the preset range, the picture change of the virtual scene is triggered; at the same time, whether the face of the target object within the preset distance range is facing the screen is detected in real time; when it is detected that the face of the target object within the preset distance range is not facing the screen to watch the picture change of the virtual scene, the change of the virtual scene picture is paused and a reminding message is sent to remind the target object to face the screen and watch the picture; and when the distance between the target object and the screen is detected to be no longer within the preset distance range, the change of the virtual scene picture is stopped.
Drawings
Fig. 1 is a schematic flowchart of an immersive tunnel type multimedia interaction presentation method according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of determining a target object based on target information and triggering a change in a virtual scene picture according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of detecting face information associated with a target object in image information according to an embodiment of the present application.
Fig. 4 is a functional block diagram of an immersive tunnel-type multimedia interactive presentation system according to an embodiment of the present application.
FIG. 5 is a functional block diagram of a computer device according to an embodiment of the present application.
In the figure: 1, acquisition module; 2, response module; 3, image acquisition module; 4, detection module; 5, sending module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application discloses an immersive tunnel type multimedia interactive display method, which is based on the following preprocessing steps:
A facial feature reference image library is established, and reference images with different face deflection angles are stored in the facial feature reference image library. The information of a reference image comprises the contour feature information of a face and the face deflection angle information, where the face deflection angle information defines the preset range of face deflection angles for the target object. In this embodiment, the collected image information is compared with the reference images to judge the face deflection angle of the target object: specifically, after the face contour feature information of the target object is extracted from the image information, the extracted face image information is compared with the reference images in the facial feature reference library. If the deflection angle of the face of the target object is detected not to be within the preset face deflection angle range, the face of the target object is judged to be in a side-face state in which the picture change in the virtual scene cannot be viewed, the picture change of the virtual scene is stopped, and a reminding message is sent to remind the user to face the screen. It should be noted that, in practical applications, there may be other embodiments depending on the actual situation; the ways of determining whether the face deflection angle of the target object is an angle at which the virtual scene picture is being viewed are not listed exhaustively in this embodiment.
Meanwhile, a target area for judging whether to trigger the change of the virtual scene picture is preset, that is, the range of area within which the change of the virtual scene picture can be triggered. When the distance between a moving target and the screen is detected to fall within the target area, it is judged that a target object exists in the target area, and the change of the virtual environment picture is triggered.
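As an illustration only, the preprocessing described above could be organised as in the following sketch, assuming Python with OpenCV; the file-naming scheme, the angle set, the target area bounds and the use of a simple edge map as the stored contour feature are assumptions, since the disclosure does not prescribe them.

```python
# Illustrative sketch of the preprocessing step: build a facial-feature reference
# library keyed by deflection angle, and preset the target area. All names and
# values below are assumptions made for the example.
import glob
import os
import cv2  # pip install opencv-python

TARGET_AREA_M = (0.5, 3.0)        # assumed trigger range in front of the screen (metres)
FACE_YAW_RANGE_DEG = (-30, 30)    # assumed face deflection range in which the screen is viewable


def build_reference_library(folder: str) -> dict:
    """Load reference face images named like 'yaw_-30.png', 'yaw_0.png', ...
    and store a simple contour feature for each, keyed by deflection angle."""
    library = {}
    for path in glob.glob(os.path.join(folder, "yaw_*.png")):
        angle = int(os.path.splitext(os.path.basename(path))[0].split("_")[1])
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue
        library[angle] = cv2.Canny(img, 50, 150)   # stand-in for face contour features
    return library
```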
An immersive tunnel type multimedia interactive display method, as shown in fig. 1, specifically includes:
and S1, acquiring distance information between a plurality of moving targets and the screen.
Specifically, the method comprises the following steps:
the distance between the target object and the screen can be acquired in real time through the distance measuring equipment, in the embodiment, the distance measuring equipment can be a distance measuring sensor, the value of the distance between the target object detected by the distance measuring sensor and the screen is read in real time, the distance between a plurality of moving targets and the screen is directly detected through the distance measuring sensor, the acquisition of distance information is faster, and the calculated amount is smaller.
In this embodiment, the distance between the target object and the screen can also be detected with a binocular camera. If a binocular camera is used, the imaging has radial distortion because of the characteristics of the camera's optical lens, so binocular camera calibration is performed first. During calibration, each single camera is calibrated first, mainly to calculate its intrinsic and extrinsic parameters: the intrinsic parameters include the focal length, the imaging origin and the distortion parameters, and the extrinsic parameters include the world coordinates of the single camera. The rotation matrix and translation vector between the two cameras are then determined. After the positions of the two cameras have been calibrated, distortion removal and row alignment are performed on the left and right views according to the monocular focal length, imaging origin and distortion parameters obtained from the calibration, together with the rotation matrix and translation vector between the two cameras, so that the imaging origin coordinates of the left and right views of the binocular camera coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned.
After these steps, a point in one image and its corresponding point in the other image lie on the same row. The corresponding image points of the same scene in the left and right views are then matched to obtain the disparity, the depth information is calculated from the disparity, and finally the distance information between the plurality of moving targets and the screen is obtained.
In this embodiment, the distances between the plurality of moving targets and the screen may be detected simultaneously; in practical applications, the way of detecting the distance between a moving target and the screen may be chosen according to the actual situation, which is not limited in this embodiment.
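For the binocular-camera variant, a sketch of the rectification and disparity-to-depth computation is given below, using OpenCV as an assumed implementation; the intrinsic and extrinsic parameters (K1, D1, K2, D2, R, T) are taken as already obtained from the calibration step described above, and the matcher settings are illustrative.

```python
# Sketch of the binocular-ranging variant: rectify the stereo pair so that
# epipolar lines are row-aligned, match corresponding points to get disparity,
# then reproject to metric depth. Parameter values are placeholders.
import cv2
import numpy as np


def rectify_maps(K1, D1, K2, D2, R, T, size):
    """Compute undistort/rectify maps from the calibrated camera parameters."""
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1 = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map2 = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    return map1, map2, Q


def depth_from_pair(left_gray, right_gray, map1, map2, Q):
    """Row-align both views, compute disparity and reproject to 3D; the Z channel
    of the result is the distance from the camera (and hence from the screen,
    up to the known camera-to-screen offset)."""
    l = cv2.remap(left_gray, *map1, cv2.INTER_LINEAR)
    r = cv2.remap(right_gray, *map2, cv2.INTER_LINEAR)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = matcher.compute(l, r).astype(np.float32) / 16.0   # SGBM returns fixed-point x16
    points = cv2.reprojectImageTo3D(disparity, Q)
    return points[..., 2]
```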
And S2, determining a target object in the plurality of moving targets and triggering the virtual scene picture change associated with the target object based on the distance information.
In particular, as shown in figure 2,
and S21, judging whether the distance between the moving target and the screen is in the target area or not according to the distance information. Specifically, the obtained value of the distance between the target object and the screen is compared with the range of the target area, the obtained comparison result is confirmed to be that the value of the distance between the target object and the screen is in the target area, the moving target located in the target area is marked as the target object, and a plurality of moving targets in the target area are marked to distinguish different target objects.
And S22, triggering the change of the virtual scene picture when the target object is located in the target area. Specifically, when it is determined that the distance between the target object and the screen falls within the target area, that is, a target object exists in the target area, the change of the virtual scene picture on the screen is triggered to interact with the target object.
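A minimal sketch of steps S21 and S22 follows, assuming the target area is expressed as a distance range and that moving targets carry identifiers so that several target objects can be distinguished (both are assumptions made for illustration).

```python
# Sketch of S21/S22: compare each measured distance against the preset target
# area and mark the moving targets inside it as target objects.
TARGET_AREA_M = (0.5, 3.0)   # assumed target area bounds (metres)


def mark_target_objects(distances_m: dict) -> dict:
    """distances_m maps a moving-target ID to its distance from the screen (m).
    Returns the subset inside the target area, i.e. the marked target objects."""
    near, far = TARGET_AREA_M
    return {tid: d for tid, d in distances_m.items() if near <= d <= far}


targets = mark_target_objects({"t1": 4.1, "t2": 1.2, "t3": 2.8})
trigger_scene_change = bool(targets)   # S22: any target object triggers the picture change
```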
And S3, acquiring the image information of the target object.
Specifically, when the target object is located in the target area, the image information of the target object in the target area is acquired in real time.
And S4, detecting the face information associated with the target object in the image information.
In particular, as shown in figure 3,
and S41, identifying the face position of the target object in the image based on the image information. Specifically, the gray level correction and the noise processing are performed on the acquired image, then the image is subjected to the refinement processing, the detected face position is extracted and separated into images with certain sizes, the subsequent identification processing is facilitated, the steps of the refinement processing and the segmentation include the steps of performing light compensation, gray level transformation, histogram equalization, normalization, geometric correction, filtering processing and sharpening processing on the image, and finally the image of the face position in the image is obtained after the processing.
And S42, extracting face contour feature information according to the face position. Specifically, after the face position of the target object in the image has been identified, the face contour information is further detected and the deflection angle of the target object relative to the screen is judged, so as to determine whether the target object is in a state in which the screen can be observed. The local binary features corresponding to the facial features of the face position image are acquired, the binary features of the face are processed, and the face contour feature information is identified; that is, the local binary map related to the facial features in the face position image is extracted and then processed to obtain the linear contour of the face edge.
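As one possible reading of this step, the local binary features can be computed with scikit-image's LBP operator and then reduced to an edge-like contour map; this particular pairing is an assumption made for illustration, not the only way to realise the step.

```python
# Sketch of S42: local binary features on the cropped face image, followed by an
# edge pass that approximates the linear face contour.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern   # pip install scikit-image


def face_contour_features(face_gray: np.ndarray) -> np.ndarray:
    lbp = local_binary_pattern(face_gray, P=8, R=1, method="uniform")   # local binary features
    lbp_img = cv2.normalize(lbp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.Canny(lbp_img, 50, 150)   # edge map standing in for the face contour
```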
And S43, comparing the acquired face contour characteristic information with the reference image to acquire a contour deflection angle.
The face contour features are compared with the reference image on the basis of the detected face contour information, and the face contour deflection angle of the target object in the image information is judged and obtained. Specifically, after the face contour features are extracted, they are compared with the reference images in the facial feature reference image library, and the face deflection angle of the face contour features in the extracted image is judged.
After the deflection angle of the face contour features in the image information is obtained, the deflection angle of the target object relative to the screen is further obtained by combining it with the deflection angle of the acquired image information relative to the screen. First, the shooting angle of the image acquisition equipment, that is, the deflection angle of the acquired image relative to the screen, is obtained by combining world coordinates and camera coordinates. The deflection angle of the face contour features in the image information is then combined with the deflection angle of the image information relative to the screen to calculate the actual deflection angle of the face of the target object to the screen. If the actual face deflection angle of the target object to the screen falls within the preset face deflection angle range, it is judged to be a deflection angle at which the picture change in the virtual scene in the screen can be observed; if the actual face deflection angle of the target object to the screen is larger than the preset face deflection angle, it is judged to be a deflection angle at which the virtual picture in the screen cannot be observed, so that an interaction request is sent out, allowing the target object and the virtual scene to interact flexibly.
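A sketch of how the two deflection angles could be combined is given below, assuming the reference library built in the preprocessing step and a known camera mounting angle relative to the screen; the matching rule (nearest reference by template difference) and the numeric values are illustrative assumptions.

```python
# Sketch of S43: estimate the face deflection angle in the image by nearest-match
# against the reference library, add the deflection of the camera view relative to
# the screen, and check the result against the preset viewable range.
import numpy as np

CAMERA_YAW_TO_SCREEN_DEG = 15.0     # assumed mounting angle of the image sensor
FACE_YAW_RANGE_DEG = (-30, 30)      # assumed range in which the screen is viewable


def yaw_in_image(contour: np.ndarray, library: dict) -> float:
    """library maps a reference deflection angle to a contour image of equal size."""
    diffs = {angle: float(np.mean(np.abs(contour.astype(float) - ref.astype(float))))
             for angle, ref in library.items()}
    return min(diffs, key=diffs.get)


def face_yaw_to_screen(contour: np.ndarray, library: dict) -> float:
    # actual deflection = deflection in the image + deflection of the image w.r.t. the screen
    return yaw_in_image(contour, library) + CAMERA_YAW_TO_SCREEN_DEG


def can_see_screen(yaw_deg: float) -> bool:
    lo, hi = FACE_YAW_RANGE_DEG
    return lo <= yaw_deg <= hi
```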
In practical applications, the face state of the target object may also be detected in other ways depending on the actual situation, which are not listed exhaustively in this embodiment.
S5, determining whether to issue an interactive request based on the face information.
Specifically, the method comprises the following steps: when the face deflection angle of a target object in the target area is detected to be a deflection angle at which the virtual scene in the screen can be observed, the virtual scene picture continues to change. That is, when a target object is detected in the target area, the picture change in the virtual scene is triggered for interactive display; during the interactive display, the face deflection state of the target object relative to the screen is detected in real time, and if the face deflection angle of the target object in the target area is detected to be a deflection angle at which the change of the virtual scene can be observed, the picture change of the virtual scene for interactive display is maintained.
When the face deflection angle of a target object in the target area is detected to be larger than the deflection angle at which the change of the virtual scene in the screen can be observed, an interaction request is sent out. During the interactive display of the virtual scene, when the deflection angle of the face of the target object relative to the screen is detected in real time to be larger than the deflection angle at which the change of the virtual scene in the screen can be observed, that is, it is judged that the target object cannot observe the interactive display changes of the virtual scene, interaction request information is sent out. The interaction request information may be interactive voice or other information capable of interacting with the target object, so that the target object keeps watching the interactive display of the virtual scene.
When the target object is detected to have left the target area, the picture change of the virtual scene is stopped; that is, when the target object leaves the target area and no target object is detected in the target area, the picture change of the virtual scene is stopped, meaning the interactive display of the virtual scene is stopped.
In this embodiment, the reminding message may take various forms: for example, a prompt display frame may pop up in the virtual scene picture and shake to attract the attention of the target object, or diversified interactive voice reminders may be set. The interaction request message may also be implemented in other ways depending on the actual application, which are not listed exhaustively in this embodiment.
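The resulting decision logic of step S5 can be summarised in a few lines; the message text below is only an example of the reminding message discussed above, not wording fixed by the disclosure.

```python
# Sketch of S5: keep the scene animating while a watching target object is in the
# target area, issue an interaction request when its face is turned away, and stop
# the scene when the target area is empty.
def presentation_state(targets_in_area: bool, any_face_turned_away: bool) -> dict:
    if not targets_in_area:
        return {"scene_running": False, "message": None}          # stop the interactive display
    if any_face_turned_away:
        return {"scene_running": True,
                "message": "Please face the screen to watch the scene change"}  # interaction request
    return {"scene_running": True, "message": None}               # keep the picture changing
```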
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The embodiment of the present application further provides an immersive tunnel type multimedia interactive exhibition system, whose internal structure diagram may be as shown in fig. 4; the immersive tunnel type multimedia interactive exhibition system corresponds one-to-one to the immersive tunnel type multimedia interactive exhibition method of the above embodiments. The immersive tunnel type multimedia interactive display system comprises an acquisition module 1, a response module 2, an image acquisition module 3, a detection module 4 and a sending module 5. Specifically, as shown in fig. 4, each functional module is described in detail as follows:
an immersive tunnel-type multimedia interactive presentation system comprising:
the acquisition module 1: the distance information between a plurality of moving targets and the screen is acquired.
The acquisition module may obtain the distance information either directly from a ranging sensor, or by acquiring the position information of a stereo image with a binocular camera and then calculating the distance information from it.
The response module 2: used for determining a target object among the plurality of moving targets based on the distance information and triggering a change of the virtual scene picture associated with the target object.
Whether a moving target is in the target area is judged based on the acquired distance information; if so, the moving target in the target area is marked as a target object, and the change of the virtual scene picture is triggered for interactive display.
The image acquisition module 3: for acquiring image information of the target object.
When the distance between the target object and the screen is determined to be within the target area, the image acquisition module is triggered to acquire image information of the target object; the acquired image information is used to detect whether the face deflection angle of the target object is in a state in which the change of the virtual scene picture in the screen can be observed.
The detection module 4: for detecting face information associated with the target object in the image information.
After the image information of the target object is acquired, the face information of the target object in the image is detected: for example, the face position of the target object in the image is identified, the external contour of the face is extracted, the extracted face contour is compared with the reference images in the facial feature reference image library, and the face deflection angle of the target object in the image information is identified.
The sending module 5: based on the face information, it is determined whether to issue an interaction request.
When the distance between the target object and the screen is detected to be within the target area and the face information of the target object is detected to be in a side-face state, that is, a state in which the target object is not facing the screen, interaction information can be sent to interact with the target object and remind it to watch the change of the virtual scene picture in the screen.
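For completeness, a minimal sketch of how the five modules of fig. 4 could be wired together follows; the class and method names are assumptions mirroring the functional blocks, not an implementation given in the disclosure.

```python
# Sketch of the five functional modules; concrete sensor, camera and face-analysis
# back-ends would subclass the stub interfaces.
class AcquisitionModule:
    def distances(self):                 # module 1: distance information
        raise NotImplementedError


class ResponseModule:                    # module 2: pick target objects, trigger the scene
    def __init__(self, target_area=(0.5, 3.0)):   # assumed target area (metres)
        self.target_area = target_area

    def select_targets(self, distances):
        near, far = self.target_area
        return [i for i, d in enumerate(distances) if near <= d <= far]


class ImageAcquisitionModule:
    def capture(self, target_id):        # module 3: image information of the target object
        raise NotImplementedError


class DetectionModule:
    def face_yaw(self, image):           # module 4: face information / deflection angle
        raise NotImplementedError


class SendingModule:
    def should_request(self, yaw_deg, max_yaw=30.0):   # module 5: interaction request decision
        return abs(yaw_deg) > max_yaw
```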
For the specific definition of the immersive tunnel type multimedia interactive exhibition system, reference may be made to the definition of the immersive tunnel type multimedia interactive exhibition method above, which is not repeated here. All or part of the modules in the immersive tunnel-type multimedia interactive presentation system can be implemented by software, hardware or a combination thereof. The modules can be embedded, in hardware form, in or independent of a processor in the electronic device, or stored in software form in a memory of the electronic device, so that the processor can call and execute the operations corresponding to the modules.
In an embodiment, a computer device is provided; the computer device is a server, and its internal structure diagram may be as shown in fig. 5. The computer device comprises a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data of the immersive tunnel-type multimedia interactive presentation system. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement the immersive tunnel-type multimedia interactive presentation method:
and S1, acquiring distance information between a plurality of moving targets and the screen.
And S2, determining a target object in the plurality of moving targets and triggering the virtual scene picture change associated with the target object based on the distance information.
And S3, acquiring the image information of the target object.
And S4, detecting the face information associated with the target object in the image information.
S5, determining whether to issue an interactive request based on the face information.
When executing the computer program, the processor implements the steps of the immersive tunnel-type multimedia interactive presentation method of any of the embodiments described above.
The embodiment of the application discloses a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and when being executed by a processor, the computer program realizes the following steps:
and S1, acquiring distance information between a plurality of moving targets and the screen.
And S2, determining a target object in the plurality of moving targets and triggering the virtual scene picture change associated with the target object based on the distance information.
And S3, acquiring the image information of the target object.
And S4, detecting the face information associated with the target object in the image information.
S5, determining whether to issue an interactive request based on the face information.
The computer readable storage medium stores a computer program which, when executed by a processor, performs the steps of the immersive tunnel-type multimedia interactive presentation method of any of the embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above.
The above embodiments are only intended to explain the present application and do not limit it. After reading this specification, those skilled in the art can make modifications to the embodiments as needed without making any inventive contribution, but all such modifications are protected by patent law within the scope of the claims of the present application.

Claims (9)

1. An immersive tunnel type multimedia interactive display method is characterized by comprising the following steps:
acquiring distance information between a plurality of moving targets and a screen;
determining a target object in a plurality of moving targets and triggering a virtual scene picture change associated with the target object based on the distance information;
acquiring image information of a target object;
detecting face information associated with the target object in the image information;
based on the facial information, it is determined whether to issue an interaction request.
2. The immersive tunnel-type multimedia interactive presentation method of claim 1, wherein said step of determining a target object of a plurality of moving targets based on said distance information and triggering a change in a virtual scene view associated with said target object comprises:
judging whether the distance between the moving target and the screen is in the target area or not according to the distance information;
and when the target object is located in the target area, triggering the change of the virtual scene picture.
3. The immersive tunnel-type multimedia interactive presentation method of claim 1, wherein said step of detecting face information associated with the target object in the image information comprises:
identifying a face position of a target object in an image based on the image information;
extracting facial contour feature information according to the facial position;
and comparing the acquired face contour characteristic information with the reference image to acquire the deflection angle of the face contour characteristic information.
4. The immersive tunnel-type multimedia interactive presentation method according to claim 3, wherein the step of comparing the obtained face contour feature information with the reference image to obtain a contour deflection angle comprises:
acquiring local binary features corresponding to the facial features of the facial position image, processing the binary features of the face and identifying facial contour feature information;
and comparing the facial contour features with the reference image, and judging the facial contour deflection angle of the target object in the acquired image information.
5. The immersive tunnel-type multimedia interactive presentation method according to claim 4, wherein said step of comparing the acquired face contour feature information with the reference image to acquire a contour deflection angle comprises:
acquiring a deflection angle of image information based on a screen as a reference plane;
and acquiring the actual deflection angle of the target object face relative to the screen based on the deflection angle of the screen serving as the reference plane and the deflection angle of the face contour according to the image information.
6. The immersive tunnel-type multimedia interactive presentation method according to claim 1, wherein said step of determining whether to issue an interactive request based on said face information comprises:
when the face deflection angle of a target object in the target area is detected to be a deflection angle at which the virtual scene in the screen can be observed, the virtual scene picture continues to change;
when the face deflection angle of a target object in a target area is detected to be larger than the deflection angle at which the change of a virtual scene in a screen can be observed, sending out interaction request information;
and when the target object is detected to leave the target area, stopping the picture change of the virtual scene.
7. An immersive tunnel-type multimedia interactive presentation system, comprising:
an acquisition module: used for acquiring distance information between a plurality of moving targets and the screen;
a response module: used for determining a target object among the plurality of moving targets based on the distance information and triggering a change of the virtual scene picture associated with the target object;
an image acquisition module: used for acquiring image information of the target object;
a detection module: used for detecting face information associated with the target object in the image information;
a sending module: used for determining, based on the face information, whether to issue an interaction request.
8. A computer device comprising a memory and a processor, said memory having stored thereon a computer program which can be loaded by the processor and which can carry out the immersive tunnel multimedia interactive presentation method of any of claims 1 to 6.
9. A computer-readable storage medium, characterized in that a computer program is stored which can be loaded by a processor and which performs the immersive tunnel multimedia interactive presentation method as claimed in any of claims 1 to 6.
CN202011119814.9A 2020-10-19 2020-10-19 Immersive tunnel type multimedia interactive display method, equipment and storage medium Pending CN112198963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011119814.9A CN112198963A (en) 2020-10-19 2020-10-19 Immersive tunnel type multimedia interactive display method, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011119814.9A CN112198963A (en) 2020-10-19 2020-10-19 Immersive tunnel type multimedia interactive display method, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112198963A true CN112198963A (en) 2021-01-08

Family

ID=74009385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011119814.9A Pending CN112198963A (en) 2020-10-19 2020-10-19 Immersive tunnel type multimedia interactive display method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112198963A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113784193A (en) * 2021-09-23 2021-12-10 广州长嘉电子有限公司 High-definition imaging method and device based on interaction of wireless signals and commercial display multimedia
CN113885703A (en) * 2021-09-30 2022-01-04 联想(北京)有限公司 Information processing method and device and electronic equipment
CN113900526A (en) * 2021-10-29 2022-01-07 深圳Tcl数字技术有限公司 Three-dimensional human body image display control method and device, storage medium and display equipment
CN114415907A (en) * 2022-01-21 2022-04-29 腾讯科技(深圳)有限公司 Media resource display method, device, equipment and storage medium
CN115359166A (en) * 2022-10-20 2022-11-18 北京百度网讯科技有限公司 Image generation method and device, electronic equipment and medium
CN116414287A (en) * 2023-02-01 2023-07-11 苏州金梓树智能科技有限公司 Intelligent interaction control method and system for multimedia equipment in digital exhibition hall
WO2023178921A1 (en) * 2022-03-23 2023-09-28 上海商汤智能科技有限公司 Interaction method and apparatus, and device, storage medium and computer program product

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120249590A1 (en) * 2011-03-29 2012-10-04 Giuliano Maciocci Selective hand occlusion over virtual projections onto physical surfaces using skeletal tracking
CN103543827A (en) * 2013-10-14 2014-01-29 南京融图创斯信息科技有限公司 Immersive outdoor activity interactive platform implement method based on single camera
US20150350628A1 (en) * 2014-05-28 2015-12-03 Lucasfilm Entertainment CO. LTD. Real-time content immersion system
CN105892639A (en) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Method and device for controlling virtual reality (VR) device
CN106527712A (en) * 2016-11-07 2017-03-22 珠海市魅族科技有限公司 Information processing method for virtual reality device and virtual reality device
CN107450721A (en) * 2017-06-28 2017-12-08 丝路视觉科技股份有限公司 A kind of VR interactive approaches and system
CN107637076A (en) * 2015-10-14 2018-01-26 三星电子株式会社 Electronic equipment and its control method
WO2018067731A1 (en) * 2016-10-04 2018-04-12 Livelike Inc. Dynamic real-time product placement within virtual reality environments
US20180108147A1 (en) * 2016-10-17 2018-04-19 Samsung Electronics Co., Ltd. Method and device for displaying virtual object
US20180342103A1 (en) * 2017-05-26 2018-11-29 Microsoft Technology Licensing, Llc Using tracking to simulate direct tablet interaction in mixed reality
CN109840947A (en) * 2017-11-28 2019-06-04 广州腾讯科技有限公司 Implementation method, device, equipment and the storage medium of augmented reality scene
CN110933452A (en) * 2019-12-02 2020-03-27 广州酷狗计算机科技有限公司 Method and device for displaying lovely face gift and storage medium
CN111569421A (en) * 2020-05-08 2020-08-25 江圣宇 Virtual scene change synchronization method and system, VR playing equipment and storage medium
CN111627118A (en) * 2020-06-02 2020-09-04 上海商汤智能科技有限公司 Scene portrait showing method and device, electronic equipment and storage medium

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120249590A1 (en) * 2011-03-29 2012-10-04 Giuliano Maciocci Selective hand occlusion over virtual projections onto physical surfaces using skeletal tracking
CN103543827A (en) * 2013-10-14 2014-01-29 南京融图创斯信息科技有限公司 Immersive outdoor activity interactive platform implement method based on single camera
US20150350628A1 (en) * 2014-05-28 2015-12-03 Lucasfilm Entertainment CO. LTD. Real-time content immersion system
CN107637076A (en) * 2015-10-14 2018-01-26 三星电子株式会社 Electronic equipment and its control method
CN105892639A (en) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Method and device for controlling virtual reality (VR) device
WO2018067731A1 (en) * 2016-10-04 2018-04-12 Livelike Inc. Dynamic real-time product placement within virtual reality environments
US20180108147A1 (en) * 2016-10-17 2018-04-19 Samsung Electronics Co., Ltd. Method and device for displaying virtual object
CN106527712A (en) * 2016-11-07 2017-03-22 珠海市魅族科技有限公司 Information processing method for virtual reality device and virtual reality device
US20180342103A1 (en) * 2017-05-26 2018-11-29 Microsoft Technology Licensing, Llc Using tracking to simulate direct tablet interaction in mixed reality
CN107450721A (en) * 2017-06-28 2017-12-08 丝路视觉科技股份有限公司 A kind of VR interactive approaches and system
CN109840947A (en) * 2017-11-28 2019-06-04 广州腾讯科技有限公司 Implementation method, device, equipment and the storage medium of augmented reality scene
US20200074743A1 (en) * 2017-11-28 2020-03-05 Tencent Technology (Shenzhen) Company Ltd Method, apparatus, device and storage medium for implementing augmented reality scene
CN110933452A (en) * 2019-12-02 2020-03-27 广州酷狗计算机科技有限公司 Method and device for displaying lovely face gift and storage medium
CN111569421A (en) * 2020-05-08 2020-08-25 江圣宇 Virtual scene change synchronization method and system, VR playing equipment and storage medium
CN111627118A (en) * 2020-06-02 2020-09-04 上海商汤智能科技有限公司 Scene portrait showing method and device, electronic equipment and storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113784193A (en) * 2021-09-23 2021-12-10 广州长嘉电子有限公司 High-definition imaging method and device based on interaction of wireless signals and commercial display multimedia
CN113784193B (en) * 2021-09-23 2024-02-13 广州长嘉电子有限公司 High-definition imaging method and device based on wireless signal and commercial display multimedia interaction
CN113885703A (en) * 2021-09-30 2022-01-04 联想(北京)有限公司 Information processing method and device and electronic equipment
CN113900526A (en) * 2021-10-29 2022-01-07 深圳Tcl数字技术有限公司 Three-dimensional human body image display control method and device, storage medium and display equipment
CN114415907A (en) * 2022-01-21 2022-04-29 腾讯科技(深圳)有限公司 Media resource display method, device, equipment and storage medium
CN114415907B (en) * 2022-01-21 2023-08-18 腾讯科技(深圳)有限公司 Media resource display method, device, equipment and storage medium
WO2023178921A1 (en) * 2022-03-23 2023-09-28 上海商汤智能科技有限公司 Interaction method and apparatus, and device, storage medium and computer program product
CN115359166A (en) * 2022-10-20 2022-11-18 北京百度网讯科技有限公司 Image generation method and device, electronic equipment and medium
CN116414287A (en) * 2023-02-01 2023-07-11 苏州金梓树智能科技有限公司 Intelligent interaction control method and system for multimedia equipment in digital exhibition hall
CN116414287B (en) * 2023-02-01 2023-10-17 苏州金梓树智能科技有限公司 Intelligent interaction control method and system for multimedia equipment in digital exhibition hall

Similar Documents

Publication Publication Date Title
CN112198963A (en) Immersive tunnel type multimedia interactive display method, equipment and storage medium
US10896518B2 (en) Image processing method, image processing apparatus and computer readable storage medium
CN109086691B (en) Three-dimensional face living body detection method, face authentication and identification method and device
CN111091063A (en) Living body detection method, device and system
CN106981078B (en) Sight line correction method and device, intelligent conference terminal and storage medium
CN109299658B (en) Face detection method, face image rendering device and storage medium
CN108090463B (en) Object control method, device, storage medium and computer equipment
CN111737518A (en) Image display method and device based on three-dimensional scene model and electronic equipment
CN113313097B (en) Face recognition method, terminal and computer readable storage medium
CN112543343A (en) Live broadcast picture processing method and device based on live broadcast with wheat and electronic equipment
CN112802081B (en) Depth detection method and device, electronic equipment and storage medium
CN113689578A (en) Human body data set generation method and device
CN111325107A (en) Detection model training method and device, electronic equipment and readable storage medium
CN112184811A (en) Monocular space structured light system structure calibration method and device
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN112470189B (en) Occlusion cancellation for light field systems
CN115063339A (en) Face biopsy method, system, equipment and medium based on binocular camera ranging
CN110800020B (en) Image information acquisition method, image processing equipment and computer storage medium
CN112102404B (en) Object detection tracking method and device and head-mounted display equipment
US20100014760A1 (en) Information Extracting Method, Registration Device, Verification Device, and Program
CN111383256B (en) Image processing method, electronic device, and computer-readable storage medium
CN111383255A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN116017129A (en) Method, device, system, equipment and medium for adjusting angle of light supplementing lamp
CN115514887A (en) Control method and device for video acquisition, computer equipment and storage medium
CN115426350A (en) Image uploading method, image uploading device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210108