CN114397959A - Interactive prompting method, device and equipment - Google Patents


Info

Publication number
CN114397959A
CN114397959A (application CN202111515467.6A)
Authority
CN
China
Prior art keywords
interactive
interaction
place
seat
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111515467.6A
Other languages
Chinese (zh)
Inventor
孙秉鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Barley Culture Communication Co ltd
Original Assignee
Beijing Barley Culture Communication Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Barley Culture Communication Co ltd filed Critical Beijing Barley Culture Communication Co ltd
Priority to CN202111515467.6A
Publication of CN114397959A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This specification provides an interactive prompting method, apparatus, and device. The method comprises: acquiring place images that contain at least the place space information of a specified place and the limb posture of a target person; constructing a three-dimensional place model containing the limb posture of the target person from the place images acquired by at least two acquisition devices; and locating, based on the three-dimensional place model, the direction in which the limb posture of the target person points within the specified place, so as to prompt the users at the corresponding position in the specified place to participate in the interaction. With the embodiments of this specification, on-site interaction becomes more intelligent, the cost of selecting interactive users is reduced, and the users' interactive experience is improved.

Description

Interactive prompting method, device and equipment
Technical Field
The present specification relates to the field of data processing technologies, and in particular, to an interactive prompting method, apparatus, and device.
Background
At a performance site, performers and the audience typically interact to enhance the atmosphere of the show. Currently, in on-site interaction, a performer or staff member manually picks an audience member on site to serve as the interaction participant.
Disclosure of Invention
The embodiments of this specification provide an interactive prompting method, apparatus, and device, which can greatly improve the interactive experience of on-site interaction, such as at a performance site.
An embodiment of this specification provides an interactive prompting method applied to a service device of an interactive prompting system; the interactive prompting system further comprises acquisition devices arranged in a specified place. The method comprises: acquiring place images that contain at least the place space information of the specified place and the limb posture of a target person; constructing a three-dimensional place model containing the limb posture of the target person from the place images acquired by at least two acquisition devices; and locating, based on the three-dimensional place model, the direction in which the limb posture of the target person points within the specified place, so as to prompt the users at the corresponding position in the specified place to participate in the interaction.
An embodiment of this specification provides an interactive prompting apparatus applied to a service device of an interactive prompting system; the interactive prompting system further comprises acquisition devices arranged in a specified place. The apparatus comprises: an image acquisition module, configured to acquire place images that contain at least the place space information of the specified place and the limb posture of a target person; a model construction module, configured to construct a three-dimensional place model containing the limb posture of the target person from the place images acquired by at least two acquisition devices; and a direction locating module, configured to locate, based on the three-dimensional place model, the direction in which the limb posture of the target person points within the specified place, so as to prompt the users at the corresponding position in the specified place to participate in the interaction.
An embodiment of this specification provides an interactive prompting system comprising a cloud service device, together with a local service device and acquisition devices arranged in a specified place. The local service device is configured to acquire place images that contain at least the place space information of the specified place and the limb posture of a target person, and to send the place images to the cloud service device. The cloud service device is configured to construct a three-dimensional place model containing the limb posture of the target person from the place images acquired by at least two acquisition devices, and to locate, based on the three-dimensional place model, the direction in which the limb posture of the target person points within the specified place, so as to prompt the users at the corresponding position in the specified place to participate in the interaction.
An embodiment of this specification provides a service device comprising at least one processor and a memory storing computer-executable instructions that, when executed by the processor, implement the steps of the method of any one or more of the embodiments.
The embodiments of this specification provide an interactive prompting method that constructs, from place images containing at least the place space information of a specified place and the limb posture of a target person, a three-dimensional place model containing that limb posture. The constructed three-dimensional place model accordingly contains the three-dimensional spatial features of the target person's limb posture within the specified place. The three-dimensional place model can then be used to locate the direction in which the target person's limb posture points within the three-dimensional space corresponding to the specified place, so that the users at the corresponding position in the specified place can be accurately selected to participate in the interaction. With the methods provided by the embodiments of this specification, users can be selected from the site to participate in interaction more accurately and intelligently, making on-site interaction more intelligent, reducing the cost of selecting interactive users, and improving the users' interactive experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the specification, are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and together with the description serve to explain the principles of the specification. It is obvious that the drawings in the following description are only some embodiments of the present description, and that for a person skilled in the art, other drawings can be derived from them without inventive exercise. In the drawings:
FIG. 1 is a schematic diagram of an interactive prompt interaction provided in an embodiment of the present disclosure;
FIG. 2 is a schematic illustration of a seating area location for interactive prompts provided by embodiments of the present disclosure;
fig. 3 is a schematic flowchart of an interactive prompt method provided in an embodiment of the present specification;
fig. 4 is a schematic block diagram of an interactive prompt device according to an embodiment of the present disclosure;
fig. 5 is a schematic block structure diagram of a service device according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions in the present specification better understood, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present specification shall fall within the protection scope of the present specification.
For ease of understanding, a scenario example provided in this specification is described below, taking a performance site, such as a concert venue or an annual meeting venue, as the specified place. The interactive prompting method of this scenario example can be applied to an interactive prompting system. The interactive prompting system comprises at least a service device and acquisition devices arranged in the performance site for acquiring images. The service device may be a device that performs data processing, such as a server or a service terminal. The acquisition device may be a video acquisition device, an image acquisition device, or the like. Accordingly, the target person may be, for example, a performer, a presenter, or a lecturer.
In this scenario example, as shown in fig. 1, the service devices may include a cloud service device and a local service device disposed at the performance site. As shown in fig. 2, the performance site may include a stage area set up for the performers and a viewing area set up for the audience. The viewing area may be arranged with a plurality of seats and divided into a plurality of sub-seat areas along a specified direction. The specified direction may, for example, be perpendicular to the direction from the viewing area toward the stage area. As shown in fig. 2, the direction from the viewing area toward the stage area is the Y direction; accordingly, the seats in the viewing area may be divided along the X direction perpendicular to the Y direction, to obtain a plurality of sub-seat areas A, B, C, and so on. The cloud service device may store the area location information and area identifier (A, B, C, ...) of each sub-seat area, and may also store the seat identifiers (11, 12, ..., MN) contained in each sub-seat area.
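The layout the cloud service device stores, sub-seat areas with their bounds along the X direction and the seat identifiers they contain, could be sketched as follows. This is a minimal illustration; the record names, metric bounds, and row-by-column seat numbering are assumptions, not prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class SubSeatArea:
    area_id: str          # e.g. "A", "B", "C"
    x_min: float          # area bounds along the X direction (illustrative units)
    x_max: float
    seat_ids: list        # e.g. "11" .. "MN" (row then column)

def build_sub_seat_areas(n_areas, width, rows, cols_per_area):
    """Split a viewing area of the given width along X into n_areas sub-seat areas."""
    areas = []
    area_width = width / n_areas
    for i in range(n_areas):
        area_id = chr(ord("A") + i)
        # Seat identifier = row digit followed by global column number.
        seat_ids = [f"{r}{i * cols_per_area + c}"
                    for r in range(1, rows + 1)
                    for c in range(1, cols_per_area + 1)]
        areas.append(SubSeatArea(area_id, i * area_width, (i + 1) * area_width, seat_ids))
    return areas
```

For example, `build_sub_seat_areas(3, 30.0, 2, 2)` yields areas A, B, C, each 10 units wide, with area B holding seats "13", "14", "23", "24".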
The cloud service device can provide ticket-purchasing data processing. Audience members can purchase tickets through a ticketing application on their terminal devices. For example, the ticketing application interface may display the seat distribution of the viewing area, and the audience member can select a seat and submit an order to complete the purchase. After the purchase is completed, the cloud service device can record the audience member's ticketing information; for example, the identifier of the purchased seat can be stored in association with the user identifier of that audience member. Accordingly, the cloud service device stores the association between the seat identifiers of sold seats and user identifiers.
As shown in fig. 1, local service equipment, cameras, performance indicator lights, and the like may be deployed at the performance site. As shown in fig. 2, a plurality of cameras may be arranged at least for the stage area of the performance site, so as to collect information comprehensively on the stage area and the performers in it. The cameras may establish communication connections with the local service device to transmit the collected information to it; these connections may be wireless, such as Bluetooth or Wi-Fi. The local service device can in turn establish a communication connection with the cloud service device to transmit the information collected by the cameras, or data obtained by processing that information, to the cloud service device. The local service device and the cloud service device may communicate remotely, for example via communication base stations or communication satellites.
The cameras may capture images of the stage area and transmit them to the local service device. The local service device can send at least the images captured by the several acquisition devices covering the stage area, together with the device parameters and position information of each acquisition device, to the cloud service device. The cloud service device can then construct a three-dimensional space model of the stage area based on the texture maps and depth maps acquired by the acquisition devices and on their device parameters and position information. So that the three-dimensional space model shows the performers' three-dimensional limb-posture features relative to the stage area as comprehensively as possible, acquisition devices should be arranged for as many acquisition viewing angles as practical. Of course, the local service device may instead construct the three-dimensional space model of the stage area itself, based on the texture map and depth map from each acquisition device and the devices' parameters and position information, and send the constructed model to the cloud service device.
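The patent does not fix a reconstruction algorithm, but one standard building block for fusing depth maps from several calibrated cameras is back-projecting each camera's depth map into venue coordinates. The sketch below assumes a pinhole intrinsic matrix `K` (the "device parameters") and a camera pose `(R, t)` (the "position information"); it is an illustration, not the patent's prescribed method.

```python
import numpy as np

def depth_to_world_points(depth, K, R, t):
    """Back-project a depth map into world (venue) coordinates.

    depth : (H, W) array of depths along the camera's optical axis
    K     : (3, 3) pinhole intrinsic matrix
    R, t  : camera-to-world rotation (3, 3) and translation (3,)
    Returns an (H*W, 3) array of world-space points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pixel rays in camera coordinates, scaled by depth.
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    cam_pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    # Camera-to-world transform: X_w = R @ X_c + t
    return cam_pts @ R.T + t
```

Running this for two or more cameras with known poses and merging the resulting point clouds gives a common-frame model of the stage area that the later direction-locating step can work in.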
When a performer wishes to interact with the audience at the performance site, the performer may select the audience members in a sub-seat area to participate in the interaction, for example by extending an arm toward that sub-seat area. For instance, the performer may first issue an interaction start instruction by voice, such as "next, we want to select a friend on the spot to play an interactive mini game together on the stage"; the performer may then build up the atmosphere of choosing an interactive user by moving an arm around, and after settling on the sub-seat area to participate in the interaction, indicate the selection with a specific gesture. Correspondingly, once the local service device detects the interaction start instruction, it can receive the images acquired by the acquisition devices in the stage area after the instruction was issued and perform gesture recognition on them. When the specific gesture is detected, the images captured at the moment the gesture was made can be extracted as the target place images, and the three-dimensional space model of the stage area can be constructed from those target place images.
The cloud service device can extract from the three-dimensional space model of the stage area the direction in which the performer's arm extends. Since the relative positions of the stage area and the sub-seat areas of the viewing area within the performance site are fixed, the sub-seat area of the viewing area corresponding to that extension direction can be determined from these relative positions, and audience members can then be selected from that sub-seat area to participate in the interaction.
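One simple way to express the arm's extension direction, once the 3-D model yields body keypoints, is a unit vector from the shoulder joint to the wrist joint. How the keypoints themselves are obtained (e.g. from a pose-estimation network) is assumed here; the patent only requires that a pointing direction be extracted.

```python
import numpy as np

def arm_direction(shoulder, wrist):
    """Unit vector from the shoulder keypoint to the wrist keypoint,
    both given in the 3-D model's coordinate frame."""
    d = np.asarray(wrist, dtype=float) - np.asarray(shoulder, dtype=float)
    n = np.linalg.norm(d)
    if n == 0:
        raise ValueError("shoulder and wrist coincide; direction undefined")
    return d / n
```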
Alternatively, a three-dimensional place model of the performance site may be stored in the cloud service device in advance. When sending information to the cloud service device, the local service device can include the performance site identifier of its site, so that the cloud service device can retrieve the three-dimensional place model corresponding to that identifier. Or the cloud service device can locate the performance site where the local service device resides by analyzing the local service device's address information, and then retrieve the corresponding three-dimensional place model. The cloud service device can then replace the corresponding part of the three-dimensional place model with the three-dimensional space model of the stage area, obtaining an interactive three-dimensional place model that contains the performer's limb posture. From this interactive three-dimensional place model, the cloud service device can extract the direction in which the performer's arm extends within the performance site. Since the position information of each sub-seat area of the viewing area within the performance site is also known, the sub-seat area corresponding to the extension direction can be determined from that position information, and audience members can be selected from it to participate in the interaction.
As shown in fig. 2, the cloud service device may extract the extension direction of the performer's arm from the interactive three-dimensional place model, extend that direction toward the viewing area, and locate the sub-seat area B in which it falls, so as to prompt the audience in sub-seat area B to participate in the interaction. After locating the sub-seat area in which the extension direction falls, the cloud service device can randomly draw the identifier of a sold seat from the seat identifiers contained in that sub-seat area, to serve as the interactive seat identifier.
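The two steps above, extending the pointing direction to the viewing area and randomly drawing a sold seat, could be sketched as follows. The coordinate convention follows fig. 2 (Y runs from the viewing area toward the stage, X runs across the seats), so a performer pointing at the audience has a negative Y component; the area bounds and the audience plane position are illustrative parameters, not values from the patent.

```python
import random

def locate_sub_seat_area(origin, direction, area_bounds, audience_y):
    """Extend a pointing ray from `origin` along `direction` until it crosses
    the audience plane y = audience_y, and return the id of the sub-seat area
    whose X interval contains the crossing point (or None)."""
    if direction[1] >= 0:
        return None  # ray does not head toward the viewing area
    s = (audience_y - origin[1]) / direction[1]
    x_hit = origin[0] + s * direction[0]
    for area_id, (x_min, x_max) in area_bounds.items():
        if x_min <= x_hit < x_max:
            return area_id
    return None

def pick_interactive_seat(sold_seat_ids, rng=random):
    """Randomly draw one sold seat from the located area as the interactive seat."""
    return rng.choice(sold_seat_ids)
```

For example, with areas `{"A": (0, 10), "B": (10, 20), "C": (20, 30)}`, a ray from a stage point `(12, 40, 2)` along `(0.1, -1.0, 0.0)` crosses the audience plane `y = 0` at `x = 16` and so lands in area B.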
Based on the association between seat identifiers and user identifiers, the cloud service device can look up the user identifier corresponding to the interactive seat identifier and send a notification of participation in the interaction to the terminal device corresponding to that user identifier, prompting the audience member to participate. Alternatively, as shown in fig. 1, the cloud service device may feed the terminal-device information of the user corresponding to the interactive seat identifier back to the local service device, which then sends the participation notification to the terminal device. Or the cloud service device can feed the interactive seat identifier back to the local service device, which controls the performance indicator lights to illuminate the corresponding interactive seat, prompting the audience member in that seat to participate. Or, after the seat information of the participating audience member is confirmed, the seat identifier and position information of that audience member can be displayed on the stage screen, so that the performers can quickly locate the participating audience member.
The performers can then interact with the selected audience members on site, for example singing a song together or taking part in a show segment. Field service staff can also associate the interactive activity information to be carried out with the performance site identifier and send it to the cloud service device through the ticketing application on a terminal device. When the cloud service device feeds back the interactive seat identifier, or the terminal-device information of the corresponding user, to the local service device, it can feed back the interactive activity information associated with that performance site identifier along with it. The local service device can display the interactive activity information on the stage screen, or send it to the terminal devices of the participating audience members for display there. The specific interaction can be configured as required and is not limited here.
With this embodiment, the audience members participating in the interaction can be selected quickly based on on-site posture information and site space information, making on-site interaction more intelligent, reducing the cost of selecting interactive users, and improving the users' interactive experience.
Of course, the service device may be a local service device only. The cloud service device can associate the seat identifiers purchased by audience members with their user identifiers in advance, and send the associations to the local service device for storage. The local service device can also store the three-dimensional place model of the performance site, the position information of the sub-seat areas, the seat identifiers contained in each sub-seat area, and so on. Accordingly, the data processing for the interactive prompt is handled by the local service device; for the specific implementation, refer to the scenario example above, which is not repeated here.
Of course, the service device may also be a cloud service device only. The acquisition devices can connect directly to the cloud service device and send the collected information to it, so that the cloud service device performs the steps of constructing the three-dimensional space model of the stage area, locating the direction of the performer's limb posture within the performance site, and so on; after obtaining the interactive seat identifier, the cloud service device sends a participation notification to the terminal device of the audience member corresponding to that identifier, to prompt them to participate. For the specific implementation, refer to the scenario example above, which is not repeated here.
Fig. 3 is a schematic flowchart of an interactive prompting method according to an embodiment of this specification. Following the scenario example above, as shown in fig. 3, an embodiment of this specification further provides an interactive prompting method applied to a service device of the interactive prompting system; the method may include the following steps.
S302: a location image is acquired that includes at least location space information of a specified location and a limb pose of a target person.
The specified place may be, for example, a performance site, an annual meeting site, or a competition site. It may be relatively enclosed or open; for example, a performance site may be a gymnasium hosting a concert, or an outdoor area temporarily set up for one. The target person may be, for example, a performer, a lecturer, or a presenter, and may be specified in advance. For example, persons located in a designated area of the specified place may be treated as target persons, such as persons in the stage area of a performance site. Alternatively, a person with certain characteristics may be taken as the target person; for example, the target person's face information may be configured in the service device and the target person identified by face comparison. Of course, how the target person is determined can also be configured according to the actual application scenario.
The acquisition devices capture images of the specified place. In the embodiments of this specification, an image captured by an acquisition device of the specified place is referred to as a place image. The acquisition device may send the captured images to the service device. For example, where the service device comprises a local service device and a cloud service device, the acquisition device may transmit the captured images to the local service device over a wireless link. Where the service device comprises only a cloud service device, the acquisition device may transmit the captured images to the cloud service device over a remote link.
The service device can construct, from place images containing at least the limb posture of the target person and the place space information of the specified place, a three-dimensional space model containing the three-dimensional spatial information of the target person's limb posture within the specified place, so as to locate more accurately the direction in which the limb posture points. For example, if persons in a certain designated area are specified in advance as target persons, the service device may receive at least the place images acquired by multiple acquisition devices for that designated area and construct the three-dimensional space model from them, such as the model constructed for the stage area in the scenario example above. If the target person is determined from face information, the service device may, after receiving the place images sent by each acquisition device, screen out by face comparison the place images that contain at least the place space information of the specified place and the target person's limb posture. In application scenarios where the target person is determined by other means, other ways of screening the place images may be set accordingly, which is not limited here.
In other embodiments, when an interaction start instruction is detected, the service device may select a preset time interval containing the moment the instruction was issued, and acquire the place images, containing the place space information of the specified place and the target person's limb posture, that were captured within that interval. The number of images captured on site is usually very large, while interactions happen only rarely and briefly; locating the target person's limb posture in real time would therefore waste enormous resources. Moreover, the postures and speech that appear on stage are varied: even if the target person stretches an arm toward a certain sub-seat area, this does not necessarily mean they want to interact with the users in that area, so real-time triggering could cause frequent spurious interactions and a poor experience. In this embodiment, the interaction start instruction is configured in advance, and the target person's limb posture is located only when that instruction is detected, which greatly reduces the waste of system resources and improves the interactive experience.
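Selecting the place images inside a preset interval around the instruction moment amounts to a simple timestamp filter. The frame representation and the half-window parameter below are illustrative assumptions.

```python
def frames_in_window(frames, instruction_time, half_window):
    """Return the images whose timestamps fall within
    [instruction_time - half_window, instruction_time + half_window].

    frames : iterable of (timestamp, image) pairs, timestamps in seconds
    """
    lo, hi = instruction_time - half_window, instruction_time + half_window
    return [img for ts, img in frames if lo <= ts <= hi]
```

Only the frames returned by this filter are passed on to the (expensive) three-dimensional model construction, which is what saves resources compared with locating the limb posture on every incoming frame.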
If a performer wishes to interact with the audience, the performer may speak a voice message such as "we will select a friend on the spot to sing together", and then select the sub-seat area to participate in the interaction by extending an arm. Correspondingly, the voice information issued by the performer can serve as the interaction start instruction. For example, voice information commonly used by performers and hosts to start an interaction may be collected in advance, features extracted from it, and an interaction-start voice instruction set constructed. The voice acquisition device can then capture the performer's voice in real time and send it to the service device, which detects, against the interaction-start voice instruction set, whether the performer has issued an interaction start instruction. Once the service device detects that the voice information is an interaction start instruction, it can extract from the received place images those containing the place space information of the specified place and the target person's limb posture, so as to construct the three-dimensional place model.
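A much-simplified sketch of matching against the interaction-start voice instruction set: here the "feature extraction" of the patent is reduced to key-phrase matching on a speech transcript, with speech-to-text assumed to happen upstream. The phrase set is illustrative; a real system would use richer acoustic or semantic features.

```python
# Illustrative stand-in for the pre-built interaction-start voice instruction set.
START_PHRASES = {
    "select a friend",
    "pick someone from the audience",
    "choose a friend on the spot",
}

def is_interaction_start(transcript):
    """True if the transcript contains any known interaction-start phrase."""
    text = transcript.lower()
    return any(phrase in text for phrase in START_PHRASES)
```

When this returns true, the service device switches from idle monitoring to the posture-locating pipeline described above.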
Alternatively, the performer may first select the sub-seat area to participate in the interaction by extending an arm, and during that process issue a voice message such as "we have selected a friend from this area to sing together". Correspondingly, when the service device detects that the voice information is an interaction start instruction, it can select a preset time interval containing the moment the instruction was issued, and acquire the place images, containing the place space information of the specified place and the target person's limb posture, that were captured within that interval, so as to construct the three-dimensional place model.
Of course, the above processing manner is only a preferred example, and other modifications may be made in specific implementations; as long as the functions and effects achieved are the same or similar, they all fall within the scope of protection of the present specification. For example, the interaction start instruction may be issued by a service person, or may be issued by triggering another device, and so on.
In other embodiments, when detecting an interaction selection instruction, the service device may further extract, from the received place images, the place image captured at the moment the interaction selection instruction was issued, as a target place image; correspondingly, a three-dimensional place model containing the limb posture of the target person is constructed using the target place image.
For example, the performer may first issue an interaction start instruction through voice, such as "next, we want to select a friend on the spot to play an interactive mini game together on the stage"; the performer may then build up the atmosphere of selecting an interactive user by moving an arm and, after selecting the sub-seat area to participate in the interaction, confirm the selection through a specific gesture. Correspondingly, after detecting the interaction start instruction, the service device can receive the place images collected by the acquisition devices after the instruction was issued and screen out those containing the space information of the specified place and the limb posture of the target person. Gesture recognition is then performed on the screened place images, and when the specific gesture is detected, the place image in which it was detected can be used as the target place image, so that the three-dimensional place model is constructed from the target place image. The implementation of detecting the interaction selection instruction based on the specific gesture can refer to the implementation of detecting the interaction start instruction based on voice information, which is not described again here.
Alternatively, by identifying the limb action characteristics of the target person across consecutive frames of place images, the service device can determine whether the target person has issued an interaction selection instruction based on those characteristics. If the performer wishes to interact with the audience, the performer can extend and move an arm and, after selecting the sub-seat area to participate in the interaction, hold the extended arm in that position for n seconds. Correspondingly, extending and moving the arm for m seconds and then holding it fixed for n seconds can serve as the interaction selection instruction. When the service device detects, from the place images at a plurality of consecutive acquisition moments, that the target person's arm has been extended and moved for m seconds and then held fixed for n seconds, it can determine that the performer has issued an interaction selection instruction, that is, that a sub-seat area for interaction has been selected; correspondingly, the service device can select any one or more place images corresponding to those n seconds as target place images, so as to construct the three-dimensional place model.
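The m-seconds-moving-then-n-seconds-fixed check could be sketched as follows, assuming each place image has already been reduced to a timestamped wrist position (the frame format and the stillness tolerance are assumptions made for illustration):

```python
import math

def dist(a, b):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def detect_selection_gesture(frames, m=2.0, n=3.0, still_eps=0.05):
    """frames: list of (timestamp_seconds, wrist_position) in time order.
    True when the wrist moved for at least m seconds and then stayed
    within still_eps of its position for at least n seconds."""
    if len(frames) < 2:
        return False
    # Walk backwards from the last frame while consecutive wrist
    # positions stay within the stillness tolerance.
    i = len(frames) - 1
    while i > 0 and dist(frames[i][1], frames[i - 1][1]) <= still_eps:
        i -= 1
    still_duration = frames[-1][0] - frames[i][0]
    moving_duration = frames[i][0] - frames[0][0]
    return still_duration >= n and moving_duration >= m
```

The same run-over-consecutive-frames structure applies whatever body keypoint or feature the real system tracks.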
Through this implementation, the on-site interaction atmosphere can be further enhanced; meanwhile, since the target place images are screened in advance and the three-dimensional place model is constructed from them, the image processing range for interaction posture positioning is narrowed, further improving the accuracy and efficiency of interaction posture positioning.
Of course, the above processing manner is only a preferred example, and other modifications may be made in specific implementations; as long as the functions and effects achieved are the same or similar, they all fall within the scope of protection of the present specification.
S304: construct a three-dimensional place model containing the limb posture of the target person using the place images acquired by at least two acquisition devices.
The service device may acquire the place images collected by the at least two acquisition devices and use them to construct a three-dimensional place model containing the limb posture of the target person. For example, the service device may extract, from the place images containing the limb posture of the target person and the space information of the specified place, the limb posture of the target person as well as texture and depth features of the scene around the target person, and then, using a three-dimensional scene reconstruction technique combined with the device parameters, position information, and the like of the acquisition devices, construct a three-dimensional space model containing the three-dimensional limb posture features of the target person in the specified place. For convenience of description, the constructed three-dimensional space model is referred to as the three-dimensional place model. To make the representation of the target person's three-dimensional limb posture features in the specified place more comprehensive and accurate, the three-dimensional place model may be constructed using place images acquired from as many viewing angles as possible; alternatively, it may be constructed in combination with other place images that do not contain the limb posture of the target person.
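The reconstruction itself would normally be handled by an existing multi-view pipeline; purely to illustrate the underlying geometry, the sketch below recovers a single 3-D body keypoint as the midpoint of closest approach between the viewing rays of two calibrated acquisition devices (the camera centers and ray directions are assumed known from the device parameters and position information mentioned above):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of closest approach between rays c1 + t*d1 and c2 + s*d2,
    i.e. one 3-D point seen from two acquisition devices.
    Uses the standard closest-points-between-lines formulas."""
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p = [x + t * y for x, y in zip(c1, d1)]
    q = [x + s * y for x, y in zip(c2, d2)]
    return [(x + y) / 2 for x, y in zip(p, q)]
```

A production system would triangulate many keypoints per frame from projection matrices; this midpoint form just shows why at least two viewpoints are needed to recover depth.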
In the actual image acquisition process, the image proportion of the target person in some images may be small, so that the limb posture features of the target person are not obvious and the limb posture is difficult to extract accurately. The image proportion may refer to the proportion of the target person's contour image within the image. For example, the ratio of the area of the contour image F of the target person to the area of the image E may be used as the image proportion of the target person in the image E. Of course, the image proportion may also be calculated in other manners, which is not limited here. Correspondingly, in some embodiments, for the place images collected by the acquisition devices at a certain acquisition moment t, the image proportion of the target person in each place image may be calculated, and the K place images with the largest image proportions may be screened out, where K is an integer greater than or equal to 1. A three-dimensional place model may then be constructed based on the K screened place images. Since the three-dimensional place model is constructed from place images in which the image proportion of the target person is relatively large, the limb posture of the target person can be positioned more accurately and efficiently.
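The top-K screening step can be sketched directly; the tuple layout is an assumption, and in practice the contour area would come from person detection or segmentation:

```python
def select_top_k(images, k):
    """images: list of (image_id, person_contour_area, image_area).
    Return the ids of the K images in which the target person
    occupies the largest share of the frame."""
    ranked = sorted(images, key=lambda im: im[1] / im[2], reverse=True)
    return [im[0] for im in ranked[:k]]
```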
Of course, the above processing manner is only a preferred example, and other modifications may be made in specific implementations; as long as the functions and effects achieved are the same or similar, they all fall within the scope of protection of the present specification.
S306: position, based on the three-dimensional place model, the direction in which the limb posture of the target person points in the specified place, so as to prompt the user corresponding to that direction in the specified place to participate in the interaction.
After obtaining the three-dimensional place model, the service device can further extract the three-dimensional limb posture features of the target person from it. For example, a contour image of the target person may be located in the three-dimensional place model, and the three-dimensional limb posture features may then be extracted from the contour image based on key nodes of the human body. Alternatively, a human posture detection model may be built using a deep learning network, and the three-dimensional limb posture features of the target person extracted from the three-dimensional place model using that model. Of course, other methods may also be adopted to extract the three-dimensional limb posture features of the target person.
After extracting the three-dimensional limb posture features of the target person, the service device can further position, based on those features, the direction in which the limb posture of the target person points in the specified place. For example, if the limb posture of the target person is an arm extended outward, the direction in which the extended arm points in the specified place can be positioned based on the three-dimensional spatial characteristics of the extended arm relative to the specified place. Of course, the limb posture may take other forms: for example, if the performer points to a certain sub-seat area with a finger, the direction in which the finger points in the specified place can be positioned based on the three-dimensional spatial characteristics of the finger relative to the specified place, and so on.
In other embodiments, before positioning the direction in which the limb posture of the target person points in the specified place, the service device may further identify whether the limb posture of the target person is an interaction posture, and position the pointed direction based on the three-dimensional place model only when it is. Judging the interaction posture in this way can further improve the accuracy of interaction selection positioning.
For example, the interaction posture may be that the target person's arm extends toward the area where the users are located in the specified place. Correspondingly, whether the limb posture of the target person is the interaction posture can be determined by judging whether the target person's arm extends toward the area where the users are located. Alternatively, an interaction posture feature set may be constructed in advance, storing the posture features corresponding to pre-specified interaction postures. For example, the interaction posture feature set may store the posture feature of the target person's arm extending toward the area where the users are located in the specified place. The extracted limb posture features can be compared against the interaction posture feature set, and if the matching degree between the extracted features and any posture feature in the set exceeds a specified matching-degree threshold, the limb posture of the target person can be confirmed to be an interaction posture. Judging the limb posture of the target person based on the interaction posture feature set makes it possible to determine more quickly and accurately whether it is an interaction posture.
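A minimal sketch of the feature-set comparison, using cosine similarity as a stand-in for the matching degree and illustrative vectors in place of learned pose features (both are assumptions, not the patent's concrete features):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

# Pre-stored feature vectors for designated interaction postures,
# e.g. "arm extended toward the audience area" (illustrative values).
INTERACTIVE_POSE_SET = [
    [1.0, 0.0, 0.5],
]

def is_interactive_pose(features, threshold=0.9):
    """True if the extracted pose features match any stored interaction
    posture above the specified matching-degree threshold."""
    return any(cosine(features, p) >= threshold for p in INTERACTIVE_POSE_SET)
```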
Rather than confirming the interaction posture from the three-dimensional place model corresponding to a single acquisition moment t alone, the judgment can also be made in combination with the three-dimensional place models corresponding to at least one acquisition moment before and after t. For example, the limb posture of the target person can be confirmed to be an interaction posture only when it is identified as such from the three-dimensional place models corresponding to two or more consecutive acquisition moments, further improving the accuracy of interaction posture recognition.
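The consecutive-moment confirmation reduces to a run-length check over the per-moment judgment results; a sketch (the required run length of 2 matches the "two or more consecutive acquisition moments" example above):

```python
def confirm_interactive(flags, required=2):
    """flags: per-acquisition-moment booleans from the single-frame check.
    Confirm the interaction posture only when it is detected at
    `required` or more consecutive acquisition moments."""
    run = 0
    for detected in flags:
        run = run + 1 if detected else 0
        if run >= required:
            return True
    return False
```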
After positioning the direction in which the limb posture of the target person points in the specified place, the service device can further determine the user corresponding to that direction in the specified place, so as to prompt that user to participate in the interaction. For example, if the interaction posture is the target person's arm extending toward the area where the users are located, the user area corresponding to the extension direction can be determined based on the position information of the user areas in the specified place and the extension direction of the arm; a user can then be selected from that area as the interactive user and prompted to participate in the interaction.
As described in the foregoing scenario example, when the specified place is a performance venue, it may include a stage area where the performers are located and a viewing area where the audience is located, and the interactive three-dimensional place model may be constructed in the manner given in that scenario example. The interactive three-dimensional place model contains three-dimensional feature information of the performers' limb postures in the specified place. The service device can extract from it the extension direction of a performer's arm in the specified place, determine the seat range of the viewing area corresponding to that extension direction, and select audience members from that seat range to participate in the interaction. As shown in fig. 2, the viewing area may be divided into a plurality of sub-seat areas along a designated direction; the service device can extract the extension direction of the performer's arm from the interactive three-dimensional place model, extend it toward the viewing area, and locate the sub-seat area B in which the extension direction falls, so as to prompt the audience in sub-seat area B to participate in the interaction.
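To illustrate how the extension direction might be mapped to a sub-seat area, the sketch below extends the shoulder-to-wrist ray down to the viewing-area floor plane and tests which sub-seat area's bounds contain the hit point; the coordinate frame, area layout, and keypoint choice are all assumptions:

```python
# Axis-aligned sub-seat areas on the viewing-area floor, as
# ((x_min, x_max), (y_min, y_max)); names and bounds are illustrative.
SUB_AREAS = {
    "A": ((0.0, 4.0), (10.0, 14.0)),
    "B": ((4.0, 8.0), (10.0, 14.0)),
    "C": ((8.0, 12.0), (10.0, 14.0)),
}

def locate_sub_area(shoulder, wrist, floor_z=0.0):
    """Extend the shoulder->wrist ray to the floor plane z = floor_z and
    return the sub-seat area the hit point falls in, or None."""
    d = [w - s for s, w in zip(shoulder, wrist)]
    if d[2] >= 0:          # arm not pointing down toward the floor
        return None
    t = (floor_z - shoulder[2]) / d[2]
    hx = shoulder[0] + t * d[0]
    hy = shoulder[1] + t * d[1]
    for name, ((x0, x1), (y0, y1)) in SUB_AREAS.items():
        if x0 <= hx <= x1 and y0 <= hy <= y1:
            return name
    return None
```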
Alternatively, the specified place may include only a viewing area where the audience is located, with the area where the target person is located being relatively flexible. In this case, a position acquisition device may further be used to collect the position information of the target person in the specified place and upload it to the service device. Combining the target person's position information with the arm extension direction positioned from the three-dimensional place model, the service device can position the direction in which the target person's arm points toward the viewing area, determine the seat range of the viewing area corresponding to that extension direction, and select audience members from that seat range to participate in the interaction.
Correspondingly, in some embodiments, the specified place may include a viewing area where the users are located, with a plurality of seats arranged in it. The service device may then further extract the part of the seats in the viewing area corresponding to the positioned direction, so as to prompt the users at those seats to participate in the interaction.
For example, the service device may store the seat identifiers contained in each sub-seat area. After locating the sub-seat area in which the extension direction falls, the service device can randomly extract one or more seat identifiers from those contained in the corresponding sub-seat area and use them as interactive seat identifiers; the seats corresponding to the interactive seat identifiers may be referred to as interactive seats. Where the service device comprises a local service device and a cloud service device, the cloud service device can feed the interactive seat identifiers back to the local service device, and the local service device can control the on-site indicator lights to illuminate the interactive seats corresponding to the interactive seat identifiers, so as to prompt the audience members at those seats to participate in the interaction. Alternatively, the local service device can control the stage screen to display the interactive seats corresponding to the interactive seat identifiers, so as to prompt the corresponding audience members to participate in the interaction.
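The random extraction of interactive seat identifiers from the located sub-seat area can be sketched as follows (the seat identifiers are illustrative):

```python
import random

# Seat identifiers stored per sub-seat area (illustrative data).
AREA_SEATS = {
    "B": ["B-01", "B-02", "B-03", "B-04"],
}

def pick_interactive_seats(area, count=1, rng=random):
    """Randomly draw `count` interactive-seat identifiers from the
    sub-seat area selected by the performer's gesture."""
    return rng.sample(AREA_SEATS[area], count)
```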
Alternatively, the part of the seats in the viewing area corresponding to the positioned direction may be displayed on the stage screen; the performer then selects one or more of those seats as interactive seats, which are displayed on the stage screen to prompt the corresponding audience members to participate in the interaction.
In practice, some seats may have no user, so that none of the screened interactive seats has a user able to participate in the interaction; interactive seats would then need to be screened again, wasting resources and degrading the user experience. Therefore, in other embodiments, the service device may further store the association between seats and users. Correspondingly, the service device can screen out from the partial seats at least one seat associated with a user to serve as an interactive seat, so as to prompt the user corresponding to the interactive seat to participate in the interaction, further reducing the cost of selecting the interactive user.
For example, the service device may store the association between the seat identifiers of sold seats and the user identifiers of the users who purchased them. The service device can randomly select one or more seat identifiers from those of the sold seats as interactive seat identifiers. After an interactive seat identifier is selected, the user identifier associated with it can be obtained, and a notification to participate in the interaction can be sent to the terminal device corresponding to that user identifier, so as to prompt the user to participate. Of course, the service device can also prompt the user corresponding to the interactive seat to participate in the interaction by, for example, controlling the on-site indicator lights to illuminate the interactive seat corresponding to the interactive seat identifier.
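A sketch of restricting the draw to seats that have an associated user and resolving which user to notify (the seat-to-user mapping is illustrative stand-in data for the ticketing records):

```python
import random

# Mapping from sold-seat identifier to the purchasing user's identifier
# (illustrative; in practice this comes from the ticketing system).
SEAT_TO_USER = {"B-02": "user_17", "B-04": "user_42"}

def pick_interactive_user(candidate_seats, rng=random):
    """Restrict the draw to seats with an associated user, pick one as
    the interactive seat, and return (seat, user) so a notification can
    be pushed to that user's terminal device. Returns None if no
    candidate seat has a user."""
    occupied = [s for s in candidate_seats if s in SEAT_TO_USER]
    if not occupied:
        return None
    seat = rng.choice(occupied)
    return seat, SEAT_TO_USER[seat]
```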
Alternatively, the service device may acquire the image of the seat area captured by the acquisition device at the moment the interaction posture was collected, extract from it the seats where users are present, treat those as the seats associated with users, and randomly select one or more seat identifiers from them as the interactive seat identifiers.
In other embodiments, when there are two or more interactive seats, the users corresponding to them may all become the interactive users who finally participate in the interaction. Alternatively, the users associated with the interactive seats may be treated as alternative interactive users. The performer may select one of the alternative interactive users on site as the interactive user who finally participates. Or, the service device may send an interaction preemption prompt to the terminal devices of the alternative interactive users; upon receiving interaction selection instructions fed back by the terminal devices based on the prompt, it selects, according to the order in which they were received, the alternative interactive user corresponding to the first-received interaction selection instruction as the interactive user who finally participates in the interaction.
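The first-come-first-served preemption rule can be sketched as follows (the timestamps stand for the moments at which the service device received each fed-back interaction selection instruction):

```python
def preemption_winner(responses):
    """responses: list of (user_id, receive_timestamp) for the interaction
    selection instructions fed back by alternative interactive users.
    The earliest-received response wins the interaction slot."""
    if not responses:
        return None
    return min(responses, key=lambda r: r[1])[0]
```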
For example, the cloud service device may feed back to the local service device the terminal device information of the interactive users corresponding to the interactive seat identifiers, and the local service device, based on the received terminal device information, sends a notification of participation in the interaction and an interaction preemption prompt to the ticketing application in each alternative interactive user's terminal device. The terminal device information may refer to information used for communication positioning of the terminal device. It may include, for example, the IP address and port number of the terminal device, so that the service device can send the notification to the ticketing application of the terminal device. Of course, the terminal device information may also be a user identifier, a terminal device identifier, and the like, so that the local service device retrieves a pre-stored terminal device IP address, port number, and the like based on that identifier and sends the notification to the ticketing application of the corresponding terminal device. After receiving the notification and the interaction preemption prompt, the ticketing application of the terminal device can display them to the alternative interactive user. For example, it may display an "interaction preemption" button; when the ticketing application detects that the alternative interactive user has triggered the button, the terminal device can send an interaction selection instruction to the local service device.
Based on the order in which the interaction selection instructions are received, the local service device can select the alternative interactive user corresponding to the first-received instruction as the interactive user who finally participates in the interaction. The local service device can send an interaction-selected notification to that user's terminal device to prompt the user to participate, or it can control the on-site indicator lights to illuminate the interactive seat corresponding to the interactive user, so as to prompt the interactive user to participate in the interaction.
Correspondingly, in some embodiments, the service device may control the on-site indicator lights to illuminate the interactive seat, so as to prompt the user corresponding to the interactive seat to participate in the interaction; or it may control the on-site indicator lights to illuminate the interactive seat corresponding to the interactive user, so that the interactive user participates in the interaction. Alternatively, the service device may send interaction prompt information to the terminal device of the user associated with the interactive seat, or to the terminal device of the interactive user, so as to prompt them to participate in the interaction.
Of course, the above processing manner is only a preferred example, and other modifications may be made in specific implementations; as long as the functions and effects achieved are the same or similar, they all fall within the scope of protection of the present specification. For example, the viewing area may contain no seats and only a plurality of divided sub-viewing areas; the service device may then extract the sub-viewing area corresponding to the positioned direction, and the performer selects an audience member from that sub-viewing area on site to participate in the interaction. Alternatively, users may watch the performance online: the specified place may further include a plurality of viewing screens, each displaying the video connection interfaces of a designated set of viewers; correspondingly, the service device may extract the viewing screen corresponding to the positioned direction, so as to prompt the viewers displayed on that screen to participate in the interaction online. Of course, in other application scenarios the users may also be of other types, such as employees (with the specified place being a company meeting venue), and so on.
The performers can interact on site with the users selected to participate, for example singing together or taking part in an activity. Of course, on-site service personnel can also associate the information of the interactive activity with the performance venue identifier and send it to the service device through the ticketing application of a terminal device. The service device can control a stage screen to display the interactive activity information, or can send it to the terminal devices of the audience members participating in the interaction for display there. The specific interaction implementation can be configured as required and is not limited here.
For example, the local service device may at least be a service device that has established a communication connection with the ticketing application in the user's terminal device. After the interactive seat identifier is determined, the cloud service device can feed back to the local service device the terminal device information of the user corresponding to the interactive seat identifier. The local service device then sends a notification of participation in the interaction to the ticketing application in the corresponding user's terminal device based on the received terminal device information. After receiving the notification, the ticketing application can present it to the user to prompt the user to participate in the interaction.
After receiving the notification of participation in the interaction, the ticketing application of the terminal device may also present a "record" button to the user. When the ticketing application detects that the user has triggered the "record" button, it can prompt the user to establish a communication connection with the loudspeaker equipment at the performance site: for example, the user may enable Bluetooth, WIFI, and the like in the terminal device so that the terminal device connects to the on-site loudspeaker equipment. The user's terminal device can then act as a microphone, enabling the user to sing with the performers using the terminal device. Alternatively, the user may enable Bluetooth, WIFI, and the like in advance, so that the communication connection between the terminal device and the on-site loudspeaker equipment is already established; correspondingly, after the ticketing application receives the notification of participation in the interaction, it displays the "record" button, and the user can trigger it so that the terminal device connects to the loudspeaker equipment as a microphone. It should be particularly noted that the "record" button in the ticketing application is hidden before the terminal device receives the notification of participation in the interaction and is displayed to the user only afterwards, so that only the user finally selected to participate can join the chorus while other users cannot, improving the user's interaction experience.
Correspondingly, in some embodiments, the service device may send a microphone opening instruction to the terminal device of the user associated with the interactive seat, so that the corresponding terminal device displays a microphone access control. The microphone access control may be, for example, the "record" button displayed in the ticketing application of the terminal device, or another type of control. The terminal device can connect to the loudspeaker equipment at the performance site based on the user's triggering of the microphone access control, with the specific connection manner as described in the above scenario example; the user can then join the chorus with the performers using the terminal device. By further switching the selected user's terminal device into that user's microphone after the interactive users are chosen, the simplicity of participating in the interaction and the interaction effect can be further improved, enhancing the user's interaction experience.
When there are two or more interactive seat identifiers, the corresponding users can all serve as interactive users at the same time and sing with the performers using their respective terminal devices. Of course, the performer can also designate one of the users associated with the interactive seat identifiers as the interactive user to join the chorus; or the interactive user who finally participates can be selected through the interaction preemption mechanism, with the service device sending the microphone opening instruction to that user's terminal device so that the interactive user joins the chorus using the terminal device.
The interactive prompting method provided by this embodiment can construct, using at least the place images containing the space information of the specified place and the limb posture of the target person, a three-dimensional place model containing the limb posture of the target person. Correspondingly, the constructed three-dimensional place model contains the three-dimensional spatial characteristics of the target person's limb posture in the specified place. The direction in which the limb posture of the target person points in the three-dimensional space corresponding to the specified place can then be positioned using the three-dimensional place model, so that the user corresponding to that direction in the specified place can be accurately selected to participate in the interaction. With the method provided by the embodiments of this specification, users can be selected from the site to participate in the interaction more accurately and intelligently, making on-site interaction more intelligent, reducing the cost of selecting interactive users, and improving the users' interaction experience.
Based on the interactive prompting method provided by the above embodiments, the embodiments of the present specification further provide an interactive prompting device applied to the service device. As shown in fig. 4, the interactive prompting device includes the following modules: an image acquisition module 402, configured to acquire a place image containing at least place space information of a specified place and a limb posture of a target person; a model construction module 404, configured to construct a three-dimensional place model containing the limb posture of the target person using the place images acquired by at least two acquisition devices; and a direction positioning module 406, configured to locate, based on the three-dimensional place model, the direction in the specified place to which the target person's limb posture points, so as to prompt the user corresponding to that direction in the specified place to participate in the interaction. In the device provided by this embodiment, the functions and effects implemented by the functional modules may be understood with reference to the other embodiments and are not described again here.
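The three modules form a simple pipeline: acquire images, build the model, locate the direction. A minimal sketch of how they could be wired together; the class and callable names are assumptions made for illustration, not the actual implementation.

```python
class InteractivePromptDevice:
    """Pipeline of the three functional modules described above."""
    def __init__(self, image_module, model_module, locate_module):
        self.image_module = image_module    # corresponds to module 402
        self.model_module = model_module    # corresponds to module 404
        self.locate_module = locate_module  # corresponds to module 406

    def run(self):
        images = self.image_module()        # acquire place images
        model = self.model_module(images)   # build the 3-D place model
        return self.locate_module(model)    # locate the pointed direction

# Stub modules standing in for the real acquisition/modelling/positioning.
device = InteractivePromptDevice(
    image_module=lambda: ["cam1.jpg", "cam2.jpg"],
    model_module=lambda imgs: {"model_from": imgs},
    locate_module=lambda m: "direction: stalls, left",
)
print(device.run())  # direction: stalls, left
```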
Based on the interactive prompting method provided by the above embodiments, the embodiments of the present specification further provide an interactive prompting system that includes a cloud service device, a local service device, and acquisition devices arranged in the specified place. The communication and interaction among the cloud service device, the local service device, and the acquisition devices may follow the scenario examples and embodiments described above, and are not repeated here.
In some embodiments, the local service device may be configured to acquire a place image containing at least place space information of a specified place and a limb posture of a target person, and to send the place image to the cloud service device. The cloud service device may be configured to construct a three-dimensional place model containing the limb posture of the target person using the place images acquired by at least two acquisition devices, and to locate, based on the three-dimensional place model, the direction in the specified place to which the target person's limb posture points, so as to prompt the user corresponding to that direction in the specified place to participate in the interaction.
In other embodiments, the specified place includes a viewing area in which the users are located, with a plurality of seats arranged in the viewing area. The cloud service device is further configured to extract the part of the seats in the viewing area corresponding to the direction, so as to prompt the users corresponding to those seats to participate in the interaction.
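Extracting the seats corresponding to the direction can be pictured as a proximity filter: once the pointed-at position in the viewing area is known, keep the seats near it. The seat layout, coordinates, and radius below are illustrative assumptions.

```python
def seats_near(point, seat_map, radius=1.5):
    """seat_map: dict mapping seat id -> (x, y) position in the viewing
    area. Returns the ids of seats within `radius` metres of `point`,
    sorted for a stable result."""
    px, py = point
    return sorted(
        sid for sid, (x, y) in seat_map.items()
        if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2
    )

seat_map = {"A-1": (0.0, 0.0), "A-2": (1.0, 0.0), "B-1": (5.0, 5.0)}
print(seats_near((0.5, 0.0), seat_map))  # ['A-1', 'A-2']
```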
In other embodiments, the cloud service device is further configured to screen out, from the partial seats, at least one seat associated with a user to serve as an interactive seat, so as to prompt the user corresponding to the interactive seat to participate in the interaction.
In other embodiments, the cloud service device is further configured to feed back the seat position information of the interactive seat to the local service device. The seat position information of the interactive seat may be represented by a seat identifier, by the sub-seat area to which the interactive seat belongs together with its relative position within that sub-seat area, or in other ways, so that the relative position of the interactive seat within the specified place can be located quickly. The local service device is further configured to receive the seat position information of the interactive seat and, based on that information, to control an on-site indicator lamp to illuminate the interactive seat, so as to prompt the user corresponding to the interactive seat to participate in the interaction.
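The seat position representation described above (seat identifier, plus sub-seat area and relative position within it) can be sketched as a small record, with the lamp control reduced to a stand-in command. All field names and the command format are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SeatPosition:
    """Seat position information fed back for an interactive seat."""
    seat_id: str   # ticket-level seat identifier (assumed format)
    sub_area: str  # sub-seat area the interactive seat belongs to
    row: int       # relative position within the sub-seat area
    col: int

def lamp_command(pos: SeatPosition):
    """Translate seat position info into a hypothetical command for the
    on-site indicator lamps, so the interactive seat can be illuminated."""
    return {"action": "illuminate", "area": pos.sub_area,
            "row": pos.row, "col": pos.col}

pos = SeatPosition(seat_id="T-0042", sub_area="B", row=3, col=17)
print(lamp_command(pos))  # {'action': 'illuminate', 'area': 'B', 'row': 3, 'col': 17}
```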
In other embodiments, the cloud service device is further configured to extract the terminal device information of the user associated with the interactive seat and to send the extracted terminal device information to the local service device. The local service device is further configured to receive the terminal device information and to send interaction prompt information to the terminal device corresponding to that information, so as to prompt the user corresponding to the interactive seat to participate in the interaction.
In the system provided by this embodiment, the functions and effects implemented by the functional modules may be understood with reference to the other embodiments and are not described in detail. The data processing performed by each device in the interactive prompting system, and the interaction among the devices, may also be varied with reference to the above embodiments; as long as the functions and effects achieved are the same or similar, such variations fall within the protection scope of the present specification.
By combining a cloud service device with a local service device in this way, stable data transmission can be ensured even when long-distance communication at the performance site is poor. At the same time, having the cloud service device perform data-intensive processing such as constructing the three-dimensional spatial model lowers the data-processing requirements on the local service device, making the local equipment simpler to deploy. Moreover, the selection of interactive users can be associated with real-time information such as online ticket purchases, making the selection better suited to complex and changeable real-world scenarios.
As shown in fig. 5, the present specification further provides a service device that may include at least one processor and a memory storing computer-executable instructions; when the processor executes the instructions, the steps of the method in any one or more of the above embodiments are implemented. The memory includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a cache, a hard disk drive (HDD), or a memory card. In this embodiment, the specific functions implemented by the computer program instructions in the service device may be understood with reference to the other embodiments.
The foregoing description of the various embodiments of the present specification is provided for illustration to those skilled in the art. It is not intended to be exhaustive or to limit the invention to a single disclosed embodiment. As described above, various alternatives and modifications of the present specification will be apparent to those skilled in the relevant art. Thus, while some alternative embodiments have been discussed in detail, other embodiments will be apparent to, or relatively easy to derive by, those of ordinary skill in the art. This specification is intended to embrace all alternatives, modifications, and variations of the present invention discussed herein, as well as other embodiments that fall within the spirit and scope of the above application.
The specification is operational with numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments that include any of the above systems or devices. While the specification has been described with reference to examples, those skilled in the art will appreciate that numerous variations and permutations are possible without departing from its spirit, and it is intended that the appended claims cover such variations and modifications.

Claims (18)

1. An interactive prompting method, characterized in that the method is applied to a service device of an interactive prompting system; the interactive prompting system further comprises acquisition devices arranged in a specified place; the method comprises the following steps:
acquiring a place image at least comprising place space information of a specified place and a limb posture of a target person;
constructing a three-dimensional site model containing the limb postures of the target personnel by using the site images acquired by at least two acquisition devices;
and positioning the direction in which the limb gesture of the target person is pointed in the specified place based on the three-dimensional place model so as to prompt the user corresponding to the direction in the specified place to participate in interaction.
2. The method of claim 1, wherein the obtaining a venue image containing at least venue space information for a specified venue and a limb pose of a target person comprises:
under the condition that an interactive starting instruction is detected, selecting a preset time interval containing the sending moment of the interactive starting instruction;
and acquiring the site space information which is acquired within the preset time interval and contains the specified site and the site image of the limb posture of the target person.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
under the condition that an interaction selection instruction is detected, extracting a place image at the moment when the interaction selection instruction is sent from the obtained place image to be used as a target place image;
correspondingly, a three-dimensional site model containing the limb postures of the target personnel is constructed by utilizing the target site image.
4. The method of claim 1, wherein after the building the three-dimensional venue model containing the limb pose of the target person, the method further comprises:
judging whether the limb posture of the target person in the three-dimensional place model is an interactive posture or not;
and under the condition that the limb posture of the target person is an interactive posture, positioning the direction pointed by the limb posture of the target person in the specified place based on the three-dimensional place model.
5. The method of claim 4, wherein the interactive gesture comprises at least an extension of an arm of the target person to an area of the designated location where the user is located.
6. The method of claim 1, wherein the designated place includes a viewing area in which the user is located, the viewing area having a plurality of seats disposed therein; the method further comprises the following steps:
and extracting a part of seats corresponding to the direction in the viewing area to prompt users corresponding to the part of seats to participate in interaction.
7. The method of claim 6, further comprising:
and screening out at least one seat associated with the user from the partial seats to serve as an interactive seat so as to prompt the user corresponding to the interactive seat to participate in interaction.
8. The method according to claim 7, characterized in that, in the case that there are more than two interactive seats, the users associated with the interactive seats are taken as alternative interactive users; the method further comprises:
sending an interactive preemptive prompt to the terminal equipment of the alternative interactive user;
and in the case that interaction selection instructions fed back by the terminal devices based on the interactive preemptive prompt are received, selecting, according to the order in which the interaction selection instructions are received, the alternative interactive user corresponding to the first-received interaction selection instruction as the interactive user.
9. The method of claim 7, further comprising:
and controlling a field indicator lamp to illuminate the interactive seat so as to prompt a user corresponding to the interactive seat to participate in interaction.
10. The method of claim 7, further comprising:
and sending interaction prompt information to the terminal equipment of the user associated with the interactive seat to prompt the user corresponding to the interactive seat to participate in interaction.
11. The method of claim 7, further comprising:
and sending a microphone opening instruction to the terminal equipment of the user associated with the interactive seat so as to enable the corresponding terminal equipment to display a microphone access control.
12. An interactive prompting device, characterized in that the device is applied to a service device of an interactive prompting system; the interactive prompting system further comprises acquisition devices arranged in a specified place; the device comprises:
an image acquisition module, configured to acquire a place image containing at least place space information of a specified place and a limb posture of a target person;
the model building module is used for building a three-dimensional site model containing the limb postures of the target personnel by utilizing the site images acquired by at least two acquisition devices;
and the direction positioning module is used for positioning the direction pointed by the limb posture of the target person in the specified place based on the three-dimensional place model so as to prompt the user corresponding to the direction in the specified place to participate in interaction.
13. An interactive prompting system, characterized in that the system comprises a cloud service device, and a local service device and acquisition devices arranged in a specified place, wherein:
the local service equipment is used for acquiring a place image at least comprising place space information of a specified place and the limb posture of a target person and sending the place image to the cloud service equipment;
the cloud service equipment is used for constructing a three-dimensional place model containing the limb postures of the target personnel by utilizing the place images acquired by at least two acquisition equipment; and positioning the direction pointed by the limb posture of the target person in the specified place based on the three-dimensional place model so as to prompt the user corresponding to the direction in the specified place to participate in interaction.
14. The system of claim 13, wherein the designated location includes a viewing area in which the user is located, the viewing area having a plurality of seats disposed therein; the cloud service equipment is further used for extracting a part of seats corresponding to the direction in the viewing area so as to prompt users corresponding to the part of seats to participate in interaction.
15. The system of claim 14, wherein the cloud service device is further configured to screen at least one seat associated with the user from the partial seats as an interactive seat to prompt the user corresponding to the interactive seat to participate in the interaction.
16. The system of claim 15, wherein the cloud service device is further configured to feed back seat location information of the interactive seat to the local service device;
the local service equipment is further used for receiving seat position information of the interactive seat and controlling a field indicator lamp to illuminate the interactive seat based on the seat position information so as to prompt a user corresponding to the interactive seat to participate in interaction.
17. The system of claim 15, wherein the cloud service device is further configured to extract terminal device information of a user associated with the interactive seat, and send the extracted terminal device information to a local service device;
the local service device is further configured to receive the terminal device information and send interaction prompt information to the terminal device corresponding to the terminal device information, so as to prompt the user corresponding to the interactive seat to participate in the interaction.
18. A service device, characterized in that the device comprises at least one processor and a memory storing computer-executable instructions, which when executed by the processor implement the steps of the method of any one of claims 1 to 11.
CN202111515467.6A 2021-12-13 2021-12-13 Interactive prompting method, device and equipment Pending CN114397959A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111515467.6A CN114397959A (en) 2021-12-13 2021-12-13 Interactive prompting method, device and equipment


Publications (1)

Publication Number Publication Date
CN114397959A true CN114397959A (en) 2022-04-26

Family

ID=81227669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111515467.6A Pending CN114397959A (en) 2021-12-13 2021-12-13 Interactive prompting method, device and equipment

Country Status (1)

Country Link
CN (1) CN114397959A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287767A (en) * 2020-09-30 2021-01-29 北京大米科技有限公司 Interaction control method, device, storage medium and electronic equipment
CN112667085A (en) * 2020-12-31 2021-04-16 北京高途云集教育科技有限公司 Classroom interaction method and device, computer equipment and storage medium
CN112861591A (en) * 2019-11-28 2021-05-28 京东方科技集团股份有限公司 Interactive identification method, interactive identification system, computer equipment and storage medium
CN112887777A (en) * 2019-11-29 2021-06-01 阿里巴巴集团控股有限公司 Interactive prompting method and device for interactive video, electronic equipment and storage medium
CN113419634A (en) * 2021-07-09 2021-09-21 郑州旅游职业学院 Display screen-based tourism interaction method


Similar Documents

Publication Publication Date Title
CN100399240C (en) Communication and collaboration system using rich media environments
CN110139062B (en) Video conference record creating method and device and terminal equipment
RU2518940C2 (en) Method, apparatus and system for interlinking video image and virtual network environment
CN107278374A (en) Interactive advertisement display method, terminal and smart city interactive system
CN101198945B (en) Management system for rich media environments
US20120192088A1 (en) Method and system for physical mapping in a virtual world
CN110472099B (en) Interactive video generation method and device and storage medium
CN106843460A (en) The capture of multiple target position alignment system and method based on multi-cam
KR20120019007A (en) System and method for providing virtual reality linking service
CN110166848B (en) Live broadcast interaction method, related device and system
CN109427219B (en) Disaster prevention learning method and device based on augmented reality education scene conversion model
CN111242704B (en) Method and electronic equipment for superposing live character images in real scene
CN110210045B (en) Method and device for estimating number of people in target area and storage medium
KR20200097637A (en) Simulation sandbox system
KR20160139633A (en) An system and method for providing experiential contents using augmented reality
CN112601022B (en) On-site monitoring system and method based on network camera
CN110547756A (en) Vision test method, device and system
CN106341380A (en) Method, device and system for performing remote identity authentication on user
CN108228124A (en) VR visual tests method, system and equipment
CN110324653A (en) Game interaction exchange method and system, electronic equipment and the device with store function
CN112188223B (en) Live video playing method, device, equipment and medium
CN114397959A (en) Interactive prompting method, device and equipment
CN108537990A (en) All-in-one machine cheats judgment method, device, equipment and computer readable storage medium
CN112218111A (en) Image display method and device, storage medium and electronic equipment
CN105119953B (en) The method and device of APP binding audio-video processing terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination