CN112887418A - Information processing method and system based on Internet of things interaction and intelligent communication - Google Patents

Information processing method and system based on Internet of things interaction and intelligent communication

Info

Publication number
CN112887418A
CN112887418A (application CN202110176177.7A)
Authority
CN
China
Prior art keywords
interaction
internet
things
interactive
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110176177.7A
Other languages
Chinese (zh)
Inventor
Zhang Wei (张伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110176177.7A
Publication of CN112887418A
Legal status: Withdrawn

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Architecture (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An embodiment of the invention provides an information processing method and system based on Internet of things interaction and intelligent communication. Each interactive rendering stream segment to be extracted is obtained from the Internet of things interactive rendering stream, and a position transition state vector of at least one interaction region in those segments is acquired. At least one first interactive Internet of things object and at least one second interactive Internet of things object corresponding to each interaction region are then determined. Based on the position transition state vectors of the interaction regions, the first interactive Internet of things objects, and the second interactive Internet of things objects, a communication switching interaction region is determined among the interaction regions of each segment. In this way, the communication switching interaction region can be determined effectively according to communication switching interaction behavior and marked in the interactive rendering stream in the form of a reminder, thereby improving the comprehensiveness of the user experience.

Description

Information processing method and system based on Internet of things interaction and intelligent communication
Technical Field
The invention relates to the technical field of Internet of things, in particular to an information processing method and system based on Internet of things interaction and intelligent communication.
Background
With the rapid development of Internet of things technology, the Internet of things plays an increasingly important role. Virtual reality is one of the most closely watched frontier technologies today; it is developing rapidly, and virtual reality products, including virtual reality hardware devices and virtual reality content applications, are gradually entering the consumer market. For example, during a virtual reality experience in an Internet of things interaction process, the map drawing flow of each video playing map (e.g., a human-computer interaction terminal, a security terminal, and a mobile application terminal) is usually drawn in advance to facilitate the subsequent dynamic virtual experience.
During the virtual reality experience of Internet of things objects, the objects are usually rendered through the real scenes of communication interaction, so that a user can experience the complete functions of the Internet of things. At present, however, no design scheme can effectively determine the communication switching interaction region within the interaction regions according to communication switching interaction behavior, which limits the comprehensiveness of the user experience.
Disclosure of Invention
To overcome at least the above disadvantages in the prior art, an object of the present invention is to provide an information processing method and system based on Internet of things interaction and intelligent communication that can effectively determine a communication switching interaction region within the interaction regions for communication switching interaction behavior, and label that region in the Internet of things interaction rendering stream in the form of a reminder, thereby improving the comprehensiveness of the user experience.
In a first aspect, the invention provides an information processing method based on internet of things interaction and intelligent communication, which is applied to a cloud computing platform, wherein the cloud computing platform is in communication connection with a plurality of human-computer interaction equipment terminals, and the method comprises the following steps:
extracting complete virtual reality drawing information between video playing maps with drawing superposition relations from each human-computer interaction equipment terminal, and rendering each complete virtual reality drawing information in the same corresponding candidate Internet of things interaction scene to obtain Internet of things interaction rendering stream;
acquiring each interactive rendering stream fragment to be extracted in the internet of things interactive rendering stream, and acquiring a position transfer state vector of at least one interactive region in the interactive rendering stream fragments to be extracted, wherein the interactive rendering stream fragments are rendering process fragments to be extracted, and each interactive rendering stream fragment corresponds to an internet of things complete interactive process respectively;
determining at least one first interactive internet of things object and at least one second interactive internet of things object which respectively correspond to the at least one interactive area, wherein the first interactive internet of things object of any interactive area is an internet of things object of all interactive areas included in an interactive geometric space which takes the interactive area as a boundary and takes a target length as an interactive range, and the second interactive internet of things object is an internet of things object which interacts with the first interactive internet of things object;
determining a communication switching interaction area in the at least one interaction area in the interaction rendering stream fragment based on the position transition state vectors of the interaction area, the at least one first interaction internet of things object and the at least one second interaction internet of things object, and marking the communication switching interaction area in the internet of things interaction rendering stream in a reminding manner.
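The selection of first and second interactive Internet of things objects described above can be sketched as follows. This is a minimal illustration under simplifying assumptions not stated in the patent: the interaction geometric space is reduced to a 2-D disc of radius `target_length` around the region, and `interactions` is a hypothetical list of object-id pairs standing in for whatever interaction records the platform holds; all names are illustrative.

```python
import math

def first_and_second_objects(region_pos, object_positions, target_length, interactions):
    """Sketch: first objects lie within the interaction range of the region;
    second objects are those that interacted with a first object.

    region_pos       -- (x, y) centre of the interaction region (assumption)
    object_positions -- {object_id: (x, y)} positions of IoT objects
    target_length    -- radius of the simplified interaction space
    interactions     -- hypothetical [(a, b), ...] pairs of interacting objects
    """
    # First interactive IoT objects: inside the disc of radius target_length.
    first = [oid for oid, pos in object_positions.items()
             if math.dist(region_pos, pos) <= target_length]
    first_set = set(first)
    # Second interactive IoT objects: partners of any first object.
    second = sorted({b for a, b in interactions if a in first_set} |
                    {a for a, b in interactions if b in first_set})
    return first, second
```

The disc-shaped space and the pair-list representation are placeholders; the patent only requires a geometric space bounded by the region with the target length as the interaction range.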
In a possible implementation manner of the first aspect, the step of determining, based on the position transition state vectors of the interaction region, the at least one first interaction internet-of-things object, and the at least one second interaction internet-of-things object, a communication switching interaction region in the at least one interaction region in the interactive rendering stream segment includes:
extracting a state vector with communication interaction switching characteristics in the position transfer state vectors of the interaction areas based on the position transfer state vectors of the interaction areas, a first interaction Internet of things object corresponding to the interaction areas and a second interaction Internet of things object corresponding to the interaction areas, wherein the state vector is used for representing state change information of the at least one interaction area in the interaction rendering stream segment;
performing prediction processing on the extracted state vector to obtain at least one confidence coefficient of the at least one interaction region, wherein one confidence coefficient is used for representing the possibility that one interaction region belongs to one communication switching interaction region;
and determining a communication switching interaction area in the at least one interaction area in the interactive rendering stream segment based on the at least one confidence level.
In a possible implementation manner of the first aspect, the step of extracting, based on the position transition state vectors of the interaction region, the first interaction internet-of-things object corresponding to the interaction region, and the second interaction internet-of-things object corresponding to the interaction region, a state vector having a communication interaction switching feature in the position transition state vectors of the interaction region includes:
respectively constructing a global state vector of the interaction region and at least one local state vector between the interaction region and at least one linkage Internet of things object based on the position transition state vectors of the interaction region, a first interaction Internet of things object corresponding to the interaction region and a second interaction Internet of things object corresponding to the interaction region, wherein the global state vector is used for representing the spatial state change information of the interaction region in an interaction rendering stream segment, and one local state vector is used for representing the relative state change information between the interaction region and one linkage Internet of things object;
and acquiring the state vector of the interaction area based on the global state vector and the at least one local state vector.
In a possible implementation manner of the first aspect, the step of obtaining the state vector of the interaction region based on the global state vector and the at least one local state vector includes:
and fusing the global state vector with the at least one local state vector respectively to obtain the state vector of the interaction region.
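The patent leaves the fusion operator unspecified. As one plausible reading, the sketch below concatenates the global state vector with the element-wise mean of the local state vectors; both the operator and the plain-list representation are assumptions for illustration only.

```python
def fuse_state_vectors(global_vec, local_vecs):
    """Fuse one global state vector with N local state vectors.

    Assumed fusion rule: concatenate the global vector with the
    element-wise mean of the local vectors (the patent does not
    specify how the fusion is performed).
    """
    n = len(local_vecs)
    # Element-wise mean across all local state vectors.
    local_mean = [sum(vals) / n for vals in zip(*local_vecs)]
    return list(global_vec) + local_mean
```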
In a possible implementation manner of the first aspect, the step of performing a prediction process on the extracted state vector to obtain at least one confidence level of the at least one interaction region includes:
extracting a plurality of candidate state vector unit information to be predicted from the extracted state vector, and acquiring a plurality of associated first communication behavior data and second communication behavior data from the plurality of candidate state vector unit information to be predicted;
determining a communication switching perception connection diagram according to the first communication behavior data and the second communication behavior data, and acquiring a plurality of switching suspected nodes of the communication switching perception connection diagram, wherein a starting point of the communication switching perception connection diagram refers to a first communication behavior of the first communication behavior data, an end point of the communication switching perception connection diagram refers to a second communication behavior of the second communication behavior data, and each switching suspected node and each communication switching interaction area have a one-to-one correspondence relation;
acquiring a prediction artificial intelligence model corresponding to the communication switching perception connected graph, wherein the prediction artificial intelligence model comprises prediction nodes corresponding to a plurality of switching suspected nodes of the communication switching perception connected graph, an output value of the prediction artificial intelligence model is determined by the communication switching perception connected graph, and the output value of the prediction artificial intelligence model is used for representing the confidence degree that the communication switching parameters of the plurality of switching suspected nodes meet communication switching prediction conditions;
and predicting the communication switching perception connected graph according to each prediction node of the prediction artificial intelligence model to obtain a prediction confidence corresponding to each prediction node so as to obtain at least one confidence of the at least one interaction region.
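The prediction step above can be sketched with a toy stand-in for the prediction artificial intelligence model: each switching suspected node of the connected graph receives a confidence via a logistic function over its feature values. The feature vectors, weights, and the logistic form are all assumptions; the patent does not disclose the model internals.

```python
import math

def predict_confidences(suspect_nodes, weights, bias=0.0):
    """Toy prediction model: one confidence in (0, 1) per suspect node.

    suspect_nodes -- {node_id: [feature, ...]} hypothetical node features
    weights       -- assumed per-feature weights of the prediction node
    """
    out = {}
    for node_id, features in suspect_nodes.items():
        score = sum(w * f for w, f in zip(weights, features)) + bias
        # Logistic squashing stands in for the model's output layer.
        out[node_id] = 1.0 / (1.0 + math.exp(-score))
    return out
```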
In a possible implementation manner of the first aspect, the step of determining, based on the at least one confidence level, a communication switching interaction region in the at least one interaction region in the interactive rendering stream segment includes:
and determining all communication switching interaction areas with the confidence degrees larger than the set confidence degree as communication switching interaction areas in the at least one interaction area.
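This thresholding step is straightforward; a minimal sketch, assuming confidences are held in a mapping from region id to confidence:

```python
def select_switching_regions(confidences, min_confidence):
    """Keep every interaction region whose confidence exceeds the set value.

    confidences    -- {region_id: confidence} (illustrative structure)
    min_confidence -- the 'set confidence' threshold from the text
    """
    return [rid for rid, c in confidences.items() if c > min_confidence]
```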
In a possible implementation manner of the first aspect, the step of extracting complete virtual reality drawing information between each video playback map with a drawing and superimposing relationship from each human-computer interaction device side includes:
acquiring a virtual reality three-dimensional map of a candidate Internet of things interaction scene under a drawing pixel segment of each drawing layered component from each human-computer interaction equipment terminal, and dividing the virtual reality three-dimensional map under each drawing pixel segment in an interaction mode according to a preset interaction Internet of things form to respectively generate a map dividing sequence of each interaction Internet of things form;
aiming at each interactive Internet of things form, obtaining a mapping drawing stream corresponding to each video playing mapping in a mapping dividing sequence of the interactive Internet of things form, and performing virtual reality drawing on the mapping drawing stream corresponding to each video playing mapping;
judging whether drawing superposition information for representing that video playing maps have drawing superposition exists or not in the virtual reality drawing process, and extracting a first map drawing stream of a first video playing map corresponding to the drawing superposition information drawn by the virtual reality and a second map drawing stream of at least one second video playing map having drawing superposition relation with the first video playing map when the drawing superposition information is detected;
and determining complete virtual reality drawing information between the first video playing map and the at least one second video playing map according to a preset artificial intelligence model.
In a possible implementation manner of the first aspect, the step of obtaining a mapping rendering stream corresponding to each video playing mapping in the mapping partitioning sequence in the form of the interactive internet of things includes:
judging whether an Internet of things interaction relationship is established with each video playing map or not; the interactive relation of the internet of things is used for setting drawing services of a map drawing stream corresponding to video playing maps, each video playing map corresponds to one interactive relation of the internet of things, and the interactive modes of different interactive relations of the internet of things are different;
if the Internet of things interaction relation corresponding to each video playing map association is not obtained, map drawing source information of each video playing map is obtained; the mapping source information comprises a mapping source label corresponding to the video playing mapping, and the mapping source label is a mapping source label corresponding to a mapping stream generated by the video playing mapping;
analyzing and identifying each mapping source information according to the vertex mapping character corresponding to each mapping source information to obtain a plurality of vertex mapping partitions corresponding to each mapping source information, and determining a target vertex mapping partition with displacement transformation information from the vertex mapping partitions corresponding to each mapping source information; the displacement transformation information is a displacement transformation node of a vertex mapping partition corresponding to the source label of the mapping map and representing the vertex mapping partition;
associating a depth map in a target vertex mapping partition corresponding to each video playing map with an internet of things interaction relation corresponding to each video playing map, wherein the internet of things interaction relation is determined according to the internet of things interaction relation corresponding to each depth virtual camera in the depth map in the target vertex mapping partition;
and acquiring a mapping drawing stream corresponding to each video playing mapping from a pre-configured mapping drawing stream library according to the Internet of things interaction relation corresponding to each video playing mapping, wherein the mapping drawing stream library comprises mapping drawing streams of each video playing mapping under different Internet of things interaction relations.
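The final lookup step can be sketched as a nested-dictionary library keyed first by video playing map and then by Internet of things interaction relation. The structure and names are assumptions; the patent only states that the library holds a drawing stream per map per interaction relation.

```python
def get_map_drawing_stream(library, video_map_id, iot_relation):
    """Look up the map drawing stream for a video playing map under a
    given IoT interaction relation.

    library -- {video_map_id: {iot_relation: drawing_stream}} (assumed shape)
    """
    try:
        return library[video_map_id][iot_relation]
    except KeyError:
        raise KeyError(f"no drawing stream for map {video_map_id!r} "
                       f"under relation {iot_relation!r}")
```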
In a second aspect, an embodiment of the present invention further provides an information processing apparatus based on internet of things interaction and intelligent communication, which is applied to a cloud computing platform, where the cloud computing platform is in communication connection with multiple human-computer interaction device terminals, and the apparatus includes:
the extraction module is used for extracting complete virtual reality drawing information between video playing maps with drawing superposition relations from each human-computer interaction equipment terminal, and rendering each complete virtual reality drawing information in the same corresponding candidate Internet of things interaction scene to obtain Internet of things interaction rendering streams;
the acquisition module is used for acquiring each interactive rendering stream fragment to be extracted in the Internet of things interactive rendering stream and acquiring a position transfer state vector of at least one interactive area in the interactive rendering stream fragments to be extracted, wherein the interactive rendering stream fragments are rendering process fragments to be extracted, and each interactive rendering stream fragment corresponds to a complete interactive process of the Internet of things;
the first determining module is used for determining at least one first interactive internet of things object and at least one second interactive internet of things object which correspond to the at least one interactive area respectively, wherein the first interactive internet of things object in any interactive area is an internet of things object in all interactive areas included in an interactive geometric space which takes the interactive area as a boundary and takes a target length as an interactive range, and the second interactive internet of things object is an internet of things object which interacts with the first interactive internet of things object;
a second determining module, configured to determine, based on the position transition state vector of the interaction region, the at least one first interaction internet-of-things object, and the at least one second interaction internet-of-things object, a communication switching interaction region in the at least one interaction region in the interaction rendering stream segment, and mark the communication switching interaction region in the internet-of-things interaction rendering stream in a form of a reminder.
In a third aspect, an embodiment of the present invention further provides an information processing system based on internet of things interaction and intelligent communication, where the information processing system based on internet of things interaction and intelligent communication includes a cloud computing platform and a plurality of human-computer interaction device terminals in communication connection with the cloud computing platform;
the cloud computing platform is used for extracting complete virtual reality drawing information between video playing maps with drawing and overlapping relations from each human-computer interaction equipment terminal, and rendering each complete virtual reality drawing information in the same corresponding candidate Internet of things interaction scene to obtain Internet of things interaction rendering streams;
the cloud computing platform is used for acquiring each interactive rendering stream fragment to be extracted in the Internet of things interactive rendering stream and acquiring a position transfer state vector of at least one interaction area in the interactive rendering stream fragments to be extracted, wherein the interactive rendering stream fragments are rendering process fragments to be extracted, and each interactive rendering stream fragment corresponds to a complete interaction process of the Internet of things;
the cloud computing platform is used for determining at least one first interactive internet of things object and at least one second interactive internet of things object which correspond to the at least one interactive area respectively, wherein the first interactive internet of things object in any interactive area is an internet of things object in all interactive areas included in an interactive geometric space which takes the interactive area as a boundary and takes a target length as an interactive range, and the second interactive internet of things object is an internet of things object which interacts with the first interactive internet of things object;
the cloud computing platform is used for determining a communication switching interaction area in the interaction rendering stream fragment in the at least one interaction area based on the position transition state vector of the interaction area, the at least one first interaction internet of things object and the at least one second interaction internet of things object, and marking the communication switching interaction area in the internet of things interaction rendering stream in a reminding manner.
In a fourth aspect, an embodiment of the present invention further provides a cloud computing platform, where the cloud computing platform includes a processor, a machine-readable storage medium, and a network interface, where the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is used for being communicatively connected with at least one human-computer interaction device, the machine-readable storage medium is used for storing a program, an instruction, or a code, and the processor is used for executing the program, the instruction, or the code in the machine-readable storage medium to perform the information processing method based on the internet of things interaction and the intelligent communication in the first aspect or any one of possible designs of the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium, where instructions are stored, and when executed, cause a computer to perform an information processing method based on internet of things interaction and intelligent communication in the first aspect or any one of the possible designs of the first aspect.
Based on any one of the above aspects, the invention obtains each interactive rendering stream segment to be extracted in the internet of things interactive rendering stream, obtains the position transfer state vector of at least one interaction region in the interactive rendering stream segments to be extracted, then determines at least one first interactive internet of things object and at least one second interactive internet of things object corresponding to the at least one interaction region respectively, and determines the communication switching interaction region in the at least one interaction region in the interactive rendering stream segments based on the position transfer state vectors of the interaction region, the at least one first interactive internet of things object and the at least one second interactive internet of things object, thereby effectively determining the communication switching interaction region in the interaction region aiming at the behavior of communication switching interaction, and labeling the communication switching interaction region in the internet of things interactive rendering stream in a form of reminding, thereby improving the comprehensiveness of the user experience.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be considered limiting of the scope. Those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of an information processing system based on internet of things interaction and intelligent communication according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an information processing method based on internet of things interaction and intelligent communication according to an embodiment of the present invention;
fig. 3 is a schematic functional module diagram of an information processing apparatus based on internet of things interaction and intelligent communication according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of structural components of a cloud computing platform for implementing the information processing method based on internet of things interaction and intelligent communication according to the embodiment of the present invention.
Detailed Description
The present invention is described in detail below with reference to the drawings, and the specific operation methods in the method embodiments can also be applied to the apparatus embodiments or the system embodiments.
Fig. 1 is an interaction diagram of an information processing system 10 based on internet of things interaction and intelligent communication according to an embodiment of the present invention. The information processing system 10 based on internet of things interaction and intelligent communication can comprise a cloud computing platform 100 and a human-computer interaction device end 200 in communication connection with the cloud computing platform 100. The information processing system 10 based on the internet of things interaction and intelligent communication shown in fig. 1 is only one possible example, and in other possible embodiments, the information processing system 10 based on the internet of things interaction and intelligent communication may also include only one of the components shown in fig. 1 or may also include other components.
In this embodiment, the human-computer interaction device end 200 may include a mobile device, a tablet computer, a laptop computer, or any combination thereof. In some embodiments, the mobile device may include an internet of things device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the internet of things device may include a control device of a smart appliance, a smart monitoring device, a smart television, a smart camera, and the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, and the like, or any combination thereof. In some embodiments, the virtual reality device and the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and the augmented reality device may include various virtual reality products and the like.
In this embodiment, the cloud computing platform 100 and the human-computer interaction device end 200 in the information processing system 10 based on internet of things interaction and intelligent communication may cooperatively execute the information processing method based on internet of things interaction and intelligent communication described in the following method embodiment; for the specific steps executed by the cloud computing platform 100 and the human-computer interaction device end 200, refer to the detailed description of that method embodiment.
In this embodiment, the information processing system 10 based on internet of things interaction and intelligent communication may be applied in various application scenarios, for example, a blockchain application scenario, a smart home application scenario, an intelligent control application scenario, and the like.
In order to solve the technical problem in the foregoing background, fig. 2 is a schematic flow chart of an information processing method based on internet of things interaction and intelligent communication according to an embodiment of the present invention, where the information processing method based on internet of things interaction and intelligent communication according to the embodiment may be executed by the cloud computing platform 100 shown in fig. 1, and the information processing method based on internet of things interaction and intelligent communication is described in detail below.
Step S110, extracting, from each human-computer interaction device end 200, complete virtual reality drawing information between video playing maps that have a drawing superimposition relationship, and rendering each piece of complete virtual reality drawing information in the same corresponding candidate internet of things interaction scene to obtain an internet of things interaction rendering stream.
Step S120, obtaining each interactive rendering stream segment to be extracted in the internet of things interaction rendering stream, and obtaining a position transfer state vector of at least one interaction region in each interactive rendering stream segment to be extracted.
Step S130, determining at least one first interactive internet of things object and at least one second interactive internet of things object respectively corresponding to the at least one interaction region.
Step S140, determining a communication switching interaction region within the at least one interaction region of the interactive rendering stream segment based on the position transfer state vector of the interaction region, the at least one first interactive internet of things object, and the at least one second interactive internet of things object, and annotating the communication switching interaction region in the internet of things interaction rendering stream in the form of a reminder.
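Stated procedurally, the four steps above can be sketched as a small pipeline. This is a minimal illustrative sketch only; the function names, data shapes, and the "number of transfers" heuristic are hypothetical and not taken from the patent:

```python
# Illustrative sketch of steps S110-S140; all names and data shapes are
# hypothetical and not from the patent.

def render_interaction_stream(drawing_infos):
    """S110: render each piece of complete drawing information into one
    internet of things interaction rendering stream (a list of segments)."""
    return [{"segment_id": i, "regions": info["regions"]}
            for i, info in enumerate(drawing_infos)]

def position_transfer_vector(region):
    """S120: a toy position transfer state vector --
    (number of position transfers, net displacement)."""
    trace = region["trace"]
    return (len(trace) - 1, abs(trace[-1] - trace[0]))

def label_switch_regions(segment, min_transfers=2):
    """S130/S140: label a region as a communication switching interaction
    region when it shows enough position transfers."""
    return [r["name"] for r in segment["regions"]
            if position_transfer_vector(r)[0] >= min_transfers]

stream = render_interaction_stream([{"regions": [
    {"name": "lobby", "trace": [0, 3, 5]},   # two transfers -> labelled
    {"name": "office", "trace": [2, 2]},     # one transfer  -> not labelled
]}])
labels = label_switch_regions(stream[0])
```

In an actual system the labelling criterion would come from the interactive internet of things objects described in step S130 rather than a simple transfer count.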
In this embodiment, for step S120, each interactive rendering stream segment is a rendering process segment to be extracted, and each segment corresponds to one complete internet of things interaction process, that is, the process of an internet of things interaction service from start to finish. The position transfer state vector describes the change of transfer state of a corresponding position region during rendering, for example, the change of rendering state as control is transferred from one position region to another; how this vector is acquired is known in the prior art and is not described herein again.
In this embodiment, for step S130, the first interactive internet of things object of any interaction region is an internet of things object located in the interaction regions covered by the interactive geometric space that takes the interaction region as its boundary and a target length as its interaction range, and the second interactive internet of things object is an internet of things object that interacts with the first interactive internet of things object. Each internet of things object can be understood as a system formed by one or more internet of things devices.
In this embodiment, for step S140, when the communication switching interaction region is annotated in the internet of things interaction rendering stream in the form of a reminder, the reminder may follow a rendering reminder mode preset by the user, so that during the subsequent virtual reality experience the user can visually observe the communication switching interaction process and thereby learn about each internet of things object.
Based on the above design, this embodiment first obtains each interactive rendering stream segment to be extracted in the internet of things interaction rendering stream, together with the position transfer state vector of at least one interaction region in those segments. It then determines the at least one first interactive internet of things object and the at least one second interactive internet of things object respectively corresponding to the at least one interaction region, and determines the communication switching interaction region within the at least one interaction region based on the position transfer state vector of the interaction region and these two kinds of objects. In this way, a communication switching interaction region can be effectively determined for any behavior in which communication switching interaction exists, and annotated in the internet of things interaction rendering stream in the form of a reminder, thereby improving the comprehensiveness of the user experience.
In a possible implementation of step S110, the inventor found through research that conventional schemes generally do not take the differences between interactive internet of things forms into account, which easily causes drawing conflicts during rendering. Moreover, superimposed drawing effects may exist between different video playing maps; such effects can further enrich the internet of things experience, for example, by helping the user further experience the real-time interaction process of several internet of things devices of the same functional form. However, there is currently no scheme that processes the drawing result of each associated video playing map independently, so in the actual virtual reality experience the associated video playing map cannot be used, in a targeted manner, as an independent experience target for subsequent dynamic virtual experience.
Based on this, step S110 may be implemented by the following exemplary substeps, described in detail below.
Substep S111, obtaining, from each human-computer interaction device end 200, the virtual reality three-dimensional map of the candidate internet of things interaction scene under the drawing pixel segment of each drawing layer component, dividing the virtual reality three-dimensional map under each drawing pixel segment by interaction mode according to predetermined interactive internet of things forms, and generating a map division sequence for each interactive internet of things form.
The drawing layer component may refer to a rendering scene. A rendering scene generally includes a plurality of drawing layers, such as a real scene layer and a menu option layer, and each drawing layer may be continuously controlled and executed by its corresponding drawing layer component.
The virtual reality three-dimensional map may represent the entity rendering model specifically displayed by the candidate internet of things interaction scene under the drawing pixel segment of each drawing layer component. For example, a virtual world may be created on a computer using three-dimensional animation software (such as 3ds Max, Maya, or Houdini); three-dimensional models such as scenes and three-dimensional cartoon characters are then added to the virtual three-dimensional world; finally, animation parameters such as the models' animation curves and the virtual camera's motion track are set, dynamic maps are obtained by rendering, and the dynamic maps are collected for invocation in the subsequent virtual reality drawing process.
In this embodiment, the predetermined interactive internet of things form can be flexibly selected according to actual design requirements, for example, an office cooperation internet of things form, a shopping mall experience internet of things form, and the like, which is not specifically limited herein.
Substep S112, for each interactive internet of things form, obtaining the map drawing stream corresponding to each video playing map in the map division sequence of that interactive internet of things form, and performing virtual reality drawing on the map drawing stream corresponding to each video playing map.
Substep S113, judging whether drawing superimposition information indicating that video playing maps are drawn in a superimposed manner exists in the virtual reality drawing process, and, when the drawing superimposition information is detected, extracting the first map drawing stream of the first video playing map corresponding to the drawing superimposition information and the second map drawing stream of at least one second video playing map that has a drawing superimposition relationship with the first video playing map.
Substep S114, determining the complete virtual reality drawing information between the first video playing map and the at least one second video playing map according to a preset artificial intelligence model.
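The division and superimposition-detection parts of these substeps can be sketched as follows. This is a hypothetical illustration; the "shared frame index" overlay test and all data shapes are assumptions, not the patent's actual criteria:

```python
# Hypothetical sketch of substeps S111 and S113; names are illustrative only.
from collections import defaultdict

def divide_by_form(video_maps):
    """S111: build a map division sequence per interactive IoT form."""
    sequences = defaultdict(list)
    for m in video_maps:
        sequences[m["form"]].append(m["name"])
    return dict(sequences)

def has_drawing_superimposition(stream_a, stream_b):
    """S113: toy superimposition test -- two map drawing streams are
    considered superimposed when they share a drawing frame index."""
    return bool(set(stream_a) & set(stream_b))

maps = [
    {"name": "m1", "form": "office", "stream": [1, 2, 3]},
    {"name": "m2", "form": "office", "stream": [3, 4]},
    {"name": "m3", "form": "mall",   "stream": [9]},
]
sequences = divide_by_form(maps)
overlaid = has_drawing_superimposition(maps[0]["stream"], maps[1]["stream"])
```

Grouping by form before drawing is what lets the later substeps treat each interactive internet of things form separately.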
Based on the above substeps, this embodiment divides the virtual reality three-dimensional map under each drawing pixel segment according to the predetermined interactive internet of things forms, thereby taking the differences between interactive internet of things forms into account and mitigating drawing conflicts in the drawing process.
In a possible implementation of substep S112, considering that some video playing maps may be added after adjustment, substep S112 may also be implemented by the following exemplary substeps, described in detail below.
Substep S1121, judging whether an internet of things interaction relationship has been associated with each video playing map.
In this embodiment, the internet of things interaction relationship may be used to set the drawing service of the map drawing stream corresponding to a video playing map; each video playing map corresponds to one internet of things interaction relationship, and different internet of things interaction relationships have different interaction modes.
Substep S1122, if no internet of things interaction relationship associated with a video playing map is found, obtaining the map drawing source information of each video playing map.
In this embodiment, the map drawing source information includes the map source tag corresponding to the video playing map, namely the source tag of the map drawing stream generated for that video playing map.
Substep S1123, analyzing and identifying each piece of map source information according to its corresponding vertex mapping character to obtain at least a plurality of vertex mapping partitions corresponding to each piece of map source information, and determining, from these vertex mapping partitions, the target vertex mapping partition that carries displacement transformation information.
In this embodiment, the displacement transformation information characterizes the displacement transformation node of the vertex mapping partition corresponding to the map source tag.
Substep S1124, determining, according to the depth map in the target vertex mapping partition corresponding to each video playing map, the internet of things interaction relationship associated with each video playing map.
In this embodiment, the internet of things interaction relationship is determined according to the internet of things interaction relationship corresponding to each depth virtual camera in the depth map of the target vertex mapping partition.
Substep S1125, obtaining, from a preconfigured map drawing stream library, the map drawing stream corresponding to each video playing map according to the internet of things interaction relationship associated with that video playing map.
In this embodiment, the map drawing stream library includes the map drawing streams of each video playing map under different internet of things interaction relationships.
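The lookup-with-fallback logic of substeps S1121-S1125 can be sketched as below. All names and the data layout are invented for illustration; in particular, reducing "displacement transformation information" to a boolean flag is a simplifying assumption:

```python
# Hypothetical sketch of substeps S1121-S1125; data layout is invented.

def resolve_relationship(video_map, relationship_index):
    """S1121-S1124: use the associated IoT interaction relationship when
    present; otherwise derive it from the target vertex mapping partition,
    i.e. the partition carrying displacement transformation information."""
    if video_map["name"] in relationship_index:          # S1121
        return relationship_index[video_map["name"]]
    target = next(p for p in video_map["partitions"]     # S1123
                  if p["displacement"])
    return target["relationship"]                        # S1124

def fetch_drawing_stream(relationship, stream_library):
    """S1125: look up the map drawing stream for the relationship."""
    return stream_library[relationship]

library = {"rel-a": ["frame0", "frame1"], "rel-b": ["frame2"]}
video_map = {"name": "m1", "partitions": [
    {"displacement": False, "relationship": "rel-a"},
    {"displacement": True,  "relationship": "rel-b"},
]}
stream = fetch_drawing_stream(resolve_relationship(video_map, {}), library)
```

When the relationship index already contains the map, the partition analysis is skipped entirely, matching the "if not found" condition of substep S1122.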
In a possible implementation of substep S113, the first map drawing stream of the first video playing map corresponding to the drawing superimposition information and the second map drawing stream of the at least one second video playing map having a drawing superimposition relationship with the first video playing map may be extracted from the virtual reality drawing record information generated during the virtual reality drawing process. The at least one second video playing map having a drawing superimposition relationship with the first video playing map may refer to a second video playing map having a linkage effect associated with the first video playing map.
For example, if another video playing map needs to be drawn in a superimposed manner during the drawing of a first video playing map, that map can be understood as a second video playing map having a drawing superimposition relationship with the first video playing map.
In one possible implementation, step S114 can be implemented by the following exemplary sub-steps, which are described in detail below.
Substep S1141, adding the first map drawing stream and the second map drawing stream to a preset immersive superimposition drawing queue, and establishing, based on the immersive superimposition drawing queue, a plurality of first immersive superimposition drawing parameters of the first map drawing stream and a plurality of second immersive superimposition drawing parameters of the second map drawing stream.
Substep S1142, determining first lens distortion information of the first video playing map according to each first immersive superimposition drawing parameter, and determining second lens distortion information of the second video playing map according to each second immersive superimposition drawing parameter. The first lens distortion information and the second lens distortion information are then mapped to a preset projection matrix to obtain a first field angle redrawing stream corresponding to the first lens distortion information and a second field angle redrawing stream corresponding to the second lens distortion information. Next, a plurality of virtual imaging pictures in the preset projection matrix are determined and summarized to obtain at least a plurality of virtual imaging sequences of different categories. For each virtual imaging sequence, the first field angle redrawing stream and the second field angle redrawing stream corresponding to each virtual imaging picture in that sequence are drawn in a preset virtual reality drawing process.
Substep S1143, splicing, in rendering order, the drawing results of the first field angle redrawing stream and the second field angle redrawing stream corresponding to each virtual imaging picture in the virtual imaging sequence to generate a simulated drawing stream, restoring the spliced simulated drawing stream according to the preset artificial intelligence model, and determining the complete virtual reality drawing information between the first video playing map and the at least one second video playing map.
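The enqueue-and-splice skeleton of substeps S1141 and S1143 can be sketched as follows. This is a hypothetical illustration; treating "splicing in rendering order" as a simple interleave of the two redraw streams is an assumption:

```python
# Hypothetical sketch of substeps S1141 and S1143; names are illustrative.
from collections import deque

def build_queue(first_stream, second_stream):
    """S1141: a preset immersive superimposition drawing queue holding
    both map drawing streams."""
    queue = deque()
    queue.append(("first", first_stream))
    queue.append(("second", second_stream))
    return queue

def splice(first_redraw, second_redraw):
    """S1143: interleave the two field angle redrawing streams in
    rendering order into a simulated drawing stream."""
    spliced = []
    for a, b in zip(first_redraw, second_redraw):
        spliced.extend([a, b])
    return spliced

queue = build_queue(["f1", "f2"], ["s1", "s2"])
simulated = splice(["f1", "f2"], ["s1", "s2"])
```

The restoration by the preset artificial intelligence model would then operate on `simulated`; that model is not specified by the patent and is omitted here.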
In this way, in the actual virtual reality experience the associated video playing map can be used, in a targeted manner, as an independent experience target for subsequent dynamic virtual experience.
Exemplarily, substep S1141 may be implemented by the following detailed embodiments, described below.
(1) Overlay rendering configuration information for the immersive overlay rendering queue is determined.
In this embodiment, the overlay drawing configuration information represents the immersive overlay drawing unit allocated by the immersive overlay drawing queue when it processes the sequentially added map drawing streams, and the immersive overlay drawing unit represents the drawing feature node information used when the immersive overlay drawing queue draws an added map drawing stream.
(2) Determining, based on the overlay drawing configuration information, first drawing feature node information corresponding to adding the first map drawing stream to the immersive overlay drawing queue, and second drawing feature node information corresponding to adding the second map drawing stream to the immersive overlay drawing queue.
(3) Determining, according to the first drawing feature node information and the second drawing feature node information, whether a drawing overlay exists when the first map drawing stream and the second map drawing stream are added to the immersive overlay drawing queue.
In this embodiment, a drawing overlay characterizes overlay synchronization behavior in the drawing of the immersive overlay drawing queue.
(4) If it is determined that no drawing overlay exists when the first and second map drawing streams are added to the immersive overlay drawing queue, adjusting the second drawing feature node information to obtain third drawing feature node information, and adding the first and second map drawing streams to the immersive overlay drawing queue based on the first and third drawing feature node information.
In this embodiment, the feature difference between the third drawing feature node information and the second drawing feature node information matches the feature difference between the first drawing feature node information and the second drawing feature node information.
(5) If it is determined that a drawing overlay exists when the first and second map drawing streams are added to the immersive overlay drawing queue, continuing to add the first and second map drawing streams to the immersive overlay drawing queue using the first and second drawing feature node information.
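Steps (4) and (5) amount to a conditional adjustment before enqueueing. The sketch below is hypothetical; in particular, interpreting the "matching feature difference" as shifting the second node list so its first node aligns with the first stream's is an assumption:

```python
# Hypothetical sketch of steps (4)/(5): adjust the second stream's feature
# nodes only when no drawing overlay exists.

def enqueue_streams(queue, first_nodes, second_nodes, has_overlay):
    """With no overlay, shift the second drawing feature nodes (producing
    the 'third drawing feature node information'); with an overlay,
    enqueue both unchanged."""
    if not has_overlay:
        offset = first_nodes[0] - second_nodes[0]
        second_nodes = [n + offset for n in second_nodes]
    queue.append(("first", first_nodes))
    queue.append(("second", second_nodes))
    return queue

q = enqueue_streams([], [10, 12], [4, 7], has_overlay=False)
```

With `has_overlay=True` the second node list passes through unchanged, matching step (5).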
In a possible implementation, still in substep S1141, the plurality of first immersive superimposition drawing parameters of the first map drawing stream and the plurality of second immersive superimposition drawing parameters of the second map drawing stream may be established based on the immersive superimposition drawing queue as described below.
(6) Determining, based on the immersive superimposition drawing queue, a first drawing node sequence of the first map drawing stream and a second drawing node sequence of the second map drawing stream.
It should be noted that a drawing node sequence may represent the drawing interaction relationships of a map drawing stream under different drawing nodes, for example, a transition drawing interaction relationship, an overlay drawing interaction relationship, an addition drawing interaction relationship, and the like, which are not specifically limited herein.
(7) Establishing, in the immersive superimposition drawing queue, the plurality of first immersive superimposition drawing parameters of the first map drawing stream and the plurality of second immersive superimposition drawing parameters of the second map drawing stream according to the first drawing node sequence and the second drawing node sequence, respectively.
In a possible implementation of substep S1142, in order to ensure synchronism and coherence and to facilitate subsequent observation, the following detailed implementation may be adopted, as described below.
(1) Determining a drawing node timing axis corresponding to each first immersive superimposition drawing parameter according to the plurality of drawing nodes in each first immersive superimposition drawing parameter and the drawing model collision parameter between every two adjacent drawing nodes.
(2) Determining the first lens distortion information of the first video playing map based on the drawing node timing axis.
Each drawing node in the first immersive superimposition drawing parameters is provided with a corresponding drawing model collision cycle parameter; the matching parameter between the drawing model collision cycle parameters of any two adjacent drawing nodes is taken as the corresponding drawing model collision parameter, and the drawing model collision cycle parameter is determined according to the drawing track of the drawing node in the first immersive superimposition drawing parameters.
(3) Listing the drawing node of each second immersive superimposition drawing parameter and the drawing model collision cycle parameter corresponding to that drawing node to obtain the first projection drawing object and the second projection drawing object corresponding to each second immersive superimposition drawing parameter.
For example, the first projection drawing object may be the projection drawing object corresponding to the drawing node of the second immersive superimposition drawing parameter, and the second projection drawing object may be the projection drawing object corresponding to the drawing model collision cycle parameter of the second immersive superimposition drawing parameter.
(4) Determining a first three-dimensional spatial relationship of the first projection drawing object relative to the second projection drawing object, and a second three-dimensional spatial relationship of the second projection drawing object relative to the first projection drawing object.
(5) Acquiring at least three target three-dimensional positions with the same spatial point continuity in the first three-dimensional spatial relationship and the second three-dimensional spatial relationship, and determining the second lens distortion information of the second immersive superimposition drawing parameters according to the target three-dimensional positions.
Illustratively, spatial point continuity characterizes the drawing model collision cycle relationship between every two three-dimensional positions.
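The timing-axis construction of step (1) can be sketched as a cumulative sum of pairwise collision parameters. This is an assumption about what "axis according to the collision parameter between every two adjacent drawing nodes" means; the `collision` callable here is a hypothetical stand-in:

```python
# Hypothetical sketch of step (1): build a drawing node timing axis from
# the collision parameter between each two adjacent drawing nodes.

def timing_axis(nodes, collision):
    """Accumulate pairwise collision parameters into axis positions,
    starting the first node at 0.0."""
    axis = [0.0]
    for a, b in zip(nodes, nodes[1:]):
        axis.append(axis[-1] + collision(a, b))
    return axis

# toy collision parameter: distance between node positions
axis = timing_axis([1, 3, 6], collision=lambda a, b: b - a)
```

Each node's axis coordinate is then available for deriving the lens distortion information of step (2).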
In a possible implementation, still referring to substep S1142, the process of summarizing the plurality of virtual imaging pictures to obtain at least a plurality of virtual imaging sequences of different categories may be implemented as described below.
(6) Determining the number of field angle redrawing streams corresponding to each virtual imaging picture in the preset projection matrix.
(7) Determining the category drawing interval of the field angle redrawing stream corresponding to each virtual imaging picture.
The category drawing interval may be the coincidence ratio of the first field angle redrawing stream and the second field angle redrawing stream within the field angle redrawing streams corresponding to each virtual imaging picture.
(8) Determining the vector stereo drawing information of the first field angle redrawing stream and the second field angle redrawing stream corresponding to each virtual imaging picture.
The vector stereo drawing information may be obtained by calculating vector angle feature values (e.g., grayscale feature values, mean feature values of RGB color values, etc.) of a set number of field angle redrawing pictures corresponding to the first and second field angle redrawing streams.
(9) Determining the frame feature sequence of each virtual imaging picture according to the number of field angle redrawing streams, the category drawing interval, and the vector stereo drawing information corresponding to that picture (i.e., a sequence formed, in order, by the number of field angle redrawing streams, the category drawing interval, and the vector stereo drawing information).
(10) Summarizing the virtual imaging pictures based on their frame feature sequences to obtain at least a plurality of virtual imaging sequences of different categories.
For example, virtual imaging pictures sharing at least one identical feature parameter among the feature parameters of the frame feature sequence can be summarized into the virtual imaging sequence of the category corresponding to that shared feature parameter, so as to obtain at least a plurality of virtual imaging sequences of different categories.
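Steps (9) and (10) can be sketched as feature extraction followed by grouping. For simplicity this hypothetical sketch groups pictures by their full frame feature sequence rather than by "at least one shared feature parameter", and all field names are invented:

```python
# Hypothetical sketch of steps (9)-(10); groups by full feature sequence.
from collections import defaultdict

def frame_feature_sequence(picture):
    """Step (9): (redraw stream count, category drawing interval,
    vector stereo drawing value), in that order."""
    return (picture["count"], picture["interval"], picture["vector"])

def summarize(pictures):
    """Step (10): gather pictures with identical feature sequences into
    one virtual imaging sequence per category."""
    sequences = defaultdict(list)
    for p in pictures:
        sequences[frame_feature_sequence(p)].append(p["name"])
    return dict(sequences)

pictures = [
    {"name": "p1", "count": 2, "interval": 0.5, "vector": 7},
    {"name": "p2", "count": 2, "interval": 0.5, "vector": 7},
    {"name": "p3", "count": 1, "interval": 0.9, "vector": 3},
]
sequences = summarize(pictures)
```

Grouping on a single shared parameter, as the text allows, would instead index each picture under every component of its feature sequence.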
In a possible implementation, still referring to substep S1142, the process of drawing, in the preset virtual reality drawing process, the first field angle redrawing stream and the second field angle redrawing stream corresponding to each virtual imaging picture in the virtual imaging sequence may be implemented as described below.
(11) And determining the superposition drawing configuration information of the frame feature sequence corresponding to each virtual imaging picture in each virtual imaging sequence.
(12) Determining, according to the superimposition drawing configuration information, the immersive superimposition drawing error of the first field angle redrawing stream and the second field angle redrawing stream corresponding to each virtual imaging picture in each summary.
The immersive superposition drawing error can be used for representing the drawing error condition of the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture.
(13) Judging whether the difference between each immersive superimposition drawing error and the reference drawing error corresponding to the virtual reality drawing process falls within a preset difference interval.
The preset difference value interval can be used for representing an interval where each immersive superposition drawing error is located when the virtual reality drawing process is in normal operation.
(14) When the difference between each immersive superimposition drawing error and the reference drawing error corresponding to the virtual reality drawing process falls within the preset difference interval, running the first field angle redrawing stream and the second field angle redrawing stream corresponding to each virtual imaging picture in the virtual imaging sequence based on the virtual reality drawing process.
(15) Otherwise, when a difference does not fall within the preset difference interval, modifying, according to the thread script of the virtual reality drawing process, the superimposition drawing configuration information corresponding to the immersive superimposition drawing error whose difference falls outside the interval, and returning to the step of determining the immersive superimposition drawing errors of the first and second field angle redrawing streams corresponding to each virtual imaging picture according to the superimposition drawing configuration information.
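The check-fix-recheck cycle of steps (13)-(15) is a simple convergence loop. The sketch below is hypothetical: the `fix` callable stands in for "modifying the superimposition drawing configuration information", and a bounded retry count is added so the loop cannot run forever:

```python
# Hypothetical sketch of steps (13)-(15): re-derive out-of-interval
# superimposition drawing errors until every difference to the reference
# falls inside the preset interval.

def check_and_fix(errors, reference, tolerance, fix, max_rounds=10):
    errors = list(errors)
    for _ in range(max_rounds):          # bounded instead of unbounded
        bad = [i for i, e in enumerate(errors)
               if abs(e - reference) > tolerance]
        if not bad:
            return errors                # step (14): all within interval
        for i in bad:                    # step (15): modify and re-check
            errors[i] = fix(errors[i])
    raise RuntimeError("superimposition drawing errors did not converge")

fixed = check_and_fix([0.2, 1.5], reference=0.0, tolerance=0.5,
                      fix=lambda e: e / 2)
```

Here the second error (1.5) is halved twice before its difference to the reference enters the interval, while the first error passes unchanged.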
In one possible implementation, step S140 may be implemented by the following exemplary sub-steps, which are described in detail below.
Substep S141, extracting, based on the interaction region and on the position transfer state vectors of the first interactive internet of things object and the second interactive internet of things object corresponding to the interaction region, a state vector with communication interaction switching characteristics from the position transfer state vectors of the interaction region.
In this embodiment, the state vector may be used to characterize the state change information of the at least one interaction region in the interactive rendering stream segment.
Substep S142, performing prediction processing on the extracted state vector to obtain at least one confidence of the at least one interaction region.
In this embodiment, each confidence represents the possibility that one interaction region belongs to one communication switching interaction region; communication switching interaction regions may be divided in the rendering scene in advance according to unit cells or functional service cells.
Substep S143, determining, based on the at least one confidence, the communication switching interaction region within the at least one interaction region of the interactive rendering stream segment.
Exemplarily, substep S141 may be implemented by the following detailed embodiments.
(1) Constructing, based on the position transfer state vectors of the interaction region, the first interactive internet of things object corresponding to the interaction region, and the second interactive internet of things object corresponding to the interaction region, a global state vector of the interaction region and at least one local state vector between the interaction region and at least one linkage internet of things object.
In this embodiment, the global state vector is used to represent spatial state change information of the interaction region in the interaction rendering stream segment, and one local state vector is used to represent relative state change information between the interaction region and one linkage internet of things object.
(2) Acquiring the state vector of the interaction region based on the global state vector and the at least one local state vector. For example, the global state vector may be fused with each local state vector to obtain the state vector of the interaction region.
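The fusion in step (2) can be sketched as below. The element-wise sum with per-local-vector concatenation is an illustrative assumption; the patent does not specify the fusion operator:

```python
# Hypothetical sketch of step (2): fuse the global state vector with each
# local state vector (element-wise sum, concatenated per local vector).

def fuse(global_vec, local_vecs):
    fused = []
    for local in local_vecs:
        fused.extend(g + l for g, l in zip(global_vec, local))
    return fused

state_vector = fuse([1, 2], [[10, 20], [100, 200]])
```

The resulting state vector carries both the global spatial change and each relative change against a linkage internet of things object.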
Exemplarily, substep S142 may be implemented by the following detailed embodiments.
(1) Extracting a plurality of pieces of candidate state vector unit information to be predicted from the extracted state vector, and acquiring a plurality of pieces of associated first communication behavior data and second communication behavior data from the candidate state vector unit information.
(2) And determining a communication switching perception connected graph according to the first communication behavior data and the second communication behavior data, and acquiring a plurality of switching suspected nodes of the communication switching perception connected graph.
For example, the starting point of the communication switching perception connected graph is a first communication behavior in the first communication behavior data, the end point is a second communication behavior in the second communication behavior data, and the switching suspected nodes correspond one-to-one to communication switching interaction regions.
(3) Obtaining the prediction artificial intelligence model corresponding to the communication switching perception connected graph. The prediction artificial intelligence model includes prediction nodes corresponding to the plurality of switching suspected nodes of the communication switching perception connected graph; its output value is determined by the communication switching perception connected graph and represents the confidence that the communication switching parameters of the switching suspected nodes satisfy the communication switching prediction condition.
(4) Predicting the communication switching perception connected graph through each prediction node of the prediction artificial intelligence model to obtain the prediction confidence corresponding to each prediction node, thereby obtaining the at least one confidence of the at least one interaction region.
For example, in substep S143, all interaction regions whose confidence is greater than a set confidence may be determined as communication switching interaction regions within the at least one interaction region.
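The thresholding of substep S143 reduces to a filter over the predicted confidences. A minimal sketch, with the 0.8 threshold and region names chosen purely for illustration:

```python
# Hypothetical sketch of substep S143: keep the regions whose predicted
# confidence exceeds the set confidence.

def communication_switch_regions(confidences, set_confidence=0.8):
    return [region for region, c in confidences.items()
            if c > set_confidence]

regions = communication_switch_regions({"lobby": 0.91, "office": 0.42})
```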
Fig. 3 is a schematic diagram of the functional modules of the information processing apparatus 300 based on internet of things interaction and intelligent communication according to an embodiment of the present disclosure. In this embodiment, the apparatus 300 may be divided into functional modules according to the method embodiment executed by the cloud computing platform 100; that is, the following functional modules may be used to execute the method embodiments executed by the cloud computing platform 100. The apparatus 300 may include an extraction module 310, an acquisition module 320, a first determination module 330, and a second determination module 340, whose functions are described in detail below.
The extraction module 310 is configured to extract, from each human-computer interaction device end 200, the complete virtual reality drawing information between video playing maps having a drawing superposition relationship, and to render each piece of complete virtual reality drawing information in the same corresponding candidate internet of things interaction scene to obtain an internet of things interaction rendering stream. The extraction module 310 may be configured to perform step S110, and its detailed implementation may refer to the detailed description of step S110.
The acquisition module 320 is configured to obtain each interactive rendering stream segment to be extracted from the internet of things interaction rendering stream, and to obtain a position transition state vector of at least one interaction region in those segments, where an interactive rendering stream segment is a rendering process segment to be extracted and each segment corresponds to one complete internet of things interaction process. The acquisition module 320 may be configured to perform step S120, and its detailed implementation may refer to the detailed description of step S120.
The first determination module 330 is configured to determine at least one first interactive internet of things object and at least one second interactive internet of things object corresponding to the at least one interaction region, where the first interactive internet of things object of any interaction region is an internet of things object within all interaction regions included in an interactive geometric space that takes the interaction region as a boundary and a target length as the interaction range, and the second interactive internet of things object is an internet of things object that interacts with the first interactive internet of things object. The first determination module 330 may be configured to perform step S130, and its detailed implementation may refer to the detailed description of step S130.
The second determination module 340 is configured to determine, based on the position transition state vectors of the interaction region, the at least one first interactive internet of things object, and the at least one second interactive internet of things object, a communication switching interaction region within the at least one interaction region in the interactive rendering stream segment, and to label the communication switching interaction region in the internet of things interaction rendering stream in the form of a reminder. The second determination module 340 may be configured to perform step S140, and its detailed implementation may refer to the detailed description of step S140.
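The logical four-module division can be pictured as the following sketch. The class name, method names, and placeholder bodies are assumptions for illustration; the real behavior of steps S110-S140 is what the patent text describes, not what these stubs compute.

```python
# Sketch of the four-module logical division of apparatus 300; method
# bodies are placeholders standing in for steps S110-S140.
class InfoProcessingApparatus:
    def extract(self, device_ends):
        """Extraction module 310 / step S110: one rendered stream per device end."""
        return [f"render({d})" for d in device_ends]

    def acquire(self, render_stream):
        """Acquisition module 320 / step S120: segment plus position state vector."""
        return [(segment, "position_transition_state_vector")
                for segment in render_stream]

    def determine_objects(self, regions):
        """First determination module 330 / step S130: first/second IoT objects."""
        return {r: ("first_iot_object", "second_iot_object") for r in regions}

    def determine_switch_regions(self, vectors, objects):
        """Second determination module 340 / step S140: label switch regions."""
        segments = {segment for segment, _ in vectors}
        return [r for r in objects if r in segments]

apparatus = InfoProcessingApparatus()
stream = apparatus.extract(["device_end_1", "device_end_2"])
vectors = apparatus.acquire(stream)
objects = apparatus.determine_objects([segment for segment, _ in vectors])
switch_regions = apparatus.determine_switch_regions(vectors, objects)
```

The point of the sketch is only the data flow between the four modules (310 → 320 → 330 → 340), matching the order in which the method steps consume each other's outputs.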
It should be noted that the above division into modules is only a logical division; in an actual implementation, the modules may be wholly or partially integrated into one physical entity, or physically separated. All of these modules may be implemented as software invoked by a processing element, all may be implemented in hardware, or some may be implemented as software invoked by the processing element and some in hardware. For example, the extraction module 310 may be a separate processing element, may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code that a processing element of the apparatus calls to execute the module's functions. The other modules are implemented similarly. In addition, all or some of the modules may be integrated together, or each may be implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each module above, may be completed by an integrated logic circuit of hardware in the processor element, or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can call program code. For another example, these modules may be integrated together and implemented in the form of a system-on-chip (SoC).
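The software-invoked case, where module code sits in memory and a general-purpose processing element calls it by name, can be sketched as a simple registry of entry points. The registry, decorator, and module names below are all assumptions used only to illustrate the dispatch idea.

```python
# Sketch: modules stored as program code in a registry and called by a
# general-purpose processing element; all names here are assumptions.
module_registry = {}

def register(name):
    """Store a module's entry point under its name, like code in memory."""
    def decorator(fn):
        module_registry[name] = fn
        return fn
    return decorator

@register("extraction_module_310")
def extraction(data):
    return f"extracted:{data}"

@register("acquisition_module_320")
def acquisition(data):
    return f"acquired:{data}"

def processing_element(module_name, payload):
    """The CPU 'calls and executes the function of the module' by name."""
    return module_registry[module_name](payload)

result = processing_element("extraction_module_310", "iot_render_stream")
```

A hardware implementation (ASIC/FPGA) would instead bind each registry entry to a dedicated logic circuit; the caller's view of "invoke module by name" stays the same.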
Fig. 4 illustrates a hardware structure diagram of the cloud computing platform 100 for implementing the control device provided in the embodiment of the present disclosure, and as shown in fig. 4, the cloud computing platform 100 may include a processor 110, a machine-readable storage medium 120, a bus 130, and a transceiver 140.
In a specific implementation, the at least one processor 110 executes computer-executable instructions stored in the machine-readable storage medium 120 (for example, the extraction module 310, the acquisition module 320, the first determination module 330, and the second determination module 340 of the information processing apparatus 300 shown in Fig. 3), so that the processor 110 can execute the information processing method based on internet of things interaction and intelligent communication of the above method embodiment. The processor 110, the machine-readable storage medium 120, and the transceiver 140 are connected through the bus 130, and the processor 110 may control the transceiving actions of the transceiver 140 to exchange data with the human-computer interaction device end 200.
For a specific implementation process of the processor 110, reference may be made to the above-mentioned method embodiments executed by the cloud computing platform 100, and implementation principles and technical effects thereof are similar, and details of this embodiment are not described herein again.
In the embodiment shown in Fig. 4, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the present disclosure may be embodied directly in a hardware processor, or in a combination of hardware and software modules within the processor.
The machine-readable storage medium 120 may comprise high-speed RAM and may also include non-volatile memory (NVM), such as at least one magnetic disk memory.
The bus 130 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus is drawn as a single line in the figures of the present application, but this does not mean that there is only one bus or only one type of bus.
In addition, an embodiment of the present disclosure further provides a readable storage medium storing computer-executable instructions which, when executed by a processor, implement the information processing method based on internet of things interaction and intelligent communication described above.
The readable storage medium described above may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (10)

1. An information processing method based on Internet of things interaction and intelligent communication is characterized by being applied to a cloud computing platform, wherein the cloud computing platform is in communication connection with a plurality of human-computer interaction equipment terminals, and the method comprises the following steps:
extracting complete virtual reality drawing information between video playing maps with drawing superposition relations from each human-computer interaction equipment terminal, and rendering each complete virtual reality drawing information in the same corresponding candidate Internet of things interaction scene to obtain Internet of things interaction rendering stream;
acquiring each interactive rendering stream fragment to be extracted in the internet of things interactive rendering stream, and acquiring a position transfer state vector of at least one interactive region in the interactive rendering stream fragments to be extracted, wherein the interactive rendering stream fragments are rendering process fragments to be extracted, and each interactive rendering stream fragment corresponds to one complete internet of things interaction process;
determining at least one first interactive internet of things object and at least one second interactive internet of things object which respectively correspond to the at least one interactive area, wherein the first interactive internet of things object of any interactive area is an internet of things object of all interactive areas included in an interactive geometric space which takes the interactive area as a boundary and takes a target length as an interactive range, and the second interactive internet of things object is an internet of things object which interacts with the first interactive internet of things object;
determining a communication switching interaction area in the at least one interaction area in the interaction rendering stream fragment based on the position transfer state vectors of the interaction area, the at least one first interaction internet-of-things object and the at least one second interaction internet-of-things object, and labeling the communication switching interaction area in the internet-of-things interaction rendering stream in a reminding manner;
the complete interaction process of the Internet of things refers to a process from the beginning to the end of an interaction service of the Internet of things; the position transition state vector refers to transition state change of a corresponding position area in the rendering process, and comprises transition state change of the rendering state in the process of transferring from one position area to another position area;
each Internet of things object is a system formed by one Internet of things device or a plurality of Internet of things devices;
and in the process of marking the communication switching interaction area in the Internet of things interaction rendering stream in a prompting mode, prompting is carried out based on a rendering prompting mode preset by a user.
2. The method for processing information based on internet of things interaction and intelligent communication according to claim 1, wherein the step of determining a communication switching interaction region in the at least one interaction region in the interactive rendering stream segment based on the position transition state vectors of the interaction region, the at least one first interactive internet of things object and the at least one second interactive internet of things object comprises:
extracting a state vector with communication interaction switching characteristics in the position transfer state vectors of the interaction areas based on the position transfer state vectors of the interaction areas, a first interaction Internet of things object corresponding to the interaction areas and a second interaction Internet of things object corresponding to the interaction areas, wherein the state vector is used for representing state change information of the at least one interaction area in the interaction rendering stream segment;
performing prediction processing on the extracted state vector to obtain at least one confidence coefficient of the at least one interaction region, wherein one confidence coefficient is used for representing the possibility that one interaction region belongs to one communication switching interaction region;
and determining a communication switching interaction area in the at least one interaction area in the interactive rendering stream segment based on the at least one confidence level.
3. The information processing method based on the internet of things interaction and intelligent communication of claim 2, wherein the step of extracting the state vector with the communication interaction switching feature from the position transition state vector of the interaction region based on the position transition state vectors of the interaction region, the first interaction internet of things object corresponding to the interaction region and the second interaction internet of things object corresponding to the interaction region comprises:
respectively constructing a global state vector of the interaction region and at least one local state vector between the interaction region and at least one linkage Internet of things object based on the position transition state vectors of the interaction region, a first interaction Internet of things object corresponding to the interaction region and a second interaction Internet of things object corresponding to the interaction region, wherein the global state vector is used for representing the spatial state change information of the interaction region in an interaction rendering stream segment, and one local state vector is used for representing the relative state change information between the interaction region and one linkage Internet of things object;
and acquiring the state vector of the interaction area based on the global state vector and the at least one local state vector.
4. The method for processing information based on internet of things interaction and intelligent communication according to claim 3, wherein the step of obtaining the state vector of the interaction region based on the global state vector and the at least one local state vector comprises:
and fusing the global state vector with the at least one local state vector respectively to obtain the state vector of the interaction region.
5. The method for processing information based on internet of things interaction and intelligent communication according to claim 2, wherein the step of performing prediction processing on the extracted state vector to obtain at least one confidence level of the at least one interaction region comprises:
extracting a plurality of candidate state vector unit information to be predicted from the extracted state vector, and acquiring a plurality of associated first communication behavior data and second communication behavior data from the plurality of candidate state vector unit information to be predicted;
determining a communication switching perception connection diagram according to the first communication behavior data and the second communication behavior data, and acquiring a plurality of switching suspected nodes of the communication switching perception connection diagram, wherein a starting point of the communication switching perception connection diagram refers to a first communication behavior of the first communication behavior data, an end point of the communication switching perception connection diagram refers to a second communication behavior of the second communication behavior data, and each switching suspected node and each communication switching interaction area have a one-to-one correspondence relation;
acquiring a prediction artificial intelligence model corresponding to the communication switching perception connected graph, wherein the prediction artificial intelligence model comprises prediction nodes corresponding to a plurality of switching suspected nodes of the communication switching perception connected graph, an output value of the prediction artificial intelligence model is determined by the communication switching perception connected graph, and the output value of the prediction artificial intelligence model is used for representing the confidence degree that the communication switching parameters of the plurality of switching suspected nodes meet communication switching prediction conditions;
and predicting the communication switching perception connected graph according to each prediction node of the prediction artificial intelligence model to obtain a prediction confidence corresponding to each prediction node so as to obtain at least one confidence of the at least one interaction region.
6. The method for processing information based on internet of things interaction and intelligent communication according to claim 2, wherein the step of determining a communication switching interaction region in the at least one interaction region in the interaction rendering stream segment based on the at least one confidence level comprises:
and determining all communication switching interaction areas with the confidence degrees larger than the set confidence degree as communication switching interaction areas in the at least one interaction area.
7. The information processing method based on the internet of things interaction and intelligent communication as claimed in claim 1, wherein the step of extracting complete virtual reality drawing information between each video playing map with drawing superposition relationship from each human-computer interaction device side comprises:
acquiring a virtual reality three-dimensional map of a candidate Internet of things interaction scene under a drawing pixel segment of each drawing layered component from each human-computer interaction equipment terminal, and dividing the virtual reality three-dimensional map under each drawing pixel segment in an interaction mode according to a preset interaction Internet of things form to respectively generate a map dividing sequence of each interaction Internet of things form;
aiming at each interactive Internet of things form, obtaining a mapping drawing stream corresponding to each video playing mapping in a mapping dividing sequence of the interactive Internet of things form, and performing virtual reality drawing on the mapping drawing stream corresponding to each video playing mapping;
judging whether drawing superposition information for representing that video playing maps have drawing superposition exists or not in the virtual reality drawing process, and extracting a first map drawing stream of a first video playing map corresponding to the drawing superposition information drawn by the virtual reality and a second map drawing stream of at least one second video playing map having drawing superposition relation with the first video playing map when the drawing superposition information is detected;
and determining complete virtual reality drawing information between the first video playing map and the at least one second video playing map according to a preset artificial intelligence model.
8. The information processing method based on the internet of things interaction and intelligent communication according to claim 7, wherein the step of obtaining the mapping drawing stream corresponding to each video playing mapping in the mapping division sequence in the form of the interactive internet of things comprises:
judging whether an Internet of things interaction relationship is established with each video playing map or not; the interactive relation of the internet of things is used for setting drawing services of a map drawing stream corresponding to video playing maps, each video playing map corresponds to one interactive relation of the internet of things, and the interactive modes of different interactive relations of the internet of things are different;
if the Internet of things interaction relation corresponding to each video playing map association is not obtained, map drawing source information of each video playing map is obtained; the mapping source information comprises a mapping source label corresponding to the video playing mapping, and the mapping source label is a mapping source label corresponding to a mapping stream generated by the video playing mapping;
analyzing and identifying each piece of mapping source information according to the vertex mapping character corresponding to that mapping source information, to obtain a plurality of vertex mapping partitions corresponding to each piece of mapping source information, and determining a target vertex mapping partition with displacement transformation information from the vertex mapping partitions corresponding to each piece of mapping source information; the displacement transformation information is a displacement transformation node, of the vertex mapping partition corresponding to the mapping source label, that represents the vertex mapping partition;
associating a depth map in a target vertex mapping partition corresponding to each video playing map with an internet of things interaction relation corresponding to each video playing map, wherein the internet of things interaction relation is determined according to the internet of things interaction relation corresponding to each depth virtual camera in the depth map in the target vertex mapping partition;
and acquiring a mapping drawing stream corresponding to each video playing mapping from a pre-configured mapping drawing stream library according to the Internet of things interaction relation corresponding to each video playing mapping, wherein the mapping drawing stream library comprises mapping drawing streams of each video playing mapping under different Internet of things interaction relations.
9. A cloud computing platform, characterized in that the cloud computing platform comprises a processor, a machine-readable storage medium, and a network interface, the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is used for being connected with at least one human-computer interaction device in a communication manner, the machine-readable storage medium is used for storing programs, instructions, or codes, and the processor is used for executing the programs, instructions, or codes in the machine-readable storage medium to execute the information processing method based on the internet of things interaction and intelligent communication according to any one of claims 1 to 8.
10. An information processing system based on Internet of things interaction and intelligent communication is characterized by comprising a cloud computing platform and a plurality of human-computer interaction equipment terminals in communication connection with the cloud computing platform;
the cloud computing platform is used for extracting complete virtual reality drawing information between video playing maps with drawing and overlapping relations from each human-computer interaction equipment terminal, and rendering each complete virtual reality drawing information in the same corresponding candidate Internet of things interaction scene to obtain Internet of things interaction rendering streams;
the cloud computing platform is used for acquiring each interactive rendering stream fragment to be extracted in the Internet of things interactive rendering stream and acquiring a position transfer state vector of at least one interaction area in the interactive rendering stream fragments to be extracted, wherein the interactive rendering stream fragments are rendering process fragments to be extracted, and each interactive rendering stream fragment corresponds to a complete interaction process of the Internet of things;
the cloud computing platform is used for determining at least one first interactive internet of things object and at least one second interactive internet of things object which correspond to the at least one interactive area respectively, wherein the first interactive internet of things object in any interactive area is an internet of things object in all interactive areas included in an interactive geometric space which takes the interactive area as a boundary and takes a target length as an interactive range, and the second interactive internet of things object is an internet of things object which interacts with the first interactive internet of things object;
the cloud computing platform is used for determining a communication switching interaction area in the at least one interaction area in the interaction rendering stream fragment based on the position transfer state vector of the interaction area, the at least one first interaction internet-of-things object and the at least one second interaction internet-of-things object, and marking the communication switching interaction area in the internet-of-things interaction rendering stream in a reminding manner;
the complete interaction process of the Internet of things refers to a process from the beginning to the end of an interaction service of the Internet of things; the position transition state vector refers to transition state change of a corresponding position area in the rendering process, and comprises transition state change of the rendering state in the process of transferring from one position area to another position area;
each Internet of things object is a system formed by one Internet of things device or a plurality of Internet of things devices;
and in the process of marking the communication switching interaction area in the Internet of things interaction rendering stream in a prompting mode, prompting is carried out based on a rendering prompting mode preset by a user.
CN202110176177.7A 2020-06-21 2020-06-21 Information processing method and system based on Internet of things interaction and intelligent communication Withdrawn CN112887418A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110176177.7A CN112887418A (en) 2020-06-21 2020-06-21 Information processing method and system based on Internet of things interaction and intelligent communication

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110176177.7A CN112887418A (en) 2020-06-21 2020-06-21 Information processing method and system based on Internet of things interaction and intelligent communication
CN202010569971.3A CN111787081B (en) 2020-06-21 2020-06-21 Information processing method based on Internet of things interaction and intelligent communication and cloud computing platform

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010569971.3A Division CN111787081B (en) 2020-06-21 2020-06-21 Information processing method based on Internet of things interaction and intelligent communication and cloud computing platform

Publications (1)

Publication Number Publication Date
CN112887418A true CN112887418A (en) 2021-06-01

Family

ID=72757102

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110176177.7A Withdrawn CN112887418A (en) 2020-06-21 2020-06-21 Information processing method and system based on Internet of things interaction and intelligent communication
CN202010569971.3A Active CN111787081B (en) 2020-06-21 2020-06-21 Information processing method based on Internet of things interaction and intelligent communication and cloud computing platform
CN202110176190.2A Withdrawn CN112887419A (en) 2020-06-21 2020-06-21 Information processing method, system and platform based on Internet of things interaction and intelligent communication

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202010569971.3A Active CN111787081B (en) 2020-06-21 2020-06-21 Information processing method based on Internet of things interaction and intelligent communication and cloud computing platform
CN202110176190.2A Withdrawn CN112887419A (en) 2020-06-21 2020-06-21 Information processing method, system and platform based on Internet of things interaction and intelligent communication

Country Status (1)

Country Link
CN (3) CN112887418A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112433655B (en) * 2020-12-04 2021-09-07 武汉迈异信息科技有限公司 Information flow interaction processing method based on cloud computing and cloud computing verification interaction center

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760106B (en) * 2016-03-08 2019-01-15 网易(杭州)网络有限公司 A kind of smart home device exchange method and device
CN107194832A (en) * 2017-05-02 2017-09-22 向军 A kind of intelligent tour scenic spot system based on mobile Internet and Internet of Things
US10338695B1 (en) * 2017-07-26 2019-07-02 Ming Chuan University Augmented reality edugaming interaction method
CN109448137B (en) * 2018-10-23 2023-01-10 网易(杭州)网络有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN110908556A (en) * 2019-10-24 2020-03-24 深圳传音控股股份有限公司 Interaction method, interaction device, mobile terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN111787081B (en) 2021-03-23
CN112887419A (en) 2021-06-01
CN111787081A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN111626817B (en) User portrait analysis method based on electronic commerce big data and artificial intelligence platform
CN109947967B (en) Image recognition method, image recognition device, storage medium and computer equipment
US10810430B2 (en) Augmented reality with markerless, context-aware object tracking
JP7475772B2 Image generation method, image generation device, computer device, and computer program
JP7461478B2 Method and related apparatus for occlusion handling in augmented reality applications using memory and device tracking
CN112016475A (en) Human body detection and identification method and device
CN111476875B (en) Smart building Internet of things object simulation method and building cloud server
CN111626816B (en) Image interaction information processing method based on e-commerce live broadcast and cloud computing platform
CN111787081B (en) Information processing method based on Internet of things interaction and intelligent communication and cloud computing platform
CN114697703A (en) Video data generation method and device, electronic equipment and storage medium
CN111787080B (en) Data processing method based on artificial intelligence and Internet of things interaction and cloud computing platform
CN111107264A (en) Image processing method, image processing device, storage medium and terminal
CN107633498A (en) Image dark-state Enhancement Method, device and electronic equipment
CN114004953A (en) Method and system for realizing reality enhancement picture and cloud server
CN117830305B (en) Object measurement method, device, equipment and medium
CN117726963A (en) Picture data processing method and device, electronic equipment and medium
KR20180075222A (en) Electric apparatus and operation method thereof
CN118102033A (en) Video processing method, apparatus and computer readable storage medium
CN116977519A (en) Method and device for testing gaze point rendering, computer equipment and storage medium
CN115345971A (en) Model texture switching method and device
CN115439845A (en) Image extrapolation method and device based on graph neural network, storage medium and terminal
CN115880605A (en) Test processing method and device
CN117237514A (en) Image processing method and image processing apparatus
CN117931019A (en) Element selection processing method, element selection processing device, computer equipment and storage medium
CN116983622A (en) Data processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210601