CN111787080B - Data processing method based on artificial intelligence and Internet of things interaction and cloud computing platform - Google Patents


Info

Publication number
CN111787080B
CN111787080B (granted publication of application CN202010569965.8A)
Authority
CN
China
Prior art keywords
immersive
internet
rendering
stream
things
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202010569965.8A
Other languages
Chinese (zh)
Other versions
CN111787080A (en)
Inventor
潘少喜 (Pan Shaoxi)
张伟 (Zhang Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Youyi Internet Technology Co., Ltd.
Original Assignee
Guangdong Youyi Internet Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Youyi Internet Technology Co., Ltd.
Priority to CN202011504703.XA, published as CN112532742A
Priority to CN202010569965.8A, published as CN111787080B
Priority to CN202011498992.7A, published as CN112565450A
Publication of CN111787080A
Application granted
Publication of CN111787080B
Legal status: Expired - Fee Related (current)
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The embodiment of the invention provides a data processing method and a cloud computing platform based on artificial intelligence and Internet of things interaction. The virtual reality three-dimensional maps under each rendering pixel segment are divided by interaction mode according to a predetermined interactive Internet of things form, so that the differences between interactive Internet of things forms are taken into account and rendering conflicts during rendering are reduced. In addition, the rendering result produced by immersive overlay rendering of each independent associated video playback map is processed independently, so that during the actual virtual reality experience the associated video playback maps can subsequently serve, in a targeted manner, as independent experience targets for dynamic virtual experience.

Description

Data processing method based on artificial intelligence and Internet of things interaction and cloud computing platform
Technical Field
The invention relates to the technical field of Internet of things, in particular to a data processing method and a cloud computing platform based on artificial intelligence and Internet of things interaction.
Background
With the rapid development of Internet of things technology, the Internet of things plays an increasingly important role. Virtual reality is among the most closely watched frontier technologies and is developing rapidly; virtual reality products, including hardware devices and content applications, are gradually entering the consumer market. For example, during a virtual reality experience within an Internet of things interaction, the map rendering stream of each video playback map (e.g., of a human-computer interaction terminal, a security terminal, or a mobile application terminal) is usually rendered in advance to facilitate subsequent dynamic virtual experience.
Traditional schemes usually ignore the differences between interactive Internet of things forms, which easily causes rendering conflicts during rendering. In addition, overlapping rendering effects may exist between different video playback maps, and such effects can further enrich the Internet of things experience: for example, for several Internet of things devices of the same functional form, the user can further experience the real-time interaction of the Internet of things. However, no scheme currently processes the rendering result of each individual associated video playback map independently, so during the actual virtual reality experience the associated video playback maps cannot subsequently serve, in a targeted manner, as independent experience targets for dynamic virtual experience.
Disclosure of Invention
To overcome at least the above defects in the prior art, the invention aims to provide a data processing method and a cloud computing platform based on artificial intelligence and Internet of things interaction. The virtual reality three-dimensional maps under each rendering pixel segment are divided by interaction mode according to a predetermined interactive Internet of things form, so that the differences between interactive Internet of things forms are taken into account and rendering conflicts during rendering are reduced. In addition, the rendering result produced by immersive overlay rendering of each independent associated video playback map is processed independently, so that during the actual virtual reality experience the associated video playback maps can subsequently serve, in a targeted manner, as independent experience targets for dynamic virtual experience.
In a first aspect, the invention provides a data processing method based on artificial intelligence and Internet of things interaction, applied to a cloud computing platform in communication connection with a plurality of human-computer interaction device terminals. The method comprises the following steps:
acquiring, from each human-computer interaction device terminal, a virtual reality three-dimensional map of a candidate Internet of things interaction scene under the rendering pixel segment of each rendering layered component, dividing the virtual reality three-dimensional maps under each rendering pixel segment by interaction mode according to a predetermined interactive Internet of things form, and generating a map division sequence for each interactive Internet of things form;

for each interactive Internet of things form, acquiring the map rendering stream corresponding to each video playback map in the map division sequence of that form, and performing virtual reality rendering on each such map rendering stream;

judging, during virtual reality rendering, whether rendering overlap information exists that indicates rendering overlap between video playback maps, and, when such information is detected, extracting a first map rendering stream of the first video playback map corresponding to the rendering overlap information and a second map rendering stream of at least one second video playback map that has a rendering overlap relationship with the first video playback map; and

determining complete virtual reality rendering information between the first video playback map and the at least one second video playback map according to a preset artificial intelligence model.
In a possible implementation of the first aspect, the step of acquiring the map rendering stream corresponding to each video playback map in the map division sequence of the interactive Internet of things form includes:

judging whether an Internet of things interaction relationship has been established for each video playback map, where the Internet of things interaction relationship is used to set the rendering service of the map rendering stream corresponding to a video playback map, each video playback map corresponds to one Internet of things interaction relationship, and different Internet of things interaction relationships have different interaction modes;

if no associated Internet of things interaction relationship is obtained for a video playback map, acquiring the map rendering source information of each video playback map, where the map source information includes the map source label corresponding to the video playback map, namely the map source label corresponding to the map rendering stream generated by that map;

analyzing and identifying each piece of map source information according to its corresponding vertex map character to obtain at least a plurality of vertex map partitions for each piece of map source information, and determining among them the target vertex map partition that carries displacement transformation information, where the displacement transformation information is the displacement transformation node of the vertex map partition corresponding to the map source label;

associating each video playback map with its corresponding Internet of things interaction relationship according to the depth map in its target vertex map partition, where the relationship is determined from the Internet of things interaction relationship corresponding to each depth virtual camera in that depth map; and

acquiring the map rendering stream corresponding to each video playback map from a preconfigured map rendering stream library according to the associated Internet of things interaction relationship, where the library contains the map rendering streams of each video playback map under different Internet of things interaction relationships.
In a possible implementation of the first aspect, the step of determining complete virtual reality rendering information between the first video playback map and the at least one second video playback map according to a preset artificial intelligence model includes:

adding the first and second map rendering streams to a preset immersive overlay rendering queue, and establishing, based on that queue, a plurality of first immersive overlay rendering parameters for the first map rendering stream and a plurality of second immersive overlay rendering parameters for the second map rendering stream;

determining first lens distortion information of the first video playback map from each first immersive overlay rendering parameter and second lens distortion information of the second video playback map from each second immersive overlay rendering parameter; mapping the first and second lens distortion information onto a preset projection matrix to obtain a first field-of-view redraw stream corresponding to the first lens distortion information and a second field-of-view redraw stream corresponding to the second lens distortion information; determining a plurality of virtual imaging frames in the preset projection matrix and summarizing them into at least a plurality of virtual imaging sequences of different classes; and, for each virtual imaging sequence, rendering the first and second field-of-view redraw streams corresponding to each virtual imaging frame in that sequence in a preset virtual reality rendering process; and

splicing the rendering results of the first and second field-of-view redraw streams corresponding to each virtual imaging frame in the virtual imaging sequence according to the render order to generate a simulated rendering stream, restoring the spliced simulated rendering stream with the preset artificial intelligence model, and thereby determining the complete virtual reality rendering information between the first video playback map and the at least one second video playback map.
In a possible implementation of the first aspect, the step of adding the first and second map rendering streams to a preset immersive overlay rendering queue includes:

determining the overlay rendering configuration information of the immersive overlay rendering queue, where the configuration information represents the immersive overlay rendering unit allocated when the queue processes added map rendering streams, and that unit represents the rendering feature node information the queue uses when rendering added streams;

determining, based on the overlay rendering configuration information, first rendering feature node information for adding the first map rendering stream to the queue and second rendering feature node information for adding the second map rendering stream to the queue;

determining, from the first and second rendering feature node information, whether rendering overlap exists when the first and second map rendering streams are added to the queue, where rendering overlap represents overlapping synchronization behavior in the queue's rendering;

if not, adjusting the second rendering feature node information to obtain third rendering feature node information, and adding the first and second map rendering streams to the queue based on the first and third rendering feature node information, where the feature difference between the third and second rendering feature node information matches the feature difference between the first and second rendering feature node information; and

if so, continuing to use the first and second rendering feature node information to add the first and second map rendering streams to the queue.
In a possible implementation of the first aspect, the step of establishing, based on the immersive overlay rendering queue, the plurality of first immersive overlay rendering parameters of the first map rendering stream and the plurality of second immersive overlay rendering parameters of the second map rendering stream includes:

determining a first rendering node sequence of the first map rendering stream and a second rendering node sequence of the second map rendering stream based on the queue, where a rendering node sequence represents the rendering interaction relationships of a map rendering stream under different rendering nodes; and

establishing in the queue, according to the first and second rendering node sequences respectively, the plurality of first immersive overlay rendering parameters of the first map rendering stream and the plurality of second immersive overlay rendering parameters of the second map rendering stream.
In a possible implementation of the first aspect, the step of determining the first lens distortion information of the first video playback map from each first immersive overlay rendering parameter and the second lens distortion information of the second video playback map from each second immersive overlay rendering parameter includes:

determining a rendering node timing axis for each first immersive overlay rendering parameter according to the plurality of rendering nodes in that parameter and the rendering model collision parameters between every two adjacent rendering nodes;

determining the first lens distortion information of the first video playback map based on the rendering node timing axis, where each rendering node in the first immersive overlay rendering parameters carries a rendering model collision cycle parameter, the matching parameter between the collision cycle parameters of any two nodes serves as the corresponding rendering model collision parameter, and the collision cycle parameter is determined from the rendering track of the node within the first immersive overlay rendering parameters;

listing the rendering nodes of each second immersive overlay rendering parameter together with their rendering model collision cycle parameters to obtain a first projected rendering object and a second projected rendering object for each second immersive overlay rendering parameter, where the first projected rendering object corresponds to the rendering nodes of a second immersive overlay rendering parameter and the second projected rendering object corresponds to its rendering model collision cycle parameters;

determining a first three-dimensional spatial relationship of the first projected rendering object relative to the second projected rendering object, and a second three-dimensional spatial relationship of the second projected rendering object relative to the first projected rendering object; and

acquiring at least three target three-dimensional positions with the same spatial point continuity in the first and second three-dimensional spatial relationships, and determining the second lens distortion information of the second immersive overlay rendering parameter from these target positions, where spatial point continuity characterizes the rendering model collision cycle relationship between every two three-dimensional positions.
In a possible implementation of the first aspect, the step of summarizing the plurality of virtual imaging frames into at least a plurality of virtual imaging sequences of different classes includes the following (an illustrative sketch follows this list):

determining the number of field-of-view redraw streams corresponding to each virtual imaging frame in the preset projection matrix;

determining the class rendering interval of the field-of-view redraw streams of each virtual imaging frame, namely the coincidence ratio between the first and second field-of-view redraw streams among the redraw streams of that frame;

determining the vector stereo rendering information of the first and second field-of-view redraw streams of each virtual imaging frame, obtained by computing the vector angle eigenvalues of a set number of field-of-view redraw frames corresponding to those streams;

determining the frame feature sequence of each virtual imaging frame from its number of field-of-view redraw streams, its class rendering interval, and its vector stereo rendering information; and

summarizing the virtual imaging frames based on their frame feature sequences to obtain the at least a plurality of virtual imaging sequences of different classes.
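As a rough illustration only (the patent discloses no data structures or code), the grouping step above might be sketched as follows. The dataclass fields and the exact-match grouping key are assumptions of this sketch, not the claimed implementation.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class FrameFeatures:
    """Hypothetical frame feature sequence of one virtual imaging frame."""
    redraw_stream_count: int      # number of field-of-view redraw streams
    category_interval: float      # coincidence ratio of first/second redraw streams
    vector_stereo_signature: int  # stand-in for the vector angle eigenvalues

def summarize_frames(frames: dict[str, FrameFeatures]) -> list[list[str]]:
    """Group frames whose feature sequences match into one virtual imaging
    sequence; distinct feature tuples yield distinct classes."""
    classes: dict[FrameFeatures, list[str]] = defaultdict(list)
    for frame_id, feats in frames.items():
        classes[feats].append(frame_id)
    return list(classes.values())
```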
In a possible implementation of the first aspect, the step of rendering, in a preset virtual reality rendering process, the first and second field-of-view redraw streams corresponding to each virtual imaging frame in the virtual imaging sequence includes the following (a sketch of the control loop follows this list):

determining the overlay rendering configuration information of the frame feature sequence corresponding to each virtual imaging frame in each virtual imaging sequence;

determining, from that configuration information, the immersive overlay rendering error of the first and second field-of-view redraw streams corresponding to each virtual imaging frame in each summary, where the error characterizes the rendering error condition of those two streams;

judging whether the difference between each immersive overlay rendering error and the reference rendering error of the virtual reality rendering process falls within a preset difference interval, where the interval represents where each immersive overlay rendering error lies when the rendering process runs normally;

when every such difference falls within the preset interval, running the first and second field-of-view redraw streams corresponding to each virtual imaging frame in the virtual imaging sequence in the virtual reality rendering process; and

otherwise, modifying, according to the thread script of the virtual reality rendering process, the overlay rendering configuration information of each immersive overlay rendering error whose difference falls outside the preset interval, and returning to the step of determining the immersive overlay rendering errors from the configuration information.
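A minimal sketch of this check-and-retry loop, assuming the error computation, rendering, and configuration revision are supplied as callables; `compute_error`, `render`, and `revise` are hypothetical names, not terms from the patent.

```python
def run_with_error_check(sequence, config, reference_error, interval,
                         compute_error, render, revise):
    """Hypothetical control loop for the check-and-retry behaviour above."""
    lo, hi = interval  # preset difference interval for normal operation
    while True:
        errors = {frame: compute_error(frame, config) for frame in sequence}
        bad = {f for f, e in errors.items()
               if not (lo <= e - reference_error <= hi)}
        if not bad:
            # every difference falls inside the interval: run the redraw streams
            return [render(frame, config) for frame in sequence]
        # otherwise revise the overlay rendering configuration and re-check
        config = revise(config, bad)
```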
In a second aspect, an embodiment of the invention further provides a data processing apparatus based on artificial intelligence and Internet of things interaction, applied to a cloud computing platform in communication connection with a plurality of human-computer interaction device terminals. The apparatus includes:

an acquisition module, configured to acquire, from each human-computer interaction device terminal, a virtual reality three-dimensional map of a candidate Internet of things interaction scene under the rendering pixel segment of each rendering layered component, divide the virtual reality three-dimensional maps under each rendering pixel segment by interaction mode according to a predetermined interactive Internet of things form, and generate a map division sequence for each interactive Internet of things form;

a rendering module, configured to acquire, for each interactive Internet of things form, the map rendering stream corresponding to each video playback map in the map division sequence of that form, and perform virtual reality rendering on each such map rendering stream;

an extraction module, configured to judge, during virtual reality rendering, whether rendering overlap information exists that indicates rendering overlap between video playback maps, and, when such information is detected, extract the first map rendering stream of the first video playback map corresponding to the rendering overlap information and the second map rendering stream of at least one second video playback map having a rendering overlap relationship with the first video playback map; and

a determination module, configured to determine complete virtual reality rendering information between the first video playback map and the at least one second video playback map according to a preset artificial intelligence model.
In a third aspect, an embodiment of the invention further provides a data processing system based on artificial intelligence and Internet of things interaction, which includes a cloud computing platform and a plurality of human-computer interaction device terminals in communication connection with the cloud computing platform;

the human-computer interaction device terminal is configured to send to the cloud computing platform a virtual reality three-dimensional map of a candidate Internet of things interaction scene under the rendering pixel segment of each rendering layered component;

the cloud computing platform is configured to acquire, from each human-computer interaction device terminal, the virtual reality three-dimensional map of the candidate Internet of things interaction scene under the rendering pixel segment of each rendering layered component, divide the virtual reality three-dimensional maps under each rendering pixel segment by interaction mode according to a predetermined interactive Internet of things form, and generate a map division sequence for each interactive Internet of things form;

the cloud computing platform is configured to acquire, for each interactive Internet of things form, the map rendering stream corresponding to each video playback map in the map division sequence of that form, and perform virtual reality rendering on each such map rendering stream;

the cloud computing platform is configured to judge, during virtual reality rendering, whether rendering overlap information exists that indicates rendering overlap between video playback maps, and, when such information is detected, extract the first map rendering stream of the first video playback map corresponding to the rendering overlap information and the second map rendering stream of at least one second video playback map having a rendering overlap relationship with the first video playback map; and

the cloud computing platform is configured to determine complete virtual reality rendering information between the first video playback map and the at least one second video playback map according to a preset artificial intelligence model.
In a fourth aspect, an embodiment of the invention further provides a cloud computing platform including a processor, a machine-readable storage medium, and a network interface connected through a bus system. The network interface is configured for communication connection with at least one human-computer interaction device terminal, the machine-readable storage medium is configured to store programs, instructions, or code, and the processor is configured to execute the programs, instructions, or code in the machine-readable storage medium so as to perform the data processing method based on artificial intelligence and Internet of things interaction of the first aspect or any possible design thereof.

In a fifth aspect, an embodiment of the invention provides a computer-readable storage medium storing instructions that, when executed, cause a computer to perform the data processing method based on artificial intelligence and Internet of things interaction of the first aspect or any possible design thereof.
Based on any of the above aspects, the virtual reality three-dimensional maps under each rendering pixel segment are divided by interaction mode according to a predetermined interactive Internet of things form, so that the differences between interactive Internet of things forms are taken into account and rendering conflicts during rendering are reduced. In addition, the rendering result produced by immersive overlay rendering of each independent associated video playback map is processed independently, so that during the actual virtual reality experience the associated video playback maps can subsequently serve, in a targeted manner, as independent experience targets for dynamic virtual experience.
Drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the invention and therefore should not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic view of an application scenario of a data processing system based on artificial intelligence and internet of things interaction according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a data processing method based on artificial intelligence and internet of things interaction according to an embodiment of the present invention;
fig. 3 is a schematic functional module diagram of a data processing apparatus based on artificial intelligence and internet of things interaction according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of structural components of a cloud computing platform for implementing the data processing method based on artificial intelligence and internet of things interaction according to the embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the drawings; the specific operation methods in the method embodiments can also be applied to the apparatus or system embodiments.
FIG. 1 is an interaction diagram of a data processing system 10 based on artificial intelligence and Internet of things interaction according to an embodiment of the invention. The system 10 may include a cloud computing platform 100 and a human-computer interaction device terminal 200 in communication connection with the cloud computing platform 100. The system 10 shown in FIG. 1 is only one possible example; in other possible embodiments it may include only some of the components shown in FIG. 1, or further components.
In this embodiment, the human-computer interaction device terminal 200 may include a mobile device, a tablet computer, a laptop computer, or any combination thereof. In some embodiments, the mobile device may include an Internet of things device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the Internet of things device may include a control device of a smart appliance, a smart monitoring device, a smart television, a smart camera, or the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, or the like, or any combination thereof. In some embodiments, the virtual reality device and the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality eye mask, an augmented reality helmet, augmented reality glasses, an augmented reality eye mask, or the like, or any combination thereof; for example, they may include various virtual reality products.
In this embodiment, the cloud computing platform 100 and the human-computer interaction device terminal 200 in the system 10 may cooperatively execute the data processing method based on artificial intelligence and Internet of things interaction described in the following method embodiment; for the steps executed by each, refer to the detailed description of that embodiment.
In this embodiment, the system 10 may be applied in various application scenarios, for example blockchain, smart home, and intelligent control scenarios.
To solve the technical problem described in the background, FIG. 2 is a flow diagram of the data processing method based on artificial intelligence and Internet of things interaction according to an embodiment of the invention. The method of this embodiment may be executed by the cloud computing platform 100 shown in FIG. 1 and is described in detail below.
Step S110: acquire, from each human-computer interaction device terminal 200, a virtual reality three-dimensional map of a candidate Internet of things interaction scene under the rendering pixel segment of each rendering layered component, divide the virtual reality three-dimensional maps under each rendering pixel segment by interaction mode according to a predetermined interactive Internet of things form, and generate a map division sequence for each interactive Internet of things form.
A rendering layered component may correspond to one layer of a rendering scene: a rendering scene generally includes a plurality of rendering layers, such as a real scene layer and a menu option layer, and each rendering layer can be continuously controlled and executed by its corresponding rendering layered component.
The virtual reality three-dimensional map can represent the concrete entity rendering model displayed by the candidate Internet of things interaction scene under the rendering pixel segment of each rendering layered component. For example, a virtual world may be created on a computer using three-dimensional animation software (such as 3ds Max, Maya, or Houdini); three-dimensional models such as scenes and three-dimensional cartoon characters are then added to this virtual world; finally, animation parameters such as the models' animation curves and the virtual camera's motion track are set, dynamic maps are rendered, and the dynamic maps are collected for later use in the virtual reality rendering process.
In this embodiment, the predetermined interactive Internet of things form can be chosen flexibly according to actual design requirements, for example an office-collaboration Internet of things form or a shopping-mall-experience Internet of things form, which is not limited here.
Step S120: for each interactive Internet of things form, acquire the map rendering stream corresponding to each video playback map in the map division sequence of that form, and perform virtual reality rendering on each such map rendering stream.
Step S130: during virtual reality rendering, judge whether rendering overlap information exists that indicates rendering overlap between video playback maps; when such information is detected, extract the first map rendering stream of the first video playback map corresponding to the rendering overlap information and the second map rendering stream of at least one second video playback map having a rendering overlap relationship with the first video playback map.
Step S140: determine complete virtual reality rendering information between the first video playback map and the at least one second video playback map according to a preset artificial intelligence model.
Based on the above steps, this embodiment divides the virtual reality three-dimensional maps under each rendering pixel segment according to the predetermined interactive Internet of things form, thereby taking the differences between interactive Internet of things forms into account and reducing rendering conflicts during rendering. The overall flow of steps S110 to S140 can be pictured roughly as in the sketch below.
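A minimal orchestration sketch, assuming helper callables the patent describes only in prose; none of the names (`collect_maps`, `divide_by_interaction`, `fetch_render_stream`, `render_and_detect_overlap`) come from the source.

```python
def process(terminals, iot_forms, ai_model,
            collect_maps, divide_by_interaction,
            fetch_render_stream, render_and_detect_overlap):
    """Hypothetical end-to-end flow of steps S110-S140."""
    maps = collect_maps(terminals)                                   # step S110
    sequences = {form: divide_by_interaction(maps, form) for form in iot_forms}
    complete_info = {}
    for form, division_sequence in sequences.items():
        streams = {m: fetch_render_stream(m) for m in division_sequence}  # S120
        for first, seconds in render_and_detect_overlap(streams):         # S130
            complete_info[first] = ai_model.restore(                      # S140
                streams[first], [streams[s] for s in seconds])
    return complete_info
```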
In a possible implementation, for step S110, to improve the division accuracy of the interaction mode and reduce redundant information, this embodiment may acquire the interactive Internet of things material corresponding to each predetermined interactive Internet of things form, form the interactive Internet of things material sequence of each predetermined form, and obtain, for each rendering pixel segment, the associated material information between each target interactive Internet of things material and the materials of the material sequence.
On this basis, the set interval of the key interactive Internet of things materials in each target interactive Internet of things form can be calculated from that associated material information, and materials can be selected from the material sequence according to this set interval, yielding an initial interactive Internet of things material distribution space.
In one possible example, if the total material-distribution set interval of the initial distribution space is greater than the maximum total interval required, the first key interactive Internet of things materials in the initial distribution space are dispersed by a first distribution set interval, and the second key interactive Internet of things materials are likewise dispersed by the first distribution set interval.
It should be noted that a second key material is one whose unit intensity within its Internet of things partition is less than a set intensity, while a first key material is one whose unit intensity is not less than the set intensity. The first distribution set interval can be set according to actual needs, but should not differ too much from the maximum total material-distribution set interval required.
The total material-distribution set interval of the adjusted initial distribution space is then recalculated; if it is still greater than the maximum total interval, the above processing is applied to the adjusted space again.
For another example, if the total material-distribution set interval of the initial distribution space after the current adjustment is less than or equal to the maximum total interval, the initial distribution space before the current adjustment is taken as the first adjusted distribution space, and the target interactive Internet of things forms are sorted from low to high priority to obtain a target interactive Internet of things form sequence.
On this basis, the virtual reality three-dimensional maps under each rendering pixel segment can be divided by interaction mode according to the target interactive Internet of things form sequence, generating the map division sequence of each interactive Internet of things form.
In detail, the target interactive Internet of things forms may be clustered according to the target form sequence; each cluster contains a first interactive Internet of things form and a second interactive Internet of things form that relate to the interaction range of the target form sequence and share the same range difference with that interaction range, the first form's priority being lower than the second's.
Then, in order of that range difference from low to high priority, each cluster is taken in turn as the target cluster and the following second adjustment is applied to it: increase the set number of key materials of the target cluster's first interactive Internet of things form in the first adjusted distribution space, and decrease the set number of key materials of its second interactive Internet of things form in that space.
On this basis, it can be judged whether the total material-distribution set interval of the adjusted first adjusted distribution space exceeds the total-interval requirement. If so, the adjusted space is taken as the final interactive Internet of things material distribution space; if not, the next cluster is taken as the new target cluster and the second adjustment is applied to it.
For another example, if the total material-distribution set interval of the initial distribution space is smaller than the minimum total interval required, the following third adjustment is applied to the initial distribution space: increase the first key materials in the space by a first distribution set interval, and decrease the second key materials by the same interval.
The total interval of the adjusted space is then recalculated; if it is still smaller than the minimum total interval, the third adjustment is applied to the adjusted space again. Otherwise, if the adjusted total interval is greater than or equal to the minimum, the space before the current adjustment is taken as the second adjusted distribution space, and the target interactive Internet of things forms are sorted from low to high priority to obtain the target form sequence.
The target interactive Internet of things forms can thus be clustered according to the target form sequence, each cluster again containing a first and a second interactive Internet of things form that relate to the interaction range of the target form sequence and share the same range difference with it, the first form's priority being lower than the second's.
Then, in order of the range difference from low to high priority, each cluster is taken in turn as the target cluster and the following fourth adjustment is applied: decrease the set number of key materials of the target cluster's first interactive Internet of things form in the second adjusted distribution space, and increase the set number of key materials of its second form in that space.
Further, it is judged whether the total material-distribution set interval of the adjusted second adjusted distribution space exceeds the total-interval requirement. If so, the adjusted space is taken as the final material distribution space; if not, the next cluster becomes the new target cluster and the fourth adjustment is applied to it.
In this way, the virtual reality three-dimensional maps of the interactive Internet of things materials in each target form's final material distribution space can be classified into that form's map division sequence. The adjust-and-check pattern used above is sketched below.
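A heavily simplified, hypothetical sketch of the adjust-until-crossing-then-roll-back pattern described above; the measurement and adjustment operations are supplied as callables, and the cluster-based second and fourth adjustments are omitted for brevity. Names and control flow are this sketch's assumptions, not the patent's wording.

```python
def adjust_distribution_space(space, max_total, min_total,
                              measure, adjust_down, adjust_up):
    """measure() returns the total material-distribution set interval."""
    while measure(space) > max_total:
        candidate = adjust_down(space)        # disperse key materials by the
        if measure(candidate) <= max_total:   # first distribution set interval
            return space                      # keep pre-crossing space: the
        space = candidate                     # "first adjusted distribution space"
    while measure(space) < min_total:
        candidate = adjust_up(space)          # third adjustment: grow first key
        if measure(candidate) >= min_total:   # materials, shrink second ones
            return space                      # "second adjusted distribution space"
        space = candidate
    return space
```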
In a possible implementation, considering that for step S120 part of the video playback maps may be added after adjustment, step S120 may also be implemented through the following exemplary sub-steps.
Sub-step S121: judge whether an Internet of things interaction relationship has been established for each video playback map.
In this embodiment, the Internet of things interaction relationship may be used to set the rendering service of the map rendering stream corresponding to a video playback map; each video playback map corresponds to one Internet of things interaction relationship, and different relationships have different interaction modes.
Sub-step S122: if no associated Internet of things interaction relationship exists for a video playback map, acquire the map rendering source information of each video playback map.
In this embodiment, the map source information includes the map source label corresponding to the video playback map, namely the map source label corresponding to the map rendering stream generated by that map.
Sub-step S123: analyze and identify each piece of map source information according to its corresponding vertex map character to obtain at least a plurality of vertex map partitions for each piece of map source information, and determine among them the target vertex map partition carrying displacement transformation information.
In this embodiment, the displacement transformation information is the displacement transformation node characterizing the vertex map partition corresponding to the map source label.
Sub-step S124: associate each video playback map with its corresponding Internet of things interaction relationship according to the depth map in its target vertex map partition.
In this embodiment, the Internet of things interaction relationship is determined from the relationship corresponding to each depth virtual camera in that depth map.
Sub-step S125: acquire the map rendering stream corresponding to each video playback map from a preconfigured map rendering stream library according to its associated Internet of things interaction relationship.
In this embodiment, the map rendering stream library contains the map rendering streams of each video playback map under different Internet of things interaction relationships. An illustrative sketch of sub-steps S121 to S125 follows.
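A hypothetical sketch of sub-steps S121 to S125; the three injected callables stand in for analysis logic the patent leaves abstract, and the attribute names (`map_id`, `source_info`, `depth_map`) are assumptions of this sketch.

```python
def get_render_streams(maps, relations, stream_library,
                       analyze_source, find_target_partition, derive_relation):
    """Look up each playback map's rendering stream via its IoT relation."""
    streams = {}
    for playback_map in maps:
        rel = relations.get(playback_map.map_id)          # S121: relation known?
        if rel is None:
            info = playback_map.source_info               # S122: map source info
            partitions = analyze_source(info)             # S123: vertex map partitions
            target = find_target_partition(partitions)    #   one with displacement info
            rel = derive_relation(target.depth_map)       # S124: from depth cameras
            relations[playback_map.map_id] = rel
        # S125: look the stream up in the preconfigured map rendering stream library
        streams[playback_map.map_id] = stream_library[(playback_map.map_id, rel)]
    return streams
```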
In a possible implementation, for step S130, the first map rendering stream of the first video playback map corresponding to the rendering overlap information and the second map rendering stream of the at least one second video playback map having a rendering overlap relationship with it may be extracted from the virtual reality rendering record information generated during the virtual reality rendering process. The at least one second video playback map having a rendering overlap relationship with the first map may refer to a second video playback map having a linkage effect associated with the first map.
For example, if some video playback map must be rendered in overlay during the rendering of a first video playback map, that map can be understood as a second video playback map having a rendering overlap relationship with the first. A sketch of this extraction follows.
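A hypothetical sketch for step S130: walk the virtual reality rendering record and, for every overlap entry, collect the first map's stream plus the streams of the second maps linked to it. The record entry fields (`kind`, `first_map`, `linked_maps`) are assumptions, not terms from the patent.

```python
def extract_overlapping_streams(render_log, streams):
    """Pair each first map's rendering stream with its linked second streams."""
    results = []
    for entry in render_log:
        if entry.kind != "overlap":          # only rendering overlap information
            continue
        first_stream = streams[entry.first_map]
        second_streams = [streams[m] for m in entry.linked_maps]
        results.append((first_stream, second_streams))
    return results
```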
In one possible implementation, step S140 may be implemented through the following exemplary sub-steps.
Sub-step S141: add the first and second map rendering streams to a preset immersive overlay rendering queue, and establish, based on that queue, a plurality of first immersive overlay rendering parameters of the first map rendering stream and a plurality of second immersive overlay rendering parameters of the second map rendering stream.
Sub-step S142: determine the first lens distortion information of the first video playback map from each first immersive overlay rendering parameter and the second lens distortion information of the second video playback map from each second immersive overlay rendering parameter; map the first and second lens distortion information onto a preset projection matrix to obtain a first field-of-view redraw stream corresponding to the first lens distortion information and a second field-of-view redraw stream corresponding to the second; determine a plurality of virtual imaging frames in the preset projection matrix and summarize them into at least a plurality of virtual imaging sequences of different classes; and, for each virtual imaging sequence, render the first and second field-of-view redraw streams corresponding to each virtual imaging frame in the sequence in a preset virtual reality rendering process.
Sub-step S143: splice the rendering results of the first and second field-of-view redraw streams corresponding to each virtual imaging frame in the virtual imaging sequence according to the render order to generate a simulated rendering stream, restore the spliced simulated rendering stream with the preset artificial intelligence model, and determine the complete virtual reality rendering information between the first video playback map and the at least one second video playback map.
In this way, during the actual virtual reality experience the associated video playback maps can subsequently serve, in a targeted manner, as independent experience targets for dynamic virtual experience. A sketch of sub-step S143 follows.
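A hypothetical sketch of sub-step S143, assuming per-frame rendering results are byte strings and the preset model exposes a `restore` method; both are assumptions of this sketch.

```python
def restore_full_rendering(imaging_sequence, first_results, second_results,
                           render_order, ai_model):
    """Stitch per-frame results in render order into a simulated rendering
    stream, then let the preset model restore the complete VR rendering info."""
    stitched = []
    for frame in sorted(imaging_sequence, key=render_order):
        stitched.append(first_results[frame])   # first field-of-view redraw result
        stitched.append(second_results[frame])  # second field-of-view redraw result
    simulated_stream = b"".join(stitched)       # assumes byte-string results
    return ai_model.restore(simulated_stream)
```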
Exemplarily, in the sub-step S141, the method may be implemented by the following detailed embodiments, for example, as described below.
(1) Overlay rendering configuration information for the immersive overlay rendering queue is determined.
In this embodiment, the overlay drawing configuration information is used to represent an immersive overlay drawing unit allocated by the immersive overlay drawing queue when the immersive overlay drawing queue processes the sequentially added overlay drawing streams, and the immersive overlay drawing unit is used to represent drawing feature node information when the immersive overlay drawing queue draws the added overlay drawing streams.
(2) Determining, based on the overlay drawing configuration information, to add the first chartlet drawing stream to first drawing feature node information corresponding to the immersive overlay drawing queue, and to add the second chartlet drawing stream to second drawing feature node information corresponding to the immersive overlay drawing queue.
(3) Determining, according to the first and second drawing feature node information, whether there is a drawing overlay when the first and second chartlet drawing streams are added to the immersive overlay drawing queue.
In this embodiment, the drawing overlay may be used to characterize that there is overlay synchronization behavior for the drawing of the immersive overlay drawing queue.
(4) If it is determined that there is no drawing overlay when the first and second chartlet drawing streams are added to the immersive overlay drawing queue, adjusting the second drawing feature node information to obtain third drawing feature node information, and adding the first and second chartlet drawing streams to the immersive overlay drawing queue based on the first and third drawing feature node information.
In this embodiment, the feature difference between the third drawing feature node information and the second drawing feature node information matches the feature difference between the first drawing feature node information and the second drawing feature node information.
(5) If it is determined that there is a drawing overlay when the first and second chartlet drawing streams are added to the immersive overlay drawing queue, the first and second drawing feature node information continue to be used to add the first and second chartlet drawing streams to the immersive overlay drawing queue.
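As a non-limiting illustration of items (1) to (5) above, the following Python sketch models the queue-addition logic with an overlay check; the node-set representation and the offset rule used to derive the third drawing feature node information are assumptions made for exposition only.

from typing import List, Set, Tuple

def add_to_overlay_queue(
    queue: List[Tuple[str, int]],
    first_nodes: Set[int],
    second_nodes: Set[int],
) -> List[Tuple[str, int]]:
    # (3) A drawing overlay exists when the two streams share drawing
    # feature nodes, i.e. there is overlay synchronization behavior.
    has_overlay = bool(first_nodes & second_nodes)

    if not has_overlay:
        # (4) No overlay: derive third drawing feature node information by
        # shifting the second stream's nodes (this shift rule is only a
        # placeholder for the "feature difference matching" in the text).
        offset = max(first_nodes) + 1
        second_nodes = {n + offset for n in second_nodes}
    # (5) Overlay present: keep both node sets unchanged.

    queue += [("first", n) for n in sorted(first_nodes)]
    queue += [("second", n) for n in sorted(second_nodes)]
    return queue

queue = add_to_overlay_queue([], first_nodes={0, 1}, second_nodes={5, 6})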
In one possible implementation, still in sub-step S141, the process of establishing the plurality of first immersive superimposition drawing parameters of the first chartlet drawing stream and the plurality of second immersive superimposition drawing parameters of the second chartlet drawing stream based on the immersive superimposition drawing queue may be implemented, for example, through the following detailed steps.
(6) A first sequence of drawing nodes of the first chartlet drawing stream and a second sequence of drawing nodes of the second chartlet drawing stream are determined based on the immersive overlay drawing queue.
It should be noted that the drawing node sequence may be used to represent drawing interaction relationships of the chartlet drawing stream under different drawing nodes, for example, a transition drawing interaction relationship, an overlay drawing interaction relationship, an add drawing interaction relationship, and the like, which is not specifically limited herein.
(7) Establishing a plurality of first immersive superposition drawing parameters of the first chartlet drawing stream and a plurality of second immersive superposition drawing parameters of the second chartlet drawing stream in the immersive superposition drawing queue according to the first drawing node sequence and the second drawing node sequence, respectively.
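Items (6) and (7) can likewise be pictured with a short, hedged sketch; the dictionary shape of a drawing node and of an immersive superposition drawing parameter below is assumed for illustration, not specified by the disclosure.

from typing import Dict, List

def build_overlay_params(node_sequence: List[Dict]) -> List[Dict]:
    # Each drawing node carries a drawing interaction relationship such as
    # "transition", "overlay" or "add", per the note above.
    return [
        {"node": node["id"], "interaction": node["interaction"]}
        for node in node_sequence
    ]

first_params = build_overlay_params(
    [{"id": 0, "interaction": "transition"}, {"id": 1, "interaction": "overlay"}]
)
second_params = build_overlay_params([{"id": 2, "interaction": "add"}])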
In a possible implementation manner, regarding sub-step S142, in order to ensure synchronicity and coherence and to facilitate subsequent observation, the following detailed implementation may be adopted.
(1) Determining a drawing node time sequence axis corresponding to each first immersive superposition drawing parameter according to a plurality of drawing nodes in each first immersive superposition drawing parameter and the drawing model collision parameters between every two adjacent drawing nodes.
(2) Determining first lens distortion information of the first video playing map based on the drawing node time sequence axis.
Each drawing node in the first immersive superposition drawing parameters is correspondingly configured with a drawing model collision cycle parameter; the matching parameter between the drawing model collision cycle parameters of any two adjacent drawing nodes is taken as the corresponding drawing model collision parameter, and the drawing model collision cycle parameter is determined according to the drawing track of the drawing node in the first immersive superposition drawing parameters.
(3) Listing the drawing node of each second immersive superposition drawing parameter and the drawing model collision cycle parameter corresponding to that drawing node to obtain a first projection drawing object and a second projection drawing object corresponding to each second immersive superposition drawing parameter.
For example, the first projection drawing object may be a projection drawing object corresponding to a drawing node of the second immersive superimposition drawing parameter, and the second projection drawing object may be a projection drawing object corresponding to a drawing model collision cycle parameter of the second immersive superimposition drawing parameter.
(4) Determining a first three-dimensional spatial relationship of the first projection drawing object relative to the second projection drawing object, and a second three-dimensional spatial relationship of the second projection drawing object relative to the first projection drawing object.
(5) Acquiring at least three target three-dimensional positions having the same spatial point continuity in the first three-dimensional spatial relationship and the second three-dimensional spatial relationship, and determining second lens distortion information of the second immersive superposition drawing parameters according to the target three-dimensional positions.
Illustratively, spatial point continuity is used to characterize the drawing model collision cycle relationship between every two three-dimensional positions.
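As a non-limiting illustration of items (1) to (5) of this implementation, the following sketch builds a drawing node time sequence axis from collision parameters and selects target three-dimensional positions by shared spatial point continuity; all numeric conventions and helper names are assumptions for exposition.

from typing import Dict, List, Tuple

def node_timing_axis(nodes: List[int], collision: List[float]) -> List[Tuple[int, float]]:
    # (1) Pair each drawing node with the collision parameter between it and
    # its adjacent successor (the last node is padded with 0.0).
    return list(zip(nodes, collision + [0.0]))

def lens_distortion(axis: List[Tuple[int, float]]) -> List[float]:
    # (2) Stand-in rule for deriving lens distortion information along the
    # drawing node time sequence axis.
    return [round(c / (n + 1), 4) for n, c in axis]

def shared_positions(rel_a: Dict[int, int], rel_b: Dict[int, int]) -> List[int]:
    # (5) Keep three-dimensional positions whose spatial point continuity
    # (modelled as equal collision-cycle labels) matches in both
    # projection drawing object relationships.
    return sorted(p for p in rel_a if rel_b.get(p) == rel_a[p])

axis = node_timing_axis([0, 1, 2], [0.8, 0.6])
first_info = lens_distortion(axis)
targets = shared_positions({1: 7, 2: 9, 3: 4, 4: 2}, {1: 7, 2: 9, 3: 4, 4: 5})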
In a possible implementation manner, still referring to sub-step S142, the process of summarizing the plurality of virtual imaging pictures to obtain at least a plurality of virtual imaging sequences of different categories may be implemented, for example, through the following detailed steps.
(6) Determining the number of view field angle redrawing streams corresponding to each virtual imaging picture in the preset projection matrix.
(7) Determining the category drawing interval of the view field angle redrawing streams corresponding to each virtual imaging picture.
The category drawing interval may be the coincidence ratio of the first view field angle redrawing stream and the second view field angle redrawing stream among the view field angle redrawing streams corresponding to each virtual imaging picture.
(8) Determining vector stereo drawing information of the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture.
The vector stereo drawing information may be obtained by calculating vector angle feature values (e.g., grayscale feature values, mean feature values of RGB color values, etc.) of a set number of view field angle redrawing pictures corresponding to the first view field angle redrawing stream and the second view field angle redrawing stream.
(9) Determining a frame feature sequence of each virtual imaging picture according to the number of view field angle redrawing streams, the category drawing interval, and the vector stereo drawing information corresponding to that virtual imaging picture (namely, a sequence formed by taking the number of view field angle redrawing streams, the category drawing interval, and the vector stereo drawing information in order).
(10) Summarizing the virtual imaging pictures based on the frame feature sequence of each virtual imaging picture to obtain at least a plurality of virtual imaging sequences of different categories.
For example, the virtual imaging pictures that share at least one identical feature parameter among the feature parameters in the frame feature sequence may be summarized into the virtual imaging sequence of the category corresponding to that identical feature parameter, thereby obtaining at least a plurality of virtual imaging sequences of different categories.
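A hedged sketch of items (6) to (10) follows; for simplicity it groups pictures by their complete frame feature tuple, whereas the text above also permits grouping on any single shared feature parameter, and the tuple encoding is an assumption for exposition.

from collections import defaultdict
from typing import Dict, List, Tuple

Frame = Dict[str, float]

def frame_feature(frame: Frame) -> Tuple[float, float, float]:
    # (9) Frame feature sequence: stream count, category drawing interval,
    # and vector stereo drawing information, in that order.
    return (frame["stream_count"], frame["interval"], frame["vector_info"])

def summarize(frames: List[Frame]) -> Dict[Tuple[float, float, float], List[Frame]]:
    # (10) Pictures with the same feature tuple are summarized into the
    # same category's virtual imaging sequence.
    sequences: Dict[Tuple[float, float, float], List[Frame]] = defaultdict(list)
    for frame in frames:
        sequences[frame_feature(frame)].append(frame)
    return dict(sequences)

sequences = summarize([
    {"stream_count": 2, "interval": 0.5, "vector_info": 17.0},
    {"stream_count": 2, "interval": 0.5, "vector_info": 17.0},
    {"stream_count": 3, "interval": 0.2, "vector_info": 11.0},
])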
In a possible implementation manner, still referring to sub-step S142, the process of drawing the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture in the virtual imaging sequence in the preset virtual reality drawing process may be implemented, for example, through the following detailed steps.
(11) Determining the superposition drawing configuration information of the frame feature sequence corresponding to each virtual imaging picture in each virtual imaging sequence.
(12) Determining, according to the superposition drawing configuration information, the immersive superposition drawing errors of the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture in each summary.
The immersive superposition drawing error can be used to characterize the drawing error condition of the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture.
(13) Judging whether the difference between each immersive superposition drawing error and the reference drawing error corresponding to the virtual reality drawing process falls within a preset difference interval.
The preset difference interval can be used to characterize the interval in which each immersive superposition drawing error lies when the virtual reality drawing process runs normally.
(14) When the difference between each immersive superposition drawing error and the reference drawing error corresponding to the virtual reality drawing process falls within the preset difference interval, the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture in the virtual imaging sequence may be run based on the virtual reality drawing process.
(15) Otherwise, when the difference between each immersive superposition drawing error and the reference drawing error corresponding to the virtual reality drawing process does not fall within the preset difference interval, the superposition drawing configuration information corresponding to the immersive superposition drawing error whose difference does not fall within the preset difference interval is modified according to the thread script of the virtual reality drawing process, and the flow returns to the step of determining the immersive superposition drawing errors of the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture in each summary according to the superposition drawing configuration information.
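The error-checking loop of items (11) to (15) can be pictured as follows; the error model, the tolerance comparison, and the configuration-revision rule standing in for the thread script are all assumptions made for exposition.

from typing import Callable, List

def run_with_error_check(
    errors: Callable[[float], List[float]],  # overlay drawing errors under a config
    config: float,
    reference_error: float,
    tolerance: float,
    max_rounds: int = 10,
) -> float:
    for _ in range(max_rounds):
        # (12)-(13) Compare each immersive superposition drawing error with
        # the reference error of the virtual reality drawing process.
        if all(abs(e - reference_error) <= tolerance for e in errors(config)):
            return config  # (14) Within the interval: run the redraw streams.
        config *= 0.9      # (15) Revise the configuration and re-check.
    raise RuntimeError("virtual reality drawing process failed to converge")

# Toy usage in which the errors shrink as the configuration value shrinks:
final_config = run_with_error_check(
    errors=lambda c: [c, c * 1.1],
    config=1.0,
    reference_error=0.0,
    tolerance=0.5,
)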
Fig. 3 is a schematic diagram of the functional modules of a data processing apparatus 300 based on artificial intelligence and internet of things interaction according to an embodiment of the present disclosure. In this embodiment, the functional modules of the data processing apparatus 300 may be divided according to the method embodiments executed by the cloud computing platform 100; that is, the following functional modules of the data processing apparatus 300 may be used to execute the method embodiments executed by the cloud computing platform 100. The data processing apparatus 300 based on artificial intelligence and internet of things interaction may include an obtaining module 310, a drawing module 320, an extracting module 330, and a determining module 340, and the functions of these modules are described in detail below.
The obtaining module 310 is configured to obtain, from each human-computer interaction device end 200, a virtual reality three-dimensional map of a candidate internet of things interaction scene under a drawing pixel segment of each drawing hierarchical component, perform interaction mode division on the virtual reality three-dimensional map under each drawing pixel segment according to a predetermined interaction internet of things form, and generate a map division sequence of each interaction internet of things form respectively. The obtaining module 310 may be configured to perform the step S110, and the detailed implementation of the obtaining module 310 may refer to the detailed description of the step S110.
The drawing module 320 is configured to, for each interactive internet of things form, obtain a mapping drawing stream corresponding to each video playing mapping in the mapping division sequence of the interactive internet of things form, and perform virtual reality drawing on the mapping drawing stream corresponding to each video playing mapping. The drawing module 320 may be configured to perform the step S120, and the detailed implementation of the drawing module 320 may refer to the detailed description of the step S120.
The extracting module 330 is configured to determine whether drawing overlay information for indicating that the video playing maps have drawing overlays exists in a virtual reality drawing process, and extract, when the drawing overlay information is detected, a first map drawing stream of a first video playing map corresponding to the drawing overlay information drawn by the virtual reality and a second map drawing stream of at least one second video playing map having a drawing overlay relationship with the first video playing map. The extracting module 330 may be configured to perform the step S130, and the detailed implementation of the extracting module 330 may refer to the detailed description of the step S130.
The determining module 340 is configured to determine, according to a preset artificial intelligence model, complete virtual reality drawing information between the first video playing map and the at least one second video playing map. The determining module 340 may be configured to perform the step S140, and the detailed implementation of the determining module 340 may refer to the detailed description of the step S140.
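As a purely structural, non-limiting sketch, the four functional modules of Fig. 3 might be organized as follows; the method bodies are placeholders written for illustration and are not the patented logic.

class DataProcessingApparatus:
    """Structural mirror of Fig. 3: modules 310, 320, 330, and 340."""

    def obtain(self, device_streams):
        # Obtaining module 310 / step S110: map division sequences per
        # interactive internet of things form (placeholder pass-through).
        return list(device_streams)

    def draw(self, division_sequence):
        # Drawing module 320 / step S120: virtual reality drawing stub.
        return [{"map": m, "drawn": True} for m in division_sequence]

    def extract(self, drawn_maps):
        # Extracting module 330 / step S130: keep maps flagged as overlaid.
        return [m for m in drawn_maps if m.get("overlay")]

    def determine(self, first_stream, second_streams):
        # Determining module 340 / step S140: stand-in for the AI model.
        return {"first": first_stream, "second": second_streams}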
It should be noted that the above division of the modules of the apparatus is only a logical division; in actual implementation, the modules may be wholly or partially integrated into one physical entity or physically separated. The modules may all be implemented in the form of software invoked by a processing element, or entirely in hardware, or partly in the form of software invoked by a processing element and partly in hardware. For example, the obtaining module 310 may be a separately arranged processing element, may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code that a processing element of the apparatus calls to execute the functions of the obtaining module 310. The other modules are implemented similarly. In addition, all or part of these modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, each step of the above method, or each of the above modules, may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can call program code. As another example, these modules may be integrated together and implemented in the form of a system-on-chip (SoC).
Fig. 4 is a schematic hardware structure diagram of the cloud computing platform 100 provided in the embodiment of the present disclosure. As shown in Fig. 4, the cloud computing platform 100 may include a processor 110, a machine-readable storage medium 120, a bus 130, and a transceiver 140.
In a specific implementation process, at least one processor 110 executes the computer-executable instructions stored in the machine-readable storage medium 120 (for example, the data processing apparatus 300 based on artificial intelligence and internet of things interaction shown in Fig. 3, including the obtaining module 310, the drawing module 320, the extracting module 330, and the determining module 340), so that the processor 110 can execute the data processing method based on artificial intelligence and internet of things interaction of the above method embodiment. The processor 110, the machine-readable storage medium 120, and the transceiver 140 are connected through the bus 130, and the processor 110 may be configured to control the transceiving actions of the transceiver 140 so as to transceive data with the aforementioned human-computer interaction device end 200.
For a specific implementation process of the processor 110, reference may be made to the above-mentioned method embodiments executed by the cloud computing platform 100, and implementation principles and technical effects thereof are similar, and details of this embodiment are not described herein again.
In the embodiment shown in Fig. 4, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be embodied directly as being executed by a hardware processor, or executed by a combination of hardware and software modules within the processor.
The machine-readable storage medium 120 may comprise a high-speed RAM memory, and may also include a non-volatile memory (NVM), such as at least one disk memory.
The bus 130 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus 130 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
In addition, an embodiment of the present disclosure further provides a readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the data processing method based on artificial intelligence and internet of things interaction described above is implemented.
The readable storage medium described above may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (10)

1. A data processing method based on artificial intelligence and Internet of things interaction is applied to a cloud computing platform, the cloud computing platform is in communication connection with a plurality of human-computer interaction equipment terminals, and the method comprises the following steps:
acquiring a virtual reality three-dimensional map of a candidate Internet of things interaction scene under a drawing pixel segment of each drawing layered component from each human-computer interaction equipment terminal, and dividing the virtual reality three-dimensional map under each drawing pixel segment in an interaction mode according to a preset interaction Internet of things form to respectively generate a map dividing sequence of each interaction Internet of things form;
aiming at each interactive Internet of things form, obtaining a mapping drawing stream corresponding to each video playing mapping in a mapping dividing sequence of the interactive Internet of things form, and performing virtual reality drawing on the mapping drawing stream corresponding to each video playing mapping;
judging whether drawing superposition information for representing that video playing maps have drawing superposition exists or not in the virtual reality drawing process, and extracting a first map drawing stream of a first video playing map corresponding to the drawing superposition information drawn by the virtual reality and a second map drawing stream of at least one second video playing map having drawing superposition relation with the first video playing map when the drawing superposition information is detected;
and determining complete virtual reality drawing information between the first video playing map and the at least one second video playing map according to a preset artificial intelligence model.
2. The data processing method based on artificial intelligence and internet of things interaction according to claim 1, wherein the step of obtaining the mapping rendering stream corresponding to each video playing mapping in the mapping division sequence in the form of the interactive internet of things comprises:
judging whether an internet of things interaction relationship is established in association with each video playing map; wherein the internet of things interaction relationship is used for setting a drawing service of the map drawing stream corresponding to the video playing map, each video playing map corresponds to one internet of things interaction relationship, and different internet of things interaction relationships have different interaction modes;
if no internet of things interaction relationship associated with each video playing map is obtained, obtaining map drawing source information of each video playing map; wherein the map drawing source information comprises a map drawing source label corresponding to the video playing map, and the map drawing source label is the map drawing source label corresponding to the map drawing stream generated by the video playing map;
analyzing and identifying each piece of map drawing source information according to the vertex mapping character corresponding to that map drawing source information to obtain at least a plurality of vertex mapping partitions corresponding to each piece of map drawing source information, and determining a target vertex mapping partition having displacement transformation information from the vertex mapping partitions corresponding to each piece of map drawing source information; wherein the displacement transformation information is a displacement transformation node, corresponding to the map drawing source label, for representing the vertex mapping partition;
associating a depth map in a target vertex mapping partition corresponding to each video playing map with an internet of things interaction relation corresponding to each video playing map, wherein the internet of things interaction relation is determined according to the internet of things interaction relation corresponding to each depth virtual camera in the depth map in the target vertex mapping partition;
and acquiring a mapping drawing stream corresponding to each video playing mapping from a pre-configured mapping drawing stream library according to the Internet of things interaction relation corresponding to each video playing mapping, wherein the mapping drawing stream library comprises mapping drawing streams of each video playing mapping under different Internet of things interaction relations.
3. The artificial intelligence and internet of things interaction based data processing method according to claim 1 or 2, wherein the step of determining complete virtual reality drawing information between the first video playing map and the at least one second video playing map according to a preset artificial intelligence model comprises:
adding the first and second chartlet rendering streams to a preset immersive overlay rendering queue and establishing a plurality of first immersive overlay rendering parameters for the first chartlet rendering stream and a plurality of second immersive overlay rendering parameters for the second chartlet rendering stream based on the immersive overlay rendering queue;
determining first lens distortion information of the first video playing map according to each first immersive overlay rendering parameter, and determining second lens distortion information of the second video playing map according to each second immersive overlay rendering parameter; then mapping the first lens distortion information and the second lens distortion information to a preset projection matrix to obtain a first view field angle redrawing stream corresponding to the first lens distortion information and a second view field angle redrawing stream corresponding to the second lens distortion information; determining a plurality of virtual imaging pictures in the preset projection matrix, and summarizing the plurality of virtual imaging pictures to obtain at least a plurality of virtual imaging sequences of different categories; and, for each virtual imaging sequence, drawing the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture in the virtual imaging sequence in a preset virtual reality drawing process;
and splicing the drawing results of the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture in the virtual imaging sequence according to a rendering sequence to generate a simulated drawing stream, restoring the spliced simulated drawing stream according to the preset artificial intelligence model, and determining the complete virtual reality drawing information between the first video playing map and the at least one second video playing map.
4. The artificial intelligence and internet of things interaction based data processing method according to claim 3, wherein the step of adding the first and second chartlet rendering streams to a preset immersive overlay rendering queue comprises:
determining overlay rendering configuration information of the immersive overlay rendering queue; the overlay drawing configuration information is used for representing an immersion overlay drawing unit which is allocated when the immersion overlay drawing queue processes the added overlay drawing streams, and the immersion overlay drawing unit is used for representing drawing feature node information when the immersion overlay drawing queue draws the added overlay drawing streams;
determining, based on the overlay rendering configuration information, to add the first overlay rendering stream to first rendering feature node information corresponding to the immersive overlay rendering queue and to add the second overlay rendering stream to second rendering feature node information corresponding to the immersive overlay rendering queue;
determining from the first and second draw feature node information whether there is a draw overlay when adding the first and second chartlet draw streams to the immersive overlay draw queue; wherein the drawing superposition is used for representing that the drawing of the immersive superposition drawing queue has superposition synchronization behavior;
if not, adjusting the second drawing feature node information to obtain third drawing feature node information, and adding the first chartlet drawing flow and the second chartlet drawing flow to the immersive superposition drawing queue based on the first drawing feature node information and the third drawing feature node information, wherein a feature difference between the third drawing feature node information and the second drawing feature node information is matched with a feature difference between the first drawing feature node information and the second drawing feature node information;
and if so, continuously adopting the first drawing feature node information and the second drawing feature node information to add the first chartlet drawing flow and the second chartlet drawing flow to the immersive superposition drawing queue.
5. The artificial intelligence and internet of things interaction based data processing method as claimed in claim 3, wherein the step of establishing a plurality of first immersive superimposition rendering parameters for the first chartlet rendering stream and a plurality of second immersive superimposition rendering parameters for the second chartlet rendering stream based on the immersive superimposition rendering queue comprises:
determining a first sequence of drawing nodes of the first chartlet drawing stream and a second sequence of drawing nodes of the second chartlet drawing stream based on the immersive overlay drawing queue; the drawing node sequence is used for representing drawing interaction relations of the chartlet drawing flow under different drawing nodes;
establishing a plurality of first immersive superimposition rendering parameters of the first chartlet rendering stream and a plurality of second immersive superimposition rendering parameters of the second chartlet rendering stream in the immersive superimposition rendering queue according to the first sequence of rendering nodes and the second sequence of rendering nodes, respectively.
6. The artificial intelligence and internet of things interaction based data processing method as claimed in claim 3, wherein the step of determining first lens distortion information of the first video playback map according to each first immersive overlay rendering parameter and determining second lens distortion information of the second video playback map according to each second immersive overlay rendering parameter comprises:
determining a drawing node time sequence axis corresponding to each first immersive superposition drawing parameter according to a plurality of drawing nodes in each first immersive superposition drawing parameter and drawing model collision parameters between every two adjacent drawing nodes;
determining first lens distortion information of the first video playing map based on the drawing node time sequence axis; wherein each drawing node in the first immersive superposition drawing parameters is correspondingly configured with a drawing model collision cycle parameter, the matching parameter between the drawing model collision cycle parameters of any two adjacent drawing nodes serves as the corresponding drawing model collision parameter, and the drawing model collision cycle parameter is determined according to the drawing track of the drawing node in the first immersive superposition drawing parameters;
listing the drawing node of each second immersive superposition drawing parameter and the drawing model collision cycle parameter corresponding to the drawing node to obtain a first projection drawing object and a second projection drawing object corresponding to each second immersive superposition drawing parameter; the first projection drawing object is a projection drawing object corresponding to a drawing node of a second immersive superposition drawing parameter, and the second projection drawing object is a projection drawing object corresponding to a drawing model collision cycle parameter of the second immersive superposition drawing parameter;
determining a first three-dimensional spatial relationship of the first projection drawing object relative to the second projection drawing object, and a second three-dimensional spatial relationship of the second projection drawing object relative to the first projection drawing object;
acquiring at least three target three-dimensional positions with the same spatial point continuity in the first three-dimensional spatial relationship and the second three-dimensional spatial relationship, and determining second lens distortion information of the second immersive superposition drawing parameter according to the target three-dimensional positions; wherein the spatial point continuity is used to characterize a render model collision cycle relationship between each two three-dimensional locations.
7. The data processing method based on artificial intelligence and internet of things interaction according to claim 6, wherein the step of summarizing the plurality of virtual imaging pictures to obtain at least a plurality of different types of virtual imaging sequences comprises:
determining the number of field angle redrawing streams corresponding to each virtual imaging picture in the preset projection matrix;
determining a class drawing interval of a field angle redrawing stream corresponding to each virtual imaging picture; the category drawing interval is the coincidence proportion of a first view field angle redrawing stream and a second view field angle redrawing stream in the view field angle redrawing streams corresponding to each virtual imaging picture;
determining vector stereo drawing information of the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture; wherein the vector stereo drawing information is obtained by calculating vector angle feature values of a set number of view field angle redrawing pictures corresponding to the first view field angle redrawing stream and the second view field angle redrawing stream;
determining a frame characteristic sequence of each virtual imaging picture according to the number of field angle redrawing streams, the category drawing interval and the vector three-dimensional drawing information corresponding to each virtual imaging picture;
and summarizing each virtual imaging picture based on the frame feature sequence of each virtual imaging picture to obtain the at least a plurality of virtual imaging sequences of different categories.
8. The data processing method based on artificial intelligence and internet of things interaction of claim 7, wherein the step of drawing the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture in the virtual imaging sequence in a preset virtual reality drawing process comprises:
determining the superposition drawing configuration information of the frame characteristic sequence corresponding to each virtual imaging picture in each virtual imaging sequence;
determining an immersive superposition drawing error of the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture in each summary according to the superposition drawing configuration information; wherein the immersive superposition drawing error is used for representing the drawing error condition of the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture;
judging whether the difference value of each immersive superposition drawing error and the reference drawing error corresponding to the virtual reality drawing process is within a preset difference value interval; the preset difference value interval is used for representing the interval where each immersive superposition drawing error is located when the virtual reality drawing process is in normal operation;
when the difference between each immersive superposition drawing error and the reference drawing error corresponding to the virtual reality drawing process falls within the preset difference interval, running the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture in the virtual imaging sequence based on the virtual reality drawing process;
and otherwise, modifying the superposition drawing configuration information corresponding to the immersive superposition drawing error whose difference does not fall within the preset difference interval according to the thread script of the virtual reality drawing process, and returning to the step of determining the immersive superposition drawing error of the first view field angle redrawing stream and the second view field angle redrawing stream corresponding to each virtual imaging picture in each summary according to the superposition drawing configuration information.
9. A cloud computing platform, characterized in that the cloud computing platform comprises a processor, a machine-readable storage medium, and a network interface, the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is used for being connected with at least one human-computer interaction device in a communication manner, the machine-readable storage medium is used for storing programs, instructions, or codes, and the processor is used for executing the programs, instructions, or codes in the machine-readable storage medium to execute the data processing method based on artificial intelligence and internet of things interaction in any one of claims 1 to 8.
10. A computer-readable storage medium, wherein the computer-readable storage medium is configured with a program, instructions or code, and when the program, instructions or code is executed, the data processing method based on artificial intelligence and internet of things interaction in any one of claims 1-8 is realized.
CN202010569965.8A 2020-06-21 2020-06-21 Data processing method based on artificial intelligence and Internet of things interaction and cloud computing platform Expired - Fee Related CN111787080B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202011504703.XA CN112532742A (en) 2020-06-21 2020-06-21 Data processing method and system based on artificial intelligence and Internet of things interaction
CN202010569965.8A CN111787080B (en) 2020-06-21 2020-06-21 Data processing method based on artificial intelligence and Internet of things interaction and cloud computing platform
CN202011498992.7A CN112565450A (en) 2020-06-21 2020-06-21 Data processing method, system and platform based on artificial intelligence and Internet of things interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010569965.8A CN111787080B (en) 2020-06-21 2020-06-21 Data processing method based on artificial intelligence and Internet of things interaction and cloud computing platform

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202011504703.XA Division CN112532742A (en) 2020-06-21 2020-06-21 Data processing method and system based on artificial intelligence and Internet of things interaction
CN202011498992.7A Division CN112565450A (en) 2020-06-21 2020-06-21 Data processing method, system and platform based on artificial intelligence and Internet of things interaction

Publications (2)

Publication Number Publication Date
CN111787080A CN111787080A (en) 2020-10-16
CN111787080B true CN111787080B (en) 2021-01-29

Family

ID=72756919

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202011498992.7A Withdrawn CN112565450A (en) 2020-06-21 2020-06-21 Data processing method, system and platform based on artificial intelligence and Internet of things interaction
CN202010569965.8A Expired - Fee Related CN111787080B (en) 2020-06-21 2020-06-21 Data processing method based on artificial intelligence and Internet of things interaction and cloud computing platform
CN202011504703.XA Withdrawn CN112532742A (en) 2020-06-21 2020-06-21 Data processing method and system based on artificial intelligence and Internet of things interaction

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011498992.7A Withdrawn CN112565450A (en) 2020-06-21 2020-06-21 Data processing method, system and platform based on artificial intelligence and Internet of things interaction

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011504703.XA Withdrawn CN112532742A (en) 2020-06-21 2020-06-21 Data processing method and system based on artificial intelligence and Internet of things interaction

Country Status (1)

Country Link
CN (3) CN112565450A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013144807A1 (en) * 2012-03-26 2013-10-03 Primesense Ltd. Enhanced virtual touchpad and touchscreen
CN104423578A (en) * 2013-08-25 2015-03-18 何安莉 Interactive Input System And Method
CN107197385A (en) * 2017-05-31 2017-09-22 珠海金山网络游戏科技有限公司 A kind of real-time virtual idol live broadcasting method and system
CN107360160A (en) * 2017-07-12 2017-11-17 广州华多网络科技有限公司 live video and animation fusion method, device and terminal device
CN108269307A (en) * 2018-01-15 2018-07-10 歌尔科技有限公司 A kind of augmented reality exchange method and equipment
WO2018175986A1 (en) * 2017-03-23 2018-09-27 Rutgers, The State University Of New Jersey Systems and methods for modeling a protein parameter for understanding protein interactions and generating an energy map
CN108965791A (en) * 2018-04-04 2018-12-07 广州高新兴机器人有限公司 One kind passing through robot AR camera and internet of things equipment exchange method and system
CN109189217A (en) * 2018-08-16 2019-01-11 四川蓉科强工程管理咨询有限责任公司 A kind of acceptance of work analogy method based on VR technology
CN109218253A (en) * 2017-06-29 2019-01-15 武汉矽感科技有限公司 Multi-medium play method plays background server and mobile terminal
CN208572261U (en) * 2018-08-15 2019-03-01 董俊毅 A kind of superposing type Internet of Things interaction live broadcast system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895330B (en) * 2017-11-28 2018-10-26 特斯联(北京)科技有限公司 A kind of tourist's service platform for realizing scenario building towards smart travel
CN108600367A (en) * 2018-04-24 2018-09-28 上海奥孛睿斯科技有限公司 Internet of Things system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Internet of Things object recognition and virtual interaction based on augmented reality; Shen Ke et al.; Computer Engineering (《计算机工程》); 2010-10-29; pp. 98-101 *
A survey of virtual reality augmentation technology; Zhou Zhong et al.; Science China (《中国科学》); 2015-12-31; pp. 157-173 *

Also Published As

Publication number Publication date
CN112532742A (en) 2021-03-19
CN112565450A (en) 2021-03-26
CN111787080A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN110276349B (en) Video processing method, device, electronic equipment and storage medium
JP2022528294A (en) Video background subtraction method using depth
WO2022161301A1 (en) Image generation method and apparatus, and computer device and computer-readable storage medium
CN112184872A (en) Game rendering optimization method based on big data and cloud computing center
CN109345637B (en) Interaction method and device based on augmented reality
CN111476875B (en) Smart building Internet of things object simulation method and building cloud server
CN111222571B (en) Image special effect processing method and device, electronic equipment and storage medium
CN114494566A (en) Image rendering method and device
CN111626816B (en) Image interaction information processing method based on e-commerce live broadcast and cloud computing platform
CN114697703A (en) Video data generation method and device, electronic equipment and storage medium
CN111787081B (en) Information processing method based on Internet of things interaction and intelligent communication and cloud computing platform
CN111787080B (en) Data processing method based on artificial intelligence and Internet of things interaction and cloud computing platform
US11874956B2 (en) Displaying augmented reality responsive to an input
CN111107264A (en) Image processing method, image processing device, storage medium and terminal
CN112069325B (en) Big data processing method based on block chain offline payment and cloud service pushing platform
CN113794846A (en) Video cloud clipping method and device and cloud clipping server
CN115082496A (en) Image segmentation method and device
CN114004953A (en) Method and system for realizing reality enhancement picture and cloud server
CN112954452A (en) Video generation method, device, terminal and storage medium
CN113128277A (en) Generation method of face key point detection model and related equipment
CN112288866A (en) Intelligent building three-dimensional model rendering method and building system
CN112464691A (en) Image processing method and device
CN116363017B (en) Image processing method and device
CN117726963A (en) Picture data processing method and device, electronic equipment and medium
CN116527983A (en) Page display method, device, equipment, storage medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Pan Shaoxi

Inventor after: Zhang Wei

Inventor before: Zhang Wei

TA01 Transfer of patent application right

Effective date of registration: 20210112

Address after: 14 / F, block a, science and Technology Industrial Park, Foshan high tech Zone, No. 70, Guxin Road, Chancheng District, Foshan City, Guangdong Province, 528000

Applicant after: Guangdong Youyi Internet Technology Co.,Ltd.

Address before: 430014 daijiashan science and technology venture City, 888 Hanhuang Road, Jiang'an District, Wuhan City, Hubei Province

Applicant before: Zhang Wei

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210129

Termination date: 20210621