CN115190120A - Multimedia data interaction method, device and system - Google Patents

Publication number: CN115190120A
Authority: CN (China)
Prior art keywords: multimedia data, client, target, cloud server, edge cloud
Legal status: Granted
Application number: CN202110300494.5A
Other languages: Chinese (zh)
Other versions: CN115190120B (en)
Inventor: 杨振东
Current Assignee: China United Network Communications Group Co Ltd
Original Assignee: China United Network Communications Group Co Ltd
Application filed by China United Network Communications Group Co Ltd
Priority to CN202110300494.5A
Publication of CN115190120A
Application granted; publication of CN115190120B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

Embodiments of the invention provide a multimedia data interaction method, apparatus, and system. The method is applied to an edge cloud server and includes: receiving a multimedia data interaction request sent by a client, wherein the request carries position information; judging, based on the position information, whether the client is within the region governed by the edge cloud server; if so, determining the corresponding target multimedia data from pre-stored partitioned multimedia data according to the request, converting the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data, and sending the converted target multimedia data to the client for display. The embodiment reduces the extent to which network transmission speed limits the application of AR technology, thereby improving the user experience.

Description

Multimedia data interaction method, device and system
Technical Field
The embodiment of the invention relates to the field of data processing, in particular to a multimedia data interaction method, device and system.
Background
With the development of Internet technology, applications of AR technology have become increasingly popular. AR technology overlays computer-generated virtual objects, scenes, system prompts, and animations onto a real scene, thereby augmenting reality: it superimposes computer-generated information on the real world in real time so that the user can interact with it, improving the user experience.
In the prior art, the limited speed of the communication network is an obstacle to applying AR technology. Specifically, existing application scenarios that use AR technology to enhance the experience require the client to download application data in advance; because of the limits of network transmission speed and of the user terminal's storage capacity, the final augmentation effect may fall short of the user's expectations and degrade the user experience.
Disclosure of Invention
Embodiments of the invention provide a multimedia data interaction method, apparatus, and system, aiming to improve the presented augmentation effect and thereby improve the user experience.
In a first aspect, an embodiment of the present invention provides a multimedia data interaction method applied to an edge cloud server, including:
receiving a multimedia data interaction request sent by a client, wherein the multimedia data interaction request carries position information;
judging whether the client is in the range of the area governed by the edge cloud server or not based on the position information;
if so, determining corresponding target multimedia data from pre-stored partitioned multimedia data according to the multimedia data interaction request, and converting the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data;
and sending the converted target multimedia data to the client for display.
Optionally, the determining, based on the location information, whether the client is in a range of an area governed by the edge cloud server includes:
determining a pool of target IP addresses based on the location information;
judging whether the IP address of the client is in the target IP address pool or not;
and if so, determining that the client is in the area range governed by the edge cloud server.
Optionally, the edge cloud server includes an edge cloud server identifier, and before receiving the multimedia data interaction request sent by the client, the method further includes:
determining the target jurisdiction area corresponding to the target edge cloud server identifier based on the pre-stored correspondence between jurisdiction areas and identifiers;
and acquiring the partitioned multimedia data corresponding to the target jurisdiction area from the central cloud server, and storing the partitioned multimedia data.
Optionally, the client includes a first client and a second client, and sending the converted target multimedia data to the client for display includes:
sending the converted first target multimedia data to the first client for display;
after the sending the converted first target multimedia data to the first client for displaying, the method further includes:
judging whether the second client is in the range of the area governed by the edge cloud server or not based on the position information of the second client;
and if so, acquiring the converted second target multimedia data corresponding to the second client, sending the converted second target multimedia data to the first client for displaying, and sending the converted first target multimedia data corresponding to the first client to the second client for displaying.
Optionally, the method further includes:
if not, the converted first target multimedia data corresponding to the first client side is sent to a central cloud server, so that the central cloud server sends the converted first target multimedia data corresponding to the first client side to the second client side for displaying, acquires the converted second target multimedia data corresponding to the second client side, and sends the converted second target multimedia data to the first client side for displaying.
In a second aspect, an embodiment of the present invention provides a multimedia data interaction method, which is applied to a central cloud server, and includes:
determining the correspondence between jurisdiction areas and identifiers;
determining, based on the correspondence between jurisdiction areas and identifiers, the partitioned multimedia data corresponding to each target jurisdiction area;
and sending the partition multimedia data corresponding to each target jurisdiction area to the corresponding target edge cloud server so that each target edge cloud server stores the corresponding partition multimedia data in advance.
In a third aspect, an embodiment of the present invention provides a multimedia data interaction apparatus, including:
a receiving module, configured to receive a multimedia data interaction request sent by a client, wherein the multimedia data interaction request carries position information;
the first processing module is used for judging whether the client side is in a region range governed by the edge cloud server or not based on the position information;
the first processing module is further configured to, if the client is within the governed region, determine corresponding target multimedia data from the pre-stored partitioned multimedia data according to the multimedia data interaction request and convert the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data;
the first processing module is further configured to send the converted target multimedia data to the client for display.
In a fourth aspect, an embodiment of the present invention provides a multimedia data interaction apparatus, including:
the second processing module is used for determining the corresponding relation between the jurisdiction area and the identification;
the second processing module is further configured to determine, based on the correspondence between the jurisdiction areas and the identifiers, partitioned multimedia data corresponding to each target jurisdiction area;
the second processing module is further configured to send the partition multimedia data corresponding to each target jurisdiction to the corresponding target edge cloud server, so that each target edge cloud server stores the corresponding partition multimedia data in advance.
In a fifth aspect, an embodiment of the present invention provides a multimedia data interaction system, including: the system comprises an edge cloud server, a client and a central cloud server;
wherein the edge cloud server is configured to perform the method according to any one of the first aspect, the center cloud server is configured to perform the method according to the second aspect, and the client is configured to display the converted target multimedia data.
In a sixth aspect, embodiments of the present invention provide a computer-readable storage medium storing computer-executable instructions; when a processor executes the computer-executable instructions, the multimedia data interaction method according to any design of the first aspect is implemented.
In a seventh aspect, embodiments of the present invention provide a computer program product comprising a computer program; when the computer program is executed by a processor, the multimedia data interaction method according to the first aspect and its various possible designs is implemented.
Embodiments of the invention provide a multimedia data interaction method, apparatus, and system. With this scheme, a multimedia data interaction request containing position information sent by a client is received, and whether the client is within the region governed by the edge cloud server is judged based on that position information. If so, the corresponding target multimedia data is determined from pre-stored partitioned multimedia data according to the request and converted according to a preset augmented reality processing rule to obtain the converted target multimedia data. Because the edge cloud server pre-stores the partitioned multimedia data and performs the conversion itself once the client is confirmed to be within its governed region, the client need not download the multimedia data in advance; this reduces the extent to which network transmission speed limits AR applications and improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic diagram of an application system of a multimedia data interaction method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a multimedia data interaction method according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a multimedia data interaction method according to another embodiment of the invention;
fig. 4 is a schematic structural diagram of a multimedia data interaction device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of including other sequential examples in addition to those illustrated or described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the prior art, AR (Augmented Reality) technology computes the position and angle of camera images in real time and combines them with corresponding imagery; it seamlessly integrates real-world and virtual-world information, aiming to superimpose a virtual world on the real world on screen and allow interaction. Augmented reality adds images to the real environment in real time and adjusts them to follow the rotation of the user's head and eyes, keeping them within the user's field of view; it is widely applied in marketing, gaming, social interaction, film and television, home furnishing, travel, education, military, medical, engineering, transportation, and other fields. Because AR has high real-time requirements, it places certain demands on network transmission speed; specifically, existing application scenarios that use AR technology to enhance the experience require the client to download application data in advance.
To address these problems, the present application has the edge cloud server pre-store the corresponding partitioned multimedia data. When the client is determined to be within the region governed by the edge cloud server, the target multimedia data is determined from the pre-stored partitioned data and converted at the edge cloud server, so the client need not download the multimedia data in advance. This reduces the extent to which network transmission speed limits AR applications and achieves the technical effect of improving the user experience.
Fig. 1 is a schematic diagram of the architecture of an application system for a multimedia data interaction method according to an embodiment of the present invention. As shown in Fig. 1, the application system includes an edge cloud server deployed in an edge cloud, a central cloud server deployed in a central cloud, and a client. The edge cloud may further include a UP (User Plane, the forwarding plane of the mobile core network) and an MSG-U (Multi-Service access Gateway-User plane, the forwarding plane of the multi-service access gateway), both deployed at an edge node; all mobile-network and fixed-network services are uniformly accessed by the MSG-U at a convergence node. An operator deploys a lightweight edge cloud at the convergence node and opens its IaaS and PaaS capabilities to third parties; the third parties deploy various AR applications on the convergence-layer edge cloud, with multiple tenants sharing cloud resources. Mobile-network and fixed-network traffic is offloaded at the edge cloud, and third-party application content is accessed nearby.
There may be one or more converged edge clouds, one or more clients communicating in each converged edge cloud, and one central cloud.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a schematic flowchart of a multimedia data interaction method according to an embodiment of the present invention, where the method according to the embodiment may be executed by an edge cloud server. As shown in fig. 2, the method of this embodiment may include:
s201: and receiving a multimedia data interaction request sent by a client, wherein the multimedia data interaction request carries position information.
In this embodiment, when a user experiences augmented reality through a client (for example, an AR game or virtual positioning), a multimedia data interaction request may be sent to the edge cloud server to acquire multimedia data.
Further, the convergence edge cloud server is deployed on the convergence edge cloud and temporarily stores AR application multimedia data as needed. Specifically, the central cloud server stores all detailed content of the AR application (such as panoramic images); the application content for the sub-module corresponding to a geographic area (related buildings, shops, restaurants, traffic, landmark buildings, and detailed map information tied to geographic identifiers) and the related user content (the user's application information, personalized settings, or game progress) are downloaded to the local convergence edge cloud server as needed, so that AR services can be provided to local users.
S202: judging, based on the position information, whether the client is within the region governed by the edge cloud server.
In this embodiment, after receiving the multimedia data interaction request sent by the client, the edge cloud server may determine whether the client is in the area range governed by the edge cloud server according to the location information carried in the multimedia data interaction request.
Further, judging whether the client is in the area range governed by the edge cloud server based on the location information may specifically include:
a pool of target IP addresses is determined based on the location information.
And judging whether the IP address of the client is in the target IP address pool or not.
And if so, determining that the client is in the range of the area governed by the edge cloud server.
Specifically, the IP addresses may be classified in advance according to the location information, and the IP address pools corresponding to different location information are determined. Wherein, the IP address pool may include at least one IP address. Then, when determining whether the client is in the area range governed by the edge cloud server, whether the IP address of the client is in the target IP address pool corresponding to the position information of the client can be directly judged, and if yes, the client is determined to be in the area range governed by the edge cloud server. If not, determining that the client is not in the area range governed by the edge cloud server, and forwarding the multimedia data interaction request sent by the client to the center cloud server, so that the center cloud server determines the edge cloud server corresponding to the client.
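The pool-membership check can be sketched as follows. The location names and CIDR blocks are hypothetical placeholders for the patent's "IP address pool"; the standard-library `ipaddress` module is used for the containment test:

```python
import ipaddress

# Hypothetical mapping from position information to the target IP address
# pool of the governing edge cloud server (CIDR blocks, one or more per area).
LOCATION_IP_POOLS = {
    "district-A": [ipaddress.ip_network("10.10.0.0/16")],
    "district-B": [ipaddress.ip_network("10.20.0.0/16")],
}

def client_in_governed_region(location: str, client_ip: str) -> bool:
    """Return True if the client's IP address falls within the target
    IP address pool determined from its reported position information."""
    pools = LOCATION_IP_POOLS.get(location, [])
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in pools)
```

If the check fails, the request would be forwarded to the central cloud server, as described above.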
In addition, whether a client in the same convergence area accesses the converged edge cloud's server over 5G or Wi-Fi to use the AR service, the same converged edge cloud AR server can provide the service through the configuration of the AR service scheduling server. This ensures the continuity of the AR application process and a consistent user experience when the user switches between 5G and Wi-Fi signals, further expanding the usage scenarios of AR applications.
S203: if so, determining the corresponding target multimedia data from the pre-stored partitioned multimedia data according to the multimedia data interaction request, and converting the target multimedia data according to a preset augmented reality processing rule to obtain the converted target multimedia data.
In this embodiment, after it is determined that the client is within the area range governed by the edge cloud server, the corresponding target multimedia data may be determined from the pre-stored partitioned multimedia data according to the multimedia data interaction request, and then the target multimedia data is converted according to the preset augmented reality processing rule, so as to obtain the converted target multimedia data.
Further, the augmented reality processing rule may be preset. In a specific processing mode, the convergence edge cloud server performs the computation involved in AR applications: animation and video-frame rendering, processing image information rapidly while ensuring a high-quality rendering effect; accurate recognition, comparison, and matching of scenes, objects, voice, and images (more accurate image and voice recognition algorithms, which demand more computing power and storage resources than a terminal offers, can be deployed in the convergence-layer edge cloud; algorithms based on Artificial Intelligence (AI) and Machine Learning (ML) process the AR client's input and output information and free up terminal computing power); construction of planar or 3D virtual object models in the cloud and their real-time combination with detected real-world data; and AR game background operations. Specifically, the digital information of a given geographic scene is classified, and the different dimensions of information related to it, such as historical and cultural information, dining and food information, shopping and entertainment, and parking and traffic information, can be displayed according to different users' preference settings; the result is the converted target multimedia data, meeting different users' personalized needs. For example, before a food enthusiast arrives at a large shopping mall, the AR application can visually present the shops on each floor according to the acquired user orientation information, or switch to presenting each floor's dining and entertainment options, so that the user can locate a target quickly and accurately, saving time and effort.
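The preference-based selection described above can be illustrated with a minimal sketch. All dimension names and data here are invented for illustration and do not come from the patent:

```python
# Hypothetical classified digital information for one geographic scene,
# keyed by dimension (dining, shopping, parking, ...).
SCENE_INFO = {
    "mall-building": {
        "dining": ["3F food court", "5F hotpot"],
        "shopping": ["1F cosmetics", "2F apparel"],
        "parking": ["B1 garage: 40 spaces free"],
    }
}

def converted_target_data(scene: str, preference: str) -> list:
    """Select the dimension of a scene's digital information that matches
    the user's preference setting; the selection stands in for the
    preset augmented reality processing rule."""
    return SCENE_INFO.get(scene, {}).get(preference, [])
```

A client reporting the scene "mall-building" with the preference "dining" would receive only the dining entries, matching the food-enthusiast example above.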
S204: sending the converted target multimedia data to the client for display.
In this embodiment, after the converted target multimedia data is obtained, it may be sent to the corresponding client for display, where the client may be an AR terminal such as a smartphone, smart glasses, or a head-mounted display. The AR terminal client interacts in real time with the AR server of the convergence edge cloud over 5G or Wi-Fi to obtain AR services, which greatly reduces the hardware required on the terminal side (for example, the terminal-side server can be eliminated and the terminal's storage configuration reduced). No cable connection to a local server is needed, so users can use AR services in indoor and outdoor mobile scenarios, improving the flexibility of AR usage scenarios.
In addition, the client can upload the real-scene information, user interaction information, and geographic position information of its location to the convergence edge cloud server over a low-latency, low-jitter network (5G, fixed broadband, or a private line; broadband and private lines can be converted to Wi-Fi). The edge cloud server, with its greater computing power and storage resources, rapidly identifies the real scene and quickly delivers the digital information matching the real scene, the user interaction, and the geographic position to the client for display, enabling real-time information interaction between the edge cloud server of the convergence edge cloud and clients in the real scene. Because the convergence edge cloud architecture carries fixed and mobile traffic uniformly, only transmission access equipment and convergence equipment lie between the client and the convergence edge cloud server, greatly reducing the number of transmission hops in the uplink and downlink directions and lowering end-to-end latency and jitter. This provides a strong cloud-network foundation for real-time AR interaction, improves the user experience, and gives solid cloud-network architectural support to clients' social, entertainment, information-acquisition, and other applications in urban spaces covered by 5G or Wi-Fi.
Under the new system architecture, the client can upload captured input information (via touch-screen interaction, gesture interaction, eye-tracking interaction, voice interaction, geographic-position interaction, and so on) to the edge cloud server over the low-latency, low-jitter network, and the cloud computing result is returned to the client with equally low latency and jitter, ensuring a friendly user experience.
After this scheme is adopted, a multimedia data interaction request containing position information sent by a client is received first, and whether the client is within the region governed by the edge cloud server is judged based on the position information. If so, the corresponding target multimedia data is determined from the pre-stored partitioned multimedia data according to the request and converted according to the preset augmented reality processing rule to obtain the converted target multimedia data. Because the edge cloud server pre-stores the corresponding partitioned multimedia data, determines the target multimedia data from it when the client is confirmed to be within the governed region, and performs the conversion itself, the client need not download the multimedia data in advance; this reduces the extent to which network transmission speed limits AR applications and improves the user experience.
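The flow of S201 to S204 can be condensed into a minimal sketch. All class, function, and data names here are illustrative assumptions, not terms from the patent; the AR conversion is reduced to a placeholder:

```python
from dataclasses import dataclass

@dataclass
class InteractionRequest:
    client_id: str
    client_ip: str
    location: str  # position information carried by the request (S201)

class EdgeCloudServer:
    def __init__(self, region_ip_pool, partition_media):
        self.region_ip_pool = set(region_ip_pool)  # IPs governed by this edge server
        self.partition_media = partition_media     # pre-stored partitioned multimedia data

    def governs(self, request):
        # S202: judge whether the client falls within this server's governed region
        return request.client_ip in self.region_ip_pool

    def handle(self, request):
        if not self.governs(request):
            return None  # in practice, forwarded to the central cloud server instead
        # S203: determine the target data from the pre-stored partition
        target = self.partition_media.get(request.location)
        # S203 (cont.): apply the preset augmented reality processing rule
        return self.ar_convert(target)  # S204 would then send this to the client

    @staticmethod
    def ar_convert(data):
        # Placeholder for the AR processing rule (rendering, matching, etc.)
        return None if data is None else f"AR({data})"

server = EdgeCloudServer(
    region_ip_pool=["10.0.0.5"],
    partition_media={"mall-3F": "floor-3 shop directory"},
)
result = server.handle(InteractionRequest("c1", "10.0.0.5", "mall-3F"))
```

A request from an IP outside the pool returns `None` here, standing in for the forwarding-to-central-cloud branch described above.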
Based on the method of fig. 2, the present specification also provides some specific embodiments of the method, which are described below.
In another embodiment, the edge cloud server includes an edge cloud server identifier, and before receiving the multimedia data interaction request sent by the client, the method further includes:
and determining a target jurisdiction area corresponding to the target edge cloud server identification based on the pre-stored jurisdiction area and identification corresponding relation.
And acquiring the partitioned multimedia data corresponding to the target jurisdiction area from the central cloud server, and storing the partitioned multimedia data.
In this embodiment, each edge cloud server may include an edge cloud server identifier, and when acquiring corresponding partition multimedia data from the central cloud server, the target jurisdiction corresponding to the target edge cloud server identifier may be determined based on a pre-stored jurisdiction and identifier correspondence, and then the partition multimedia data corresponding to the target jurisdiction may be acquired from the central cloud server and stored.
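The pre-storage step can be sketched as a lookup followed by a fetch. The identifiers, area names, and the in-memory stand-in for the central cloud store are all illustrative assumptions:

```python
# Pre-stored correspondence between jurisdiction areas and identifiers.
JURISDICTION_BY_ID = {
    "edge-001": "district-A",
    "edge-002": "district-B",
}

# Stand-in for the central cloud server's full content store.
CENTRAL_CLOUD_STORE = {
    "district-A": ["panorama-A", "shops-A"],
    "district-B": ["panorama-B", "shops-B"],
}

def prefetch_partition(edge_server_id: str) -> list:
    """Determine the target jurisdiction area from the edge cloud server
    identifier, then acquire and return that area's partitioned
    multimedia data from the central cloud."""
    area = JURISDICTION_BY_ID[edge_server_id]
    return list(CENTRAL_CLOUD_STORE.get(area, []))

local_cache = prefetch_partition("edge-001")  # stored locally on the edge server
```

In a real deployment the fetch would be a network call to the central cloud server rather than a dictionary lookup.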
In addition, in another embodiment, the client may include a first client and a second client, and the sending the converted target multimedia data to the client for displaying may specifically include:
and sending the converted first target multimedia data to the first client for display.
After the sending the converted first target multimedia data to the first client for displaying, the method may further include:
and judging whether the second client is in the range of the area governed by the edge cloud server or not based on the position information of the second client.
And if so, acquiring the converted second target multimedia data corresponding to the second client, sending the second target multimedia data to the first client for displaying, and sending the converted first target multimedia data corresponding to the first client to the second client for displaying.
Further, the method may further include:
if not, the converted first target multimedia data corresponding to the first client side is sent to a central cloud server, so that the central cloud server sends the converted first target multimedia data corresponding to the first client side to the second client side for displaying, acquires the converted second target multimedia data corresponding to the second client side, and sends the converted second target multimedia data to the first client side for displaying.
In this embodiment, when AR information is exchanged between different clients within the convergence area covered by the same convergence edge cloud, the edge cloud server on that convergence edge cloud may provide the service. For example, in an AR game scene, a first client may see the game scene and progress of a second client beside it; that is, the game information of the second client is sent to the first client through the edge cloud server. AR information interaction between clients under different convergence edge clouds may instead be provided by the central cloud server; for example, the first client may send the geographic reality information and digital information scene of area A, through the central cloud server, to a second client under area B for service interaction.
In addition, the local convergence edge cloud server may periodically synchronize related application process data upward to the central cloud server for storage. After an AR application service is completed, the resources of the convergence edge cloud server can be released in time according to service requirements, freeing the convergence edge cloud resources to serve other services. Through this cloud-edge cooperation mechanism between the central cloud and the edge cloud, the convergence edge cloud is guaranteed to provide low-latency AR application services while its resources are reasonably utilized and released in time, achieving efficient, complementary cloud-edge cooperation.
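The sync-then-release life cycle described above can be sketched as a small class. This is an illustrative sketch only: the class and method names, and the dictionaries standing in for the edge process store and the central cloud store, are assumptions.

```python
# Illustrative sketch of the cloud-edge cooperation mechanism: the convergence
# edge cloud periodically synchronizes application process data upward to the
# central cloud, and clears its own resources once the AR service completes.

class ConvergenceEdgeCloud:
    def __init__(self):
        self.process_data = {}     # in-flight AR application process data
        self.central_store = {}    # stand-in for the central cloud server's storage

    def sync_upward(self):
        """Timed upward synchronization of process data to the central cloud."""
        self.central_store.update(self.process_data)

    def finish_service(self):
        """On service completion: sync once more, then release edge resources."""
        self.sync_upward()
        self.process_data.clear()  # free convergence edge cloud resources

edge = ConvergenceEdgeCloud()
edge.process_data["session-1"] = {"progress": 0.8}
edge.finish_service()
print(edge.process_data, edge.central_store)
# {} {'session-1': {'progress': 0.8}}
```

The final sync before clearing is the step that makes the release safe: nothing is lost at the edge because the central cloud holds the durable copy.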
Fig. 3 is a schematic flowchart of a multimedia data interaction method according to another embodiment of the present invention, where the method of this embodiment may be executed by a central cloud server. As shown in fig. 3, the method of this embodiment may include:
s301: and determining the corresponding relation between the jurisdiction area and the identification.
S302: and determining the partition multimedia data corresponding to each target jurisdiction area based on the corresponding relation between the jurisdiction areas and the identifiers.
S303: and sending the partition multimedia data corresponding to each target jurisdiction area to the corresponding target edge cloud server so that each target edge cloud server stores the corresponding partition multimedia data in advance.
In this embodiment, before sending each piece of partitioned multimedia data to the corresponding edge cloud server, the central cloud server may determine the correspondence between each jurisdiction area and an edge cloud server identifier, determine the partitioned multimedia data corresponding to each target jurisdiction area based on that correspondence, and send it to the corresponding target edge cloud server. There is no need to send all the multimedia data to every edge cloud server, which improves the transmission efficiency of the multimedia data.
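Steps S301-S303 can be sketched from the central cloud server's side as follows. The function and variable names are illustrative assumptions; the point is only that each edge server receives its own jurisdiction's partition rather than the global data set.

```python
# Illustrative sketch of S301-S303: determine the jurisdiction-area/identifier
# correspondence, pick each target area's partition, and deliver it only to
# the matching target edge cloud server.

def distribute_partitions(area_by_server: dict, data_by_area: dict) -> dict:
    """Return {edge_server_id: partition}; each server gets only the
    partitioned multimedia data of its own target jurisdiction area."""
    deliveries = {}
    for server_id, area in area_by_server.items():       # S301/S302: resolve correspondence
        deliveries[server_id] = data_by_area.get(area, [])  # S303: send matching partition
    return deliveries

deliveries = distribute_partitions(
    {"edge-001": "area-A", "edge-002": "area-B"},
    {"area-A": ["scene-A1"], "area-B": ["scene-B1", "scene-B2"]},
)
print(deliveries["edge-002"])  # ['scene-B1', 'scene-B2']
```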
Furthermore, the central cloud server is deployed on the central cloud and permanently stores the full information and all user data of the AR application within its life cycle. It can be divided into different sub-modules, numbered according to the convergence areas covered by the different convergence edge clouds. For example, if a city has 30 convergence areas, they may be numbered module 1 to module 30, with each module storing the digital scene service information of the corresponding convergence area. Meanwhile, to avoid a boundary effect, the information of each module may include information of the boundary region; that is, the digital scene service information stored by the sub-modules of adjacent convergence areas overlaps appropriately. The user data includes user account information and personalized user data, such as the user's interests, hobbies, and occupational characteristics, and the currently provided information content can be flexibly configured and modified according to the user's needs, such as switching from catering and food information to entertainment and shopping information. When the client moves, the convergence edge cloud server of the area where the client is located provides the service: when the client is in convergence area A served by edge cloud A, edge cloud A provides the service, and when the AR client moves to convergence area B served by edge cloud B, edge cloud B provides the service.
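The numbered sub-modules with overlapping boundary regions can be sketched in one dimension as follows. The strip layout, widths, and overlap margin are illustrative assumptions chosen only to show how adjacent modules share a boundary region and thereby avoid the boundary effect.

```python
# Illustrative sketch: split a city's strip of convergence areas into numbered
# modules, each widened by an overlap margin so adjacent modules share their
# boundary region's digital scene service information.

def build_modules(num_areas: int, area_width: float, overlap: float) -> dict:
    """Return {module_number: (start, end)} covering [0, num_areas*area_width],
    with each module extended by `overlap` on both sides where possible."""
    modules = {}
    for n in range(1, num_areas + 1):
        start = max(0.0, (n - 1) * area_width - overlap)
        end = min(num_areas * area_width, n * area_width + overlap)
        modules[n] = (start, end)
    return modules

mods = build_modules(num_areas=30, area_width=10.0, overlap=1.0)
print(mods[1], mods[2])  # (0.0, 11.0) (9.0, 21.0) -> modules 1 and 2 overlap on [9, 11]
```

A client near the seam at position 10 falls inside both module 1 and module 2, so whichever module serves it has the needed boundary information.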
Specifically, relying on a convergence edge cloud architecture based on integrated fixed-mobile bearing, and in cooperation with the AR service scheduling server, the AR service scheduling server uses the IP address pools of fixed and mobile terminals to schedule the data stream of the AR application to the AR server of the convergence edge cloud nearest to the accessing user, so as to provide AR services matched to the user's real scene. As a result, the central cloud server only needs to download the AR application data of a region to the corresponding convergence edge cloud server, rather than global data, saving a large amount of storage resources and bandwidth on the convergence edge cloud.
Based on the same idea, an embodiment of this specification further provides a device corresponding to the method above. Fig. 4 is a schematic structural diagram of a multimedia data interaction device provided in an embodiment of the present invention; as shown in fig. 4, the device provided in this embodiment may include:
the receiving module 401 is configured to receive a multimedia data interaction request sent by a client, where the multimedia data interaction request carries location information.
A first processing module 402, configured to determine, based on the location information, whether the client is in a range governed by the edge cloud server.
In this embodiment, the first processing module 402 is further configured to:
a pool of target IP addresses is determined based on the location information.
And judging whether the IP address of the client is in the target IP address pool or not.
And if so, determining that the client is in the area range governed by the edge cloud server.
The first processing module 402 is further configured to: if the client is in the area range, determine corresponding target multimedia data from pre-stored partitioned multimedia data according to the multimedia data interaction request, and perform conversion processing on the target multimedia data according to a preset augmented reality processing rule to obtain the converted target multimedia data.
The first processing module 402 is further configured to send the converted target multimedia data to the client for displaying.
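The IP-pool membership test performed by the first processing module can be sketched with Python's standard `ipaddress` module. The mapping from location to pool and the pool contents are illustrative assumptions; only the shape of the check (determine the target pool from location information, then test the client IP for membership) comes from the description above.

```python
# Illustrative sketch: the client is within the edge cloud server's governed
# area iff its IP address falls inside the target IP address pool determined
# from the location information carried in the interaction request.
import ipaddress

TARGET_POOLS = {  # location -> IP address pool of the governing edge cloud
    "area-A": ipaddress.ip_network("10.1.0.0/16"),
    "area-B": ipaddress.ip_network("10.2.0.0/16"),
}

def client_in_jurisdiction(location: str, client_ip: str) -> bool:
    pool = TARGET_POOLS.get(location)               # determine the target IP address pool
    if pool is None:
        return False
    return ipaddress.ip_address(client_ip) in pool  # membership test

print(client_in_jurisdiction("area-A", "10.1.3.7"))  # True
print(client_in_jurisdiction("area-A", "10.2.3.7"))  # False
```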
In another embodiment, the edge cloud server includes an edge cloud server identifier, and the first processing module 402 is further configured to:
and determining a target jurisdiction area corresponding to the target edge cloud server identification based on the pre-stored jurisdiction area and identification corresponding relation.
And acquiring the partitioned multimedia data corresponding to the target jurisdiction area from the central cloud server, and storing the partitioned multimedia data.
In another embodiment, the clients include a first client and a second client, and the first processing module 402 is further configured to:
and sending the converted first target multimedia data to the first client for display.
And judging whether the second client is in the range of the area governed by the edge cloud server or not based on the position information of the second client.
And if so, acquiring the converted second target multimedia data corresponding to the second client, sending the converted second target multimedia data to the first client for displaying, and sending the converted first target multimedia data corresponding to the first client to the second client for displaying.
In another embodiment, the first processing module 402 is further configured to:
if not, the converted first target multimedia data corresponding to the first client side is sent to a central cloud server, so that the central cloud server sends the converted first target multimedia data corresponding to the first client side to the second client side for displaying, acquires the converted second target multimedia data corresponding to the second client side, and sends the converted second target multimedia data to the first client side for displaying.
In addition, in another embodiment, the multimedia data interaction apparatus may further include:
and the second processing module is used for determining the corresponding relation between the jurisdiction area and the identifier.
The second processing module is further configured to determine partition multimedia data corresponding to each target jurisdiction based on the corresponding relationship between the jurisdiction and the identifier.
The second processing module is further configured to send the partition multimedia data corresponding to each target jurisdiction to the corresponding target edge cloud server, so that each target edge cloud server stores the corresponding partition multimedia data in advance.
The apparatus provided in the embodiment of the present invention may implement the method in the embodiment shown in fig. 2, and the implementation principle and the technical effect are similar, which are not described herein again.
In addition, in another embodiment, a multimedia data interaction system is provided, which specifically includes: the system comprises an edge cloud server, a client and a central cloud server.
The edge cloud server is configured to perform the method according to any one of the first aspect, the center cloud server is configured to perform the method according to the second aspect, and the client is configured to display the converted target multimedia data.
An embodiment of the present invention also provides a computer-readable storage medium, in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the multimedia data interaction method of the foregoing method embodiments is implemented.
An embodiment of the present invention further provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the method for multimedia data interaction as described above is implemented.
The computer-readable storage medium may be implemented by any type of volatile or non-volatile storage device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the apparatus.
Those of ordinary skill in the art will understand that all or a portion of the steps of the above method embodiments may be implemented by hardware controlled by program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A multimedia data interaction method, applied to an edge cloud server, comprising:
receiving a multimedia data interaction request sent by a client, wherein the multimedia data interaction request carries position information;
judging whether the client is in a region range governed by the edge cloud server or not based on the position information;
if yes, determining corresponding target multimedia data from prestored partitioned multimedia data according to the multimedia data interaction request, and performing conversion processing on the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data;
and sending the converted target multimedia data to the client for display.
2. The method according to claim 1, wherein the determining whether the client is in a region under the jurisdiction of the edge cloud server based on the location information comprises:
determining a pool of target IP addresses based on the location information;
judging whether the IP address of the client is in the target IP address pool or not;
and if so, determining that the client is in the range of the area governed by the edge cloud server.
3. The method of claim 1, wherein the edge cloud server contains an edge cloud server identifier, and before the receiving the multimedia data interaction request sent by the client, the method further comprises:
determining a target jurisdiction area corresponding to the target edge cloud server identification based on the pre-stored jurisdiction area and identification corresponding relation;
and acquiring the partitioned multimedia data corresponding to the target jurisdiction area from the central cloud server, and storing the partitioned multimedia data.
4. The method according to any one of claims 1-3, wherein the client comprises a first client and a second client, and the sending the converted target multimedia data to the client for display comprises:
sending the converted first target multimedia data to the first client for display;
after the sending the converted first target multimedia data to the first client for displaying, the method further includes:
judging whether the second client is in the range of the area governed by the edge cloud server or not based on the position information of the second client;
and if so, acquiring the converted second target multimedia data corresponding to the second client, sending the second target multimedia data to the first client for displaying, and sending the converted first target multimedia data corresponding to the first client to the second client for displaying.
5. The method of claim 4, further comprising:
if not, the converted first target multimedia data corresponding to the first client side is sent to a central cloud server, so that the central cloud server sends the converted first target multimedia data corresponding to the first client side to the second client side for displaying, acquires the converted second target multimedia data corresponding to the second client side, and sends the converted second target multimedia data to the first client side for displaying.
6. A multimedia data interaction method, applied to a central cloud server, comprising:
determining the corresponding relation between the jurisdiction area and the identification;
determining the partition multimedia data corresponding to each target jurisdiction area based on the corresponding relation between the jurisdiction areas and the identifiers;
and sending the partition multimedia data corresponding to each target jurisdiction area to the corresponding target edge cloud server so that each target edge cloud server stores the corresponding partition multimedia data in advance.
7. A multimedia data interaction apparatus, comprising:
a receiving module, configured to receive a multimedia data interaction request sent by a client, wherein the multimedia data interaction request carries position information;
the first processing module is used for judging whether the client side is in a range of a region governed by the edge cloud server or not based on the position information;
the first processing module is further configured to: if the client is in the area range governed by the edge cloud server, determine corresponding target multimedia data from pre-stored partitioned multimedia data according to the multimedia data interaction request, and perform conversion processing on the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data;
the first processing module is further configured to send the converted target multimedia data to the client for display.
8. A multimedia data interaction apparatus, comprising:
the second processing module is used for determining the corresponding relation between the jurisdiction area and the identification;
the second processing module is further configured to determine partition multimedia data corresponding to each target jurisdiction based on the correspondence between the jurisdiction and the identifier;
the second processing module is further configured to send the partition multimedia data corresponding to each target jurisdiction to the corresponding target edge cloud server, so that each target edge cloud server stores the corresponding partition multimedia data in advance.
9. A multimedia data interactive system, comprising: the system comprises an edge cloud server, a client and a central cloud server;
wherein the edge cloud server is used for executing the method of any one of claims 1-5, the central cloud server is used for executing the method of claim 6, and the client is used for displaying the converted target multimedia data.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and when executed by a processor, the computer-executable instructions implement the multimedia data interaction method according to any one of claims 1 to 5 or 6.
CN202110300494.5A 2021-03-22 2021-03-22 Multimedia data interaction method, device and system Active CN115190120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110300494.5A CN115190120B (en) 2021-03-22 2021-03-22 Multimedia data interaction method, device and system

Publications (2)

Publication Number Publication Date
CN115190120A true CN115190120A (en) 2022-10-14
CN115190120B CN115190120B (en) 2024-03-01


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107222468A (en) * 2017-05-22 2017-09-29 北京邮电大学 Augmented reality processing method, terminal, cloud server and edge server
CN109981523A (en) * 2017-12-27 2019-07-05 中国移动通信集团云南有限公司 A kind of multimedia file processing method, device and cloud computing platform
CN110290506A (en) * 2019-04-17 2019-09-27 中国联合网络通信集团有限公司 A kind of edge cloud motion management method and equipment
CN110989825A (en) * 2019-09-10 2020-04-10 中兴通讯股份有限公司 Augmented reality interaction implementation method and system, augmented reality device and storage medium
CN111612933A (en) * 2020-05-18 2020-09-01 上海齐网网络科技有限公司 Augmented reality intelligent inspection system based on edge cloud server
US10771569B1 (en) * 2019-12-13 2020-09-08 Industrial Technology Research Institute Network communication control method of multiple edge clouds and edge computing system
CN111738281A (en) * 2020-08-05 2020-10-02 鹏城实验室 Simultaneous positioning and mapping system, map soft switching method and storage medium thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant