CN115190120B - Multimedia data interaction method, device and system - Google Patents

Multimedia data interaction method, device and system

Info

Publication number
CN115190120B
CN115190120B
Authority
CN
China
Prior art keywords
multimedia data
client
target
cloud server
edge cloud
Prior art date
Legal status
Active
Application number
CN202110300494.5A
Other languages
Chinese (zh)
Other versions
CN115190120A
Inventor
Yang Zhendong (杨振东)
Current Assignee
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd
Priority to CN202110300494.5A
Publication of CN115190120A
Application granted
Publication of CN115190120B
Legal status: Active
Anticipated expiration


Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04L: Transmission of digital information, e.g. telegraphic communication
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An embodiment of the invention provides a multimedia data interaction method, device, and system. The method is applied to an edge cloud server and comprises the following steps: receiving a multimedia data interaction request sent by a client, wherein the request carries location information; judging, based on the location information, whether the client is within the area governed by the edge cloud server; if so, determining corresponding target multimedia data from pre-stored partitioned multimedia data according to the request, and converting the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data; and sending the converted target multimedia data to the client for display. The embodiment reduces the extent to which network transmission speed limits the application of AR technology, thereby improving the user experience.

Description

Multimedia data interaction method, device and system
Technical Field
Embodiments of the invention relate to the field of data processing, and in particular to a multimedia data interaction method, device, and system.
Background
With the development of Internet technology, AR applications have become increasingly widespread. AR is a technology that superimposes computer-generated virtual objects, scenes, system prompts, and animations onto a real scene, thereby augmenting reality. That is, AR can overlay computer-generated information on the real world in real time and let it interact with the user, improving the user experience.
In the prior art, the limited speed of communication networks is an obstacle to the application of AR technology. Specifically, existing application scenarios that use AR technology to enhance the experience require the client to download application data in advance. However, owing to limits on network transmission speed and on the storage capacity of the user terminal, the augmentation effect finally presented may fall short of the user's expectations, degrading the user experience.
Disclosure of Invention
Embodiments of the invention provide a multimedia data interaction method, device, and system that improve the presented augmentation effect and thereby the user experience.
In a first aspect, an embodiment of the present invention provides a multimedia data interaction method, applied to an edge cloud server, including:
receiving a multimedia data interaction request sent by a client, wherein the multimedia data interaction request carries location information;
judging, based on the location information, whether the client is within the area governed by the edge cloud server;
if so, determining corresponding target multimedia data from pre-stored partitioned multimedia data according to the multimedia data interaction request, and converting the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data;
and sending the converted target multimedia data to the client for display.
Optionally, the judging, based on the location information, whether the client is within the area governed by the edge cloud server includes:
determining a target IP address pool based on the location information;
judging whether the IP address of the client is in the target IP address pool;
if so, determining that the client is within the area governed by the edge cloud server.
Optionally, the edge cloud server includes an edge cloud server identifier, and before the receiving of the multimedia data interaction request sent by the client, the method further includes:
determining the target jurisdiction corresponding to the target edge cloud server identifier based on a pre-stored correspondence between jurisdictions and identifiers;
and obtaining the partitioned multimedia data corresponding to the target jurisdiction from the central cloud server and storing it.
Optionally, the client includes a first client and a second client, and the sending the converted target multimedia data to the client for display includes:
the converted first target multimedia data is sent to the first client for display;
after the converted first target multimedia data is sent to the first client for display, the method further comprises:
judging whether the second client is in the range of the area governed by the edge cloud server or not based on the position information of the second client;
if so, obtaining the converted second target multimedia data corresponding to the second client and sending it to the first client for display, and sending the converted first target multimedia data corresponding to the first client to the second client for display.
Optionally, the method further comprises:
if not, sending the converted first target multimedia data corresponding to the first client to a central cloud server, so that the central cloud server sends it to the second client for display, obtains the converted second target multimedia data corresponding to the second client, and sends that data to the first client for display.
In a second aspect, an embodiment of the present invention provides a multimedia data interaction method, applied to a central cloud server, including:
determining the correspondence between jurisdictions and identifiers;
determining the partitioned multimedia data corresponding to each target jurisdiction based on the correspondence between the jurisdiction and the identifier;
and sending the partitioned multimedia data corresponding to each target jurisdiction to the corresponding target edge cloud server so that each target edge cloud server stores the corresponding partitioned multimedia data in advance.
In a third aspect, an embodiment of the present invention provides a multimedia data interaction device, including:
the receiving module is used for receiving a multimedia data interaction request sent by the client, wherein the multimedia data interaction request carries position information;
the first processing module is used for judging whether the client is in the range of the area governed by the edge cloud server or not based on the position information;
the first processing module is further configured to determine corresponding target multimedia data from pre-stored partitioned multimedia data according to the multimedia data interaction request if yes, and perform conversion processing on the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data;
the first processing module is further configured to send the converted target multimedia data to the client for display.
In a fourth aspect, an embodiment of the present invention provides a multimedia data interaction device, including:
the second processing module is used for determining the correspondence between jurisdictions and identifiers;
the second processing module is further used for determining the partitioned multimedia data corresponding to each target jurisdiction based on the correspondence between the jurisdiction and the identifier;
the second processing module is further configured to send the partitioned multimedia data corresponding to each target jurisdiction area to a corresponding target edge cloud server, so that each target edge cloud server stores the corresponding partitioned multimedia data in advance.
In a fifth aspect, an embodiment of the present invention provides a multimedia data interaction system, including: an edge cloud server, a client, and a central cloud server;
wherein the edge cloud server is configured to perform the method according to any one of the first aspect, the central cloud server is configured to perform the method according to the second aspect, and the client is configured to display the converted target multimedia data.
In a sixth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the multimedia data interaction method according to any one of the first aspect.
In a seventh aspect, embodiments of the present invention provide a computer program product comprising a computer program which, when executed by a processor, implements the multimedia data interaction method according to the first aspect and its various possible designs.
With this scheme, a multimedia data interaction request containing location information sent by the client is first received, and whether the client is within the area governed by the edge cloud server is then judged based on the location information. If so, the corresponding target multimedia data is determined from the pre-stored partitioned multimedia data according to the request and converted according to the preset augmented reality processing rule to obtain the converted target multimedia data. Because the edge cloud server stores the corresponding partitioned multimedia data in advance, determines the target multimedia data from it once the client is found to be within its governed area, and completes the conversion itself, the client does not need to download the multimedia data beforehand. This reduces the extent to which network transmission speed limits the application of AR technology, and thereby improves the user experience.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic architecture diagram of an application system of a multimedia data interaction method according to an embodiment of the present invention;
fig. 2 is a flow chart of a multimedia data interaction method according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a multimedia data interaction method according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of a multimedia data interaction device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be capable of including other sequential examples in addition to those illustrated or described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the prior art, AR (Augmented Reality) technology computes the position and angle of a camera image in real time and combines it with corresponding virtual images; it is a new technology that "seamlessly" integrates real-world and virtual-world information, with the goal of overlaying the virtual world on the real world on a screen and enabling interaction. Augmented reality not only adds images to the real environment in real time, but also modifies them to follow the rotation of the user's head and eyes so that the images always remain within the user's field of view. It is widely applied in the fields of marketing, games, social networking, film, home, travel, education, military, medical treatment, engineering, and transportation. Because AR places high demands on real-time performance, it also places demands on network transmission speed. In particular, existing application scenarios that use AR to enhance the experience require the client to download application data in advance; however, owing to limits on network transmission speed and on terminal storage capacity, the augmentation effect finally presented may fall short of the user's needs, degrading the user experience.
To address these problems, the present application has the edge cloud server store the corresponding partitioned multimedia data in advance; when the client is determined to be within the area governed by the edge cloud server, the target multimedia data is determined from the pre-stored partitioned data and the conversion is completed on the edge cloud server side. The client therefore does not need to download the multimedia data in advance, the limitation of network transmission speed on AR applications is reduced, and the user experience is improved.
Fig. 1 is a schematic architecture diagram of an application system for a multimedia data interaction method according to an embodiment of the present invention. As shown in Fig. 1, the application system includes an edge cloud server, a central cloud server, and a client, wherein the edge cloud server is deployed in the edge cloud and the central cloud server is deployed in the central cloud. In addition, the edge cloud may further include UP (User Plane) and MSG-U (Multi-Service access Gateway, User plane: the forwarding plane of a multi-service access gateway) functions, deployed at edge nodes; all mobile-network and fixed-network services are accessed through the MSG-U at the convergence node. The operator deploys a lightweight edge cloud at the convergence node and opens its IaaS + PaaS capabilities to third parties, which deploy various AR applications on the convergence-layer edge cloud; multiple tenants share the cloud resources, mobile-network and fixed-network traffic is offloaded at the edge cloud, and third-party application content is accessed nearby. The operator's convergence-layer edge cloud management platform comprises management nodes and computing nodes: the management nodes are deployed in the core cloud and manage the computing nodes across the whole province, while the computing nodes are deployed in the convergence-layer edge cloud and carry the lightweight edge cloud services of the various upper-layer applications.
There may be one or more convergence edge clouds, one or more clients communicating within each convergence edge cloud, and a single central cloud.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Fig. 2 is a flow chart of a multimedia data interaction method according to an embodiment of the present invention, where the method of the present embodiment may be executed by an edge cloud server. As shown in fig. 2, the method of the present embodiment may include:
s201: and receiving a multimedia data interaction request sent by the client, wherein the multimedia data interaction request carries the position information.
In this embodiment, when a user experiences augmented reality through a client (for example, in an AR game or virtual positioning), the client may send a multimedia data interaction request to the edge cloud server to acquire multimedia data.
Further, the convergence edge cloud server is deployed on the convergence edge cloud and temporarily stores the multimedia data of AR applications as needed. This data may include regional scene information and part of the user data. Specifically, the central cloud server stores all the details of the AR application (such as a panorama). When needed, the application content for the geographic region (such as service information for the corresponding sub-modules: buildings, shops, restaurants, traffic, landmark buildings, and detailed map information tied to geographic identifiers) and the related user content (such as the user's own application information, personalized settings, or game progress) are downloaded to the local convergence edge cloud server, which then provides AR services to local users.
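The per-region data described above can be sketched as a small data structure. This is an illustrative model only; the class and field names are assumptions, not terms from the patent. It combines regional scene information with one user's personalized content, which is the split the description attributes to partitioned multimedia data:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "partitioned multimedia data" a convergence
# edge cloud server caches: regional scene info plus part of the user data.
@dataclass
class PartitionedMultimediaData:
    region_id: str                                    # jurisdiction this edge cloud serves
    scene_info: dict = field(default_factory=dict)    # buildings, shops, maps, landmarks
    user_data: dict = field(default_factory=dict)     # per-user settings, game progress

    def for_user(self, user_id: str) -> dict:
        """Combine the regional scene with one user's personalized content."""
        return {"scene": self.scene_info, "user": self.user_data.get(user_id, {})}
```

A user with no stored content simply receives the regional scene with an empty personal section.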
S202: and judging whether the client is in the range of the area governed by the edge cloud server or not based on the position information.
In this embodiment, after receiving the multimedia data interaction request sent by the client, the edge cloud server may determine whether the client is within the range governed by the edge cloud server according to the location information carried in the multimedia data interaction request.
Further, the judging of whether the client is within the area governed by the edge cloud server based on the location information may specifically include:
determining a target IP address pool based on the location information;
judging whether the IP address of the client is in the target IP address pool; and
if so, determining that the client is within the area governed by the edge cloud server.
Specifically, IP addresses may be classified in advance according to location information, yielding an IP address pool for each location; each pool contains at least one IP address. When determining whether the client is within the area governed by the edge cloud server, it then suffices to check whether the client's IP address is in the target IP address pool corresponding to the client's location information. If it is, the client is within the governed area. If it is not, the client is outside the governed area, and the multimedia data interaction request sent by the client is forwarded to the central cloud server so that the central cloud server can determine the edge cloud server corresponding to the client.
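The jurisdiction check in S202 can be sketched as follows. This is a hedged illustration: the location keys and CIDR ranges are invented for the example, and a real deployment would load the pools from the provisioning described above rather than a hard-coded table.

```python
import ipaddress

# Made-up mapping from location information to a target IP address pool.
IP_POOLS_BY_LOCATION = {
    "district-A": ["10.1.0.0/16", "10.2.0.0/16"],
    "district-B": ["10.3.0.0/16"],
}

def client_in_jurisdiction(location: str, client_ip: str) -> bool:
    """True if the client's IP falls inside the target IP address pool
    derived from the location information it reported."""
    addr = ipaddress.ip_address(client_ip)
    pool = IP_POOLS_BY_LOCATION.get(location, [])
    return any(addr in ipaddress.ip_network(cidr) for cidr in pool)
```

A client reporting "district-A" with IP 10.1.5.9 would be served locally; a client whose IP misses every range of its pool would instead have its request forwarded to the central cloud server.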
In addition, when a client uses AR services within the same convergence area through the convergence edge cloud server accessed over 5G or fixed-network Wi-Fi, an AR service scheduling server can be configured so that the same convergence edge cloud AR server provides the service. This ensures continuity of the AR application and consistency of the user experience when the user switches between 5G and Wi-Fi signals, further expanding the usage scenarios of AR applications.
S203: if yes, determining corresponding target multimedia data from prestored partitioned multimedia data according to the multimedia data interaction request, and converting the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data.
In this embodiment, after it is determined that the client is within the area governed by the edge cloud server, the corresponding target multimedia data may be determined from the pre-stored partitioned multimedia data according to the multimedia data interaction request, and then converted according to a preset augmented reality processing rule to obtain the converted target multimedia data.
Further, the augmented reality processing rule may be preset. Concretely, the convergence edge cloud server may perform the computing operations related to the AR application, such as animation and video/image rendering, fast processing of image information while guaranteeing high-quality rendering, and accurate recognition, comparison, and matching of scenes, objects, voice, and images. Compared with the terminal, the convergence edge cloud can host more accurate image and voice recognition algorithms that demand greater computing power and storage resources; using artificial intelligence (AI) and machine learning (ML) algorithms, it processes the input and output information of the AR client and frees up the terminal's computing power. It can also construct planar or 3D virtual object models in the cloud, combine them in real time with detected real-world data, and run the back-end of AR games. Specifically, the digitized information of a given geographic scene is classified, and different dimensions of information, such as historical and cultural information, dining information, shopping and entertainment, or parking and traffic information, can be displayed according to each user's preference settings; this is the converted target multimedia data, which meets the personalized needs of different users.
For example, when a food lover arrives at a shopping-mall building, the AR application may, according to the acquired user position and orientation information, intuitively present the shops on each floor of the building to the user, or switch to presenting each floor's dining and entertainment options, so that the user can quickly and accurately locate a target, saving time and effort.
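A minimal sketch of S203 follows, assuming the "augmented reality processing rule" is the preference-based filtering just described: pick the requested scene out of the pre-stored partition and keep only the information dimension the user prefers. All names and data are illustrative, not from the patent.

```python
# Made-up partitioned multimedia data for one scene, classified by dimension.
PARTITION = {
    "mall-01": {
        "dining": ["floor 3: noodle bar", "floor 5: hotpot"],
        "shopping": ["floor 1: bookstore", "floor 2: outdoor gear"],
    },
}

def convert_target_data(request: dict) -> list:
    """Select the target multimedia data for the requested scene and keep
    only the dimension matching the user's preference setting."""
    scene = PARTITION.get(request["scene_id"], {})
    return scene.get(request["preference"], [])
```

A dining preference yields only dining entries; an unknown preference yields an empty result that the client can fall back from.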
S204: and sending the converted target multimedia data to a client for display.
In this embodiment, after the converted target multimedia data is obtained, it may be sent to the corresponding client for display; the client may be an AR terminal such as a smartphone, smart glasses, or a head-mounted display. The convergence edge cloud architecture with integrated fixed and mobile bearing provides low-latency communication support for AR services: an AR terminal client can exchange information in real time with the convergence edge cloud AR server over 5G or Wi-Fi to obtain AR services. This greatly reduces the hardware requirements on the AR terminal side (for example, a terminal-side server is no longer needed and the terminal's storage can be reduced), no cable connection to a local server is required, and users can use AR services in indoor and outdoor mobile scenarios, improving the flexibility of AR usage scenarios.
In addition, the client can upload the real-scene information, user interaction information, and geographic location information of the user's position to the convergence edge cloud server over a low-latency, low-jitter network (5G, fixed-network broadband, or a dedicated line; broadband and dedicated lines can be converted to Wi-Fi). With its greater computing power and storage resources, the edge cloud server can rapidly recognize the real-scene information and quickly deliver digitized information matched to the real scene, the user interaction, and the geographic location back to the client for display, achieving real-time information interaction between the edge cloud server of the convergence edge cloud and the client in the real scene. From the client to the convergence edge cloud server, the fixed-mobile integrated convergence edge cloud architecture requires only transmission access equipment and convergence equipment, greatly reducing the number of transmission hops in both the uplink and downlink directions and hence the end-to-end latency and jitter. This provides a powerful cloud-network platform for real-time AR interaction, improves the user experience, and offers solid cloud-architecture support for social, entertainment, information-access, and other client applications in urban spaces covered by 5G or Wi-Fi.
Under this new architecture, the client can upload captured input information, obtained through existing touch-screen, gesture, eye-tracking, voice, or geographic-location interaction, to the edge cloud server over a low-latency, low-jitter network, and the cloud computing results are returned to the client with equally low latency and jitter, ensuring a friendly user experience.
With this scheme, a multimedia data interaction request containing location information sent by the client is first received, and whether the client is within the area governed by the edge cloud server is then judged based on the location information. If so, the corresponding target multimedia data is determined from the pre-stored partitioned multimedia data according to the request and converted according to the preset augmented reality processing rule to obtain the converted target multimedia data. Because the edge cloud server stores the corresponding partitioned multimedia data in advance, determines the target multimedia data from it once the client is found to be within its governed area, and completes the conversion itself, the client does not need to download the multimedia data in advance. This reduces the limitation of network transmission speed on AR applications and improves the user experience.
The examples of the present specification also provide some specific embodiments of the method based on the method of fig. 2, which is described below.
In another embodiment, the edge cloud server includes an edge cloud server identifier, and before receiving the multimedia data interaction request sent by the client, the method further includes:
and determining the target jurisdiction corresponding to the target edge cloud server identifier based on the pre-stored jurisdiction and identifier correspondence.
And obtaining and storing the partitioned multimedia data corresponding to the target jurisdiction from the central cloud server.
In this embodiment, each edge cloud server may include an edge cloud server identifier. When the corresponding partitioned multimedia data is to be acquired from the central cloud server, the target jurisdiction corresponding to the target edge cloud server identifier is determined based on the pre-stored correspondence between jurisdictions and identifiers, and the partitioned multimedia data for that jurisdiction is then obtained from the central cloud server and stored. The central cloud server therefore only needs to send each partition's multimedia data to the corresponding edge cloud server, and no edge cloud server has to store all the multimedia data, saving storage resources and bandwidth in the convergence edge cloud.
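The pre-loading step above can be sketched as follows. The identifier-to-jurisdiction table and the keyed central-cloud store are stand-ins invented for the example; they model the patent's correspondence lookup and central-cloud transfer, not any real API.

```python
# Assumed correspondence between edge cloud server identifiers and jurisdictions.
JURISDICTION_BY_SERVER_ID = {"edge-001": "district-A", "edge-002": "district-B"}

# Stand-in for the central cloud server's full multimedia data set.
CENTRAL_CLOUD_PARTITIONS = {
    "district-A": {"scene": "map of district A"},
    "district-B": {"scene": "map of district B"},
}

def preload_partition(edge_server_id: str, local_cache: dict) -> None:
    """Resolve this server's jurisdiction and cache only that partition,
    so the edge cloud never stores the full multimedia data set."""
    jurisdiction = JURISDICTION_BY_SERVER_ID[edge_server_id]
    local_cache[jurisdiction] = CENTRAL_CLOUD_PARTITIONS[jurisdiction]  # simulated fetch
```

After pre-loading, requests for the governed jurisdiction are answered entirely from the local cache.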
In addition, in another embodiment, the client may include a first client and a second client; sending the converted target multimedia data to the client for display may then specifically include:
and sending the converted first target multimedia data to the first client for display.
After the converted first target multimedia data is sent to the first client for display, the method may further include:
and judging whether the second client is in the range of the area governed by the edge cloud server or not based on the position information of the second client.
If yes, the converted second target multimedia data corresponding to the second client is obtained and sent to the first client for display, and the converted first target multimedia data corresponding to the first client is sent to the second client for display.
Furthermore, the method may further comprise:
if not, the converted first target multimedia data corresponding to the first client side is sent to a central cloud server, so that the central cloud server sends the converted first target multimedia data corresponding to the first client side to the second client side for display, obtains the converted second target multimedia data corresponding to the second client side, and sends the converted second target multimedia data to the first client side for display.
In this embodiment, when AR information interaction is performed between different clients within the convergence area covered by the same converged edge cloud, the edge cloud server on that converged edge cloud may provide the service. For example, in an AR game scenario, the first client may see the game scene and progress of the second client; that is, the game information of the second client is sent to the first client through the edge cloud server. For AR information interaction between clients under different converged edge clouds, the service may be provided by the central cloud server; for example, a first client may send the real geographic information and digital scene information of area A, through the central cloud server, to a second client in area B for service interaction.
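The two routing cases above (same edge cloud versus relay through the central cloud) can be sketched as a single decision, under assumed data shapes; the field names and the `central_relay` helper are illustrative, not taken from the patent.

```python
def route_interaction(first, second, edge_area, relay_via_central):
    """Exchange converted AR data directly when both clients are in the
    area governed by this edge cloud server; otherwise hand the exchange
    to the central cloud server."""
    if second["area"] == edge_area:
        # Same converged edge cloud: swap converted target data locally.
        first["display"] = second["converted"]
        second["display"] = first["converted"]
        return "edge"
    # Different converged edge clouds: the central cloud relays both ways.
    relay_via_central(first, second)
    return "central"

def central_relay(first, second):
    """Illustrative central-cloud relay: forward each client's converted
    target multimedia data to the other client for display."""
    first["display"] = second["converted"]
    second["display"] = first["converted"]
```

The key property is that the low-latency edge path is preferred whenever both parties fall inside the same governed area.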
In addition, the local converged edge cloud server may periodically synchronize related application process data upward to the central cloud server for storage. After an AR application service is completed, the resources of the converged edge cloud server can be released in time according to service requirements, so that the converged edge cloud resources can serve other services. Through this cloud-edge cooperation mechanism between the central cloud and the edge cloud, the converged edge cloud can provide low-delay AR application services while its resources are reasonably utilized and released in time, achieving efficient and complementary cloud-edge cooperation.
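One cycle of this cooperation mechanism — upward sync, then release on completion — might look like the following sketch. The store layout and `service_done` flag are assumptions for illustration; the patent does not prescribe a data model.

```python
def sync_cycle(edge_store, central_log, service_done):
    """One cloud-edge cooperation cycle: push a snapshot of the edge
    server's process data upward for storage, then clear the local
    resources once the AR application service has finished."""
    central_log.append(dict(edge_store["process_data"]))  # periodic upward sync
    if service_done:
        edge_store["process_data"].clear()  # release converged edge resources
    return edge_store
```

In a real deployment this would run on a timer; here a single call stands in for one tick.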
Fig. 3 is a flowchart of a multimedia data interaction method according to another embodiment of the present invention, where the method of the present embodiment may be performed by a central cloud server. As shown in fig. 3, the method of the present embodiment may include:
s301: and determining the correspondence between the jurisdiction and the identification.
S302: and determining the partitioned multimedia data corresponding to each target jurisdiction based on the correspondence between the jurisdiction and the identifier.
S303: and sending the partitioned multimedia data corresponding to each target jurisdiction to the corresponding target edge cloud server so that each target edge cloud server stores the corresponding partitioned multimedia data in advance.
In this embodiment, before sending the partitioned multimedia data to the corresponding edge cloud servers, the central cloud server may determine the correspondence between each jurisdiction and the identifier of its edge cloud server, then determine the partitioned multimedia data corresponding to each target jurisdiction based on that correspondence, and send the partitioned multimedia data to the corresponding target edge cloud server. There is no need to send all the multimedia data to every edge cloud server, which improves the transmission efficiency of the multimedia data.
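Steps S301–S303 on the central-cloud side reduce to a per-jurisdiction fan-out; the sketch below assumes a dict correspondence and an injected `send` callback, neither of which is specified by the patent.

```python
def distribute_partitions(correspondence, partitions, send):
    """S301-S303 sketch: for each edge server, look up the jurisdiction
    it governs and send only that jurisdiction's partitioned multimedia
    data, instead of broadcasting all data to every server."""
    for server_id, jurisdiction in correspondence.items():
        send(server_id, partitions[jurisdiction])
```

Each edge server therefore receives a payload proportional to its own area, not to the global data set.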
Further, the central cloud server is deployed on the central cloud and is configured to permanently store the full information of the AR application and all user data over the life cycle of the AR application. It may divide the convergence areas covered by different converged edge clouds into different sub-modules and number them. For example, if a city has 30 convergence areas, they can be numbered from module 1 to module 30, and each module stores the digitized scene service information of the corresponding convergence area. To avoid boundary effects, the information of each module may include the information of the boundary zone; that is, the digitized scene service information stored in the sub-modules of adjacent convergence areas overlaps appropriately. The user data includes user account information and user personalized data, such as user interests and occupation characteristics, and the information content currently provided can be flexibly configured and modified according to user needs, for example switching from food and beverage information to entertainment and shopping information. When the client moves, the converged edge cloud server of the area where the client is located provides services nearby: when the client is in convergence area A served by edge cloud A, the service is provided by A, and when the AR client moves to convergence area B served by edge cloud B, the service is provided by B.
Specifically, with the converged edge cloud architecture based on fixed and mobile integrated bearing, the AR service scheduling server schedules the data flow of an AR client accessing the AR application, according to the IP address pools of fixed and mobile terminals, to the AR server of the converged edge cloud nearest the user, thereby providing an AR service matched with the user's real scene. Therefore, the central cloud server only needs to download the AR application data of the corresponding area to the corresponding converged edge cloud server, rather than global data, which saves a large amount of storage resources and bandwidth of the converged edge cloud.
Based on the same idea, the embodiment of the present disclosure further provides a device corresponding to the method, and fig. 4 is a schematic structural diagram of a multimedia data interaction device provided by the embodiment of the present disclosure, where, as shown in fig. 4, the device provided by the embodiment may include:
the receiving module 401 is configured to receive a multimedia data interaction request sent by a client, where the multimedia data interaction request carries location information.
A first processing module 402, configured to determine, based on the location information, whether the client is within the range governed by the edge cloud server.
In this embodiment, the first processing module 402 is further configured to:
and determining a target IP address pool based on the position information.
And judging whether the IP address of the client is in the target IP address pool.
If yes, determining that the client is in the range of the area governed by the edge cloud server.
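The three steps above — determine the target IP address pool from the location, test the client's IP against it, and conclude jurisdiction membership — can be sketched with the standard `ipaddress` module. The location-to-pool table and its CIDR ranges are assumptions for illustration.

```python
import ipaddress

# Hypothetical pre-stored mapping from location to target IP address pool.
IP_POOLS = {
    "area-A": ipaddress.ip_network("10.1.0.0/16"),
    "area-B": ipaddress.ip_network("10.2.0.0/16"),
}

def client_in_jurisdiction(location, client_ip):
    """Determine the target IP address pool from the location carried in
    the request, then check whether the client's IP falls inside it."""
    pool = IP_POOLS.get(location)
    return pool is not None and ipaddress.ip_address(client_ip) in pool
```

A membership test on an `ip_network` is a constant-time prefix comparison, so the check adds negligible latency to each interaction request.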
The first processing module 402 is further configured to determine, if yes, corresponding target multimedia data from the pre-stored partitioned multimedia data according to the multimedia data interaction request, and perform conversion processing on the target multimedia data according to a preset augmented reality processing rule, so as to obtain converted target multimedia data.
The first processing module 402 is further configured to send the converted target multimedia data to the client for display.
In another embodiment, the edge cloud server includes an edge cloud server identifier, and the first processing module 402 is further configured to:
and determining the target jurisdiction corresponding to the target edge cloud server identifier based on the pre-stored jurisdiction and identifier correspondence.
And obtaining and storing the partitioned multimedia data corresponding to the target jurisdiction from the central cloud server.
In another embodiment, the client includes a first client and a second client, and the first processing module 402 is further configured to:
and sending the converted first target multimedia data to the first client for display.
And judging whether the second client is in the range of the area governed by the edge cloud server or not based on the position information of the second client.
If yes, the converted second target multimedia data corresponding to the second client is obtained and sent to the first client for display, and the converted first target multimedia data corresponding to the first client is sent to the second client for display.
In another embodiment, the first processing module 402 is further configured to:
if not, the converted first target multimedia data corresponding to the first client side is sent to a central cloud server, so that the central cloud server sends the converted first target multimedia data corresponding to the first client side to the second client side for display, obtains the converted second target multimedia data corresponding to the second client side, and sends the converted second target multimedia data to the first client side for display.
Furthermore, in another embodiment, the multimedia data interaction device may further include:
and the second processing module is used for determining the corresponding relation between the jurisdiction area and the identifier.
The second processing module is further configured to determine partition multimedia data corresponding to each target jurisdiction based on a correspondence between the jurisdiction and the identifier.
The second processing module is further configured to send the partitioned multimedia data corresponding to each target jurisdiction area to a corresponding target edge cloud server, so that each target edge cloud server stores the corresponding partitioned multimedia data in advance.
The device provided by the embodiment of the present invention can implement the method of the embodiment shown in fig. 2, and its implementation principle and technical effects are similar, and will not be described herein.
In addition, in another embodiment, a multimedia data interaction system is provided, which may specifically include: the system comprises an edge cloud server, a client and a central cloud server.
Wherein the edge cloud server is configured to perform the method according to any one of the first aspect, the central cloud server is configured to perform the method according to the second aspect, and the client is configured to display the converted target multimedia data.
The embodiment of the invention also provides a computer readable storage medium, wherein computer execution instructions are stored in the computer readable storage medium, and when a processor executes the computer execution instructions, the multimedia data interaction method of the method embodiment is realized.
The embodiment of the invention also provides a computer program product, comprising a computer program which realizes the multimedia data interaction method when being executed by a processor.
The computer readable storage medium described above may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. A readable storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. In the alternative, the readable storage medium may be integral to the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short). Alternatively, the processor and the readable storage medium may reside as discrete components in a device.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (8)

1. The multimedia data interaction method is characterized by being applied to an edge cloud server and comprising the following steps of:
the method comprises the steps that an edge cloud server receives a multimedia data interaction request sent by a client, wherein the multimedia data interaction request carries position information;
the edge cloud server judges whether the client is in the range of the area governed by the edge cloud server or not based on the position information;
if yes, the edge cloud server determines corresponding target multimedia data from the pre-stored partition multimedia data according to the multimedia data interaction request, and performs conversion processing on the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data;
the edge cloud server sends the converted target multimedia data to the client for display;
the edge cloud server judging whether the client is in the range of the area governed by the edge cloud server based on the position information comprises the following steps:
determining a target IP address pool based on the location information;
judging whether the IP address of the client is in the target IP address pool or not;
if yes, determining that the client is in the range of the area governed by the edge cloud server;
the edge cloud server comprises an edge cloud server identifier, and before the receiving of the multimedia data interaction request sent by the client, the edge cloud server further comprises:
determining a target jurisdiction corresponding to the target edge cloud server identifier based on a pre-stored jurisdiction and identifier correspondence;
and obtaining and storing the partitioned multimedia data corresponding to the target jurisdiction from the central cloud server.
2. The method of claim 1, wherein the client comprises a first client and a second client, and wherein the sending the converted target multimedia data to the client for display comprises:
the converted first target multimedia data is sent to the first client for display;
after the converted first target multimedia data is sent to the first client for display, the method further comprises:
judging whether the second client is in the range of the area governed by the edge cloud server or not based on the position information of the second client;
if yes, the converted second target multimedia data corresponding to the second client is obtained and sent to the first client for display, and the converted first target multimedia data corresponding to the first client is sent to the second client for display.
3. The method according to claim 2, wherein the method further comprises:
if not, the converted first target multimedia data corresponding to the first client side is sent to a central cloud server, so that the central cloud server sends the converted first target multimedia data corresponding to the first client side to the second client side for display, obtains the converted second target multimedia data corresponding to the second client side, and sends the converted second target multimedia data to the first client side for display.
4. The multimedia data interaction method is characterized by being applied to a central cloud server and comprising the following steps of:
determining the corresponding relation between the jurisdiction and the identifier;
determining the partitioned multimedia data corresponding to each target jurisdiction based on the correspondence between the jurisdiction and the identifier;
transmitting the partitioned multimedia data corresponding to each target jurisdiction to a corresponding target edge cloud server, so that each target edge cloud server stores the corresponding partitioned multimedia data in advance, and judging whether the client is in the jurisdiction range of the target edge cloud server or not based on the position information carried in the multimedia data interaction request when each target edge cloud server receives the multimedia data interaction request transmitted by the client; if yes, the target edge cloud server determines corresponding target multimedia data from prestored partitioned multimedia data according to the multimedia data interaction request, and performs conversion processing on the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data; the converted target multimedia data is sent to the client for display; the determining whether the client is within the range of the target edge cloud server based on the location information carried in the multimedia data interaction request includes: determining a target IP address pool based on the location information; judging whether the IP address of the client is in the target IP address pool or not; if yes, determining that the client is in the range of the area governed by the target edge cloud server.
5. A multimedia data interaction device, comprising:
the receiving module is used for receiving a multimedia data interaction request sent by the client, wherein the multimedia data interaction request carries position information;
the first processing module is used for judging whether the client is in the range of the area governed by the edge cloud server or not based on the position information;
the first processing module is further configured to determine corresponding target multimedia data from pre-stored partitioned multimedia data according to the multimedia data interaction request if yes, and perform conversion processing on the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data;
the first processing module is further configured to send the converted target multimedia data to the client for display;
the first processing module is specifically configured to determine a target IP address pool based on the location information; judging whether the IP address of the client is in the target IP address pool or not; if yes, determining that the client is in the range of the area governed by the edge cloud server;
the edge cloud server comprises an edge cloud server identifier, and the first processing module is further used for determining a target jurisdiction area corresponding to the target edge cloud server identifier based on a pre-stored jurisdiction area and identifier corresponding relation; and obtaining and storing the partitioned multimedia data corresponding to the target jurisdiction from the central cloud server.
6. A multimedia data interaction device, comprising:
the second processing module is used for determining the corresponding relation between the jurisdiction area and the mark;
the second processing module is further used for determining the partitioned multimedia data corresponding to each target jurisdiction based on the correspondence between the jurisdiction and the identifier;
the second processing module is further configured to send the partitioned multimedia data corresponding to each target jurisdiction area to a corresponding target edge cloud server, so that each target edge cloud server stores the corresponding partitioned multimedia data in advance, and when each target edge cloud server receives a multimedia data interaction request sent by a client, determine whether the client is within the jurisdiction area of the target edge cloud server based on position information carried in the multimedia data interaction request; if yes, the target edge cloud server determines corresponding target multimedia data from prestored partitioned multimedia data according to the multimedia data interaction request, and performs conversion processing on the target multimedia data according to a preset augmented reality processing rule to obtain converted target multimedia data; the converted target multimedia data is sent to the client for display; the determining whether the client is within the range of the target edge cloud server based on the location information carried in the multimedia data interaction request includes: determining a target IP address pool based on the location information; judging whether the IP address of the client is in the target IP address pool or not; if yes, determining that the client is in the range of the area governed by the target edge cloud server.
7. A multimedia data interaction system, comprising: the system comprises an edge cloud server, a client and a center cloud server;
wherein the edge cloud server is configured to perform the method of any of claims 1-3, the center cloud server is configured to perform the method of claim 4, and the client is configured to display the converted target multimedia data.
8. A computer readable storage medium having stored therein computer executable instructions which, when executed by a processor, implement the multimedia data interaction method of any of claims 1 to 3 or 4.
CN202110300494.5A 2021-03-22 2021-03-22 Multimedia data interaction method, device and system Active CN115190120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110300494.5A CN115190120B (en) 2021-03-22 2021-03-22 Multimedia data interaction method, device and system

Publications (2)

Publication Number Publication Date
CN115190120A CN115190120A (en) 2022-10-14
CN115190120B true CN115190120B (en) 2024-03-01

Family

ID=83511510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110300494.5A Active CN115190120B (en) 2021-03-22 2021-03-22 Multimedia data interaction method, device and system

Country Status (1)

Country Link
CN (1) CN115190120B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107222468A (en) * 2017-05-22 2017-09-29 北京邮电大学 Augmented reality processing method, terminal, cloud server and edge server
CN109981523A (en) * 2017-12-27 2019-07-05 中国移动通信集团云南有限公司 A kind of multimedia file processing method, device and cloud computing platform
CN110290506A (en) * 2019-04-17 2019-09-27 中国联合网络通信集团有限公司 A kind of edge cloud motion management method and equipment
CN110989825A (en) * 2019-09-10 2020-04-10 中兴通讯股份有限公司 Augmented reality interaction implementation method and system, augmented reality device and storage medium
CN111612933A (en) * 2020-05-18 2020-09-01 上海齐网网络科技有限公司 Augmented reality intelligent inspection system based on edge cloud server
US10771569B1 (en) * 2019-12-13 2020-09-08 Industrial Technology Research Institute Network communication control method of multiple edge clouds and edge computing system
CN111738281A (en) * 2020-08-05 2020-10-02 鹏城实验室 Simultaneous positioning and mapping system, map soft switching method and storage medium thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant