CN110913279A - Processing method for augmented reality and augmented reality terminal - Google Patents


Info

Publication number
CN110913279A
Authority
CN
China
Prior art keywords
data
characteristic
feature
augmented reality
edge computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811087566.7A
Other languages
Chinese (zh)
Other versions
CN110913279B (en)
Inventor
王�义
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongkehai Micro Beijing Technology Co ltd
Original Assignee
Beijing See Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing See Technology Co Ltd filed Critical Beijing See Technology Co Ltd
Priority to CN201811087566.7A priority Critical patent/CN110913279B/en
Publication of CN110913279A publication Critical patent/CN110913279A/en
Application granted granted Critical
Publication of CN110913279B publication Critical patent/CN110913279B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8547 Content authoring involving timestamps for synchronizing content
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/439 Processing of audio elementary streams
    • H04N 21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client
    • H04N 21/65 Transmission of management data between client and server
    • H04N 21/658 Transmission by the client directed to the server

Abstract

The embodiment of the invention provides a processing method for augmented reality and an augmented reality terminal, wherein the processing method comprises the following steps: acquiring original data; processing the original data and determining feature data representing the features of the original data; generating a feature data packet according to the feature data; and sending the feature data packet to the edge computing server. In the embodiment of the invention, the terminal processes the acquired original data to determine feature data representing the features of the original data, generates a feature data packet according to the feature data, and sends the feature data packet to the edge computing server. Compared with the prior art, in which the original data is sent directly to the edge computing server, the data volume sent by the terminal is smaller, so the communication resources and computing resources of the edge computing server are saved, the transmission rate and computing efficiency of the edge computing server are improved, the processing efficiency and response speed of the augmented reality terminal are improved, and the user experience is significantly enhanced.

Description

Processing method for augmented reality and augmented reality terminal
Technical Field
The embodiment of the invention relates to the technical field of terminals, in particular to a processing method for augmented reality and an augmented reality terminal.
Background
After smart phones and tablet computers, Augmented Reality (AR) is expected to become the next important general computing platform. First-person face recognition and vehicle recognition, together with the corresponding overlay display for the different objects recognized in a scene, are among the important functions of augmented reality; these functions require comprehensive and rapid recognition of video, audio, and other information.
Augmented reality involves edge computing, but at present no terminal fully supports edge computing. Recognition, tracking, and positioning are generally performed on the terminal itself; however, the terminal is limited by the performance of its processor, and when running a deep learning model the effect of real-time recognition processing suffers.
With the advent of the fifth-Generation mobile communication technology (5G) era, mobile communication networks are able to support edge computing. Referring to fig. 1, the prior art provides a solution: an edge computing server is integrated with the cell base station to provide edge computing resources and edge computing capability for the input and output devices that access it. However, the augmented reality terminal uploads the acquired original data directly to the edge computing server, which must then perform a large amount of complex Simultaneous Localization and Mapping (SLAM) computation and high-definition rendering, occupying a large amount of Graphics Processing Unit (GPU) resources. Because the computing resources of an edge computing server are limited (unlike a conventional cloud server, it does not have large cluster computing resources), its edge computing capability drops in a high-concurrency environment, which reduces augmented reality processing efficiency.
For the above reasons, a technical solution capable of improving the augmented reality processing efficiency is needed.
Disclosure of Invention
The embodiment of the invention provides a processing method for augmented reality and an augmented reality terminal, which solve the problem of reduced augmented reality processing efficiency in existing methods.
According to a first aspect of the embodiments of the present invention, there is provided a processing method for augmented reality, which is applied to an augmented reality terminal, and the method includes: acquiring original data; processing the original data and determining characteristic data for representing the characteristics of the original data; generating a characteristic data packet according to the characteristic data, wherein the characteristic data packet comprises a preset time stamp; and sending the feature data packet to an edge computing server.
Optionally, after the acquiring the raw data, the method further comprises: and sending the original data to the edge computing server.
Optionally, the raw data comprises one or more of: audio data, behavioral data, and video data; the processing the raw data and determining feature data representing features of the raw data comprises one or more of the following: analyzing the audio data to obtain Mel-frequency cepstral coefficients (MFCC) of the audio data; performing Kalman filtering fusion on the behavior data to obtain a quaternion of the behavior data after Kalman filtering fusion; and performing image feature detection on the video data to obtain Features from Accelerated Segment Test (FAST) corner features.
Optionally, the generating a feature data packet according to the feature data includes: packing the feature data according to a preset format to generate the feature data packet. Optionally, the preset format includes: a message header and a message body; wherein the message header includes one or more of: a flag word, the total length of the message, the feature type, the feature acquisition period and a serial number; the message body comprises: the characteristic data and the preset time stamp.
Optionally, the sending the feature data packet to an edge computing server includes: and sending the feature data packet to the edge computing server through a real-time streaming protocol RTSP, a real-time message transfer protocol RTMP or a hypertext transfer protocol HTTP.
According to a second aspect of the embodiments of the present invention, there is provided an augmented reality terminal, including: an edge calculation processing module and a communication module, wherein the edge calculation processing module is used for acquiring original data; the edge calculation processing module is further configured to process the raw data and determine feature data representing features of the raw data; the edge calculation processing module is further used for generating a feature data packet according to the feature data; and the communication module is used for sending the feature data packet to an edge computing server.
Optionally, the communication module is further configured to send the raw data to the edge computing server.
Optionally, the raw data comprises one or more of: audio data, behavioral data, and video data; the edge calculation processing module comprises: an MFCC unit, used for analyzing the audio data to obtain Mel-frequency cepstral coefficients (MFCC) of the audio data; a Kalman filtering unit, used for performing Kalman filtering fusion on the behavior data to obtain a quaternion of the behavior data after the Kalman filtering fusion; and an image feature detection unit, used for performing image feature detection on the video data to obtain Features from Accelerated Segment Test (FAST) corner features.
Optionally, the edge calculation processing module includes: and the data processing unit is used for packaging the characteristic data according to a preset format and generating the characteristic data packet.
Optionally, the preset format includes: a message header and a message body; wherein the message header includes one or more of: a flag word, the total length of the message, the feature type, the feature acquisition period and a serial number; the message body comprises: the characteristic data and the preset time stamp.
In the embodiment of the invention, the terminal processes the acquired original data to determine feature data representing the features of the original data, generates a feature data packet according to the feature data, and sends the feature data packet to the edge computing server. Compared with the prior art, in which the original data is sent directly to the edge computing server, the data volume sent by the terminal is smaller, so the communication resources and computing resources of the edge computing server are saved, the transmission rate and computing efficiency of the edge computing server are improved, the processing efficiency and response speed of the augmented reality terminal are improved, and the user experience is significantly enhanced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of an architecture of a conventional augmented reality system;
fig. 2 is a schematic flowchart of a processing method for augmented reality according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a format of a feature data packet according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of interaction between an augmented reality terminal and an edge computing server according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an augmented reality terminal according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an edge calculation processing module according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 2, an embodiment of the present invention provides a processing method for augmented reality, which is applied to an augmented reality terminal, and the method includes the following specific steps:
step 201: acquiring original data;
in embodiments of the present invention, the raw data may include one or more of: audio data, behavioral data, and video data.
The raw data may be collected by existing functional modules, for example: the audio data can be collected through an audio module, a microphone and the like; video data can be collected through a camera module and the like; the behavior data may be collected by various sensors, for example, a nine-axis sensor (a sensor including a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer), which can reflect the behavior of the user when using the augmented reality terminal.
After the augmented reality terminal collects original data through the existing functional module, the original data are sent to the edge calculation processing module, and the edge calculation processing module processes the original data.
Step 202: processing the original data and determining characteristic data for representing the characteristics of the original data;
in the embodiment of the present invention, the original data includes audio data, behavior data, and video data as an example:
analyzing the audio data to obtain Mel-frequency cepstral Coefficients (MFCC) of the audio data;
performing Kalman filtering fusion on the behavior data to obtain a quaternion of the behavior data after Kalman filtering fusion;
image feature detection is carried out on the video data, and Features from Accelerated Segment Test (FAST) corner features are obtained.
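The actual fusion in step 202 turns nine-axis sensor readings into a quaternion, which is beyond a short example. As a minimal, hedged illustration of the predict/update structure that Kalman filtering involves, the following pure-Python scalar filter smooths a noisy one-dimensional signal; all constants are illustrative and this is a simplified stand-in, not the fusion algorithm of the patent.

```python
# Minimal scalar Kalman filter -- a simplified stand-in for the
# nine-axis sensor fusion of step 202.  The real pipeline fuses
# accelerometer/gyroscope/magnetometer readings into a quaternion;
# here a single noisy signal is smoothed to show the predict/update
# loop.  q and r are illustrative noise variances.

def kalman_1d(measurements, q=1e-3, r=0.1):
    """Filter a sequence of scalar measurements; return the estimates."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: covariance grows by process noise
        k = p / (p + r)           # update: Kalman gain
        x = x + k * (z - x)       # correct the estimate toward the measurement
        p = (1 - k) * p           # shrink the covariance
        estimates.append(x)
    return estimates

noisy = [1.0, 1.2, 0.9, 1.1, 1.05, 0.95, 1.0]
smoothed = kalman_1d(noisy)
```

The same predict/update loop, run per axis with a quaternion state and gyroscope-driven prediction, is the shape of the fusion the patent relies on.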
Step 203: generating a characteristic data packet according to the characteristic data;
in the embodiment of the invention, the augmented reality terminal packs the feature data according to the preset format to generate the feature data packet. Because time consistency of the feature data needs to be ensured during SLAM calculation, and the acquisition frequencies of various data are not uniform, a preset timestamp needs to be added when the feature data are packaged, and time synchronization is performed on the feature data.
Specifically, referring to fig. 3, a preset format is shown, including: a message header and a message body;
wherein the message header includes one or more of: a flag word, the total length of the message, the feature type, the feature acquisition period and a serial number; the message body comprises one or more of the following items: characteristic data and a preset time stamp.
Referring to table 1, the information of each field of the message header is recorded in the table. It should be noted that the content in table 1 is only an example, and the embodiment of the present invention does not specifically limit it.
Field                        Length (bytes)   Remarks
Flag word                    2                0xAAFF
Total length of message      2                Total Length
Feature type                 2                Feature Type
Feature acquisition period   2                Feature Detection Period
Sequence number              2                Sequence Number
TABLE 1
The fixed length of the message header is 10 bytes, and each field is described as follows:
(1) Flag word: identifies the entire feature data packet;
(2) Total length of message: the length of the whole message, supporting at most 65535 bytes;
(3) Feature type: identifies the feature content carried by the message, for example:
Mel-frequency cepstrum coefficients: 0x01;
quaternion after Kalman filtering fusion: 0x02;
FAST corner features: 0x03;
(4) Feature acquisition period: records the feature acquisition period; if a transmission packet is lost, this facilitates processing by the edge computing server, for example:
Mel-frequency cepstrum coefficients: 0x19 (25 ms; audio processed in 25 ms chunks);
quaternion after Kalman filtering fusion: 0x02 (2 ms; sensor sampling rate 500 Hz);
FAST corner features: 0x21 (33 ms; image acquisition frame rate 30 fps);
(5) Sequence number: identifies the sequence number of the data packet from the feature data sender, used for counting data packets.
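The header layout of Table 1 can be sketched directly with the standard `struct` module. Byte order and the binary encoding of the body's timestamp are not specified in the text, so big-endian fields and an 8-byte millisecond count are assumptions made here purely for illustration.

```python
import struct

# Sketch of the feature-packet layout from Table 1: a fixed 10-byte
# header of five 2-byte fields (flag word, total message length,
# feature type, feature acquisition period, sequence number),
# followed by a message body carrying the preset timestamp and the
# feature data.  Big-endian order and an 8-byte millisecond
# timestamp are assumptions, not values from the patent.

FLAG_WORD = 0xAAFF
FEATURE_MFCC, FEATURE_QUATERNION, FEATURE_FAST = 0x01, 0x02, 0x03
HEADER_FMT = ">HHHHH"  # five unsigned 16-bit fields = 10 bytes

def pack_feature_packet(feature_type, period_ms, seq, payload, ts_ms):
    """Pack one feature data packet (header + body)."""
    body = struct.pack(">Q", ts_ms) + payload      # timestamp, then feature data
    total_len = struct.calcsize(HEADER_FMT) + len(body)
    header = struct.pack(HEADER_FMT, FLAG_WORD, total_len,
                         feature_type, period_ms, seq)
    return header + body

# One MFCC packet: 25 ms acquisition period, sequence number 1,
# three payload bytes standing in for the coefficients.
pkt = pack_feature_packet(FEATURE_MFCC, 25, 1, b"\x10\x20\x30",
                          ts_ms=1537200000000)
```

The receiver can validate a packet by checking the flag word in the first two bytes and comparing the total-length field against the bytes actually received.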
Each field in the message body is described as follows:
(1) Feature data: the feature data determined in step 202 to represent the features of the raw data;
(2) Preset timestamp: the feature acquisition frequency of each kind of data is different, but the data must be time-consistent when processed. For example, the MFCC of the audio data is computed and sent as one frame every 25 ms, the FAST corner features of the video data every 33 ms, and the quaternion after Kalman filtering fusion of the behavior data every 2 ms. Time synchronization is therefore required: a preset timestamp with a uniform format is added to the feature data stream, which helps the edge computing server accurately record the time point of each packet it receives and perform accurate spatial positioning calculation and feature recognition calculation.
The preset timestamp represents the time at which the augmented reality terminal determined the feature data. Optionally, the preset timestamp has the format "yyyy-MM-dd hh:mm:ss ms", accurate to the millisecond; after the feature data is determined, the time at which it was generated is recorded in the feature data packet.
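A millisecond-precision timestamp in the "yyyy-MM-dd hh:mm:ss ms" pattern can be produced with the standard `datetime` module. The exact separator between the seconds and the milliseconds is not specified in the text; a single space is assumed here.

```python
from datetime import datetime

# Sketch of the preset timestamp described above, rendered to
# millisecond precision.  The space before the millisecond field is
# an assumption; the patent only gives the pattern
# "yyyy-MM-dd hh:mm:ss ms".

def preset_timestamp(dt):
    """Format a datetime as the millisecond-precision preset timestamp."""
    return dt.strftime("%Y-%m-%d %H:%M:%S") + " %03d" % (dt.microsecond // 1000)

ts = preset_timestamp(datetime(2018, 9, 18, 10, 30, 5, 123456))
# ts == "2018-09-18 10:30:05 123"
```

Because every feature stream stamps its packets with the same clock in the same format, the edge computing server can align the 25 ms audio frames, 33 ms video frames, and 2 ms sensor frames on a common timeline.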
Step 204: and sending the characteristic data packet to the edge computing server.
In another embodiment of the present invention, some special application scenarios, such as real-time monitoring, place requirements on the raw data itself. In these scenarios, in addition to sending the feature data packet to the edge computing server, the augmented reality terminal optionally also sends the raw data to the edge computing server.
Specifically, referring to fig. 4, a process of an augmented reality terminal interacting with an edge computing server is shown.
The DESCRIBE message indicates that the augmented reality terminal initiates a request to the edge computing server and obtains the description information of a session, including: the session start time, type, and format;
The SETUP message indicates that the augmented reality terminal asks the edge computing server to establish a session and determines the transmission mode; the edge computing server sends back information including an acknowledgement (OK) and a session identifier;
The PLAY message indicates that the augmented reality terminal sends a video stream request and sets the range of the total video stream playing time; the edge computing server responds with a confirmation;
Data stream transmission includes both raw data and feature data for a time period, where the time period is a feature data acquisition period. For example, when the time period is 1 second, the data includes 30 frames of image data, 500 sets of Inertial Measurement Unit (IMU) data, and so on. The raw data may be transmitted using standard streaming protocols, for example the Real Time Streaming Protocol (RTSP), the Real Time Messaging Protocol (RTMP), or a streaming protocol based on the HyperText Transfer Protocol (HTTP); the feature data is transmitted in the form of the feature data packet of step 203;
the above-described flow of data stream transmission may be repeatedly performed a plurality of times.
The TEARDOWN message indicates that the augmented reality terminal initiates a close request; the edge computing server confirms in response and closes the session.
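The DESCRIBE/SETUP/PLAY/TEARDOWN exchange above follows the RTSP request shape, which can be sketched as plain text. Only the request strings are built here, with no network I/O; the URL, session identifier, and transport parameters are hypothetical placeholders, not values taken from the patent.

```python
# Sketch of the RTSP-style request sequence of fig. 4.  The server
# address, session id "12345", and RTP transport line are
# illustrative assumptions.

RTSP_URL = "rtsp://edge-server.example/ar-session"  # hypothetical address

def rtsp_request(method, url, cseq, extra_headers=()):
    """Build one RTSP/1.0 request as a CRLF-delimited string."""
    lines = ["%s %s RTSP/1.0" % (method, url), "CSeq: %d" % cseq]
    lines.extend(extra_headers)
    return "\r\n".join(lines) + "\r\n\r\n"

session = [
    rtsp_request("DESCRIBE", RTSP_URL, 1, ["Accept: application/sdp"]),
    rtsp_request("SETUP", RTSP_URL, 2,
                 ["Transport: RTP/AVP;unicast;client_port=8000-8001"]),
    rtsp_request("PLAY", RTSP_URL, 3, ["Session: 12345", "Range: npt=0-"]),
    rtsp_request("TEARDOWN", RTSP_URL, 4, ["Session: 12345"]),
]
```

Between PLAY and TEARDOWN the terminal would repeatedly push the data stream of each acquisition period, interleaving raw data over the negotiated transport with the feature data packets of step 203.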
In the embodiment of the invention, the terminal processes the acquired original data to determine feature data representing the features of the original data, generates a feature data packet according to the feature data, and sends the feature data packet to the edge computing server. Compared with the prior art, in which the original data is sent directly to the edge computing server, the data volume sent by the terminal is smaller, so the communication resources and computing resources of the edge computing server are saved, the transmission rate and computing efficiency of the edge computing server are improved, the processing efficiency and response speed of the augmented reality terminal are improved, and the user experience is significantly enhanced.
Referring to fig. 5, an embodiment of the present invention provides an augmented reality terminal 500, including an edge computing processing module 510, a communication module 520, and a plurality of functional modules 5301 to 5313;
the edge calculation processing module 510 is configured to obtain original data, where the original data is collected by the plurality of functional modules 5301 to 5313 and sent to the edge calculation processing module 510;
the edge calculation processing module 510 is further configured to process the raw data, and determine feature data representing features of the raw data;
the edge calculation processing module 510 is further configured to generate a feature data packet according to the feature data;
the communication module 520 is configured to send the feature data packet to the edge computing server.
Optionally, the communication module 520 is further configured to send the raw data to an edge computing server.
Optionally, the raw data comprises one or more of: audio data, behavioral data, and video data;
alternatively, referring to fig. 6, a structure of an edge calculation processing module is shown.
The edge calculation processing module 510 includes:
an MFCC unit 511, configured to analyze the audio data to obtain a mel-frequency cepstrum coefficient MFCC of the audio data;
a kalman filtering unit 512, configured to perform kalman filtering fusion on the behavior data to obtain a quaternion of the behavior data after the kalman filtering fusion;
and an image feature detection unit 513, configured to perform image feature detection on the video data to obtain Features from Accelerated Segment Test (FAST) corner features.
Optionally, the edge calculation processing module 510 further includes:
the data processing unit 514 is configured to package the feature data according to a preset format, and generate the feature data packet;
optionally, the preset format includes: a message header and a message body;
the message header comprises: a flag word, the total length of the message, the feature type, the feature acquisition period and a serial number; the message body comprises: characteristic data and a preset time stamp.
The names and functions of the modules in the functional modules 5301-5313 can be referred to in table 2 as follows:
(table content reproduced as an image in the original document)
TABLE 2
It should be noted that the above functional modules are only one possible implementation, and other types of functional modules may be selected according to actual use situations. The functional modules are used for acquiring original data (such as video data and audio data) required by the augmented reality function, and the number and the functions of the functional modules are not specifically limited in the embodiment of the present invention.
In the embodiment of the invention, the terminal processes the acquired original data to determine feature data representing the features of the original data, generates a feature data packet according to the feature data, and sends the feature data packet to the edge computing server. Compared with the prior art, in which the original data is sent directly to the edge computing server, the data volume sent by the terminal is smaller and the computing resources of the edge computing server are saved, so the augmented reality processing efficiency is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A processing method for augmented reality is applied to an augmented reality terminal, and is characterized in that the method comprises the following steps:
acquiring original data;
processing the original data and determining characteristic data for representing the characteristics of the original data;
generating a characteristic data packet according to the characteristic data, wherein the characteristic data packet comprises a preset time stamp;
and sending the feature data packet to an edge computing server.
2. The method of claim 1, wherein after said obtaining raw data, the method further comprises:
and sending the original data to the edge computing server.
3. The method of claim 2, wherein the raw data comprises one or more of: audio data, behavioral data, and video data;
the processing the raw data and determining feature data representing features of the raw data comprises one or more of the following:
analyzing the audio data to obtain a Mel frequency cepstrum coefficient MFCC of the audio data;
performing Kalman filtering fusion on the behavior data to obtain a quaternion of the behavior data after Kalman filtering fusion;
and carrying out image feature detection on the video data to obtain features from accelerated segment test (FAST) corner features.
4. The method of claim 1, wherein the generating a feature data packet according to the feature data comprises:
and packing the characteristic data according to a preset format to generate the characteristic data packet.
5. The method of claim 4, wherein the preset format comprises: a message header and a message body;
wherein the message header includes one or more of: a flag word, the total length of the message, the feature type, the feature acquisition period and a serial number;
the message body comprises: the characteristic data and the preset time stamp.
6. An augmented reality terminal, comprising: an edge calculation processing module and a communication module, wherein,
the edge calculation processing module is used for acquiring original data;
the edge calculation processing module is further configured to process the raw data and determine feature data representing features of the raw data;
the edge calculation processing module is further used for generating a feature data packet according to the feature data;
and the communication module is used for sending the characteristic data packet to an edge computing server.
7. The augmented reality terminal of claim 6,
the communication module is further configured to send the raw data to the edge computing server.
8. The augmented reality terminal of claim 7, wherein the raw data comprises one or more of: audio data, behavioral data, and video data;
the edge computing processing module comprises:
an MFCC unit configured to analyze the audio data to obtain Mel-frequency cepstral coefficients (MFCC) of the audio data;
a Kalman filtering unit configured to perform Kalman filter fusion on the behavioral data to obtain a quaternion of the fused behavioral data;
and an image feature detection unit configured to perform image feature detection on the video data to obtain FAST (features from accelerated segment test) corner features.
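The Kalman filtering unit of claim 8 fuses IMU behavioral data into an orientation quaternion, which requires a full quaternion-state filter. As a deliberately simplified illustration of just the predict/update cycle behind that fusion, a 1-D scalar Kalman filter (process and measurement noise values `q` and `r` are assumed, not taken from the patent) can be written as:

```python
def kalman_fuse(measurements, q=1e-3, r=1e-1):
    """Minimal 1-D Kalman filter illustrating the predict/update cycle
    behind sensor fusion. x is the state estimate, p its variance."""
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements:
        p = p + q              # predict: variance grows by process noise q
        k = p / (p + r)        # Kalman gain balances prediction vs. measurement
        x = x + k * (z - x)    # update the estimate toward measurement z
        p = (1 - k) * p        # updated estimate variance shrinks
        estimates.append(x)
    return estimates
```

With a constant input the estimate stays at that value, and with noisy input the output is a smoothed convex combination of past measurements; the quaternion case replaces the scalar state with a 4-component unit quaternion and renormalizes after each update.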
9. The augmented reality terminal of claim 6,
the edge computing processing module comprises:
a data processing unit configured to pack the feature data according to a preset format to generate the feature data packet.
10. The augmented reality terminal of claim 9, wherein the preset format comprises: a message header and a message body;
wherein the message header includes one or more of: a flag word, a total message length, a feature type, a feature acquisition period, and a serial number;
and the message body comprises: the feature data and a preset timestamp.
CN201811087566.7A 2018-09-18 2018-09-18 Processing method for augmented reality and augmented reality terminal Active CN110913279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811087566.7A CN110913279B (en) 2018-09-18 2018-09-18 Processing method for augmented reality and augmented reality terminal

Publications (2)

Publication Number Publication Date
CN110913279A true CN110913279A (en) 2020-03-24
CN110913279B CN110913279B (en) 2022-11-01

Family ID: 69813569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811087566.7A Active CN110913279B (en) 2018-09-18 2018-09-18 Processing method for augmented reality and augmented reality terminal

Country Status (1)

Country Link
CN (1) CN110913279B (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050071496A1 (en) * 2001-01-29 2005-03-31 Singal Sanjay S. Method and system for media object streaming
CN101031069A (en) * 2006-12-13 2007-09-05 北京大学 Method and system for navigating video electronic programm in network TV-set
US7324555B1 (en) * 2003-03-20 2008-01-29 Infovalue Computing, Inc. Streaming while fetching broadband video objects using heterogeneous and dynamic optimized segmentation size
CN105578199A (en) * 2016-02-22 2016-05-11 北京佰才邦技术有限公司 Virtual reality panorama multimedia processing system and method and client device
CN205408064U (en) * 2016-02-22 2016-07-27 北京佰才邦技术有限公司 Virtual reality panorama multimedia processing system and customer end equipment
CN105809144A (en) * 2016-03-24 2016-07-27 重庆邮电大学 Gesture recognition system and method adopting action segmentation
CN106774916A (en) * 2016-12-27 2017-05-31 歌尔科技有限公司 The implementation method and virtual reality system of a kind of virtual reality system
CN107025662A (en) * 2016-01-29 2017-08-08 成都理想境界科技有限公司 A kind of method for realizing augmented reality, server, terminal and system
CN107222468A (en) * 2017-05-22 2017-09-29 北京邮电大学 Augmented reality processing method, terminal, cloud server and edge server
US20180130260A1 (en) * 2016-11-08 2018-05-10 Rockwell Automation Technologies, Inc. Virtual reality and augmented reality for industrial automation
CN108151759A (en) * 2017-10-31 2018-06-12 捷开通讯(深圳)有限公司 A kind of air navigation aid, intelligent terminal and navigation server
CN108171734A (en) * 2017-12-25 2018-06-15 西安因诺航空科技有限公司 A kind of method and device of ORB feature extracting and matchings
CN108415763A (en) * 2018-02-11 2018-08-17 中南大学 A kind of distribution method of edge calculations system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YINGCHEN: "Intelligent interconnect solutions enable immersive, seamless AR and VR experiences", 《中国电子商情(基础电子)》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112098358A (en) * 2020-09-07 2020-12-18 燕山大学 Near infrared spectrum parallel fusion quantitative modeling method based on quaternion convolution neural network
CN112098358B (en) * 2020-09-07 2021-12-17 燕山大学 Near infrared spectrum parallel fusion quantitative detection method based on quaternion convolution neural network
CN113806072A (en) * 2021-08-10 2021-12-17 中标慧安信息技术股份有限公司 Data processing method and system based on edge calculation
CN113806072B (en) * 2021-08-10 2022-10-21 中标慧安信息技术股份有限公司 Data processing method and system based on edge calculation

Also Published As

Publication number Publication date
CN110913279B (en) 2022-11-01

Similar Documents

Publication Publication Date Title
US20150187390A1 (en) Video metadata
US9451180B2 (en) Video stitching system and method
CN109144858B (en) Fluency detection method and device, computing equipment and storage medium
JP2013527947A5 (en)
CN112488783B (en) Image acquisition method and device and electronic equipment
EP4148597A1 (en) Search result display method and apparatus, readable medium, and electronic device
CN110913279B (en) Processing method for augmented reality and augmented reality terminal
US20190342428A1 (en) Content evaluator
WO2019214370A1 (en) Multimedia information transmission method and apparatus, and terminal
CN112040333B (en) Video distribution method, device, terminal and storage medium
CN111800646A (en) Method, device, medium and electronic equipment for monitoring teaching effect
CN112995712A (en) Method, device and equipment for determining stuck factors and storage medium
CN110097004B (en) Facial expression recognition method and device
CN110008926B (en) Method and device for identifying age
CN112785669B (en) Virtual image synthesis method, device, equipment and storage medium
CN108076370B (en) Information transmission method and device and electronic equipment
CN113747245A (en) Multimedia resource uploading method and device, electronic equipment and readable storage medium
CN113839829A (en) Cloud game delay testing method, device and system and electronic equipment
CN116708892A (en) Sound and picture synchronous detection method, device, equipment and storage medium
CN113628097A (en) Image special effect configuration method, image recognition method, image special effect configuration device and electronic equipment
WO2023098576A1 (en) Image processing method and apparatus, device, and medium
CN112866745B (en) Streaming video data processing method, device, computer equipment and storage medium
CN113747063B (en) Video transmission method and device, electronic equipment and readable storage medium
CN111586295B (en) Image generation method and device and electronic equipment
CN115209215A (en) Video processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200421

Address after: 100190 No. 6, South Academy of Sciences Road, Haidian District, Beijing

Applicant after: Wang Yi

Address before: 100096 Changlin 813, Xisanqi, Haidian District, Beijing

Applicant before: BEIJING SEENGENE TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20210210

Address after: 100080 Building 1, Loongson Industrial Park, Institute of computing, Chinese Academy of Sciences, No.1 wensong Road, Haidian District, Beijing

Applicant after: Zhongkehai micro (Beijing) Technology Co.,Ltd.

Address before: 100190 No.6, south academy of Sciences Road, Haidian District, Beijing

Applicant before: Wang Yi

GR01 Patent grant