CN108492352B - Augmented reality implementation method, device, system, computer equipment and storage medium


Info

Publication number
CN108492352B
Authority
CN
China
Prior art keywords
real scene
target
augmented reality
scene picture
webpage
Prior art date
Legal status
Active
Application number
CN201810242139.5A
Other languages
Chinese (zh)
Other versions
CN108492352A (en)
Inventor
张庆吉
魏扼
庞英明
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810242139.5A priority Critical patent/CN108492352B/en
Publication of CN108492352A publication Critical patent/CN108492352A/en
Priority to PCT/CN2019/077781 priority patent/WO2019179331A1/en
Application granted granted Critical
Publication of CN108492352B publication Critical patent/CN108492352B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering

Abstract

The application relates to a method for implementing augmented reality. A terminal receives a real scene acquisition request initiated by a webpage and, according to the request, calls a camera device to acquire a real scene picture. The acquired real scene picture is displayed on the webpage in real time; at the same time, a copy of the real scene picture is transmitted to the terminal's local augmented reality identification module, which identifies the real scene picture to obtain an identification result. The identification result is returned to the webpage, so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result. This method gives the page a Native (local) AR identification capability and greatly improves the performance of augmented reality in a page scene. In addition, an augmented reality implementation apparatus, a computer device, and a storage medium are also provided.

Description

Augmented reality implementation method, device, system, computer equipment and storage medium
Technical Field
The present application relates to the field of computer processing technologies, and in particular, to a method, an apparatus, a system, a computer device, and a storage medium for implementing augmented reality.
Background
At present, many open platforms exist in the Augmented Reality (AR) field, but they are all directed at Native scenarios, and there is no good solution for the web scenario. AR in a traditional web scenario is implemented by identifying and tracking a target object with front-end JavaScript (JS) and then rendering a 3D animation. However, JavaScript is a weakly typed scripting language that is poorly suited to image processing, so the performance of augmented reality in a conventional web scenario is poor.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus, a system, a computer device, and a storage medium for implementing augmented reality with high performance.
An implementation method of augmented reality, the method comprising:
a terminal receives a real scene acquisition request initiated by a webpage, and calls a camera device to acquire a real scene picture according to the real scene acquisition request;
displaying the collected real scene picture on the webpage in real time, copying the real scene picture and transmitting the copy to an augmented reality identification module local to the terminal, wherein the augmented reality identification module performs target identification on the real scene picture to obtain an identification result;
and returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
An apparatus for implementing augmented reality, the apparatus comprising:
the acquisition module is used for receiving a real scene acquisition request initiated by a webpage and calling a camera device to acquire a real scene picture according to the real scene acquisition request;
the transmission module is used for displaying the acquired real scene picture on a webpage in real time, copying the real scene picture and transmitting the copied real scene picture to the local augmented reality identification module of the terminal;
the augmented reality identification module is used for carrying out target identification on the real scene picture to obtain an identification result;
and the return module is used for returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
An augmented reality implementation system, the system comprising:
the terminal is used for receiving a real scene acquisition request initiated by a webpage, calling a camera device to acquire a real scene picture according to the real scene acquisition request, displaying the acquired real scene picture on the webpage in real time, copying the real scene picture and transmitting the real scene picture to a local augmented reality identification module of the terminal, identifying the definition of the real scene picture acquired in real time by the augmented reality identification module, screening out a first target video frame with the definition being greater than a preset threshold value, and transmitting the first target video frame to the server;
the server is used for identifying a first target object in the first target video frame and returning a first target identification model corresponding to the first target object;
the terminal is further used for identifying a first target object in the real scene picture according to the first target identification model to obtain an identification result, and returning the identification result to the webpage, so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
receiving a real scene acquisition request initiated by a webpage, and calling a camera device to acquire a real scene picture according to the real scene acquisition request;
displaying the collected real scene picture on the webpage in real time, copying the real scene picture and transmitting the copy to an augmented reality identification module local to the terminal, wherein the augmented reality identification module performs target identification on the real scene picture to obtain an identification result;
and returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
receiving a real scene acquisition request initiated by a webpage, and calling a camera device to acquire a real scene picture according to the real scene acquisition request;
displaying the collected real scene picture on the webpage in real time, copying the real scene picture and transmitting the copy to an augmented reality identification module local to the terminal, wherein the augmented reality identification module performs target identification on the real scene picture to obtain an identification result;
and returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
After the real scene picture is collected, the collected real scene picture is, on the one hand, displayed on the webpage in real time; on the other hand, a copy of the real scene picture is transmitted to the terminal's local augmented reality identification module for target identification to obtain an identification result, the identification result is returned to the webpage, and the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result. Because the terminal's local augmented reality identification module performs the target identification on the real scene picture, the page gains a Native (local) AR identification capability, and the performance of augmented reality in the page scene is greatly improved.
Drawings
FIG. 1 is a diagram of an application environment in which a method for implementing augmented reality is implemented in one embodiment;
FIG. 2 is a flow chart of a method for implementing augmented reality in one embodiment;
FIG. 3A is a schematic diagram of an interface for performing object recognition on a captured real scene image in one embodiment;
FIG. 3B is a diagram illustrating an augmented reality display interface, according to an embodiment;
FIG. 4 is a flowchart illustrating a method for recognizing a target of the real scene image by the augmented reality recognition module to obtain a recognition result according to an embodiment;
FIG. 5 is a flow chart of a method for implementing augmented reality according to another embodiment;
FIG. 6 is a flowchart of a method for identifying a real scene by the browsing service kernel module in one embodiment;
FIG. 7 is an architecture diagram of an implementation of augmented reality in one embodiment;
FIG. 8 is a flow chart of a method for implementing augmented reality according to another embodiment;
FIG. 9 is a block diagram of an apparatus for implementing augmented reality according to an embodiment;
FIG. 10 is a block diagram of a browsing services kernel module in one embodiment;
FIG. 11 is a block diagram of a system for implementing augmented reality in one embodiment;
FIG. 12 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is an application environment diagram of an implementation method of augmented reality in an embodiment. Referring to fig. 1, the method for implementing augmented reality is applied to an implementation system of augmented reality. The augmented reality implementation system includes a terminal 110 and a server 120, which are connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers. Specifically, the terminal 110 receives a real scene acquisition request initiated by a webpage and calls a camera device to acquire a real scene picture according to the request. The acquired real scene picture is displayed on the webpage in real time, and a copy of it is transmitted to the terminal's local augmented reality identification module. The augmented reality identification module uploads acquired video frames containing the real scene picture to the server 120; the server 120 identifies the target object in the real scene picture to obtain an identification result and returns the identification result to the augmented reality identification module. The augmented reality identification module then returns the identification result to the webpage, so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
As shown in fig. 2, in one embodiment, a method for implementing augmented reality is provided. The embodiment is mainly illustrated as applied to the terminal 110. The method for realizing the augmented reality specifically comprises the following steps:
step S202, the terminal receives a real scene acquisition request initiated by a webpage, and calls a camera device to acquire a real scene picture according to the real scene acquisition request.
The camera device refers to a device for shooting video, such as a camera. In order to realize augmented reality in a webpage (web) scene, the camera device in the terminal needs to be called to capture a real scene picture. Specifically, the terminal receives a real scene acquisition request initiated by a webpage and then, in response to the request, calls the camera device to acquire a real scene picture. The real scene picture is a picture of the real world captured by the camera device; the real world is the objective, perceivable world that exists outside the human mind.
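As an illustration of this step, the following is a minimal sketch of how a webpage might request the camera and preview the captured real scene picture. It assumes a <video id="preview"> element on the page; the element id and constraint values are illustrative assumptions, not part of the patent.

```typescript
// Minimal sketch: the webpage requests the rear camera and previews the
// captured real scene picture in a <video id="preview"> element.
// The element id and constraint values are illustrative assumptions.
async function startRealSceneCapture(): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" }, // rear camera captures the real scene
    audio: false,
  });
  const preview = document.getElementById("preview") as HTMLVideoElement;
  preview.srcObject = stream;  // display the real scene picture on the webpage
  await preview.play();
  return stream;               // the same stream is later copied for recognition
}
```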
Step S204, displaying the acquired real scene picture on a webpage in real time, copying the real scene picture and transmitting the copy to the local augmented reality identification module of the terminal, and performing target identification on the real scene picture by the augmented reality identification module to obtain an identification result.
The augmented reality identification module is used for identifying the target object in the real scene picture to obtain an identification result. The identification result includes the identified target object and the position information of the target object. Terminal-local content is content stored in the terminal itself that can be accessed directly without a network, for example content stored on the terminal's disk; content that the terminal can only access through the network belongs to the webpage side. After the terminal acquires the real scene picture collected by the camera device, it displays the acquired real scene picture on the webpage in real time and, in parallel, transfers a copy of the real scene picture to the terminal's local augmented reality identification module. Here, a real scene picture refers to an acquired video frame: a video is composed of frames of images, and each video frame corresponds to one real scene picture. The augmented reality identification module performs target identification on the real scene picture to obtain an identification result. The augmented reality identification module is implemented by an AR SDK (Augmented Reality Software Development Kit). Because the augmented reality identification module resides locally in the terminal, it has Native (local) identification and target-object tracking capabilities, which greatly improves identification and thus the subsequent augmented reality display effect.
In another embodiment, the same real scene picture may contain a plurality of target objects, and different target objects require different target recognition models. In order to increase the speed of recognizing the target objects, a plurality of copies of the real scene picture may be made at the same time and transferred to the plurality of target recognition models in the augmented reality recognition module, so that the target recognition models can recognize their target objects simultaneously and recognition is accelerated.
In one embodiment, a webpage real-time communication module (WEBRTC module) in the local terminal copies the collected video stream containing the real scene picture into two parts: one part is transmitted to the front end of the webpage for display, and the other part is transmitted to the augmented reality identification module for target identification. WebRTC (Web Real-Time Communication) is a technology that enables real-time video communication in a web browser; here it is mainly responsible for rendering the real-time real scene picture.
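The "copy into two parts" idea can be sketched roughly as follows: the <video> element keeps rendering the live stream, while each frame is also drawn onto a canvas and handed to a native AR bridge. The bridge object (arBridge) and its pushFrame method are hypothetical names used only for illustration, not a real browser API.

```typescript
// Hypothetical bridge injected by the native shell; not a real browser API.
interface NativeARBridge {
  pushFrame(pixels: Uint8ClampedArray, width: number, height: number): void;
}

function forwardFramesToNativeAR(preview: HTMLVideoElement): void {
  const arBridge = (window as unknown as { arBridge?: NativeARBridge }).arBridge;
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d")!;

  const copyFrame = () => {
    if (preview.videoWidth > 0) {
      canvas.width = preview.videoWidth;
      canvas.height = preview.videoHeight;
      ctx.drawImage(preview, 0, 0);             // second copy of the displayed frame
      const { data, width, height } = ctx.getImageData(0, 0, canvas.width, canvas.height);
      arBridge?.pushFrame(data, width, height); // hand the copy to native recognition
    }
    requestAnimationFrame(copyFrame);           // keep pace with the rendered preview
  };
  requestAnimationFrame(copyFrame);
}
```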
In another embodiment, the copied video stream is first transferred to the AR engine module for preprocessing, where the preprocessing includes determining whether a video frame is clear, screening out video frames with higher definition, and the like. The AR engine module then transfers the preprocessed video frames to the augmented reality identification module for target identification, so as to obtain an identification result.
Step S206, returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
Augmented Reality (AR) is a technology that calculates the position and angle of the camera image in real time and adds corresponding images, videos, and 3D models; its purpose is to overlay a virtual world on the real world on a screen and to allow interaction between them. Augmented reality seamlessly integrates information of the real world with information of the virtual world: virtual information is applied to the real world, and the real environment and virtual objects are superimposed on the same picture or space in real time so that they coexist and complement each other. The identification result contains the position information of the identified target object; the display position of the virtual object is determined according to the position information of the target object, and the virtual object and the target object are then displayed together at that display position, thereby realizing augmented reality in the webpage scene. Fig. 3A and 3B are schematic diagrams of an augmented reality interface in an embodiment: fig. 3A is a schematic diagram of an interface for performing target recognition on a captured real scene picture, and fig. 3B is a schematic diagram of an augmented reality display interface, showing the picture displayed after the real scene and a virtual object (for example, the virtual castle in the drawing) are superimposed.
According to the method for realizing augmented reality, after the real scene picture is collected, on one hand, the collected real scene picture is displayed on the webpage in real time, on the other hand, the copied real scene picture is transferred to the local augmented reality recognition module of the terminal for target recognition to obtain a recognition result, the recognition result is returned to the webpage, and the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the recognition result. According to the method for realizing augmented reality, the local augmented reality identification module of the terminal is adopted to identify the target of the picture of the real scene, so that the page has a Native (local) AR identification effect, and the performance of augmented reality in the page scene is greatly improved.
In one embodiment, the augmented reality recognition module performs target recognition on the real scene picture, and the step of obtaining the recognition result includes: and the augmented reality identification module acquires a target identification model, and identifies the target object in the real scene picture by adopting the target identification model to obtain an identification result.
The target recognition model is a pre-established model for recognizing a target object in a real scene picture. The target recognition models corresponding to different target objects are different. Before the real scene picture is identified, firstly, a target object to be identified is determined, then a target identification model corresponding to the target object is obtained, and the target object in the real scene picture is identified by adopting the target identification model to obtain an identification result.
In one embodiment, before the step of acquiring the target recognition model by the augmented reality recognition module, the method further comprises: the augmented reality identification module receives a sample image containing a target object and extracts the characteristics of the target object in the sample image; and establishing the target recognition model according to the extracted features.
Before the augmented reality recognition module recognizes a target object in a real scene picture using a target recognition model, the target recognition model needs to be established in advance. The target recognition model is obtained by performing feature extraction on one or more registered sample images containing the target object and then training and learning on the extracted features. For example, if a user wants to recognize a "hand", a picture containing a "hand" is registered, and a target recognition model for recognizing the "hand" is obtained by learning the features of the "hand" in the picture.
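A minimal sketch of such a registration step is shown below. It assumes a hypothetical /ar/register endpoint that extracts features from the uploaded sample image and builds the corresponding recognition model; the endpoint and response fields are assumptions made for illustration only.

```typescript
// Hedged sketch of target registration. The /ar/register endpoint and the
// modelId field are hypothetical; the backend is assumed to extract features
// from the sample image and build the target recognition model.
async function registerTarget(label: string, sampleImage: Blob): Promise<string> {
  const form = new FormData();
  form.append("label", label);         // e.g. "hand"
  form.append("sample", sampleImage);  // sample image containing the target object
  const resp = await fetch("/ar/register", { method: "POST", body: form });
  const { modelId } = (await resp.json()) as { modelId: string };
  return modelId;                      // identifier of the newly built recognition model
}
```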
As shown in fig. 4, in an embodiment, the augmented reality recognition module performs object recognition on the real scene picture, and the step of obtaining a recognition result includes:
step S204A, the augmented reality identification module identifies the definition of the real scene picture collected in real time, and screens out the first target video frame with the definition larger than a preset threshold value.
Definition (i.e., sharpness) describes how clearly the details and their boundaries in the real scene picture are rendered and reflects image quality; it can be measured, for example, by the resolution of the video picture. In order to obtain a target identification model from the server, a clear video frame needs to be selected from the video stream of the real scene picture and uploaded to the server; the server then identifies the target object contained in the video frame and issues the corresponding target identification model to the terminal according to the identified target object. Specifically, the definition of the real scene picture collected in real time is evaluated, and a first target video frame whose definition is greater than a preset threshold is screened out. The preset threshold is set in advance according to the actual situation, and the number of first target video frames can also be configured as needed, for example one frame or multiple frames.
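The patent does not prescribe a concrete definition metric. As one common heuristic, the variance of a Laplacian response over the grayscale frame can serve as a sharpness score; the sketch below, with an assumed threshold value, illustrates how such screening might look.

```typescript
// Illustrative sharpness score: variance of a 4-neighbour Laplacian over the
// grayscale frame. The metric and the threshold value are assumptions, not
// requirements of the patent.
function laplacianSharpness(frame: ImageData): number {
  const { data, width, height } = frame;
  const gray = new Float32Array(width * height);
  for (let i = 0; i < width * height; i++) {
    gray[i] = 0.299 * data[4 * i] + 0.587 * data[4 * i + 1] + 0.114 * data[4 * i + 2];
  }
  let sum = 0, sumSq = 0, n = 0;
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      const i = y * width + x;
      const lap = 4 * gray[i] - gray[i - 1] - gray[i + 1] - gray[i - width] - gray[i + width];
      sum += lap; sumSq += lap * lap; n++;
    }
  }
  const mean = sum / n;
  return sumSq / n - mean * mean;   // variance of the Laplacian response
}

const DEFINITION_THRESHOLD = 100;   // assumed preset threshold
const isSharpEnough = (frame: ImageData) => laplacianSharpness(frame) > DEFINITION_THRESHOLD;
```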
Step S204B, transmitting the first target video frame to the server, so that the server identifies the first target object in the first target video frame, and returns the first target identification model corresponding to the first target object.
The server stores target recognition models corresponding to different target objects. In order to flexibly select a target recognition model according to the target object contained in the real scene picture, the screened first target video frame is transmitted to the server; the server recognizes the first target object in the first target video frame and returns the first target recognition model corresponding to the first target object to the augmented reality recognition module, which then recognizes the first target object in the real scene picture according to the received first target recognition model.
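A minimal sketch of this exchange is given below, assuming a hypothetical /ar/identify endpoint and response format; in practice the upload could also go through the AR SDK rather than a plain HTTP call.

```typescript
// Hedged sketch of uploading a screened frame and receiving the matching model.
// The /ar/identify endpoint and the descriptor fields are assumptions.
interface TargetModelDescriptor {
  targetLabel: string; // e.g. "hand", the first target object identified by the server
  modelId: string;     // identifier of the first target recognition model
  modelUrl: string;    // where the terminal can download the model data
}

async function requestTargetModel(firstTargetFrameJpeg: Blob): Promise<TargetModelDescriptor> {
  const resp = await fetch("/ar/identify", {
    method: "POST",
    headers: { "Content-Type": "image/jpeg" },
    body: firstTargetFrameJpeg,
  });
  return (await resp.json()) as TargetModelDescriptor;
}
```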
Step S204C, recognizing the first target object in the real scene picture according to the first target recognition model, and obtaining a recognition result.
After receiving the first target identification model returned by the server, the terminal's local augmented reality identification module uses the first target identification model to identify the first target object in the real scene picture and its position, so that the display position of the virtual object can be determined according to the position of the first target object and the virtual object can then be displayed together with the first target object in the real scene.
As shown in fig. 5, in an embodiment, the method for implementing augmented reality further includes:
and step S208, detecting whether the scene corresponding to the real scene picture is switched, if so, entering step S210, and if not, continuing to detect until the end.
When the scene of the real scene picture switches, the target object contained in the real scene picture has probably changed as well; for example, the target object in the previous real scene picture is a "face" while the target object in the new real scene picture is a "hand". Therefore, when a scene switch is detected in the scene corresponding to the real scene picture, the target recognition model corresponding to the target object needs to be determined anew. One way to detect whether a scene switch has occurred is to compare whether the pictures in several consecutive video frames change; if they do, the scene has changed, the video frames corresponding to the switched, updated scene are acquired, and a second target video frame corresponding to the updated scene is screened out.
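As one possible realization of the comparison described above, the sketch below scores the change between two consecutive grayscale frames by their mean absolute difference and flags a scene switch when the score exceeds an assumed threshold; the metric and threshold are illustrative assumptions.

```typescript
// Illustrative scene-switch check: mean absolute grayscale difference between
// two consecutive frames of equal size. Metric and threshold are assumptions.
function meanAbsFrameDiff(prev: ImageData, curr: ImageData): number {
  const a = prev.data, b = curr.data;
  let diff = 0;
  for (let i = 0; i < a.length; i += 4) {
    const grayA = 0.299 * a[i] + 0.587 * a[i + 1] + 0.114 * a[i + 2];
    const grayB = 0.299 * b[i] + 0.587 * b[i + 1] + 0.114 * b[i + 2];
    diff += Math.abs(grayA - grayB);
  }
  return diff / (a.length / 4);
}

const SCENE_SWITCH_THRESHOLD = 30;  // assumed value; tune per application
const sceneSwitched = (prev: ImageData, curr: ImageData) =>
  meanAbsFrameDiff(prev, curr) > SCENE_SWITCH_THRESHOLD;
```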
Step S210, obtaining a video frame corresponding to the updated scene after the scene switching, and screening a second target video frame corresponding to the updated scene.
After the scene switch is detected, the video frames corresponding to the switched, updated scene are acquired, and a second target video frame corresponding to the updated scene is screened out according to definition.
Step S212, transmitting the second target video frame to a server, so that the server identifies a second target object in the second target video frame, and returns a second target identification model corresponding to the second target object.
The screened second target video frame is uploaded to the server; the server identifies a second target object in the second target video frame, acquires a second target identification model corresponding to the second target object, and returns the second target identification model to the terminal's local augmented reality identification module.
Step S214, recognizing a second target object in the real scene picture according to the second target recognition model to obtain a recognition result.
And the local augmented reality identification module of the terminal identifies a second target object in the real scene picture by adopting a second target identification model, and then obtains the position information of the second target object.
In one embodiment, the augmented reality recognition module performs target recognition on the real scene picture, and the step of obtaining the recognition result includes: identifying the definition of a real scene picture collected in real time, and screening out a target video frame with the definition larger than a preset threshold value; transmitting the target video frame to a server so that the server identifies a target object in the real scene picture to obtain an identification result; and receiving the identification result returned by the server.
In order to obtain the recognition result from the server, the definition of the collected real scene picture is recognized, a target video frame with the definition larger than a preset threshold value is screened out, then the target video frame is transmitted to the server, the server recognizes a target object contained in the target video frame to obtain the recognition result, and then the recognition result is returned to the local augmented reality recognition module of the terminal. Specifically, the server includes a target identification model for identifying the target object, and the target identification model is used to identify the target object in the target video frame to obtain an identification result.
In one embodiment, the identification result is position information of the target object; the step of returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result comprises the following steps: and returning the position information of the target object to a webpage so that the webpage determines a display position corresponding to the virtual object according to the position information of the target object, and displaying the virtual object and the target object in a combined manner according to the display position.
The webpage determines the display position of the virtual object according to the position information of the target object and displays the virtual object and the target object together at that display position. In one embodiment, the augmented reality processing is implemented using WebGL (Web Graphics Library), the essential technology for rendering 3D animation in web front-end pages.
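For illustration only, the sketch below maps an assumed bounding-box style position result from video coordinates to page coordinates and anchors an overlay element there; a full implementation would render the virtual object itself with WebGL, and the result format shown is an assumption.

```typescript
// Hedged sketch: derive the display position of the virtual object from the
// target's position information. The RecognitionResult shape is assumed.
interface RecognitionResult {
  targetLabel: string;
  box: { x: number; y: number; width: number; height: number }; // in video pixels
}

function placeVirtualObject(result: RecognitionResult,
                            video: HTMLVideoElement,
                            overlay: HTMLElement): void {
  const scaleX = video.clientWidth / video.videoWidth;
  const scaleY = video.clientHeight / video.videoHeight;
  overlay.style.position = "absolute";
  overlay.style.left = `${result.box.x * scaleX}px`;   // display position from target position
  overlay.style.top = `${result.box.y * scaleY}px`;
  overlay.style.width = `${result.box.width * scaleX}px`;
  overlay.style.height = `${result.box.height * scaleY}px`;
}
```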
As shown in fig. 6, in an embodiment, the terminal locally includes a browsing service kernel module, where the browsing service kernel module includes a web page real-time communication module and an augmented reality identification module;
the step S204 of displaying the collected real scene picture on a webpage in real time, copying the real scene picture, transmitting the real scene picture to a local augmented reality recognition module of the terminal, performing target recognition on the real scene picture by the augmented reality recognition module, and obtaining a recognition result includes:
step S204a, the real-time web page communication module in the browsing service kernel module displays the collected real scene picture on the web page in real time, copies the real scene picture, and transmits the copied real scene picture to the augmented reality identification module.
The browsing service kernel module refers to a module that carries a browsing service kernel (for example, a TBS kernel); TBS refers to the Tencent Browsing Service, which is backed by the X5 kernel, a browser rendering engine based on the Chromium browser engine. The browsing service kernel module includes a webpage real-time communication module and an augmented reality identification module. After acquiring the real scene picture, the webpage real-time communication module copies it into two parts: one part is displayed on the webpage in real time, and the other part is transmitted to the augmented reality identification module.
Step S204b, the augmented reality identification module performs target identification on the real scene picture to obtain an identification result, and returns the identification result to the web page, so that the web page performs augmented reality processing on the real scene picture displayed on the web page according to the identification result.
The augmented reality identification module is used for identifying a target object in a real scene picture to obtain an identification result, then transmitting the identification result to a front-end webpage, and the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
Fig. 7 is an architecture diagram of an implementation method of augmented reality in an embodiment. Referring to fig. 7, a browsing service kernel (e.g., TBS kernel) module is stored inside the terminal; the browsing service kernel module includes an AR-SDK module (i.e., the augmented reality identification module), an AR engine module, a WEBRTC module, a WEBGL module, and a JSAPI interface. The AR-SDK module integrates an AR software development kit and is used for identifying and tracking (i.e., positioning) the target object in the real scene picture. The AR engine module preprocesses the received real scene picture and is responsible for data transmission, compatibility handling, and the like. WebRTC is the technical scheme by which an H5 (HTML5) page displays the content shot by the camera. The WEBRTC module acquires the video stream of the real scene picture collected by the camera device and copies it into two parts: one part is displayed on the front-end page, and the other is transmitted to the AR engine module, which preprocesses the received video stream and passes it to the AR-SDK module. The WEBGL module performs 3D animation rendering on the front-end page so as to realize augmented reality processing of the real scene picture. The JSAPI (JavaScript API) interface is the API (Application Program Interface) that provides output to the front-end page; the recognition result is returned to the front-end page through the JSAPI. Specifically, the WEBRTC module first acquires the real scene picture collected by the camera device; on the one hand the picture is displayed on the webpage in real time, and on the other hand a copy is transmitted to the AR engine module. The AR engine module preprocesses the real scene picture and transmits it to the AR-SDK module, which recognizes the target object in the real scene picture to obtain a recognition result and passes the result back to the AR engine module. The AR engine module then returns the recognition result to the WEB application through the JSAPI interface, and the WEB application calls the WEBGL module to perform 3D rendering according to the recognition result, realizing the augmented reality processing.
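To make the JSAPI boundary concrete, the declaration below sketches one hypothetical shape such an interface could take from the web application's point of view; the method and field names are assumptions for illustration, not the actual TBS/X5 interface.

```typescript
// Hypothetical JSAPI surface exposed by the browsing service kernel to the
// web application. Names are illustrative assumptions.
interface ArJsApi {
  // Ask the kernel to start camera capture and native target recognition.
  startRealSceneCapture(): Promise<void>;
  // Receive recognition results pushed back from the AR engine module.
  onRecognitionResult(callback: (result: {
    targetLabel: string;
    box: { x: number; y: number; width: number; height: number };
  }) => void): void;
}

declare const arJsApi: ArJsApi; // assumed to be injected by the kernel

arJsApi.onRecognitionResult((result) => {
  // hand the result to the WEBGL-based renderer to superimpose the virtual object
  console.log("recognized", result.targetLabel, "at", result.box);
});
```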
In another embodiment, the AR-SDK module is further configured to communicate with the server to transmit the video frames to the server, and then the server identifies the target object in the real scene picture, and then returns the identification result to the AR-SDK module.
As shown in fig. 8, a method for implementing augmented reality is provided, applied to a terminal that locally includes a webpage real-time communication module and an augmented reality identification module. The method specifically includes the following steps:
step S801, the terminal receives a real scene acquisition request initiated by a webpage, and calls a camera device to acquire a real scene picture according to the real scene acquisition request.
Step S802, the webpage real-time communication module displays the acquired real scene picture on a webpage in real time, copies the real scene picture and transmits the copy to the local augmented reality identification module of the terminal.
Step S803, the augmented reality identification module identifies the definition of the real scene picture collected in real time, and screens out the first target video frame with the definition larger than a preset threshold value.
Step S804, transmitting the first target video frame to the server, so that the server identifies the first target object in the first target video frame, and returns the first target identification model corresponding to the first target object.
Step S805, recognizing a first target object in the real scene picture according to the first target recognition model, to obtain a first recognition result.
Step S806 returns the first recognition result to the web page, so that the web page performs augmented reality processing on the real scene picture displayed on the web page according to the first recognition result.
Step S807, determining whether a scene corresponding to the real scene picture is switched, if yes, proceeding to step S808, and if no, ending.
Step S808, acquiring a video frame corresponding to the updated scene after the scene switching, and screening a second target video frame corresponding to the updated scene.
Step S809, transmitting the second target video frame to the server, so that the server identifies the second target object in the second target video frame, and returns the second target identification model corresponding to the second target object.
Step S810, identifying a second target object in the real scene picture according to the second target identification model to obtain a second identification result.
Step S811, returning the second recognition result to the web page, so that the web page performs augmented reality processing on the real scene picture displayed on the web page according to the second recognition result.
As shown in fig. 9, an apparatus for implementing augmented reality is provided, the apparatus including:
the acquisition module 902 is used for receiving a real scene acquisition request initiated by a webpage and calling a camera device to acquire a real scene picture according to the real scene acquisition request;
a transfer module 904, configured to display the acquired real scene picture on a webpage in real time, copy the real scene picture, and transfer the copied real scene picture to a local augmented reality identification module 906 of the terminal;
the augmented reality identification module 906 is configured to perform target identification on the real scene picture to obtain an identification result;
a returning module 908, configured to return the identification result to the web page, so that the web page performs augmented reality processing on the real scene picture displayed on the web page according to the identification result.
In an embodiment, the augmented reality identification module 906 is further configured to obtain a target identification model, and identify a target object in the real scene picture by using the target identification model, so as to obtain an identification result.
In one embodiment, the augmented reality identification module 906 is further configured to receive a sample image containing a target object, extract features of the target object in the sample image, and establish the target identification model according to the extracted features.
In an embodiment, the augmented reality identification module 906 is further configured to identify the definition of a real scene picture acquired in real time, screen out a first target video frame with the definition greater than a preset threshold, transmit the first target video frame to a server, so that the server identifies a first target object in the first target video frame, return a first target identification model corresponding to the first target object, and identify the first target object in the real scene picture according to the first target identification model to obtain an identification result.
In an embodiment, the augmented reality identification module 906 is further configured to, when it is identified that a scene corresponding to the real scene picture is switched, acquire a video frame corresponding to an updated scene after the scene is switched, screen a second target video frame corresponding to the updated scene, transmit the second target video frame to a server, so that the server identifies a second target object in the second target video frame, return to a second target identification model corresponding to the second target object, and identify the second target object in the real scene picture according to the second target identification model to obtain an identification result.
In an embodiment, the augmented reality identification module 906 is further configured to identify the definition of a real scene picture acquired in real time, screen out a target video frame with the definition greater than a preset threshold, transmit the target video frame to a server, so that the server identifies a target object in the real scene picture to obtain an identification result, and receive the identification result returned by the server.
In one embodiment, the identification result is position information of the target object; the returning module 908 is further configured to return the position information of the target object to a webpage, so that the webpage determines a display position corresponding to the virtual object according to the position information of the target object, and displays the virtual object and the target object in a combined manner according to the display position.
As shown in fig. 10, in an embodiment, the terminal locally includes a browsing service kernel module 100, where the browsing service kernel module includes a web page real-time communication module 1002 and an augmented reality identification module 1004;
the web real-time communication module 1002 is configured to display the acquired real scene picture on a web in real time, copy the real scene picture, and transmit the copied real scene picture to the augmented reality identification module;
the augmented reality identification module 1004 is configured to perform target identification on the real scene picture to obtain an identification result, and return the identification result to the webpage, so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
As shown in fig. 11, an implementation system of augmented reality is proposed, the system including:
the terminal 1102 is used for receiving a real scene acquisition request initiated by a webpage, calling a camera device to acquire a real scene picture according to the real scene acquisition request, displaying the acquired real scene picture on the webpage in real time, copying the real scene picture and transmitting the real scene picture to a local augmented reality identification module of the terminal, identifying the definition of the real scene picture acquired in real time by the augmented reality identification module, screening out a first target video frame with the definition larger than a preset threshold value, and transmitting the first target video frame to a server;
a server 1104 for identifying a first target object in the first target video frame and returning a first target identification model corresponding to the first target object;
the terminal 1102 is further configured to identify a first target object in the real scene picture according to the first target identification model to obtain an identification result, and return the identification result to the web page, so that the web page performs augmented reality processing on the real scene picture displayed on the web page according to the identification result.
In an embodiment, the terminal 1102 is further configured to, when it is identified that a scene corresponding to a real scene picture is switched, acquire a video frame corresponding to an updated scene after the scene is switched, and screen a second target video frame corresponding to the updated scene; transmitting the second target video frame to a server; the server 1104 is further configured to identify a second target object in the second target video frame, and return a second target identification model corresponding to the second target object; the terminal 1102 is further configured to identify a second target object in the real scene picture according to the second target identification model, so as to obtain an identification result.
In one embodiment, the terminal 1102 is further configured to identify the definition of a real-time acquired picture of a real scene, and screen out a target video frame with the definition greater than a preset threshold; transmitting the target video frame to a server; the server 1104 is further configured to identify a target object in the real scene picture, and obtain an identification result; the terminal 1102 is further configured to receive an identification result returned by the server.
FIG. 12 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be a terminal. As shown in fig. 12, the computer apparatus includes a processor, a memory, a network interface, an input device, a camera device, and a display screen, which are connected by a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program, which, when executed by the processor, causes the processor to implement the augmented reality implementation method. The internal memory may also store a computer program, and when the computer program is executed by the processor, the computer program may cause the processor to execute the method for implementing augmented reality. The camera device of the computer equipment is a camera and is used for collecting images. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like. Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the augmented reality implementation method provided by the present application may be implemented in the form of a computer program that is executable on a computer device as shown in fig. 12. The memory of the computer device may store various program modules constituting the augmented reality implementing apparatus, such as the acquisition module 902, the transfer module 904, the augmented reality identification module 906, and the return module 908 of fig. 9. The computer program constituted by the program modules causes the processor to execute the steps in the augmented reality implementation apparatus of the embodiments of the present application described in the present specification. For example, the computer device shown in fig. 12 may receive a real scene acquisition request initiated by a web page through the acquisition module 902 of the implementation apparatus of augmented reality shown in fig. 9, call a camera device to acquire a real scene picture according to the real scene acquisition request, display the acquired real scene picture on the web page in real time through the transmission module 904, copy the real scene picture and transmit the real scene picture to the local augmented reality identification module of the terminal, perform target identification on the real scene picture through the augmented reality identification module 906 to obtain an identification result, and return the identification result to the web page through the return module 908, so that the web page performs augmented reality processing on the real scene picture displayed on the web page according to the identification result.
In one embodiment, a computer device is proposed, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of: receiving a real scene acquisition request initiated by a webpage, and calling a camera device to acquire a real scene picture according to the real scene acquisition request; displaying the collected real scene picture on a webpage in real time, copying the real scene picture and transmitting the real scene picture to an augmented reality identification module of a local computer device, and carrying out target identification on the real scene picture by the augmented reality identification module to obtain an identification result; and returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
In one embodiment, the augmented reality recognition module performs target recognition on the real scene picture, and the step of obtaining the recognition result includes: and the augmented reality identification module acquires a target identification model, and identifies the target object in the real scene picture by adopting the target identification model to obtain an identification result.
In one embodiment, the processor is further configured to perform the steps of: the augmented reality identification module receives a sample image containing a target object and extracts the characteristics of the target object in the sample image; and establishing the target recognition model according to the extracted features.
In one embodiment, the augmented reality recognition module performs target recognition on the real scene picture, and the step of obtaining the recognition result includes: the augmented reality identification module identifies the definition of a real scene picture acquired in real time and screens out a first target video frame with the definition larger than a preset threshold value; transmitting the first target video frame to a server so that the server identifies a first target object in the first target video frame and returns a first target identification model corresponding to the first target object; and identifying a first target object in the real scene picture according to the first target identification model to obtain an identification result.
In one embodiment, the processor is further configured to perform the steps of: when scene switching of a scene corresponding to the real scene picture is identified, acquiring a video frame corresponding to an updated scene after the scene switching, and screening a second target video frame corresponding to the updated scene; transmitting the second target video frame to a server so that the server identifies a second target object in the second target video frame and returns a second target identification model corresponding to the second target object; and identifying a second target object in the real scene picture according to the second target identification model to obtain an identification result.
In one embodiment, the augmented reality recognition module performs target recognition on the real scene picture, and the step of obtaining the recognition result includes: identifying the definition of a real scene picture collected in real time, and screening out a target video frame with the definition larger than a preset threshold value; transmitting the target video frame to a server so that the server identifies a target object in the real scene picture to obtain an identification result; and receiving the identification result returned by the server.
In one embodiment, the identification result is position information of the target object; the step of returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result comprises the following steps: and returning the position information of the target object to a webpage so that the webpage determines a display position corresponding to the virtual object according to the position information of the target object, and displaying the virtual object and the target object in a combined manner according to the display position.
In one embodiment, the computer device locally comprises a browsing service kernel module, and the browsing service kernel module comprises a webpage real-time communication module and an augmented reality identification module; the method comprises the following steps of displaying collected real scene pictures on a webpage in real time, copying the real scene pictures, transmitting the real scene pictures to a local augmented reality identification module of a terminal, carrying out target identification on the real scene pictures by the augmented reality identification module, and obtaining an identification result, wherein the steps comprise: the webpage real-time communication module in the browsing service kernel module displays the collected real scene picture on a webpage in real time, copies the real scene picture and transmits the real scene picture to the augmented reality identification module, the augmented reality identification module identifies the target of the real scene picture to obtain an identification result, and returns the identification result to the webpage, so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
In one embodiment, a computer-readable storage medium is proposed, in which a computer program is stored which, when executed by a processor, causes the processor to carry out the steps of: receiving a real scene acquisition request initiated by a webpage, and calling a camera device to acquire a real scene picture according to the real scene acquisition request; displaying the collected real scene picture on a webpage in real time, copying the real scene picture and transmitting the real scene picture to an augmented reality identification module of a local computer device, and carrying out target identification on the real scene picture by the augmented reality identification module to obtain an identification result; and returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
In one embodiment, the augmented reality recognition module performs target recognition on the real scene picture, and the step of obtaining the recognition result includes: and the augmented reality identification module acquires a target identification model, and identifies the target object in the real scene picture by adopting the target identification model to obtain an identification result.
In one embodiment, the processor is further configured to perform the steps of: the augmented reality identification module receives a sample image containing a target object and extracts the characteristics of the target object in the sample image; and establishing the target recognition model according to the extracted features.
In one embodiment, the augmented reality recognition module performs target recognition on the real scene picture, and the step of obtaining the recognition result includes: the augmented reality identification module identifies the definition of a real scene picture acquired in real time and screens out a first target video frame with the definition larger than a preset threshold value; transmitting the first target video frame to a server so that the server identifies a first target object in the first target video frame and returns a first target identification model corresponding to the first target object; and identifying a first target object in the real scene picture according to the first target identification model to obtain an identification result.
In one embodiment, the processor is further configured to perform the following steps: when it is identified that the scene corresponding to the real scene picture has switched, acquiring a video frame corresponding to the updated scene after the switch and screening out a second target video frame corresponding to the updated scene; transmitting the second target video frame to a server so that the server identifies a second target object in the second target video frame and returns a second target recognition model corresponding to the second target object; and identifying the second target object in the real scene picture according to the second target recognition model to obtain an identification result.
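The application does not state how the scene switch is recognized. The sketch below uses a grayscale-histogram difference between consecutive frames purely as one plausible trigger; the bin count and the 0.35 threshold are arbitrary choices made for illustration.

```typescript
// Sketch of one way to notice the scene switch that triggers re-screening.
function grayHistogram(img: ImageData, bins = 32): Float32Array {
  const h = new Float32Array(bins);
  for (let i = 0; i < img.data.length; i += 4) {
    const gray = 0.299 * img.data[i] + 0.587 * img.data[i + 1] + 0.114 * img.data[i + 2];
    h[Math.min(bins - 1, Math.floor(gray / (256 / bins)))]++;
  }
  const total = img.width * img.height;
  return h.map(v => v / total);
}

function sceneSwitched(prev: ImageData, curr: ImageData, threshold = 0.35): boolean {
  const a = grayHistogram(prev);
  const b = grayHistogram(curr);
  let diff = 0;
  for (let i = 0; i < a.length; i++) diff += Math.abs(a[i] - b[i]);
  // A large histogram shift suggests an updated scene, so a second target video
  // frame should be screened out and sent to the server.
  return diff > threshold;
}
```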
In one embodiment, the step of the augmented reality identification module performing target identification on the real scene picture to obtain the identification result comprises: identifying the sharpness of the real scene picture acquired in real time and screening out a target video frame whose sharpness is greater than a preset threshold; transmitting the target video frame to a server so that the server identifies a target object in the real scene picture to obtain an identification result; and receiving the identification result returned by the server.
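For this variant, where the server performs the recognition itself and only the identification result travels back, the round trip could be as small as the following sketch; the endpoint URL and the result fields are assumptions, not an interface defined by this application.

```typescript
// Sketch of the server-side recognition variant: the screened target video frame
// goes up, only the identification result comes back.
interface ServerResult {
  found: boolean;
  x: number;
  y: number;
  width: number;
  height: number;
}

async function recognizeOnServer(targetFrame: Blob): Promise<ServerResult> {
  const resp = await fetch("/ar/recognize", { method: "POST", body: targetFrame });
  if (!resp.ok) {
    throw new Error(`recognition request failed: ${resp.status}`);
  }
  return resp.json(); // the webpage then processes the displayed picture with this result
}
```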
In one embodiment, the identification result is position information of the target object, and the step of returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result comprises: returning the position information of the target object to the webpage so that the webpage determines, according to the position information of the target object, a display position for a virtual object and displays the virtual object combined with the target object at the display position.
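When the identification result is the target object's position information, the page-side augmented reality processing can be as simple as positioning a DOM element over the target, as in the sketch below. The TargetPosition shape and the assumption that coordinates arrive in page pixel space are illustrative only.

```typescript
// Sketch of the page-side augmented reality processing once the position
// information of the target object is returned.
interface TargetPosition {
  x: number;
  y: number;
  width: number;
  height: number;
}

function placeVirtualObject(el: HTMLElement, pos: TargetPosition): void {
  // Center the virtual object on the recognized target so that, on screen, it
  // appears combined with the target in the displayed real scene picture.
  el.style.position = "absolute";
  el.style.left = `${pos.x + pos.width / 2 - el.offsetWidth / 2}px`;
  el.style.top = `${pos.y + pos.height / 2 - el.offsetHeight / 2}px`;
  el.style.display = "block";
}
```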
In one embodiment, the processor is further configured to perform the following steps: the webpage real-time communication module in the browsing service kernel module displays the acquired real scene picture on the webpage in real time, copies the real scene picture and transmits the copy to the augmented reality identification module; the augmented reality identification module performs target identification on the real scene picture to obtain an identification result and returns the identification result to the webpage, so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium; when the program is executed, the processes of the method embodiments described above can be included. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features involves no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

1. An implementation method of augmented reality, the method comprising:
a terminal receives a real scene acquisition request initiated by a webpage, and calls a camera device to acquire a real scene picture according to the real scene acquisition request;
displaying the acquired real scene picture on the webpage in real time, copying the real scene picture and transmitting the copy to an augmented reality identification module local to the terminal; identifying, by the augmented reality identification module, the sharpness of the real scene picture, screening out a first target video frame whose sharpness is greater than a preset threshold, and transmitting the first target video frame to a server, so that the server identifies a first target object in the first target video frame and returns a first target recognition model corresponding to the first target object; the target recognition model is a pre-established model for recognizing a target object in a real scene picture;
receiving the first target recognition model returned by the server, and performing, by the augmented reality identification module, target recognition on the real scene picture using the first target recognition model to obtain an identification result;
and returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
2. The method of claim 1, wherein the target recognition models for different target objects are different.
3. The method of claim 2, wherein the step of deriving the target recognition model comprises:
the augmented reality identification module receives a sample image containing a target object and extracts the characteristics of the target object in the sample image;
and establishing the target recognition model according to the extracted features.
4. The method of claim 1, wherein the sharpness is used to reflect image quality; the sharpness is measured in terms of the resolution of the video pictures.
5. The method of claim 1, further comprising:
when it is identified that the scene corresponding to the real scene picture has switched, acquiring a video frame corresponding to the updated scene after the switch, and screening out a second target video frame corresponding to the updated scene;
transmitting the second target video frame to a server so that the server identifies a second target object in the second target video frame and returns a second target identification model corresponding to the second target object;
and identifying a second target object in the real scene picture according to the second target identification model to obtain an identification result.
6. The method according to claim 1, wherein the step of the augmented reality identification module performing target recognition on the real scene picture using the first target recognition model to obtain the identification result comprises:
the server identifies a target object in the real scene picture to obtain an identification result;
and receiving the identification result returned by the server.
7. The method according to claim 1, wherein the identification result is position information of a target object;
the step of returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result comprises the following steps:
and returning the position information of the target object to a webpage so that the webpage determines a display position corresponding to the virtual object according to the position information of the target object, and displaying the virtual object and the target object in a combined manner according to the display position.
8. The method according to claim 1, wherein the terminal locally comprises a browsing service kernel module, and the browsing service kernel module comprises a web page real-time communication module and an augmented reality identification module;
the steps of displaying the collected real scene picture on a webpage in real time, copying the real scene picture and transmitting the real scene picture to a local augmented reality identification module of the terminal comprise:
and a webpage real-time communication module in the browsing service kernel module displays the acquired real scene picture on a webpage in real time, copies the real scene picture and transmits the copied real scene picture to the augmented reality identification module.
9. An apparatus for implementing augmented reality, the apparatus comprising:
the acquisition module is used for receiving a real scene acquisition request initiated by a webpage and calling a camera device to acquire a real scene picture according to the real scene acquisition request;
the transmission module is used for displaying the acquired real scene picture on a webpage in real time, copying the real scene picture and transmitting the copied real scene picture to the local augmented reality identification module of the terminal;
the augmented reality identification module is further configured to identify the sharpness of the real scene picture, screen out a first target video frame whose sharpness is greater than a preset threshold, and transmit the first target video frame to a server, so that the server identifies a first target object in the first target video frame and returns a first target recognition model corresponding to the first target object; the target recognition model is a pre-established model for recognizing a target object in a real scene picture;
the device is further used for receiving the first target recognition model returned by the server;
the augmented reality identification module is configured to perform target identification on the real scene picture using the first target recognition model to obtain an identification result;
and the return module is used for returning the identification result to the webpage so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
10. The apparatus of claim 9, wherein the target recognition models for different target objects are different.
11. The apparatus of claim 10, wherein the augmented reality recognition module is further configured to receive a sample image containing a target object, extract features of the target object in the sample image, and build the target recognition model according to the extracted features.
12. The apparatus of claim 9, wherein the sharpness is used to reflect image quality; the sharpness is measured in terms of the resolution of the video pictures.
13. The apparatus according to claim 9, wherein the augmented reality identification module is further configured to, when it is identified that the scene corresponding to the real scene picture has switched, acquire a video frame corresponding to the updated scene after the switch and screen out a second target video frame corresponding to the updated scene; transmit the second target video frame to a server so that the server identifies a second target object in the second target video frame and returns a second target identification model corresponding to the second target object; and identify a second target object in the real scene picture according to the second target identification model to obtain an identification result.
14. The apparatus according to claim 9, wherein the server identifies a target object in the real scene picture, resulting in an identification result; the augmented reality identification module is further configured to receive the identification result returned by the server.
15. The apparatus according to claim 9, wherein the recognition result is position information of a target object; the returning module is further configured to return the position information of the target object to a webpage, so that the webpage determines a display position corresponding to the virtual object according to the position information of the target object, and displays the virtual object and the target object in a combined manner according to the display position.
16. The device according to claim 9, wherein the terminal locally comprises a browsing service kernel module, and the browsing service kernel module comprises a web page real-time communication module and an augmented reality identification module; the webpage real-time communication module displays the collected real scene picture on a webpage in real time, copies the real scene picture and transmits the copied real scene picture to the augmented reality identification module.
17. An augmented reality implementation system, the system comprising:
the terminal is used for receiving a real scene acquisition request initiated by a webpage, calling a camera device to acquire a real scene picture according to the real scene acquisition request, displaying the acquired real scene picture on the webpage in real time, copying the real scene picture and transmitting the copy to a local augmented reality identification module of the terminal, identifying, by the augmented reality identification module, the sharpness of the real scene picture acquired in real time, screening out a first target video frame whose sharpness is greater than a preset threshold, and transmitting the first target video frame to the server;
the server is used for identifying a first target object in the first target video frame and returning a first target identification model corresponding to the first target object; the target recognition model is a pre-established model for recognizing a target object in a real scene picture;
the terminal is further used for identifying a first target object in the real scene picture according to the first target identification model to obtain an identification result, and returning the identification result to the webpage, so that the webpage performs augmented reality processing on the real scene picture displayed on the webpage according to the identification result.
18. The system according to claim 17, wherein the terminal is further configured to, when it is identified that the scene corresponding to the real scene picture has switched, acquire a video frame corresponding to the updated scene after the switch, screen out a second target video frame corresponding to the updated scene, and transmit the second target video frame to the server;
the server is further used for identifying a second target object in the second target video frame and returning a second target identification model corresponding to the second target object;
and the terminal is also used for identifying a second target object in the real scene picture according to the second target identification model to obtain an identification result.
19. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 8.
20. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 8.
CN201810242139.5A 2018-03-22 2018-03-22 Augmented reality implementation method, device, system, computer equipment and storage medium Active CN108492352B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810242139.5A CN108492352B (en) 2018-03-22 2018-03-22 Augmented reality implementation method, device, system, computer equipment and storage medium
PCT/CN2019/077781 WO2019179331A1 (en) 2018-03-22 2019-03-12 Augmented reality implementation method, apparatus and system, and computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810242139.5A CN108492352B (en) 2018-03-22 2018-03-22 Augmented reality implementation method, device, system, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108492352A CN108492352A (en) 2018-09-04
CN108492352B true CN108492352B (en) 2021-10-22

Family

ID=63319449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810242139.5A Active CN108492352B (en) 2018-03-22 2018-03-22 Augmented reality implementation method, device, system, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN108492352B (en)
WO (1) WO2019179331A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108492352B (en) * 2018-03-22 2021-10-22 腾讯科技(深圳)有限公司 Augmented reality implementation method, device, system, computer equipment and storage medium
CN109561278B (en) * 2018-09-21 2020-12-29 中建科技有限公司深圳分公司 Augmented reality display system and method
CN109889814A * 2019-03-18 2019-06-14 罗叶迪 Non-fixed panoramic video native real-time live video broadcasting method for virtual reality headsets
CN110147288A (en) * 2019-05-13 2019-08-20 浙江商汤科技开发有限公司 Information interacting method and device, electronic equipment and storage medium
CN110134532A (en) * 2019-05-13 2019-08-16 浙江商汤科技开发有限公司 A kind of information interacting method and device, electronic equipment and storage medium
CN110553714B (en) * 2019-08-31 2022-01-14 深圳市广宁股份有限公司 Intelligent vibration augmented reality testing method and related product
CN112712098A (en) * 2019-10-25 2021-04-27 北京四维图新科技股份有限公司 Image data processing method and device
JP2023543111A (en) * 2020-09-21 2023-10-13 奇▲オ▼創新有限公司 Augmented reality content distribution method, system and computer readable recording medium
CN112330816B (en) * 2020-10-19 2024-03-26 杭州易现先进科技有限公司 AR identification processing method and device and electronic device
CN112577488B (en) * 2020-11-24 2022-09-02 腾讯科技(深圳)有限公司 Navigation route determining method, navigation route determining device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106937531A (en) * 2014-06-14 2017-07-07 奇跃公司 Method and system for producing virtual and augmented reality
CN107222529A (en) * 2017-05-22 2017-09-29 北京邮电大学 Augmented reality processing method, WEB modules, terminal and cloud server
CN107316035A * 2017-08-07 2017-11-03 北京中星微电子有限公司 Object identification method and device based on deep learning neural network
CN107480587A * 2017-07-06 2017-12-15 阿里巴巴集团控股有限公司 Model configuration and image recognition method and device
CN107609051A * 2017-08-22 2018-01-19 阿里巴巴集团控股有限公司 Image rendering method, device and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160180594A1 (en) * 2014-12-22 2016-06-23 Hand Held Products, Inc. Augmented display and user input device
EP3291531A1 (en) * 2016-09-06 2018-03-07 Thomson Licensing Methods, devices and systems for automatic zoom when playing an augmented reality scene
CN107085868A (en) * 2017-04-27 2017-08-22 腾讯科技(深圳)有限公司 image drawing method and device
CN107608649A * 2017-11-02 2018-01-19 泉州创景视迅数字科技有限公司 AR augmented reality intelligent image recognition and content display system and application method
CN108492352B (en) * 2018-03-22 2021-10-22 腾讯科技(深圳)有限公司 Augmented reality implementation method, device, system, computer equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106937531A (en) * 2014-06-14 2017-07-07 奇跃公司 Method and system for producing virtual and augmented reality
CN107222529A (en) * 2017-05-22 2017-09-29 北京邮电大学 Augmented reality processing method, WEB modules, terminal and cloud server
CN107480587A * 2017-07-06 2017-12-15 阿里巴巴集团控股有限公司 Model configuration and image recognition method and device
CN107316035A * 2017-08-07 2017-11-03 北京中星微电子有限公司 Object identification method and device based on deep learning neural network
CN107609051A * 2017-08-22 2018-01-19 阿里巴巴集团控股有限公司 Image rendering method, device and electronic equipment

Also Published As

Publication number Publication date
CN108492352A (en) 2018-09-04
WO2019179331A1 (en) 2019-09-26

Similar Documents

Publication Publication Date Title
CN108492352B (en) Augmented reality implementation method, device, system, computer equipment and storage medium
CN109376667B (en) Target detection method and device and electronic equipment
KR102348636B1 (en) Image processing method, terminal and storage medium
CN107222529B (en) Augmented reality processing method, WEB module, terminal and cloud server
CN107808111B (en) Method and apparatus for pedestrian detection and attitude estimation
CN109005334B (en) Imaging method, device, terminal and storage medium
AU2013273829A1 (en) Time constrained augmented reality
CN110287891B (en) Gesture control method and device based on human body key points and electronic equipment
US11727707B2 (en) Automatic image capture system based on a determination and verification of a physical object size in a captured image
CN110738116B (en) Living body detection method and device and electronic equipment
CN107959798B (en) Video data real-time processing method and device and computing equipment
US20170287187A1 (en) Method of generating a synthetic image
CN115035580A (en) Figure digital twinning construction method and system
CN116916151B (en) Shooting method, electronic device and storage medium
US20180121729A1 (en) Segmentation-based display highlighting subject of interest
CN111800604A (en) Method and device for detecting human shape and human face data based on gun and ball linkage
US10282633B2 (en) Cross-asset media analysis and processing
CN115278047A (en) Shooting method, shooting device, electronic equipment and storage medium
CN111914850B (en) Picture feature extraction method, device, server and medium
CN115082496A (en) Image segmentation method and device
CN108431867B (en) Data processing method and terminal
CN114143442B (en) Image blurring method, computer device, and computer-readable storage medium
KR102550216B1 (en) Image processing apparatuuus, server, image processing system, image processing method
CN117176979B (en) Method, device, equipment and storage medium for extracting content frames of multi-source heterogeneous video
CN112927142B (en) High-speed high-resolution video generation method and device based on time domain interpolation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant