CN108109209A - An augmented-reality-based video processing method and apparatus - Google Patents

An augmented-reality-based video processing method and apparatus

Info

Publication number
CN108109209A
Authority
CN
China
Prior art keywords
user
data
image
augmented reality
virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711309645.3A
Other languages
Chinese (zh)
Inventor
汤锦鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Guangzhou Dongjing Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Dongjing Computer Technology Co Ltd filed Critical Guangzhou Dongjing Computer Technology Co Ltd
Priority to CN201711309645.3A priority Critical patent/CN108109209A/en
Publication of CN108109209A publication Critical patent/CN108109209A/en
Priority to PCT/CN2018/103602 priority patent/WO2019114328A1/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present application provides an augmented-reality-based video processing method and apparatus. The method includes: processing at least one frame of image in a first video stream to obtain data of at least one first object in the image; and performing augmented reality processing on the first object data together with acquired user data, and rendering a first virtual scene in which the user is combined with the image. Embodiments of the present application can use augmented reality technology to display products to be promoted in a video more comprehensively.

Description

An augmented-reality-based video processing method and apparatus
Technical field
Embodiments of the present application relate to the field of augmented reality, and in particular to an augmented-reality-based video processing method and apparatus.
Background technology
Augmented reality (AR) is a technology that "seamlessly" integrates real-world information with virtual-world information. Entity information that is ordinarily difficult to experience within a given region of time and space in the real world (visual information, sound, taste, touch, and so on) is simulated with computers and other technology and then superimposed, so that virtual information is applied to the real world and perceived by the human senses, achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed in real time onto the same picture or space, where they exist simultaneously.
Augmented reality presents not only real-world information but also virtual information at the same time; the two kinds of information complement and superimpose each other. In visual augmented reality, a user wears a head-mounted display through which real-world views and computer graphics are composited, so that the real world can be seen with virtual content around it.
Traditional augmented reality mainly uses image processing techniques: after the user's posture and image are captured, a first virtual scene is rendered from stored samples, and this virtual scene is used for product promotion and user purchase. Meanwhile, with the development of video technology, carrying out product promotion and user purchase by watching video has become increasingly popular, because video content is interesting in itself and presents products comprehensively.
Therefore, how to use augmented reality technology to optimize the comprehensiveness of product presentation in video has become a technical problem to be urgently solved in the prior art.
Summary of the invention
One of the technical problems solved by the embodiments of the present application is to provide an augmented-reality-based video processing method and apparatus that can use augmented reality technology to display products to be promoted in a video more comprehensively.
An embodiment of the present application provides an augmented-reality-based video processing method, including:
processing at least one frame of image in a first video stream to obtain data of at least one first object in the image;
performing augmented reality processing on the first object data together with acquired user data, and rendering a first virtual scene in which the user is combined with the image.
In one embodiment of the application, processing at least one frame of image in the first video stream to obtain data of at least one first object in the image includes:
performing image recognition processing on at least one frame of image data in the first video stream using an image recognition algorithm;
obtaining at least one first object, and its first object data, that satisfies a user input instruction and/or a preset instruction.
In one embodiment of the application, performing augmented reality processing on the first object data together with the acquired user data and rendering the first virtual scene in which the user is combined with the image includes:
judging whether the first object data is a face;
if so, performing augmented reality processing on the face data together with the acquired user data using the MTCNN algorithm, and rendering the first virtual scene in which the user is combined with the image;
if not, performing augmented reality processing on the non-face data together with the acquired user data using the SSD algorithm, and rendering the first virtual scene in which the user is combined with the image.
In one embodiment of the application, the method further includes:
generating an image according to the first virtual scene, and/or generating a second video stream according to the video stream and the first virtual scene.
In one embodiment of the application, the method further includes:
sharing the image and/or the second video stream.
In one embodiment of the application, the first object includes at least one of: a face, clothing, shoes and hats, accessories, makeup effects, hairstyles, furniture, decorations, scenes, and persons.
In one embodiment of the application, the method further includes:
providing detailed information about the first object.
In one embodiment of the application, the method further includes:
providing purchase information for the first object.
Corresponding to the above method, an embodiment of the application provides an augmented-reality-based video processing apparatus, including:
a data acquisition module, configured to process at least one frame of image in a first video stream and obtain data of at least one first object in the image;
a scene generation module, configured to perform augmented reality processing on the first object data together with acquired user data, and render a first virtual scene in which the user is combined with the image.
In one embodiment of the application, the data acquisition module includes:
a recognition processing unit, configured to perform image recognition processing on at least one frame of image data in the first video stream using an image recognition algorithm;
a first object processing unit, configured to obtain at least one first object, and its first object data, that satisfies a user input instruction and/or a preset instruction.
In one embodiment of the application, the scene generation module includes:
an object judging unit, configured to judge whether the first object data is a face;
a first algorithm unit, configured to, if so, perform augmented reality processing on the face data together with the acquired user data using the MTCNN algorithm, and render the first virtual scene in which the user is combined with the image;
a second algorithm unit, configured to, if not, perform augmented reality processing on the non-face data together with the acquired user data using the SSD algorithm, and render the first virtual scene in which the user is combined with the image.
In one embodiment of the application, the apparatus further includes:
a video generation module, configured to generate an image according to the first virtual scene, and/or generate a second video stream according to the video stream and the first virtual scene.
In one embodiment of the application, the apparatus further includes:
a video sharing module, configured to share the image and/or the second video stream.
In one embodiment of the application, the first object includes at least one of: a face, clothing, shoes and hats, accessories, makeup effects, hairstyles, furniture, decorations, scenes, and persons.
In one embodiment of the application, the apparatus further includes:
an information providing module, configured to provide detailed information about the first object.
In one embodiment of the application, the apparatus further includes:
a purchase providing module, configured to provide purchase information for the first object.
In the embodiments of the present application, at least one frame of image in a first video stream is processed to obtain data of at least one first object in the image. Augmented reality processing is then performed on the first object data together with acquired user data, and a first virtual scene in which the user is combined with the image is rendered. By processing the video stream and combining the resulting first object data with user data for augmented reality processing, the application obtains a more realistic first virtual scene in which the user is combined with the image. The application can therefore use augmented reality technology to display products to be promoted in the video more comprehensively.
Description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required by the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some of the embodiments described in the present application, and those of ordinary skill in the art may derive other drawings from them.
Fig. 1 is a hardware structure diagram of a computer device to which an embodiment of the present application is applied;
Fig. 2 is a flowchart of one embodiment of an augmented-reality-based video processing method provided by the application;
Fig. 3 is a flowchart of step S1 in another embodiment of the augmented-reality-based video processing method provided by the application;
Fig. 4 is a schematic diagram of a user selection interface in the augmented-reality-based video processing method provided by the application;
Fig. 5 is a schematic diagram of another user selection interface in the augmented-reality-based video processing method provided by the application;
Fig. 6 is a schematic diagram of yet another user selection interface in the augmented-reality-based video processing method provided by the application;
Fig. 7 is a flowchart of step S2 in another embodiment of the augmented-reality-based video processing method provided by the application;
Fig. 8 is a flowchart of another embodiment of the augmented-reality-based video processing method provided by the application;
Fig. 9 is a flowchart of yet another embodiment of the augmented-reality-based video processing method provided by the application;
Fig. 10 is a flowchart of yet another embodiment of the augmented-reality-based video processing method provided by the application;
Fig. 11 is a flowchart of yet another embodiment of the augmented-reality-based video processing method provided by the application;
Fig. 12 is a structure diagram of one embodiment of an augmented-reality-based video processing apparatus provided by the application;
Fig. 13 is a structure diagram of the data acquisition module in another embodiment of the augmented-reality-based video processing apparatus provided by the application;
Fig. 14 is a structure diagram of the scene generation module in another embodiment of the augmented-reality-based video processing apparatus provided by the application;
Fig. 15 is a structure diagram of another embodiment of the augmented-reality-based video processing apparatus provided by the application;
Fig. 16 is a structure diagram of yet another embodiment of the augmented-reality-based video processing apparatus provided by the application;
Fig. 17 is a structure diagram of yet another embodiment of the augmented-reality-based video processing apparatus provided by the application;
Fig. 18 is a structure diagram of yet another embodiment of the augmented-reality-based video processing apparatus provided by the application;
Fig. 19 is a structure diagram of the hardware device to which the augmented-reality-based video processing apparatus provided by the application is applied.
Specific embodiment
In the embodiments of the present application, at least one frame of image in a first video stream is processed to obtain data of at least one first object in the image. Augmented reality processing is then performed on the first object data together with acquired user data, and a first virtual scene in which the user is combined with the image is rendered. By processing the video stream and combining the resulting first object data with user data for augmented reality processing, the application obtains a more realistic first virtual scene in which the user is combined with the image. The application can therefore use augmented reality technology to display products to be promoted in the video more comprehensively.
Although the application can be embodied in many different forms, specific embodiments are shown in the drawings and described in detail herein, with the understanding that the present disclosure is to be considered an exemplification of the principles involved and is not intended to limit the application to the specific embodiments shown and described. In the description below, like reference numerals describe the same, similar, or corresponding parts in the several views of the drawings.
As used herein, the terms "a" or "an" are defined as one or more than one. The term "plurality" is defined as two or more than two. The term "another" is defined as at least a second or more. The terms "including" and/or "having" are defined as comprising (i.e., open language). The term "coupled" is defined as connected, although not necessarily directly, and not necessarily mechanically. The terms "program" or "computer program" or similar terms are defined as a sequence of instructions designed for execution on a computer system. A "program" or "computer program" may include a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, source code, object code, a shared library/dynamic load library, and/or other sequences of instructions designed for execution on a computer system.
Reference throughout this document to "one embodiment", "some embodiments", "an embodiment", or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation.
As used herein, the term "or" is to be interpreted as inclusive, meaning any one or any combination. Therefore, "A, B or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or acts is in some way inherently mutually exclusive.
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present application, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the application shall fall within the protection scope of the application.
The implementation of the embodiments of the present application is further illustrated below with reference to the accompanying drawings.
One embodiment of the application provides an augmented-reality-based video processing method, which can be applied to mobile terminals such as mobile phones and PADs, and can also be applied to terminals such as PCs or advertising machines.
Referring to Fig. 1, the terminal generally includes a main control chip 11, a memory 12, an input/output device 13, and other hardware 14. The main control chip 11 controls each functional module, and the memory 12 stores applications and data.
Referring to Fig. 2, the method includes:
S1: processing at least one frame of image in a first video stream to obtain data of at least one first object in the image.
The application may process every frame of image in the first video stream, or may process only the images into which an advertisement is to be inserted or on which operations such as merchandise display are to be performed, i.e., process only the images of particular frames.
The position of the first object in the first video stream may be represented with a fixed or moving marker, with a popup web page, or with an additional transparent layer.
Specifically, the first object includes at least one of: a face, clothing, shoes and hats, accessories, makeup effects, hairstyles, furniture, decorations, scenes, and persons.
The application may preprocess the video stream, i.e., perform format conversion and/or order-reduction processing, so that the video stream received or captured by the terminal is converted into a unified picture format that the image processing engine of the terminal can handle. Performing order-reduction (downscaling) on the images can improve the efficiency of image processing.
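As an illustration only, the preprocessing described above (format conversion plus order reduction) can be sketched as follows. This is a minimal stand-in, not the patent's implementation: frames are modeled as plain nested lists, the function names are hypothetical, and a real terminal would use an image library such as OpenCV instead.

```python
def downscale(frame, factor):
    """Nearest-neighbor order reduction: keep every `factor`-th pixel.

    `frame` is a row-major list of rows of pixel values.
    """
    return [row[::factor] for row in frame[::factor]]

def to_unified_format(frame, convert):
    """Apply a per-pixel format conversion (e.g. a YUV-to-RGB mapping)
    supplied by the terminal's image processing engine."""
    return [[convert(px) for px in row] for row in frame]

# A tiny 4x4 grayscale "frame" standing in for one decoded video frame.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
small = downscale(frame, 2)                                   # 2x2 after order reduction
unified = to_unified_format(small, lambda px: (px, px, px))   # fake RGB triples
```

Order reduction before recognition trades resolution for speed, which matches the stated goal of improving image-processing efficiency on terminal hardware.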
If the first object is a face, face recognition processing is performed using an existing face recognition algorithm to obtain face data. The face recognition algorithm may use skin color models, template matching, or shape analysis; since these are existing face recognition algorithms, they are not described further here.
If the first object is a decoration, for example a pendant lamp, recognition processing of the pendant lamp is performed using an existing recognition algorithm to obtain pendant lamp data. The processing method is the same as for face recognition, with pendant lamp data simply substituted for face data: for example, the "three sections, five eyes" facial proportion features are replaced by pendant-lamp features such as the lamp holder and lamp fringe.
The application may configure one general image recognition algorithm to perform first-object recognition processing, and then, according to the features of the first object to be recognized, change specific recognition statements or recognition parameters so that the algorithm recognizes the required first object.
The application may also configure multiple image recognition algorithms that recognize different first objects respectively, and adaptively adjust them according to the first object to be recognized.
In one implementation of the application, referring to Fig. 3, step S1 includes:
S11: performing image recognition processing on at least one frame of image data in the first video stream using an image recognition algorithm.
Specifically, the image recognition algorithm is prestored and is adaptively selected according to the first object to be recognized. The server may periodically push updated image recognition algorithms to the terminal, and the user may also log in to the server to download a needed image recognition algorithm.
The application therefore has good extensibility: image recognition algorithms can be continually updated as the first objects to be recognized change and as recognition algorithms develop, so that the method can be applied to the recognition of different first objects and more efficient recognition algorithms can be selected.
The first object to be recognized may be a preset, fixed first object, or may be selected by a user input instruction.
For example, referring to Fig. 4, the user selects a face as the first object to be recognized, and the application automatically configures a face recognition algorithm to perform image recognition processing on at least one frame of image data in the first video stream.
For example, referring to Fig. 5, the user may also select multiple first objects to be recognized (a face and a pendant lamp), and the application automatically configures both a face recognition algorithm and a pendant-lamp recognition algorithm to perform image recognition processing on at least one frame of image data in the first video stream.
Thus the user can choose which first objects need to be recognized, which is flexible and convenient to operate.
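The selection flow of Figs. 4-6 amounts to a dispatch from the user's chosen first objects to the matching prestored recognizers. The sketch below illustrates that dispatch under stated assumptions: `RECOGNIZERS` and the stubbed detection boxes are hypothetical stand-ins for real recognition algorithms, not anything from the patent.

```python
# Hypothetical registry of prestored recognition algorithms; in the
# described method, missing entries could be downloaded from the server.
RECOGNIZERS = {
    "face": lambda frame: [("face", (10, 10, 60, 60))],
    "pendant_lamp": lambda frame: [("pendant_lamp", (80, 5, 120, 40))],
}

def configure_and_run(selected_objects, frame):
    """Run only the recognizers the user selected (Fig. 4 / Fig. 5 flow)."""
    detections = []
    for name in selected_objects:
        recognizer = RECOGNIZERS.get(name)
        if recognizer is None:
            # Not prestored locally: the terminal would try to download it,
            # or ask the user to pick another first object (Fig. 6).
            raise KeyError(f"no recognizer for {name!r}; download or reselect")
        detections.extend(recognizer(frame))
    return detections

dets = configure_and_run(["face", "pendant_lamp"], frame=None)
```

A registry keyed by object name keeps the method extensible: pushing an updated algorithm from the server is just replacing one entry.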
S12, acquisition meet at least one first object and described first of user input instruction and/or preset instructions Object data.
Specifically, the user input instruction can be to be carried out in multiple first object options provided on terminal interface It clicks, or user inputs the first object oriented in input frame.
If the recognizer for the first object that user inputs in input frame is not pre-stored within the terminal, described Terminal game server download can carry out the usable image recognizer of image identification automatically.Alternatively, notify user's sheet Ground and the recognizer that the first object of input is not present ask user to replace the first object of other inputs, shown in Figure 6.
The application may also perform image recognition on the first object corresponding to a preset instruction. For example, a preset instruction may specify that the first object from 7 to 9 AM is a face, and the first object from 7 to 9 PM is a hairstyle.
Thus the application can select various user-defined first objects according to user input instructions, which is flexible and improves the user experience; it can also preset first objects according to preset instructions, which requires no operation and is convenient to use.
S2: performing augmented reality processing on the first object data together with acquired user data, and rendering a first virtual scene in which the user is combined with the image.
Specifically, the user data may be user image data obtained by the camera of the terminal, image data input by the user, or other image data obtained over the Internet.
For example, if the user data is face image data of the user obtained by the camera of the terminal, the application performs augmented reality processing on the face first-object data recognized in step S1 together with that face image data, and renders the first virtual scene. The resulting augmented reality effect is that the user's face replaces the face in the image of the video stream.
As another example, if the user data is room image data input by the user, the application performs augmented reality processing on the furniture first-object data recognized in step S1 together with that room image data, and renders the first virtual scene. The resulting augmented reality effect is that the furniture in the image of the video stream is placed in the room.
In another implementation of the application, referring to Fig. 7, step S2 includes:
S21: judging whether the first object data is a face.
S22: if so, performing augmented reality processing on the face data together with the acquired user data using the MTCNN algorithm, and rendering the first virtual scene in which the user is combined with the image.
S23: if not, performing augmented reality processing on the non-face data together with the acquired user data using the SSD algorithm, and rendering the first virtual scene in which the user is combined with the image.
According to the type of first object data, the application uses different algorithms to perform augmented reality processing on the first object data and the user data, thereby improving the recognition efficiency and recognition effect for different first objects and obtaining a more lifelike first virtual scene.
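The Fig. 7 branch (S21-S23) can be sketched as a simple router that hands face data to an MTCNN-style detector and everything else to an SSD-style detector. This is a sketch under stated assumptions, not the patent's implementation: the detectors are injected stubs with hypothetical signatures, a real system would load trained MTCNN and SSD models, and the rendering step is reduced to assembling a dictionary.

```python
def process_for_ar(first_object, user_data, mtcnn, ssd):
    """Route first-object data to the detector suited to it (Fig. 7 flow).

    `mtcnn` and `ssd` are injected callables standing in for the real
    MTCNN face detector and SSD object detector.
    """
    if first_object["type"] == "face":
        boxes = mtcnn(first_object["data"])   # S22: face branch
        algorithm = "MTCNN"
    else:
        boxes = ssd(first_object["data"])     # S23: non-face branch
        algorithm = "SSD"
    # Render step (stub): combine user data with the detected regions.
    return {"algorithm": algorithm, "boxes": boxes, "user": user_data}

fake_mtcnn = lambda img: [(0, 0, 32, 32)]
fake_ssd = lambda img: [(5, 5, 50, 50)]
face_scene = process_for_ar({"type": "face", "data": None}, "selfie",
                            fake_mtcnn, fake_ssd)
lamp_scene = process_for_ar({"type": "pendant_lamp", "data": None}, "room",
                            fake_mtcnn, fake_ssd)
```

Injecting the detectors keeps the routing logic testable independently of any particular model, which mirrors the apparatus's separation into an object judging unit and two algorithm units.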
In the embodiments of the present application, at least one frame of image in a first video stream is processed to obtain data of at least one first object in the image. Augmented reality processing is then performed on the first object data together with acquired user data, and a first virtual scene in which the user is combined with the image is rendered. By processing the video stream and combining the resulting first object data with user data for augmented reality processing, the application obtains a more realistic first virtual scene in which the user is combined with the image. The application can therefore use augmented reality technology to display products to be promoted in the video more comprehensively.
In another embodiment of the application, referring to Fig. 8, the method includes, in addition to steps S1-S2, the further step:
S3: generating an image according to the first virtual scene, and/or generating a second video stream according to the video stream and the first virtual scene.
Specifically, after the first virtual scene is generated, it can be converted into an image.
The application may replace the first object in at least one frame of image in the first video stream with the object in the first virtual scene, thereby generating an image.
The application may also photograph the first virtual scene to generate an image, or project the first virtual scene to two-dimensional space using a 3D-to-2D algorithm and convert it into an image.
After the first virtual scene is generated, a second video stream may also be generated according to the first virtual scene.
This embodiment may use editing software such as a digital nonlinear editor to convert the images generated from the virtual scene back into video; the application may also directly replace the first object in every frame of image in the video stream with the object in the virtual scene, thereby forming the second video stream.
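A minimal sketch of the per-frame replacement that forms the second video stream, assuming frames can be modeled as dictionaries of named regions rather than pixel buffers. `make_second_stream` is a hypothetical name; a real implementation would composite pixel regions and re-encode the result with video tooling.

```python
def make_second_stream(first_stream, replacements):
    """Form the second video stream by swapping, frame by frame, the
    first-object region for the rendered virtual-scene object.

    `first_stream` is a list of frames; each frame maps a region name to
    its content (a stand-in for pixel data). `replacements` maps a
    first-object name to the virtual-scene content that replaces it.
    """
    second_stream = []
    for frame in first_stream:
        new_frame = {
            region: replacements.get(region, content)
            for region, content in frame.items()
        }
        second_stream.append(new_frame)
    return second_stream

stream = [
    {"face": "actor_face", "background": "studio"},
    {"face": "actor_face", "background": "street"},
]
second = make_second_stream(stream, {"face": "user_face"})
```

Building new frames rather than mutating the originals leaves the first video stream intact, so both streams remain available for the sharing step described later.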
In one implementation of the application, the second video stream may be a short video, and the user may perform at least one of the following operations: adding various effects and information such as text and graphics with video editing software; adding purchase information (for example, a purchase link) or a detailed description; or clipping the second video stream or adding other video elements.
Thus this embodiment can generate an image and/or a second video stream from the first virtual scene, so that the user can present the first virtual scene through the image and/or the second video stream and obtain a more realistic perception of it.
In yet another embodiment of the application, referring to Fig. 9, the method includes, in addition to steps S1-S3, the further step:
S4: sharing the image and/or the second video stream.
Specifically, user can be shared by input instruction regards according to the image of the first virtual scene formation and/or second Frequency flows, i.e., shares described image and/or the second video flowing to microblogging, wechat good friend, wechat circle of friends etc., so as to by point Enjoying behavior improves user experience.
The sharing includes: uploading the image and/or the second video stream directly to a server, which pushes the image and/or the second video stream to other user terminals; alternatively, other users can choose to log in to the server to view the image and/or the second video stream.
The sharing may share the image and/or the second video stream directly, or share a link to the image and/or the second video stream; other users view the image and/or the second video stream by clicking the link, and may also discuss the image and/or the second video stream in a discussion area.
The discussion area may be a pop-up or transparent overlay superimposed on the image and/or the second video stream, or a separate window outside the display window of the image and/or the second video stream.
The discussion area in this embodiment of the present application may adopt an augmented reality approach: the method of step S1 is used to identify at least one second object in the image and/or the second video stream, and the data of the at least one second object, together with the input data of the users participating in the discussion, is rendered into a second virtual scene using augmented reality.
In addition, the second object data may also be taken from the input data of the users participating in the discussion, or from other data obtained in the discussion area.
The users participating in the discussion may also input speech data; combining the speech data with the second virtual scene produces the effect that the discussion-area users are actually conversing within the virtual scene.
Specifically, the speech data may also be converted into text displayed in the second virtual scene, for example in the form of bullet comments (danmaku).
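The bullet-comment display mentioned above amounts to scrolling each piece of converted text across the scene. As a minimal sketch (the screen size, speed and visibility rule are illustrative assumptions, not values from the patent):

```python
def danmaku_x(screen_width, speed_px_s, t_s):
    """Horizontal position of a bullet comment at time t: it enters
    at the right edge and scrolls left at a constant speed."""
    return screen_width - speed_px_s * t_s

def visible(screen_width, text_width, speed_px_s, t_s):
    """The comment stays visible until its right edge (x + text_width)
    has scrolled past the left edge of the screen."""
    x = danmaku_x(screen_width, speed_px_s, t_s)
    return x + text_width > 0

# A 200 px-wide comment on a 1280 px screen at 160 px/s stays on
# screen for (1280 + 200) / 160 = 9.25 seconds.
```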
In yet another embodiment of the present application, referring to Figure 10, the method described herein includes, in addition to the above steps S1-S2, the step:
S5: providing the details of the first object.
Specifically, the present application provides the details of the first object via a pop-up, a transparent overlay, or the like; the details include descriptions such as the model, attributes and size of the first object, or instructions for its use.
In yet another embodiment of the present application, referring to Figure 11, the method described herein includes, in addition to the above steps S1-S2, the step:
S6: providing purchase information for the first object.
Specifically, the present application provides a purchase link for the first object, a purchase channel for the first object, or the like. The purchase information may also include at least one of the following: evaluations by other users, and first virtual scenes generated by other users.
Corresponding to the above method, the present application also provides a video processing apparatus based on augmented reality, which can be applied to mobile terminals such as mobile phones and tablets, or to terminals such as PCs or advertising machines.
Referring to Fig. 1, the terminal generally includes: a main control chip 11, a memory 12, an input/output unit 13 and other hardware 14. The main control chip 11 controls each functional module, and the memory 12 stores the application programs and data.
Referring to Figure 12, the apparatus includes:
a data acquisition module 121, configured to: process at least one frame of image in a first video stream to obtain data of at least one first object in the image;
a scene generating module 122, configured to: perform augmented reality processing on the first object data and acquired user data, and render a first virtual scene in which the user is combined with the image.
The present application can process every frame of image in the first video stream, or process only the images into which an advertisement is to be inserted or on which operations such as merchandise display are to be performed, i.e., process only particular frames.
The position of the first object in the first video stream may be indicated by a fixed or moving marker, by a pop-up page, or by a transparent overlay.
Specifically, the first object includes at least one of: a face, clothing, shoes and hats, accessories, a makeup effect, a hair style, furniture, a decoration, a scene, and a person.
The present application can pre-process the video stream, i.e., perform format conversion and/or downscaling, converting the video stream received or captured by the terminal into a unified picture format that the terminal's image processing engine can handle. Downscaling the images can improve the efficiency of image processing.
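The downscaling step can be illustrated with a minimal pure-Python sketch that halves a grayscale frame by averaging 2×2 pixel blocks. The patent does not specify the downscaling method; a production terminal would use an image-processing library for this, so the block-averaging scheme here is only an assumption for illustration.

```python
def downscale_2x(frame):
    """Halve a grayscale frame in each dimension by taking the
    integer mean of non-overlapping 2x2 pixel blocks."""
    h, w = len(frame), len(frame[0])
    return [
        [(frame[2 * r][2 * c] + frame[2 * r][2 * c + 1] +
          frame[2 * r + 1][2 * c] + frame[2 * r + 1][2 * c + 1]) // 4
         for c in range(w // 2)]
        for r in range(h // 2)
    ]

frame = [
    [10, 20, 30, 40],
    [30, 40, 50, 60],
    [ 0,  0, 80, 80],
    [ 0,  0, 80, 80],
]
small = downscale_2x(frame)  # -> [[25, 45], [0, 80]]
```

The quarter-size result carries a quarter of the pixels to recognize, which is the efficiency gain the paragraph above refers to.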
If the first object is a face, face recognition processing is performed using an existing face recognition algorithm to obtain face data. The face recognition algorithm may use skin color detection, template matching or shape recognition; since these are existing face recognition algorithms, they are not described further here.
If the first object is a decoration, for example a pendant lamp, an existing recognition algorithm is used to identify the pendant lamp and obtain pendant lamp data. The processing method is the same as for face recognition, with pendant lamp data taking the place of face data: for example, the facial proportion features ("three courts, five eyes") are replaced by pendant lamp features such as the lamp holder and lamp fringe.
The present application can provide a general image recognition algorithm for identifying the first object, and then, according to the first object to be identified, substitute the specific recognition logic or recognition parameters so that the adapted recognizer identifies the required first object.
The present application can also provide multiple image recognition algorithms that identify different first objects respectively, and adaptively select among them according to the first object to be identified.
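The adaptive selection among multiple recognizers can be sketched as a dispatch table. The recognizer stubs, object names and return shape below are illustrative assumptions; in the patent's design the table entries would be pre-stored or server-downloaded recognition algorithms.

```python
def recognize_face(image):
    # Stub standing in for a real face recognition algorithm.
    return {"type": "face", "regions": ["face@(40,60)"]}

def recognize_lamp(image):
    # Stub standing in for a real pendant-lamp recognition algorithm.
    return {"type": "pendant_lamp", "regions": ["lamp@(10,5)"]}

RECOGNIZERS = {"face": recognize_face, "pendant_lamp": recognize_lamp}

def recognize(object_name, image):
    """Adaptively pick the recognizer for the requested first object;
    an unknown name signals that a server download (or a prompt to
    the user) is needed, as described later in the text."""
    try:
        return RECOGNIZERS[object_name](image)
    except KeyError:
        raise LookupError(f"no recognizer stored for {object_name!r}")
```

Registering a newly downloaded algorithm is then a single table update, which matches the extensibility argument made below.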
In one specific implementation of the present application, referring to Figure 13, the data acquisition module 121 includes:
a recognition processing unit 1211, configured to: perform image recognition processing on at least one frame of image data in the first video stream using an image recognition algorithm;
a first object processing unit 1212, configured to: obtain the at least one first object, and its data, that satisfies a user input instruction and/or a preset instruction.
Specifically, the image recognition algorithms are pre-stored and are selected adaptively according to the first object to be identified; the server can periodically push updated image recognition algorithms to the terminal, and the user can also log in to the server to download a needed image recognition algorithm.
Therefore, the present application has good extensibility: the image recognition algorithms can be continuously updated as the first objects to be identified change and as recognition algorithms develop, so that the application can identify different first objects and select more efficient recognition algorithms.
The first object to be identified is either a pre-set, fixed first object or a first object selected by a user input instruction.
For example, referring to Fig. 4, the user selects the face as the first object to be identified, and the present application automatically configures a face recognition algorithm to perform image recognition processing on at least one frame of image data in the first video stream.
For another example, referring to Fig. 5, the user may also select multiple first objects to be identified (a face and a pendant lamp), and the present application automatically configures a face recognition algorithm and a pendant lamp recognition algorithm to perform image recognition processing on at least one frame of image data in the first video stream.
Therefore, the user can choose the first objects to be identified, which is flexible and convenient to operate.
Specifically, the user input instruction can be a click among multiple first-object options provided on the terminal interface, or a first object name entered by the user in an input box.
If the recognition algorithm for the first object entered in the input box is not pre-stored in the terminal, the terminal can automatically download a usable image recognition algorithm from the server; alternatively, the terminal notifies the user that no recognition algorithm for the input first object exists locally and asks the user to enter a different first object, as shown in Fig. 6.
The present application can perform image recognition for the first object corresponding to a preset instruction. For example, a preset instruction may specify that from 7 to 9 AM the first object is the face, and from 7 to 9 PM the first object is the hair style.
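The time-of-day preset just described can be modeled as a small schedule lookup. The table contents and the default object are taken from the example in the text; the data layout itself is an illustrative assumption.

```python
# Each entry is (start_hour, end_hour, first_object) on a 24-hour
# clock, end exclusive. The two entries mirror the example above:
# 7-9 AM -> face, 7-9 PM -> hair style.
PRESETS = [
    (7, 9, "face"),
    (19, 21, "hair_style"),
]

def preset_object(hour, default="face"):
    """Pick the first object to identify from the preset schedule;
    outside any preset window, fall back to a default object."""
    for start, end, obj in PRESETS:
        if start <= hour < end:
            return obj
    return default
```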
Therefore, the present application can select various custom first objects according to user input instructions, which is flexible and improves the user experience; the present application can also pre-set the first object according to preset instructions, which requires no user operation and is easy to use.
Specifically, the user data can be user image data obtained by the terminal's camera, image data input by the user, or other image data obtained via the Internet.
For example, if the user data is the user's face image data obtained by the terminal's camera, the present application performs augmented reality processing on the face data of the first object identified by the data acquisition module 121 and the user's face image data, and renders the first virtual scene. The resulting augmented reality effect is that the user's face replaces the face in the images of the video stream.
For another example, if the user data is room image data input by the user, the present application performs augmented reality processing on the furniture data of the first object identified in step S1 and the room image data input by the user, and renders the first virtual scene. The resulting augmented reality effect is that the furniture in the images of the video stream is placed in the room.
In another specific implementation of the present application, referring to Figure 14, the scene generating module 122 includes:
an object judging unit 1221, configured to judge whether the first object data is a face;
a first algorithm unit 1222, configured to: if so, perform augmented reality processing on the face data and the acquired user data using the MTCNN algorithm, and render the first virtual scene in which the user is combined with the image;
a second algorithm unit 1223, configured to: if not, perform augmented reality processing on the non-face data and the acquired user data using the SSD algorithm, and render the first virtual scene in which the user is combined with the image.
Depending on the type of the first object data, the present application applies different algorithms to the augmented reality processing of the first object data and the user data, thereby improving the recognition efficiency and recognition quality for different first objects and obtaining a more lifelike first virtual scene.
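The routing performed by units 1221-1223 reduces to a single branch. The sketch below only models that routing decision; the dictionary shape and `is_face` flag stand in for the object-judging unit's test and are assumptions, while "MTCNN" and "SSD" are the algorithm names given in the patent.

```python
def pick_algorithm(first_object_data):
    """Route face data to an MTCNN-style face detector and all other
    first objects to an SSD-style general object detector, as done by
    the first and second algorithm units (1222/1223)."""
    if first_object_data.get("is_face"):
        return "MTCNN"
    return "SSD"
```

MTCNN is a cascaded network specialized for face detection and alignment, while SSD is a single-shot general object detector, which is why the split by object type improves both efficiency and quality.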
The embodiment of the present application processes at least one frame of image in the first video stream to obtain data of at least one first object in the image, then performs augmented reality processing on the first object data and acquired user data, and renders a first virtual scene in which the user is combined with the image. By processing the video stream and combining the resulting first object data with user data for augmented reality processing, the present application obtains a more realistic first virtual scene in which the user is combined with the image. Therefore, the present application can use augmented reality to display the products to be promoted in a video more comprehensively.
In another specific embodiment of the present application, referring to Figure 15, the apparatus described herein includes, in addition to the above data acquisition module 121 and scene generating module 122:
a video generation module 123, configured to: generate an image from the first virtual scene and/or generate a second video stream from the video stream and the first virtual scene.
Specifically, after the first virtual scene is generated, it can be converted into an image.
The present application can replace the first object in at least one frame of image in the first video stream with the corresponding object in the first virtual scene, thereby generating an image.
The present application may also capture a screenshot of the first virtual scene to generate an image; alternatively, the first virtual scene may be projected into two-dimensional space using a three-dimensional-to-two-dimensional algorithm and converted into an image.
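The patent names a three-dimensional-to-two-dimensional algorithm without specifying one; a standard choice is pinhole-camera perspective projection, sketched below under that assumption (the focal length and the scene points are illustrative).

```python
def project_point(x, y, z, f=1.0):
    """Project a 3-D camera-space point onto the image plane of a
    pinhole camera with focal length f. The point must lie in front
    of the camera (z > 0)."""
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (f * x / z, f * y / z)

def project_scene(points, f=1.0):
    """Map every 3-D point of the virtual scene into 2-D image space."""
    return [project_point(x, y, z, f) for (x, y, z) in points]

corners = [(1.0, 1.0, 2.0), (-1.0, 1.0, 2.0)]
flat = project_scene(corners)  # -> [(0.5, 0.5), (-0.5, 0.5)]
```

Projecting every vertex of the first virtual scene this way (and rasterizing the result) yields the two-dimensional image described in the text.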
After the first virtual scene is generated, a second video stream may also be generated from the first virtual scene.
In this embodiment, video editing software (such as Digital Master) may be used to convert the images generated from the virtual scene back into video; alternatively, the present application may directly replace the first object in each frame of the video stream with the corresponding object in the virtual scene, thereby forming the second video stream.
In a specific implementation of the present application, the second video stream may be a short video, and the user may perform at least one of the following operations: adding various effects, text, graphics and other information through video editing software; adding purchase information (for example, a purchase link) together with a detailed description; clipping the second video stream; or adding other video elements.
Therefore, this embodiment can generate an image and/or a second video from the first virtual scene, so that the user can view the first virtual scene through the image and/or the second video and obtain a more realistic perception of it.
In yet another embodiment of the present application, referring to Figure 16, the apparatus described herein includes, in addition to the above data acquisition module 121, scene generating module 122 and video generation module 123:
a video sharing module 124, configured to: share the image and/or the second video stream.
Specifically, the user can, by an input instruction, share the image formed from the first virtual scene and/or the second video stream, i.e., share the image and/or the second video stream to Weibo, WeChat friends, WeChat Moments, etc., thereby improving the user experience through sharing.
The sharing includes: uploading the image and/or the second video stream directly to a server, which pushes the image and/or the second video stream to other user terminals; alternatively, other users can choose to log in to the server to view the image and/or the second video stream.
The sharing may share the image and/or the second video stream directly, or share a link to the image and/or the second video stream; other users view the image and/or the second video stream by clicking the link, and may also discuss the image and/or the second video stream in a discussion area.
The discussion area may be a pop-up or transparent overlay superimposed on the image and/or the second video stream, or a separate window outside the display window of the image and/or the second video stream.
The discussion area in this embodiment of the present application may adopt an augmented reality approach: the method of step S1 is used to identify at least one second object in the image and/or the second video stream, and the data of the at least one second object, together with the input data of the users participating in the discussion, is rendered into a second virtual scene using augmented reality.
In addition, the second object data may also be taken from the input data of the users participating in the discussion, or from other data obtained in the discussion area.
The users participating in the discussion may also input speech data; combining the speech data with the second virtual scene produces the effect that the discussion-area users are actually conversing within the virtual scene.
Specifically, the speech data may also be converted into text displayed in the second virtual scene, for example in the form of bullet comments (danmaku).
In yet another embodiment of the present application, referring to Figure 17, the apparatus described herein includes, in addition to the above data acquisition module 121 and scene generating module 122:
an information providing module 125, configured to: provide the details of the first object.
Specifically, the present application provides the details of the first object via a pop-up, a transparent overlay, or the like; the details include descriptions such as the model, attributes and size of the first object, or instructions for its use.
In yet another embodiment of the present application, referring to Figure 18, the apparatus described herein includes, in addition to the above data acquisition module 121 and scene generating module 122:
a purchase providing module 126, configured to: provide purchase information for the first object.
Specifically, the present application provides a purchase link for the first object, a purchase channel for the first object, or the like. The purchase information may also include at least one of the following: evaluations by other users, and first virtual scenes generated by other users.
Figure 19 is a schematic diagram of the hardware structure of an electronic device for the augmented-reality-based video processing method provided by an embodiment of the present application. As shown in Figure 19, the device includes:
one or more processors 1910 and a memory 1920; one processor 1910 is taken as an example in Figure 19.
The electronic device performing the augmented-reality-based video processing method may also include: an input device 1930 and an output device 1940.
The processor 1910, the memory 1920, the input device 1930 and the output device 1940 may be connected by a bus or in other ways; connection by a bus is taken as the example in Figure 19.
The memory 1920, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the augmented-reality-based video processing method in the embodiment of the present application (for example, the data acquisition module 121 and scene generating module 122 shown in Figure 12). By running the non-volatile software programs, instructions and modules stored in the memory 1920, the processor 1910 performs the various functional applications and data processing of the server, i.e., implements the augmented-reality-based video processing method of the above method embodiments.
The memory 1920 can include a program storage area and a data storage area, where the program storage area can store the operating system and the application program required by at least one function, and the data storage area can store data created through use of the augmented-reality-based video processing electronic device, etc. In addition, the memory 1920 can include high-speed random access memory and can also include non-volatile memory, for example at least one magnetic disk memory device, flash memory device, or other non-volatile solid-state memory device. In some embodiments, the memory 1920 optionally includes memory located remotely relative to the processor 1910; such remote memory can be connected over a network to the apparatus that performs the augmented-reality-based video processing. Examples of such networks include but are not limited to the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 1930 can receive input numeric or character information and generate key signal inputs related to the user settings and function control of the augmented-reality-based video processing electronic device. The output device 1940 may include display devices such as a display screen.
The one or more modules are stored in the memory 1920 and, when executed by the one or more processors 1910, perform the augmented-reality-based video processing method of any of the above method embodiments.
The above product can perform the method provided by the embodiment of the present application, and possesses the functional modules and beneficial effects corresponding to performing the method. For technical details not described in detail in this embodiment, refer to the method provided by the embodiment of the present application.
The electronic device of the embodiment of the present application exists in various forms, including but not limited to:
(1) Mobile communication devices: characterized by mobile communication functions, with voice and data communication as the main goal. This type of terminal includes: smart phones (such as the iPhone), multimedia phones, feature phones, low-end phones, etc.
(2) Ultra-mobile personal computer devices: belonging to the category of personal computers, with computing and processing functions, and generally also with mobile Internet access. This type of terminal includes: PDA, MID and UMPC devices, such as the iPad.
(3) Portable entertainment devices: devices that can display and play multimedia content. This type of device includes: audio and video players (such as the iPod), handheld game consoles, e-book readers, smart toys and portable car navigation devices.
(4) Servers: devices that provide computing services. A server consists of a processor, hard disk, memory, system bus, etc., and is similar in architecture to a general-purpose computer, but because it needs to provide highly reliable services, the requirements for processing capability, stability, reliability, security, scalability, manageability, etc. are higher.
(5) Other electronic devices with data interaction functions.
Those skilled in the art will understand that embodiments of the present application can be provided as a method, an apparatus (device) or a computer program product. Therefore, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiment of the present application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, etc.) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a specific way, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction means, the instruction means implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, may make further changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present application. Obviously, those skilled in the art can make various modifications and variations to the embodiments of the present application without departing from their spirit and scope. If these modifications and variations of the embodiments of the present application fall within the scope of the claims of the embodiments of the present application and their technical equivalents, the present application is also intended to include them.

Claims (17)

1. A video processing method based on augmented reality, characterized by comprising:
processing at least one frame of image in a first video stream to obtain data of at least one first object in the image;
performing augmented reality processing on the first object data and acquired user data, and rendering a first virtual scene in which the user is combined with the image.
2. The method according to claim 1, characterized in that processing at least one frame of image in the first video stream to obtain data of at least one first object in the image comprises:
performing image recognition processing on at least one frame of image data in the first video stream using an image recognition algorithm;
obtaining the at least one first object, and the first object data, that satisfies a user input instruction and/or a preset instruction.
3. The method according to claim 1 or 2, characterized in that performing augmented reality processing on the first object data and acquired user data, and rendering the first virtual scene in which the user is combined with the image, comprises:
judging whether the first object data is a face;
if so, performing augmented reality processing on the face data and the acquired user data using the MTCNN algorithm, and rendering the first virtual scene in which the user is combined with the image;
if not, performing augmented reality processing on the non-face data and the acquired user data using the SSD algorithm, and rendering the first virtual scene in which the user is combined with the image.
4. The method according to claim 1, characterized in that the method further comprises:
generating an image from the first virtual scene and/or generating a second video stream from the video stream and the first virtual scene.
5. The method according to claim 4, characterized in that the method further comprises:
sharing the image and/or the second video stream.
6. The method according to claim 1, characterized in that the first object comprises at least one of: a face, clothing, shoes and hats, accessories, a makeup effect, a hair style, furniture, a decoration, a scene, and a person.
7. The method according to claim 1, characterized in that the method further comprises:
providing the details of the first object.
8. The method according to claim 1, characterized in that the method further comprises:
providing purchase information for the first object.
9. A video processing apparatus based on augmented reality, characterized by comprising:
a data acquisition module, configured to: process at least one frame of image in a first video stream to obtain data of at least one first object in the image;
a scene generating module, configured to: perform augmented reality processing on the first object data and acquired user data, and render a first virtual scene in which the user is combined with the image.
10. The apparatus according to claim 9, characterized in that the data acquisition module comprises:
a recognition processing unit, configured to: perform image recognition processing on at least one frame of image data in the first video stream using an image recognition algorithm;
a first object processing unit, configured to: obtain the at least one first object, and the first object data, that satisfies a user input instruction and/or a preset instruction.
11. The apparatus according to claim 9 or 10, characterized in that the scene generating module comprises:
an object judging unit, configured to judge whether the first object data is a face;
a first algorithm unit, configured to: if so, perform augmented reality processing on the face data and the acquired user data using the MTCNN algorithm, and render the first virtual scene in which the user is combined with the image;
a second algorithm unit, configured to: if not, perform augmented reality processing on the non-face data and the acquired user data using the SSD algorithm, and render the first virtual scene in which the user is combined with the image.
12. The apparatus according to claim 9, characterized in that the apparatus further comprises:
a video generation module, configured to: generate an image from the first virtual scene and/or generate a second video stream from the video stream and the first virtual scene.
13. The apparatus according to claim 12, characterized in that the apparatus further comprises:
a video sharing module, configured to: share the image and/or the second video stream.
14. The apparatus according to claim 9, characterized in that the first object comprises at least one of: a face, clothing, shoes and hats, accessories, a makeup effect, a hair style, furniture, a decoration, a scene, and a person.
15. The apparatus according to claim 9, characterized in that the apparatus further comprises:
an information providing module, configured to: provide the details of the first object.
16. The apparatus according to claim 9, characterized in that the apparatus further comprises:
a purchase providing module, configured to: provide purchase information for the first object.
17. A terminal device, comprising: a processor, a memory, a communication interface and a communication bus, the processor, the memory and the communication interface completing mutual communication via the communication bus; the memory being configured to store at least one executable instruction, the executable instruction causing the processor to perform the operations corresponding to any one of claims 1-8.
CN201711309645.3A 2017-12-11 2017-12-11 A kind of method for processing video frequency and its device based on augmented reality Pending CN108109209A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711309645.3A CN108109209A (en) 2017-12-11 2017-12-11 A kind of method for processing video frequency and its device based on augmented reality
PCT/CN2018/103602 WO2019114328A1 (en) 2017-12-11 2018-08-31 Augmented reality-based video processing method and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711309645.3A CN108109209A (en) 2017-12-11 2017-12-11 A kind of method for processing video frequency and its device based on augmented reality

Publications (1)

Publication Number Publication Date
CN108109209A true CN108109209A (en) 2018-06-01

Family

ID=62209582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711309645.3A Pending CN108109209A (en) 2017-12-11 2017-12-11 A kind of method for processing video frequency and its device based on augmented reality

Country Status (2)

Country Link
CN (1) CN108109209A (en)
WO (1) WO2019114328A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109743584A (en) * 2018-11-13 2019-05-10 百度在线网络技术(北京)有限公司 Panoramic video synthetic method, server, terminal device and storage medium
WO2019114328A1 (en) * 2017-12-11 2019-06-20 广州市动景计算机科技有限公司 Augmented reality-based video processing method and device thereof
CN110636365A (en) * 2019-09-30 2019-12-31 北京金山安全软件有限公司 Video character adding method and device
CN111243101A (en) * 2019-12-31 2020-06-05 浙江省邮电工程建设有限公司 Method, system and device for increasing AR environment immersion degree of user based on artificial intelligence
CN111915744A (en) * 2020-08-31 2020-11-10 深圳传音控股股份有限公司 Interaction method, terminal and storage medium for augmented reality image
CN113784148A (en) * 2020-06-10 2021-12-10 阿里巴巴集团控股有限公司 Data processing method, system, related device and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862657B (en) * 2019-11-28 2024-05-10 阿里巴巴集团控股有限公司 Image processing method, device, electronic equipment and computer storage medium
CN111010599B (en) * 2019-12-18 2022-04-12 浙江大华技术股份有限公司 Method and device for processing multi-scene video stream and computer equipment
CN111240482B (en) * 2020-01-10 2023-06-30 北京字节跳动网络技术有限公司 Special effect display method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240277A (en) * 2013-06-24 2014-12-24 腾讯科技(深圳)有限公司 Augmented reality interaction method and system based on human face detection
CN104834897A (en) * 2015-04-09 2015-08-12 东南大学 System and method for enhancing reality based on mobile platform
CN105872588A (en) * 2015-12-09 2016-08-17 乐视网信息技术(北京)股份有限公司 Method and device for loading advertisement in video
CN106604147A (en) * 2016-12-08 2017-04-26 天脉聚源(北京)传媒科技有限公司 Video processing method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9754626B1 (en) * 2016-03-01 2017-09-05 Meograph, Inc. Mobile device video personalization
CN107343211B (en) * 2016-08-19 2019-04-09 北京市商汤科技开发有限公司 Method of video image processing, device and terminal device
CN107391060B (en) * 2017-04-21 2020-05-29 阿里巴巴集团控股有限公司 Image display method, device, system and equipment, and readable medium
CN107221346B (en) * 2017-05-25 2019-09-03 亮风台(上海)信息科技有限公司 It is a kind of for determine AR video identification picture method and apparatus
CN108109209A (en) * 2017-12-11 2018-06-01 广州市动景计算机科技有限公司 A kind of method for processing video frequency and its device based on augmented reality

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240277A (en) * 2013-06-24 2014-12-24 腾讯科技(深圳)有限公司 Augmented reality interaction method and system based on human face detection
CN104834897A (en) * 2015-04-09 2015-08-12 东南大学 System and method for enhancing reality based on mobile platform
CN105872588A (en) * 2015-12-09 2016-08-17 乐视网信息技术(北京)股份有限公司 Method and device for loading advertisement in video
US20170171639A1 (en) * 2015-12-09 2017-06-15 Le Holdings (Beijing) Co., Ltd. Method and electronic device for loading advertisement to videos
CN106604147A (en) * 2016-12-08 2017-04-26 天脉聚源(北京)传媒科技有限公司 Video processing method and apparatus

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019114328A1 (en) * 2017-12-11 2019-06-20 广州市动景计算机科技有限公司 Augmented reality-based video processing method and device thereof
CN109743584A (en) * 2018-11-13 2019-05-10 百度在线网络技术(北京)有限公司 Panoramic video synthetic method, server, terminal device and storage medium
CN109743584B (en) * 2018-11-13 2021-04-06 百度在线网络技术(北京)有限公司 Panoramic video synthesis method, server, terminal device and storage medium
CN110636365A (en) * 2019-09-30 2019-12-31 北京金山安全软件有限公司 Video character adding method and device
CN110636365B (en) * 2019-09-30 2022-01-25 北京金山安全软件有限公司 Video character adding method and device, electronic equipment and storage medium
CN111243101A (en) * 2019-12-31 2020-06-05 浙江省邮电工程建设有限公司 Method, system and device for increasing AR environment immersion degree of user based on artificial intelligence
CN111243101B (en) * 2019-12-31 2023-04-18 浙江省邮电工程建设有限公司 Method, system and device for increasing AR environment immersion degree of user based on artificial intelligence
CN113784148A (en) * 2020-06-10 2021-12-10 阿里巴巴集团控股有限公司 Data processing method, system, related device and storage medium
CN111915744A (en) * 2020-08-31 2020-11-10 深圳传音控股股份有限公司 Interaction method, terminal and storage medium for augmented reality image

Also Published As

Publication number Publication date
WO2019114328A1 (en) 2019-06-20

Similar Documents

Publication Publication Date Title
CN108109209A (en) A kind of method for processing video frequency and its device based on augmented reality
Balakrishnan et al. Interaction of Spatial Computing In Augmented Reality
US11887231B2 (en) Avatar animation system
CN110163054B (en) Method and device for generating human face three-dimensional image
CN111833418B (en) Animation interaction method, device, equipment and storage medium
WO2018095273A1 (en) Image synthesis method and device, and matching implementation method and device
CN104170318B (en) Use the communication of interaction incarnation
US20210191690A1 (en) Virtual Reality Device Control Method And Apparatus, And Virtual Reality Device And System
US20180027307A1 (en) Emotional reaction sharing
US20220044490A1 (en) Virtual reality presentation of layers of clothing on avatars
CN109420336A (en) Game implementation method and device based on augmented reality
US20140022238A1 (en) System for simulating user clothing on an avatar
KR102491140B1 (en) Method and apparatus for generating virtual avatar
CN107291352A (en) Application program is redirected in a kind of word read method and its device
CN109035373A (en) The generation of three-dimensional special efficacy program file packet and three-dimensional special efficacy generation method and device
US10783713B2 (en) Transmutation of virtual entity sketch using extracted features and relationships of real and virtual objects in mixed reality scene
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
US20210375067A1 (en) Virtual reality presentation of clothing fitted on avatars
CN115244495A (en) Real-time styling for virtual environment motion
CN113362263A (en) Method, apparatus, medium, and program product for changing the image of a virtual idol
CN108983974B (en) AR scene processing method, device, equipment and computer-readable storage medium
US20230177755A1 (en) Predicting facial expressions using character motion states
CN108563327A (en) Augmented reality method, apparatus, storage medium and electronic equipment
CN111274489B (en) Information processing method, device, equipment and storage medium
CN109445573A (en) A kind of method and apparatus for avatar image interactive

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200527

Address after: 310051 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 510627 Guangdong city of Guangzhou province Whampoa Tianhe District Road No. 163 Xiping Yun Lu Yun Ping B radio square 14 storey tower

Applicant before: GUANGZHOU UCWEB COMPUTER TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20180601