CN107743270A - Exchange method and equipment - Google Patents
- Publication number
- CN107743270A (Application CN201711048796.8A)
- Authority
- CN
- China
- Prior art keywords
- equipment
- video image
- virtual
- raw video
- interactive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
Abstract
Some embodiments of the present application provide an interaction scheme. In this scheme, after a first device receives an original video image sent by a second device, it displays the original video image, so that the user of the first device can make a corresponding interactive action based on the original video image in order to communicate information about certain content in the other party's picture. After the first device obtains the image of the interactive action, it superimposes that image on the original video image, so that the user of the first device can see the result of superimposing the interactive action on the original video image and can correct the interactive action accordingly. The first device can obtain virtual interaction information from the image of the interactive action, so that the first device or the second device generates a composite image from the original video image and the virtual interaction information. Because the composite image contains the content that the interactive action is meant to express, the user does not need to describe it in words, which improves the user's interactive experience.
Description
Technical field
The present application relates to the field of information technology, and in particular to an interaction method and device.
Background
With the development of the Internet, people have become accustomed to communicating with each other over the Internet in daily life. Video chat is used by more and more users as a fairly common way of interacting. During a video chat, the interacting users can see the picture captured by the camera of the counterpart device and communicate accordingly.
Summary
An object of the present application is to provide a scheme that enables interaction within the same scene.
To achieve the above object, some embodiments of the present application provide an interaction method at a first device. The method includes: receiving an original video image sent by a second device and displaying the original video image; obtaining virtual interaction information according to an interactive action made by a user based on the original video image, obtaining an image produced by the interactive action, and superimposing the image produced by the interactive action on the original video image; and sending data about the virtual interaction information to the second device so that the second device displays a composite image, where the composite image is generated from the original video image and the virtual interaction information.
Some embodiments of the present application further provide an interaction method at a second device. The method includes: obtaining an original video image; sending the original video image to a first device so that a user of the first device can make an interactive action based on the original video image; receiving data about virtual interaction information fed back by the first device in response to the original video image, where the virtual interaction information is obtained by the first device according to the interactive action; and displaying a composite image, where the composite image is generated from the original video image and the virtual interaction information.
Some embodiments of the present application further provide a device. The device includes a memory for storing computer program instructions and a processor for executing the computer program instructions, where the computer program instructions, when executed by the processor, trigger the device to perform the foregoing methods.
Some embodiments of the present application further provide a computer-readable medium on which computer program instructions are stored, the computer-readable instructions being executable by a processor to implement the foregoing methods.
In the scheme provided by some embodiments of the present application, after receiving the original video image sent by the second device, the first device displays the original video image so that the user of the first device can make a corresponding interactive action based on the original video image, in order to communicate information about certain content in the other party's picture. The first device obtains virtual interaction information according to the interactive action made by the user based on the original video image, obtains the image produced by the interactive action, and superimposes the image produced by the interactive action on the original video image, so that the user of the first device can see the result of superimposing the interactive action on the original video image and can correct the interactive action accordingly. Meanwhile, the first device or the second device can also generate a composite image from the original video image and the virtual interaction information. Because the composite image contains the content that the interactive action is meant to express, the user does not need to describe it in words, which improves the user's interactive experience.
Brief description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is a flowchart of an interaction scheme between a first device and a second device in some embodiments of the present application;
Fig. 2 is a flowchart of another interaction scheme between a first device and a second device in some embodiments of the present application;
Fig. 3 is a flowchart of yet another interaction scheme between a first device and a second device in some embodiments of the present application;
Fig. 4 is a flowchart of an interaction process implemented using the scheme provided by some embodiments of the present application;
Fig. 5 is a flowchart of a remote assistance interaction implemented using the scheme provided by some embodiments of the present application;
Fig. 6 is a schematic diagram of an interaction device provided by some embodiments of the present application.
The same or similar reference signs in the drawings denote the same or similar parts.
Detailed description of embodiments
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
In a typical configuration of the present application, a terminal or a device of a service network includes one or more processors (CPUs), an input/output interface, a network interface and a memory.
The memory may include computer-readable media in the form of volatile memory, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Some embodiments of the present application provide an interaction method for implementing video interaction between two devices. Fig. 1 shows the interaction flow between a first device 100 and a second device 200. The first device and the second device may be various terminal devices such as mobile phones, tablet computers and personal computers, and can capture images through built-in or external cameras. The flow for implementing the interaction is as follows:
Step S101: the second device obtains an original video image. The image can be acquired by various built-in or external cameras of the second device; for example, when the second device is a mobile phone, the picture can be captured by the phone's camera to obtain the original video image. The original video image refers to the initial picture directly captured by the camera, that is, an image that has not undergone post-processing such as superimposing other images or virtual effects.
Step S102: after obtaining the original video image, the second device sends it to the first device.
Step S103: the first device receives the original video image sent by the second device.
Step S104: the first device displays the received original video image. The first device can display the original video image in various ways, for example on its own display screen or monitor, or by projecting the original video image through a projection device connected to the first device. In a scenario where two users interact through two devices, when user A interacts through the first device with user B who uses the second device, user A can see, on the display screen of the first device, the picture that user B captures with the second device.
Step S105: the first device obtains virtual interaction information according to the interactive action made by the user based on the original video image, obtains the image produced by the interactive action, and superimposes the image produced by the interactive action on the original video image.
In some typical application scenarios of this scheme, user A needs to interact with user B about certain content in the original video image, for example to point out to user B the position of a certain object in the original video image or how to operate it. If this is described in words, the description may fail to be clear, which reduces interaction efficiency and gives a poor user experience. In this scheme, after user A sees the original video image captured by the second device, the first device obtains virtual interaction information according to the interactive action that user A makes based on the original video image. The virtual interaction information contains the information with which the user of the first device interacts with the content of the original video image and is used to generate a composite image that expresses the interaction content, so that the second device can directly show at least part of the content of the first-device user's interaction, improving interaction efficiency.
In some embodiments, the interactive action made by the user based on the original video image refers to an action made within the shooting range of the camera of the first device, so that the first device can capture images of the interactive action through the camera, thereby obtaining the virtual interaction information according to the interactive action made by the user based on the original video image and superimposing the image produced by the interactive action on the original video image. For example, when the position of a certain object in the original video image needs to be pointed out, user A can point to a spot with a finger and let the camera capture the pointing movement, thereby obtaining the image of the interactive action.
The virtual interaction information is used, together with the original video image, to generate the composite image viewed by user B of the second device. In some embodiments, the virtual interaction information comprises at least two parts: a virtual effect position and a virtual effect type. The virtual effect position determines where the virtual effect is displayed in the composite image, and the virtual effect type determines which effect is shown in the composite image, for example a red dot or a circle displayed at the point the finger indicates.
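As an illustrative sketch only (the patent does not specify a data format, so the field names and types below are assumptions), the virtual interaction information described above could be modeled as a small structure carrying the virtual effect position and the virtual effect type:

```python
from dataclasses import dataclass


@dataclass
class VirtualInteractionInfo:
    """Virtual interaction information: where to draw the effect and which effect to draw."""
    effect_x: int      # virtual effect position: x pixel coordinate in the original video image
    effect_y: int      # virtual effect position: y pixel coordinate in the original video image
    effect_type: str   # virtual effect type, e.g. "red_dot", "blinking_red_dot", "circle"


# For example, a blinking red dot at pixel (237, 724) of the original video image:
info = VirtualInteractionInfo(effect_x=237, effect_y=724, effect_type="blinking_red_dot")
```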
In this scenario, when obtaining the virtual interaction information according to the interactive action made by the user based on the original video image, the first device first obtains the captured interactive action and then performs image recognition on it to obtain the operating position and the action type of the interactive action.
According to the operating position of the interactive action, the virtual effect position in the virtual interaction information is determined; and according to the action type of the interactive action, the virtual effect type in the virtual interaction information is determined. Here, the operating position represents where the user's interactive action is located in the picture; it can be determined by image recognition and is used to determine the virtual effect position. The action type refers to the specific action the user makes, such as pointing at a position with a finger, drawing a circle at a position, or making various other actions; the action type is used to determine the virtual effect type.
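A minimal sketch of this mapping step, assuming the image-recognition stage already yields an operating position and an action-type label (the labels and the mapping table below are assumptions for illustration, not taken from the patent):

```python
def to_virtual_interaction_info(operating_position, action_type):
    """Map a recognized interactive action to virtual interaction information.

    operating_position: (x, y) pixel position of the action in the picture, as
        produced by image recognition; it becomes the virtual effect position.
    action_type: a label describing what the user did (e.g. "point",
        "draw_circle"); it selects the virtual effect type.
    """
    # Assumed mapping from action types to effect types; the patent only states
    # that the action type determines the effect type, not the concrete table.
    effect_by_action = {
        "point": "blinking_red_dot",
        "draw_circle": "circle",
    }
    x, y = operating_position
    return {
        "effect_position": (x, y),                                    # virtual effect position
        "effect_type": effect_by_action.get(action_type, "red_dot"),  # virtual effect type
    }
```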
After the virtual interaction information has been obtained, the first device can first determine the image produced by the interactive action according to the virtual effect type in the virtual interaction information. The image produced by the interactive action can be image content cropped from the captured image of the interactive action, for example the fingertip portion of user A's finger cropped from the captured image, or some pre-configured virtual image such as a red dot or a circle.
The first device then determines, according to the virtual effect position, the display position of the image produced by the interactive action in the original video image, for example x = 237, y = 724 (where the x and y values represent the pixel position on the x and y axes).
After the image produced by the interactive action and its display position have been determined, the image produced by the interactive action is superimposed at that display position in the original video image; thus, a blinking red dot can be shown at pixel position (237, 724) of the original video image. The superimposed image is viewed by user A of the first device to check whether the interactive action is wrong. For example, if user A wants to point out a position in the original video image, user A can see from the superimposed image whether the currently indicated position is correct and, if not, can move the hand until the correct position is reached.
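The superposition itself can be sketched as copying, or alpha-blending, a small overlay image onto the original video frame at the determined display position. The snippet below is a minimal illustration using NumPy arrays as stand-ins for frames; it is one possible implementation under stated assumptions, not the patent's own code:

```python
import numpy as np


def superimpose(original_frame, overlay, x, y, alpha=1.0):
    """Blend an overlay image (e.g. a red dot or a cropped fingertip image) onto
    the original video frame so that its top-left corner lands at pixel (x, y)."""
    frame = original_frame.copy()
    h = min(overlay.shape[0], frame.shape[0] - y)   # clip so the overlay stays inside the frame
    w = min(overlay.shape[1], frame.shape[1] - x)
    if h <= 0 or w <= 0:
        return frame                                # display position lies outside the frame
    region = frame[y:y + h, x:x + w].astype(np.float32)
    patch = overlay[:h, :w].astype(np.float32)
    frame[y:y + h, x:x + w] = (alpha * patch + (1.0 - alpha) * region).astype(frame.dtype)
    return frame


# Example: show a red dot at pixel position (x=237, y=724) of a 1920x1080 frame;
# a blinking dot could be produced by alternating alpha between 0 and 1 across frames.
original = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in for the original video image
red_dot = np.zeros((9, 9, 3), dtype=np.uint8)
red_dot[..., 0] = 255                                  # red channel (assuming RGB ordering)
composite = superimpose(original, red_dot, x=237, y=724)
```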
In other embodiments of the present application, the interactive action made by the user based on the original video image includes a touch interactive action made by the user on the touch screen of the first device based on the original video image; for example, after seeing the original video image displayed on the touch screen, the user of the first device taps a certain part of the original video image on the touch screen.
Correspondingly, the virtual interaction information obtained according to the interactive action made by the user based on the original video image can be obtained by means of the touch screen of the first device. For example, the touch position of the touch interactive action serves as the virtual effect position of the virtual interaction information, while other parameters of the touch interactive action, such as touch pressure and touch manner, determine the virtual effect type; for instance, a heavy touch shows a steady red dot while a light touch shows a blinking red dot.
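A minimal sketch of how a touch event might be turned into virtual interaction information, assuming the touch screen reports a position, a pressure value and a touch manner (the field names and the pressure threshold are illustrative assumptions):

```python
def virtual_info_from_touch(touch_x, touch_y, pressure, manner="tap", pressure_threshold=0.5):
    """Derive virtual interaction information from a touch interactive action.

    The touch position becomes the virtual effect position; touch pressure and
    touch manner select the virtual effect type (here a heavy touch gives a
    steady red dot and a light touch gives a blinking red dot).
    """
    if manner == "circle":
        effect_type = "circle"
    elif pressure >= pressure_threshold:
        effect_type = "red_dot"             # steady red dot for a heavy touch
    else:
        effect_type = "blinking_red_dot"    # blinking red dot for a light touch
    return {
        "effect_position": (touch_x, touch_y),
        "effect_type": effect_type,
    }


# For example, a light tap at pixel (237, 724):
info = virtual_info_from_touch(237, 724, pressure=0.2)
```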
When the interactive action is a touch interactive action made by the user on the touch screen of the first device, the image produced by the interactive action can also include a virtual image produced based on the touch interactive action. The foregoing virtual interaction information can be used to obtain the image produced by the interactive action and to superimpose that image on the original video image; that is, the effect type of the virtual image is determined by parameters such as the touch pressure or the touch manner, the display position of the virtual image in the original video image is determined by the touch position, and the virtual image is then superimposed at that display position in the original video image.
In existing video communication, virtual images are superimposed mainly to make the communication more entertaining; for example, superimposing facial virtual images based on face recognition (such as superimposing various cartoon characters on a face image) can add fun to video communication between users. Moreover, this kind of superposition is implemented by performing image recognition on the local video. The above embodiments of the present application adopt a scheme different from the conventional thinking in this field: by superimposing an image determined from the local user's interactive action onto the original video image coming from the remote end of the video communication, communication between the two parties becomes more convenient and the efficiency of the video exchange is improved.
Step S106: the first device sends the data about the virtual interaction information to the second device.
Step S107: the second device receives the data about the virtual interaction information fed back by the first device in response to the original video image.
Step S108: the second device displays the composite image. In this way, user B of the second device can see, through the composite image, the position indicated by user A. For example, if the virtual effect type is a blinking red dot, user B knows the position user A is pointing at as soon as the blinking red dot is seen, without user A having to describe the position in words, which greatly improves the convenience of communication between the users.
Because the composite image is generated from the original video image and the virtual interaction information, the generation can be performed either by the first device or by the second device. If it is performed by the first device, the first device sends the composite image directly to the second device; that is, the data about the virtual interaction information is the composite image generated from the original video image and the virtual interaction information. If it is performed by the second device, what the first device sends to the second device is a data packet containing the virtual interaction information; that is, the data about the virtual interaction information is a data packet containing the virtual interaction information, and after receiving the data packet the second device needs to perform image synthesis on the original video image and the virtual interaction information in the data packet to generate the composite image.
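These two alternatives can be sketched as two payload formats for the data about the virtual interaction information: either the first device composes the image and sends the finished frame, or it sends only the virtual interaction information and lets the second device compose. The helper and message names below are illustrative assumptions, not an API defined by the patent:

```python
def send_composite_from_first_device(original_frame, virtual_info, compose, send):
    """First alternative: the first device generates the composite image itself
    and sends the finished frame to the second device."""
    composite = compose(original_frame, virtual_info)   # overlay the virtual effect onto the frame
    send({"type": "composite_image", "frame": composite})
    return composite  # can also be displayed locally so both users see the same picture


def send_packet_from_first_device(virtual_info, send):
    """Second alternative: the first device sends only a data packet containing
    the virtual interaction information."""
    send({"type": "virtual_interaction_info", "info": virtual_info})


def handle_data_on_second_device(data, original_frame, compose, show):
    """On the second device: display a received composite image directly, or
    compose received virtual interaction information with the locally held
    original video image before displaying it."""
    if data["type"] == "composite_image":
        show(data["frame"])
    elif data["type"] == "virtual_interaction_info":
        show(compose(original_frame, data["info"]))
```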
Accordingly, some embodiments of the present application provide an interaction scheme in which the first device generates the composite image from the original video image and the virtual interaction information. Its flow is shown in Fig. 2, where steps S101 to S105 can be the same as in Fig. 1 and are not repeated here; the subsequent steps are as follows:
Step S106a: the first device generates the composite image from the original video image and the virtual interaction information. Since the second device will ultimately display the composite image, the first device can also display the composite image to its user after generating it, so that user A using the first device and user B using the second device see exactly the same picture, which avoids information asymmetry during the interaction that would reduce interaction efficiency.
Step S107a: the first device sends the composite image to the second device.
Step S108a: the second device receives the composite image fed back by the first device in response to the original video image.
Step S109a: the second device displays the composite image.
Other embodiments of the present application provide an interaction scheme in which the second device generates the composite image from the original video image and the virtual interaction information. Its flow is shown in Fig. 3, where steps S101 to S105 can be the same as in Fig. 1 and are not repeated here; the subsequent steps are as follows:
Step S106b: the first device sends a data packet containing the virtual interaction information to the second device.
Step S107b: the second device receives the data packet containing the virtual interaction information fed back by the first device in response to the original video image.
Step S108b: the second device generates the composite image from the original video image and the virtual interaction information in the data packet.
Step S109b: the second device displays the composite image.
Fig. 4 shows a schematic flow in which user A and user B carry out video interaction using mobile phone a and mobile phone b respectively, with the interaction scheme of some embodiments of the present application applied. The specific interaction flow is as follows:
Step S401: user B shoots a video image of his or her room with the camera of mobile phone b, and the video image is sent to user A's mobile phone a as the original video image.
Step S402: after mobile phone a receives the video image, it displays the original video image in real time on its touch screen for user A to view.
Step S403: after seeing the video image, user A wants to tell user B the exact position of a certain ornament in the video image captured by user B. User A therefore taps the position of the ornament on the touch screen of the phone.
Step S404: after detecting the tap operation, the touch screen of mobile phone a generates corresponding touch information, so that mobile phone a can obtain parameters such as the touch position and touch pressure and thereby determine the virtual interaction information. The virtual interaction information includes the virtual effect position and the virtual effect type; the virtual effect position is determined by the finger position, for example the position of the fingertip, and the virtual effect type can be a blinking red dot or another effect.
Meanwhile, the tap operation can cause a feedback virtual image to be displayed on the touch screen. The virtual image can be superimposed on the original video image and shown on the touch screen, so that user A can see exactly where the finger is pointing. By continuously adjusting the finger, user A can make it point to the correct position in the original video image.
Step S405: mobile phone a sends the data packet containing the virtual interaction information to mobile phone b.
Step S406: after receiving it, mobile phone b can generate and display the composite image using the virtual effect information and the original video image, so that user B can intuitively see the position of the ornament in the picture that user A is pointing at.
The scheme provided by some embodiments of the present application can also be applied to other interaction scenarios. For example, user A is assembling a piece of furniture and does not know how a bolt should be installed; user A can then carry out video interaction with user B and let user B provide remote assistance. Fig. 5 shows the processing flow of the video interaction between user A and user B. In this interaction, the device used by user A is a mobile phone, and the device used by user B is a computer connected to a camera and a projector, where the camera is used to acquire images and the projector is used to display images to the user. The interaction scheme of some embodiments of the present application is used in this interaction, and the specific flow is as follows:
Step S501: user A shoots a video image of the furniture being assembled with the camera of the mobile phone.
Step S502: user A sends the video image, as the original video image, from the mobile phone to user B's computer.
Step S503: after receiving the video image, the computer projects it in real time through the projector, so that user B can see the furniture in the projected picture.
Step S504: after seeing the projected picture, user B turns on the camera connected to the computer and, holding a bolt, makes the motion of turning the bolt within the shooting range of the camera, so that the camera can capture these actions.
Step S505: the computer processes the images of the bolt-turning action and determines the virtual interaction information. The virtual interaction information includes the virtual effect position and the virtual effect type; the virtual effect position is determined by the finger position, for example the position of the bolt, and the virtual effect type can be, for example, the display effect of a dynamic virtual image of a finger turning a bolt.
Meanwhile, the image produced by the interactive action is obtained and superimposed on the original video image. For example, in this embodiment, the positions of the finger and the bolt in the picture can first be determined, and then, according to the proportional relationship between the action image and the original video image, the image of the finger and the bolt (or a substitute image, for example an image identical or similar to the virtual effect) is superimposed on the original video image displayed in real time, so that user B can see whether the bolt-turning action has been superimposed at the correct position. By checking the superimposed image, user B can continuously adjust the action so that it points to the correct position in the original video image.
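A minimal sketch of the proportional mapping mentioned here, assuming the action image and the original video image differ only in resolution (the function and parameter names are illustrative assumptions):

```python
def map_position(pos_in_action_image, action_size, original_size):
    """Map a position detected in the action image (e.g. the captured hand-and-bolt
    picture) into the coordinate system of the original video image, using the
    proportional relationship between the two images.

    pos_in_action_image: (x, y) in the action image.
    action_size: (width, height) of the action image.
    original_size: (width, height) of the original video image.
    """
    x, y = pos_in_action_image
    aw, ah = action_size
    ow, oh = original_size
    return (int(x * ow / aw), int(y * oh / ah))


# For example, a hand detected at (320, 240) in a 640x480 action image maps to
# (960, 540) in a 1920x1080 original video image.
print(map_position((320, 240), (640, 480), (1920, 1080)))
```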
Step S506: the computer sends the data packet containing the virtual interaction information to the mobile phone.
Step S507: after receiving it, the mobile phone generates and displays the composite image using the virtual effect information and the original video image, so that user A can intuitively see user B's virtual bolt-turning action and the position of the bolt, and thus learn how the bolt should be installed.
In summary, in the scheme provided by some embodiments of the present application, after receiving the original video image sent by the second device, the first device displays the original video image so that the user of the first device can make a corresponding interactive action based on the original video image, in order to communicate information about certain content in the other party's picture. After obtaining the image of the interactive action, the first device superimposes it on the original video image, so that the user of the first device can see the result of superimposing the interactive action on the original video image and can correct the interactive action accordingly. The first device can obtain virtual interaction information from the image of the interactive action, so that the first device or the second device generates a composite image from the original video image and the virtual interaction information. Because the composite image contains the content that the interactive action is meant to express, the user does not need to describe it in words, which improves the user's interactive experience.
In addition, part of the present application can be implemented as a computer program product, for example computer program instructions which, when executed by a computer, can invoke or provide the methods and/or technical solutions according to the present application through the operation of the computer. The program instructions that invoke the methods of the present application may be stored in fixed or removable recording media, transmitted as a data stream in broadcast or other signal-bearing media, and/or stored in the working memory of a computer device that runs according to the program instructions. Here, some embodiments of the present application include a device as shown in Fig. 6, which includes one or more memories 610 storing computer-readable instructions and a processor 620 for executing the computer-readable instructions, where the computer-readable instructions, when executed by the processor, cause the device to perform the methods and/or technical solutions of the foregoing embodiments of the present application.
In addition, some embodiments of the present application also provide a computer-readable medium on which computer program instructions are stored, the computer-readable instructions being executable by a processor to implement the methods and/or technical solutions of the foregoing embodiments of the present application.
It should be noted that the present application can be implemented in software and/or in a combination of software and hardware, for example using an application-specific integrated circuit (ASIC), a general-purpose computer or any other similar hardware device. In some embodiments, the software program of the present application can be executed by a processor to realize the steps or functions described above. Likewise, the software programs of the present application (including related data structures) can be stored in computer-readable recording media, such as RAM, magnetic or optical drives, floppy disks and similar devices. In addition, some steps or functions of the present application can be implemented in hardware, for example as circuits that cooperate with a processor to perform the individual steps or functions.
It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be realized in other specific forms without departing from the spirit or essential characteristics of the present application. Therefore, the embodiments should be regarded in all respects as exemplary and non-restrictive. The scope of the present application is defined by the appended claims rather than by the above description, and all changes falling within the meaning and range of equivalency of the claims are therefore intended to be embraced in the present application. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices recited in a device claim can also be implemented by a single unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not indicate any particular order.
Claims (12)
1. An interaction method at a first device, wherein the method comprises:
receiving an original video image sent by a second device, and displaying the original video image;
obtaining virtual interaction information according to an interactive action made by a user based on the original video image, obtaining an image produced by the interactive action, and superimposing the image produced by the interactive action on the original video image;
sending data about the virtual interaction information to the second device, so that the second device displays a composite image, wherein the composite image is generated from the original video image and the virtual interaction information.
2. The method according to claim 1, wherein the data about the virtual interaction information comprises a data packet containing the virtual interaction information;
sending the data about the virtual interaction information to the second device so that the second device displays a composite image comprises:
sending the data packet containing the virtual interaction information to the second device, so that the second device generates the composite image from the original video image and the virtual interaction information in the data packet, and displays it.
3. The method according to claim 1, wherein the data about the virtual interaction information comprises the composite image generated from the original video image and the virtual interaction information;
sending the data about the virtual interaction information to the second device so that the second device displays a composite image comprises:
generating the composite image from the original video image and the virtual interaction information;
sending the composite image to the second device, so that the second device displays the composite image.
4. The method according to claim 1, wherein the interactive action made by the user based on the original video image comprises a touch interactive action made by the user on a touch screen of the first device based on the original video image, and the image produced by the interactive action comprises a virtual image produced based on the touch interactive action.
5. The method according to any one of claims 1 to 4, wherein obtaining virtual interaction information according to the interactive action made by the user based on the original video image comprises:
obtaining the captured interactive action;
performing image recognition on the captured interactive action to obtain an operating position and an action type of the interactive action;
determining, according to the operating position of the interactive action, a virtual effect position in the virtual interaction information, and determining, according to the action type of the interactive action, a virtual effect type in the virtual interaction information.
6. The method according to claim 5, wherein obtaining the image produced by the interactive action and superimposing the image produced by the interactive action on the original video image comprises:
determining the image produced by the interactive action according to the virtual effect type;
determining, according to the virtual effect position, a display position of the image produced by the interactive action in the original video image;
superimposing the image produced by the interactive action at the display position in the original video image.
7. An interaction method at a second device, wherein the method comprises:
obtaining an original video image;
sending the original video image to a first device, so that a user of the first device makes an interactive action based on the original video image;
receiving data about virtual interaction information fed back by the first device in response to the original video image, wherein the virtual interaction information is obtained by the first device according to the interactive action;
displaying a composite image, wherein the composite image is generated from the original video image and the virtual interaction information.
8. The method according to claim 7, wherein the data about the virtual interaction information is a data packet containing the virtual interaction information;
displaying a composite image comprises:
generating the composite image from the original video image and the virtual interaction information in the data packet, and displaying it.
9. The method according to claim 7, wherein the data about the virtual interaction information is the composite image generated from the original video image and the virtual interaction information.
10. The method according to any one of claims 7 to 9, wherein the virtual interaction information comprises a virtual effect position and a virtual effect type.
11. A device, comprising a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the device to perform the method according to any one of claims 1 to 9.
12. A computer-readable medium on which computer program instructions are stored, the computer-readable instructions being executable by a processor to implement the method according to any one of claims 1 to 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711048796.8A CN107743270A (en) | 2017-10-31 | 2017-10-31 | Exchange method and equipment |
PCT/CN2018/103175 WO2019085623A1 (en) | 2017-10-31 | 2018-08-30 | Interaction method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711048796.8A CN107743270A (en) | 2017-10-31 | 2017-10-31 | Exchange method and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107743270A true CN107743270A (en) | 2018-02-27 |
Family
ID=61233790
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711048796.8A Pending CN107743270A (en) | 2017-10-31 | 2017-10-31 | Exchange method and equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107743270A (en) |
WO (1) | WO2019085623A1 (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103369288B (en) * | 2012-03-29 | 2015-12-16 | 深圳市腾讯计算机系统有限公司 | The instant communication method of video Network Based and system |
CN107491174B (en) * | 2016-08-31 | 2021-12-24 | 中科云创(北京)科技有限公司 | Method, device and system for remote assistance and electronic equipment |
CN107743270A (en) * | 2017-10-31 | 2018-02-27 | 上海掌门科技有限公司 | Exchange method and equipment |
- 2017-10-31: CN CN201711048796.8A patent/CN107743270A/en active Pending
- 2018-08-30: WO PCT/CN2018/103175 patent/WO2019085623A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101931621A (en) * | 2010-06-07 | 2010-12-29 | 上海那里网络科技有限公司 | Device and method for carrying out emotional communication in virtue of fictional character |
CN103813127A (en) * | 2014-03-04 | 2014-05-21 | 腾讯科技(深圳)有限公司 | Video call method, terminal and system |
CN104902212A (en) * | 2015-04-30 | 2015-09-09 | 努比亚技术有限公司 | Video communication method and apparatus |
CN106713811A (en) * | 2015-11-17 | 2017-05-24 | 腾讯科技(深圳)有限公司 | Video communication method and device |
CN106803921A (en) * | 2017-03-20 | 2017-06-06 | 深圳市丰巨泰科电子有限公司 | Instant audio/video communication means and device based on AR technologies |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019085623A1 (en) * | 2017-10-31 | 2019-05-09 | 上海掌门科技有限公司 | Interaction method and device |
CN109993814A (en) * | 2019-03-19 | 2019-07-09 | 广东智媒云图科技股份有限公司 | Interaction drawing method, device, terminal device and storage medium based on outline |
CN112929688A (en) * | 2021-02-09 | 2021-06-08 | 歌尔科技有限公司 | Live video recording method, projector and live video system |
CN112929688B (en) * | 2021-02-09 | 2023-01-24 | 歌尔科技有限公司 | Live video recording method, projector and live video system |
Also Published As
Publication number | Publication date |
---|---|
WO2019085623A1 (en) | 2019-05-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180227 |