CN110278387A - A kind of data processing method and system - Google Patents
Info
- Publication number
- CN110278387A (application CN201810220924.0A)
- Authority
- CN
- China
- Prior art keywords
- characteristic
- equipment
- data
- display device
- performing artist
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiments of this specification disclose a data processing method and system. Upon receiving feature data collected from different performers, a processing device can process the feature data according to a preset presentation mode and present the processed feature data through a display device. The collected feature data includes at least one of expression feature data and limb-action feature data, and after processing by the processing device, the presented figure differs from the performer's own figure. With the data processing method of these embodiments, under a competition or stage-performance scene, the performer's own figure is converted into a rendered image or realized by another body, changing the traditional form of performance.
Description
Technical field
This application relates to the field of computer technology, and in particular to a data processing method and system.
Background art

Currently, entertainment content such as live competition broadcasts and stage performances keeps growing richer, and users can watch such content on site or through other media.

In the prior art, to provide the best possible user experience, the display effect of such entertainment content is often enhanced. For on-site presentation, this usually relies on live scenery, stage effects, costumes, or lighting effects; for presentation through a medium, display effects such as environmental material, virtual expressions, and shadows can be added during broadcasting or recording.

However, the object presented in such entertainment content is mainly the actual performer, and the presentation modes are limited.
Summary of the invention
The embodiments of this specification provide a data processing method and system, to solve the problem that the presentation of a traditional competition or stage performance is limited to the performers themselves.

The embodiments of this specification adopt the following technical solutions:

An embodiment of this specification provides a data processing method, comprising:

receiving collected feature data of different performers, wherein the feature data includes at least one of expression feature data and limb-action feature data;

processing the feature data according to a preset presentation mode; and

presenting the processed feature data through a display device,

wherein the presented figure differs from the performer's own figure.
Further, the performer and the display device are located in the same scene.

Further, collecting the feature data of different performers comprises: collecting the feature data of different performers by a capture device.

Further, collecting the feature data of different performers by a capture device comprises: collecting, by the capture device, the feature data of different performers according to the performers' grouping order.

Further, the performers include a first performer and a second performer, and collecting the feature data of different performers by a capture device comprises: collecting expression feature data of the first performer by the capture device, and collecting limb-action feature data of the second performer by the capture device.

Further, processing the feature data according to a preset presentation mode comprises: rendering and synthesizing the collected expression feature data and limb-action feature data to generate computer graphics (CG) data to be presented.

Further, processing the feature data according to a preset presentation mode comprises: generating control instruction data according to the collected expression feature data and limb-action feature data.

Further, presenting the processed feature data through a display device comprises: displaying the CG data to be presented through the display device, wherein the display device includes at least one of a projection device, a screen, an augmented reality (AR) device, a virtual reality (VR) device, a mixed reality (MR) device, and a holographic device.

Further, the display device includes an intelligent robot, and presenting the processed feature data through a display device comprises: instructing, according to the control instruction data, the intelligent robot to perform the corresponding action and/or expression.
Correspondingly, an embodiment of this specification also provides a data processing system, comprising:

a capture device, which collects feature data of different performers, wherein the feature data includes at least one of expression feature data and limb-action feature data;

a processing device, which receives the feature data of the different performers collected by the capture device and processes the feature data according to a preset presentation mode; and

a display device, which presents the processed feature data,

wherein the presented figure differs from the performer's own figure.

Further, the performer and the display device are located in the same scene.

Further, the capture device collects the feature data of different performers according to the performers' grouping order.

Further, the performers include a first performer and a second performer; the capture device collects expression feature data of the first performer and limb-action feature data of the second performer.

Further, the processing device renders and synthesizes the collected expression feature data and limb-action feature data to generate CG data to be presented.

Further, the processing device generates control instruction data according to the collected expression feature data and limb-action feature data.

Further, the display device displays the CG data to be presented, wherein the display device includes at least one of a projection device, a screen, an augmented reality (AR) device, a virtual reality (VR) device, a mixed reality (MR) device, and a holographic device.

Further, the display device includes an intelligent robot; the processing device instructs, according to the control instruction data, the intelligent robot to perform the corresponding action and/or expression.
The embodiments of this specification, using at least one of the above technical solutions, can achieve the following beneficial effects:

For performers taking part in a competition or stage performance, features such as facial expressions and limb actions can be collected by a corresponding capture device. Based on the collected feature data, the processing device can process the feature data in a set manner, and the result is finally presented through a display device.

Unlike traditional presentation, with the above method the actual performers themselves are no longer directly presented as the show content; instead, the corresponding features are presented through images, robots, or other means. For image presentation in particular, the rendering capability of the processing device can produce a wide variety of performance effects, which reduces or avoids, as far as possible, the arrangement of on-site factors such as the live stage, performers' costumes, and lighting, and can further reduce or avoid the cost in manpower, materials, and time.

Moreover, with the data processing method of these embodiments, under a competition or stage-performance scene, the performer's own figure is converted into a rendered image or realized by another body, changing the traditional form of performance.
Brief description of the drawings

The drawings described herein are provided for a further understanding of this application and constitute a part of this application. The illustrative embodiments and their description are used to explain this application and do not constitute an undue limitation on it. In the drawings:

Fig. 1 is a diagram of the architecture on which the data processing method of the embodiments is based;

Fig. 2 is a schematic diagram of the data processing procedure provided by the embodiments;

Fig. 3 is a schematic diagram of data processing under an actual competition or performance scene provided by the embodiments;

Fig. 4 is a schematic diagram of one presentation mode provided by the embodiments;

Fig. 5 is a structural diagram of the data processing system provided by the embodiments.
Detailed description

To make the objectives, technical solutions, and advantages of this application clearer, the technical solutions of this application are described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of this application.

For existing entertainment content such as live competition broadcasts or stage performances, enhancing the display effect usually requires on-site factors such as the actual venue, performers' costumes, and lighting effects. Even when the actual competition or show content is filmed and post-produced, the same on-site factors are still relied upon. Clearly, the existing approach costs more in manpower, materials, and time, and the achievable presentation effects are limited.

For this reason, the embodiments of this specification provide a data processing method that offers a new presentation mode and effect for entertainment content such as live competition broadcasts and stage performances. Unlike existing implementations, the data processing method of this application collects at least one of the limb features and facial features of the actual performer and displays the collected features in real time as a dynamic image.
In one or more embodiments of this specification, the architecture shown in Fig. 1 can be used.

In Fig. 1, the capture device can collect at least one of a performer's facial expressions and limb actions. In these embodiments, the capture device can be a device with an image-capture function, such as a video camera, camera, webcam, mobile phone, or tablet computer; it can also be a wearable motion-capture sensor, where the motion-capture sensor may further be a mechanical, acoustic, electromagnetic, or optical sensor. The specific choice can be determined by the needs of the practical application and should not be taken as a limitation on this application.

The processing device in Fig. 1 can process the feature data collected by the capture device (including at least one of facial features and limb-action features). The processing includes, but is not limited to, synthesis of the feature data, rendering of the feature data, and so on. In general, the processing device may be located at the site of the competition or stage performance; of course, in other embodiments, the processing device can also be deployed remotely.

The processing device can be a server or a computer. When the processing device is a server, architectures such as a single server, a server cluster, or distributed servers can be used, without specific limitation here.

The processed feature data can be presented to the user through the display device in Fig. 1. The display device can differ across embodiments. For example, the display device can be a projection device (e.g., a projector or holographic projection device), a screen, an augmented reality (AR) device, or a virtual reality (VR) device, so that the processed feature data is presented as an image. As another example, the display device can be an intelligent robot that makes the corresponding limb actions and expressions based on the processed feature data. It should be understood that the manner in which the display device presents the processed feature data can be determined by the needs of the practical application.

Regarding the above architecture, it should be noted that in a more typical embodiment, the capture device and the display device are independent devices and are generally located in the same scene, which can be a competition scene or a stage scene; in this embodiment, the performer is likewise in that scene.
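The capture-process-present pipeline described above can be sketched in a few lines of Python. This is only an illustration of the data flow between the three devices; every class, field, and mode name here is an assumption for demonstration and does not appear in the patent itself.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch of the Fig. 1 architecture: a capture device collects
# feature data, a processing device transforms it according to a
# presentation mode, and a display device presents the result.

@dataclass
class FeatureData:
    performer_id: str
    expression: Optional[dict] = None   # expression feature data
    limb_motion: Optional[dict] = None  # limb-action feature data

class CaptureDevice:
    def capture(self, performer_id, expression=None, limb_motion=None):
        return FeatureData(performer_id, expression, limb_motion)

class ProcessingDevice:
    def process(self, data: FeatureData, mode: str) -> dict:
        # Two presentation modes from the description: rendered CG imagery,
        # or control instructions for an intelligent robot.
        if mode == "cg":
            return {"kind": "cg", "source": data.performer_id}
        if mode == "robot":
            return {"kind": "control", "source": data.performer_id}
        raise ValueError(f"unknown mode: {mode}")

class DisplayDevice:
    def present(self, processed: dict) -> str:
        return f"presenting {processed['kind']} for {processed['source']}"

capture, proc, display = CaptureDevice(), ProcessingDevice(), DisplayDevice()
raw = capture.capture("performer-1", expression={"smile": 0.9})
out = display.present(proc.process(raw, "cg"))
```

Note that the three objects are deliberately decoupled, matching the statement that the capture device and display device are independent devices within the same scene.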
Based on the architecture shown in Fig. 1, the technical solutions provided in the embodiments of this specification are described in detail below.

An embodiment of this specification provides a data processing method which, as shown in Fig. 2, specifically includes the following steps:
Step S201: receive the collected feature data of different performers.

Based on the foregoing, in these embodiments, the feature data collected by the capture device can be regarded as at least one of the expression feature data and limb-action feature data of performers in a competition or stage performance.

Usually, more than one performer takes part in a competition or performance, so features such as expressions and limb actions need to be collected for each performer.

In practical applications, performers continuously make different limb actions and expressions during a competition or performance, so it can be understood that the capture device also collects feature data continuously. To keep the subsequent presentation real-time, the capture device can continuously send the feature data to the processing device, so that the processing device processes the feature data.
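The continuous collection and forwarding in step S201 can be sketched as a generator that streams feature-data frames to the processing device as they are captured. The frame layout and function names below are illustrative assumptions.

```python
# Sketch of step S201: the capture device keeps producing frames of
# feature data and streams them to the processing device, so that the
# subsequent presentation can stay close to real time.

def capture_stream(frames):
    """Yield feature-data frames in capture order, tagged with a sequence number."""
    for i, frame in enumerate(frames):
        yield {"seq": i, **frame}

received = []

def processing_device_receive(frame):
    # Stand-in for the processing device accepting one frame.
    received.append(frame)

for frame in capture_stream([{"expr": "smile"}, {"limb": "wave"}]):
    processing_device_receive(frame)
```

In a real deployment the stream would arrive over a network transport rather than an in-process loop, but the per-frame, in-order handoff is the same idea.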
Step S203: process the feature data according to a preset presentation mode.

Depending on the competition or stage performance, the required content presentation mode generally differs. As mentioned above, some presentation modes may be rendered imagery, while others may be the mapping of an intelligent robot. Accordingly, the processing of the feature data also differs somewhat.

As one embodiment of this specification, if presentation as an image is required, the processing device can render the collected feature data to generate computer graphics (CG) image data, or data in another format suitable for image display by the display device. As another embodiment of this specification, if a performer's limb actions and expressions are to be presented by an intelligent robot, the processing device can generate the corresponding control instructions based on the collected feature data.
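The two processing branches of step S203 amount to a dispatch on the preset presentation mode. A minimal sketch, with all handler names and data shapes assumed for illustration:

```python
# Step S203 sketch: the processing device picks a handler based on the
# preset presentation mode - CG rendering for image presentation, or
# control-instruction generation for an intelligent robot.

def render_cg(expression, limb_motion):
    # Stand-in for rendering/synthesis that produces CG data to be presented.
    return {"type": "cg_frame", "layers": [expression, limb_motion]}

def make_control_instructions(expression, limb_motion):
    # Stand-in for generating robot control-instruction data.
    return {"type": "control",
            "commands": [("face", expression), ("move", limb_motion)]}

HANDLERS = {"image": render_cg, "robot": make_control_instructions}

def process(mode, expression, limb_motion):
    return HANDLERS[mode](expression, limb_motion)

cg = process("image", "smile", "wave")
ctrl = process("robot", "smile", "wave")
```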
Step S205: present the processed feature data through the display device.

In these embodiments, the display device is generally located at the competition or stage scene and can display the above processed feature data in real time. It should be noted that the result shown by the display device is not identical to the performer's own figure. In other words, in these embodiments, through the corresponding data processing, the performer's own limb actions and expressions are shown in a form other than the performer himself or herself.

In some possible embodiments, presentation as an image may include, but is not limited to, 3D animation, 2D animation, AR imagery, VR imagery, mixed reality (MR) imagery, or holograms.

It is worth noting that, as mentioned above, the processed feature data can usually be presented through the display device in real time. However, considering that in practical applications data transmission may be affected by factors such as temperature, humidity, cable material, and network signal, data transmission and device processing take time, so the presented result may lag the performer's actual expressions and actions by a certain duration (generally at the millisecond or even second level). Therefore, in these embodiments, "real time" should be understood as allowing a delay of a certain duration; this should not be taken as a limitation on this application.
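The tolerated lag can be made concrete as a delay budget checked per frame. The one-second budget below is an assumption chosen only because the text mentions second-level lag; it is not a value from the patent.

```python
# Sketch of the "real time with bounded delay" notion from step S205:
# a presented frame still counts as real-time if its end-to-end delay
# stays within a configured budget.

LATENCY_BUDGET_S = 1.0  # assumed budget; the text allows up to second-level lag

def within_budget(captured_at: float, presented_at: float) -> bool:
    """Return True if capture-to-presentation delay is within the budget."""
    return (presented_at - captured_at) <= LATENCY_BUDGET_S
```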
Through the above steps, for performers taking part in a competition or stage performance, features such as facial expressions and limb actions can be collected by a corresponding capture device. Based on the collected feature data, the processing device can process the feature data in a set manner, and the result is finally presented through the display device.

Unlike traditional presentation, with the above method the actual performers themselves are no longer directly presented as the show content; instead, the corresponding features are presented through images, robots, or other means. For image presentation in particular, the rendering capability of the processing device can produce a wide variety of performance effects, which reduces or avoids, as far as possible, the arrangement of on-site factors such as the live stage, performers' costumes, and lighting, and can further reduce or avoid the cost in manpower, materials, and time.

For the method shown in Fig. 2, the subject executing different steps may vary in practical applications. For example, the executing subject of steps S201 and S203 can be the processing device, while the executing subject of step S205 can be the display device. Of course, this is determined by the needs of the practical application and does not constitute a limitation on this application.
As mentioned above, an actual competition or performance scene usually has multiple performers, and the way the capture device collects feature data may differ across scenes.

Specifically, in some competition or performance scenes, the competing performers are usually divided into different groups, each group containing at least one performer, and the groups usually perform in a set order. In such a scene, the capture device also collects performer feature data in that set order; that is, the capture device collects the feature data of different performers according to the performers' grouping order.

In other competition or performance scenes, multiple performers may perform at the same time without grouping. In this situation, the capture device can collect feature data for multiple performers simultaneously.
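The grouped-acquisition variant can be sketched as iterating groups in their performance order and collecting each member's feature data in turn. Group names and the record shape are illustrative assumptions.

```python
# Sketch of grouped acquisition: groups perform in a set order, so the
# capture device collects feature data group by group, performer by
# performer within each group.

groups = [("group-A", ["p1", "p2"]),
          ("group-B", ["p3"])]

def acquire_in_group_order(groups):
    acquired = []
    for group_name, performers in groups:  # iterate groups in the set order
        for p in performers:
            acquired.append((group_name, p))  # stand-in for one capture pass
    return acquired

order = acquire_in_group_order(groups)
```

The ungrouped variant would simply capture all performers concurrently instead of iterating groups sequentially.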
In a more typical embodiment of this specification, the performers taking part in a competition or performance can be divided into a first performer and a second performer. Here, "first" and "second" are merely for convenience of distinction in the description and should not be understood as limiting quantity or order.

As shown in Fig. 3, for the first performer, the capture device collects facial features, from which expression feature data can be obtained. For the second performer, the capture device collects limb-action features, from which limb-action feature data can be obtained. In general, there can be one or more first performers and, likewise, one or more second performers; no limit on the number of performers is imposed here.

The collected expression feature data and limb-action feature data can then be sent to the processing device for processing.
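The Fig. 3 arrangement combines one performer's expression data with another performer's limb-action data before processing. A minimal sketch of that merge; the dictionary layout is an assumption for illustration.

```python
# Sketch of the Fig. 3 variant: expression features come from the first
# performer, limb-action features from the second, and both are combined
# into one record sent to the processing device.

def merge_features(expression_data, limb_data):
    """Combine feature data from two performers into one processable record."""
    return {"expression": expression_data, "limb_motion": limb_data}

merged = merge_features({"performer": 1, "face": "smile"},
                        {"performer": 2, "pose": "jump"})
```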
The processing device can process the collected feature data according to the presentation mode required in practice. In one scene, processing the feature data according to a preset presentation mode may include: rendering and synthesizing the collected expression feature data and limb-action feature data to generate CG data to be presented.

The CG data to be presented can be presented by at least one of, for example, a projection device, a screen, an AR device, a VR device, an MR device, and a holographic device.

It should be noted here that, in one possible display mode, the CG corresponding to each competing performer can be shown in split screen, as shown in Fig. 4. This mode is suitable for audiences or a judging panel to compare and score. Of course, this is only one possible mode and should not constitute a limitation on this application.
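The split-screen display of Fig. 4 amounts to assigning each competing performer's CG a region of the screen. A sketch of one such layout computation; the pixel width and even-split policy are assumptions, not details from the patent.

```python
# Sketch of the Fig. 4 split-screen mode: divide the screen width evenly
# among the competing performers so their CG can be compared side by side.

def split_screen_layout(performers, width=1920):
    """Map each performer to a (left, right) horizontal pixel range."""
    pane = width // max(len(performers), 1)
    return {p: (i * pane, (i + 1) * pane) for i, p in enumerate(performers)}

layout = split_screen_layout(["p1", "p2"])
```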
In another scene, processing the feature data according to a preset presentation mode may include: generating control instruction data according to the collected expression feature data and limb-action feature data.

In this scene, the display device includes an intelligent robot, and presenting the processed feature data through the display device includes: instructing, according to the control instruction data, the intelligent robot to perform the corresponding action and/or expression.
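The robot presentation mode can be sketched as the robot consuming control instructions one by one and executing each. The instruction format and class API below are hypothetical; a real robot controller would drive actuators rather than append to a list.

```python
# Sketch of the robot mode: the processing device emits control
# instructions and the intelligent robot executes the corresponding
# action and/or expression in order.

class IntelligentRobot:
    def __init__(self):
        self.performed = []  # log of executed instructions, for illustration

    def execute(self, instruction):
        kind, value = instruction  # e.g. ("action", "wave")
        self.performed.append(f"{kind}:{value}")

robot = IntelligentRobot()
for instr in [("action", "wave"), ("expression", "smile")]:
    robot.execute(instr)
```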
As can be seen from the above, with the data processing method of these embodiments, under a competition or stage-performance scene, the performer's own figure is converted into a rendered image or realized by another body, changing the traditional form of performance.
The above is the data processing method provided by the embodiments of this specification. Based on the same idea, the embodiments of this specification also provide a corresponding data processing system.

Specifically, the data processing system provided in the embodiments of this specification, as shown in Fig. 5, comprises:

a capture device 501, which collects feature data of different performers, wherein the feature data includes at least one of expression feature data and limb-action feature data;

a processing device 502, which receives the feature data of the different performers collected by the capture device and processes the feature data according to a preset presentation mode; and

a display device 503, which presents the processed feature data, wherein the presented figure differs from the performer's own figure.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to mutually, and each embodiment focuses on its differences from the others. In particular, since the device, equipment, and medium embodiments are substantially similar to the method embodiments, they are described relatively simply; for relevant parts, refer to the description of the method embodiments, which will not be repeated one by one here.

The specific embodiments of this subject matter have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve the desired result. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired result. In some embodiments, multitasking and parallel processing can also be advantageous.
In the 1990s, it was easy to tell whether an improvement to a technology was an improvement in hardware (for example, an improvement to circuit structures such as diodes, transistors, or switches) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented with a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by a user through programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without needing a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled must likewise be written in a particular programming language, referred to as a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. Those skilled in the art will also appreciate that a hardware circuit implementing a logical method flow can easily be obtained merely by expressing the method flow with slight logical programming in one of the above hardware description languages and programming it into an integrated circuit.
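As a software-side illustration of the point above, what an HDL description ultimately captures is a logic function over its inputs. The sketch below (plain Python, purely illustrative and not any of the HDLs listed) models a 2-to-1 multiplexer both behaviourally and in its gate-level form, the kind of "method flow" that a logic compiler would map onto a hardware circuit.

```python
# A software model of a "method flow" that an HDL would map onto hardware:
# a 2-to-1 multiplexer described as a pure logic function over 1-bit values.
# In Verilog or VHDL the same behaviour would be written as combinational
# logic and synthesized into gates; Python here only illustrates that the
# behavioural and gate-level descriptions compute the same function.

def mux2(sel: int, a: int, b: int) -> int:
    """Select input a when sel is 1, otherwise input b (1-bit values)."""
    # Gate-level form: (sel AND a) OR ((NOT sel) AND b)
    return (sel & a) | ((1 - sel) & b)

# Exhaustively check the gate-level form against the behavioural description.
for sel in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            assert mux2(sel, a, b) == (a if sel else b)
```

An exhaustive truth-table check like the loop above is the software analogue of verifying that a synthesized circuit matches its behavioural specification.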
A controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included within it for implementing various functions can also be regarded as structures within the hardware component. Indeed, means for implementing various functions can even be regarded both as software modules implementing a method and as structures within a hardware component.
The systems, apparatuses, modules, or units described in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above apparatuses are described in terms of separate units divided by function. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art will appreciate that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of computer-readable media such as non-persistent memory, random access memory (RAM), and/or non-volatile memory, for example read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", and any other variants thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Absent further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
Those skilled in the art will appreciate that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present application may also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referenced against one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are substantially similar to the method embodiments and are therefore described relatively briefly; for relevant details, refer to the descriptions of the method embodiments.
The above descriptions are merely embodiments of the present application and are not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.
Claims (17)
1. A data processing method, comprising:
receiving collected feature data of different performers, wherein the feature data includes at least one of expression feature data and limb movement feature data;
processing the feature data according to a preset presentation mode; and
presenting the processed feature data through a display device;
wherein the presented image is different from the image of the performer himself or herself.
2. The method according to claim 1, wherein the performer and the display device are in the same scene.
3. The method according to claim 1, wherein collecting the feature data of different performers comprises:
collecting the feature data of the different performers by a collection device.
4. The method according to claim 3, wherein collecting the feature data of different performers by a collection device comprises:
collecting, by the collection device, the feature data of the different performers according to a grouping order of the different performers.
5. The method according to claim 3, wherein the performers include a first performer and a second performer; and
collecting the feature data of different performers by a collection device comprises:
collecting expression feature data of the first performer by the collection device, and collecting limb movement feature data of the second performer by the collection device.
6. The method according to claim 1, wherein processing the feature data according to a preset presentation mode comprises:
performing rendering and synthesis on the collected expression feature data and limb movement feature data to generate computer graphics (CG) data to be presented.
7. The method according to claim 1, wherein processing the feature data according to a preset presentation mode comprises:
generating control instruction data according to the collected expression feature data and limb movement feature data.
8. The method according to claim 6, wherein presenting the processed feature data through a display device comprises:
displaying the CG data to be presented through the display device;
wherein the display device includes at least one of a projection device, a screen, an augmented reality (AR) device, a virtual reality (VR) device, a mixed reality (MR) device, and a holographic device.
9. The method according to claim 7, wherein the display device includes an intelligent robot; and
presenting the processed feature data through a display device comprises:
instructing the intelligent robot to perform a corresponding movement and/or expression according to the control instruction data.
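To make the method claims above concrete, the following is a hypothetical sketch of the claimed data flow in Python. Every name here (`FeatureData`, `process`, the `"cg"` and `"robot"` mode strings) is an illustrative assumption; the claims do not prescribe any particular API or data format.

```python
# Illustrative sketch (not from the patent): feature data for different
# performers is collected, processed according to a preset presentation mode,
# and handed to a display device or an intelligent robot.

from dataclasses import dataclass, field

@dataclass
class FeatureData:
    performer_id: str
    expression: dict = field(default_factory=dict)   # e.g. facial blendshape weights
    limb_motion: dict = field(default_factory=dict)  # e.g. joint rotations

def process(features: list, mode: str) -> list:
    """Process feature data per the preset presentation mode (claims 6 and 7)."""
    if mode == "cg":
        # Claim 6: render/composite expression + limb data into CG frames.
        return [{"performer": f.performer_id,
                 "frame": {**f.expression, **f.limb_motion}} for f in features]
    if mode == "robot":
        # Claim 7: turn the feature data into control instruction data.
        return [{"performer": f.performer_id,
                 "commands": list(f.limb_motion) + list(f.expression)} for f in features]
    raise ValueError(f"unknown presentation mode: {mode}")

# Claim 5's split capture: performer A contributes expression data, B limb data.
features = [FeatureData("A", expression={"smile": 0.8}),
            FeatureData("B", limb_motion={"arm_raise": 1.0})]
cg = process(features, "cg")       # claim 8: shown on a screen / AR / VR / MR / holographic device
cmds = process(features, "robot")  # claim 9: sent to an intelligent robot
```

The two branches correspond to the two alternative presentation modes the claims distinguish: CG rendering for display devices, and control instruction generation for a robot.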
10. A data processing system, comprising:
a collection device configured to collect feature data of different performers, wherein the feature data includes at least one of expression feature data and limb movement feature data;
a processing device configured to receive the feature data of the different performers collected by the collection device, and to process the feature data according to a preset presentation mode; and
a display device configured to present the processed feature data;
wherein the presented image is different from the image of the performer himself or herself.
11. The system according to claim 10, wherein the performer and the display device are in the same scene.
12. The system according to claim 10, wherein the collection device collects the feature data of the different performers according to a grouping order of the different performers.
13. The system according to claim 10, wherein the performers include a first performer and a second performer; and
the collection device collects expression feature data of the first performer and collects limb movement feature data of the second performer.
14. The system according to claim 10, wherein the processing device performs rendering and synthesis on the collected expression feature data and limb movement feature data to generate CG data to be presented.
15. The system according to claim 10, wherein the processing device generates control instruction data according to the collected expression feature data and limb movement feature data.
16. The system according to claim 14, wherein the display device displays the CG data to be presented;
wherein the display device includes at least one of a projection device, a screen, an augmented reality (AR) device, a virtual reality (VR) device, a mixed reality (MR) device, and a holographic device.
17. The system according to claim 15, wherein the display device includes an intelligent robot; and
the processing device instructs the intelligent robot to perform a corresponding movement and/or expression according to the control instruction data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810220924.0A CN110278387A (en) | 2018-03-16 | 2018-03-16 | A kind of data processing method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110278387A true CN110278387A (en) | 2019-09-24 |
Family
ID=67958615
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810220924.0A Pending CN110278387A (en) | 2018-03-16 | 2018-03-16 | A kind of data processing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110278387A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080012865A1 (en) * | 2006-07-16 | 2008-01-17 | The Jim Henson Company | System and method of animating a character through a single person performance |
US20080028312A1 (en) * | 2006-07-28 | 2008-01-31 | Accelerated Pictures, Inc. | Scene organization in computer-assisted filmmaking |
CN101298141A (en) * | 2007-04-30 | 2008-11-05 | 林其禹 | Robot system and control method thereof |
US20100201693A1 (en) * | 2009-02-11 | 2010-08-12 | Disney Enterprises, Inc. | System and method for audience participation event with digital avatars |
US20120252575A1 (en) * | 2011-03-31 | 2012-10-04 | Konami Digital Entertainment Co., Ltd. | Game device, game device control method, and information storage medium |
US20130017894A1 (en) * | 2011-07-14 | 2013-01-17 | Hongzhi Li | System and Method for Integrating Digital Background with Stage Performance |
CN103578135A (en) * | 2013-11-25 | 2014-02-12 | 恒德数字舞美科技有限公司 | Virtual image and real scene combined stage interaction integrating system and realizing method thereof |
US20150030305A1 (en) * | 2012-04-12 | 2015-01-29 | Dongguk University Industry-Academic Cooperation Foundation | Apparatus and method for processing stage performance using digital characters |
CN104883557A (en) * | 2015-05-27 | 2015-09-02 | 世优(北京)科技有限公司 | Real time holographic projection method, device and system |
CN105938541A (en) * | 2015-03-02 | 2016-09-14 | 卡雷风险投资有限责任公司 | System and method for enhancing live performances with digital content |
CN206340066U (en) * | 2016-12-07 | 2017-07-18 | 西安蒜泥电子科技有限责任公司 | Visual human's On-the-spot Interaction performance system |
US20180330549A1 (en) * | 2015-11-19 | 2018-11-15 | Bespoke Vr Limited | Editing interactive motion capture data for creating the interaction characteristics of non player characters |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110428388B (en) | Image data generation method and device | |
US10019825B2 (en) | Karaoke avatar animation based on facial motion data | |
WO2022156532A1 (en) | Three-dimensional face model reconstruction method and apparatus, electronic device, and storage medium | |
US11653072B2 (en) | Method and system for generating interactive media content | |
CN105321142B (en) | Sampling, mistake manages and/or the context switching carried out via assembly line is calculated | |
CN112199016B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN112819944A (en) | Three-dimensional human body model reconstruction method and device, electronic equipment and storage medium | |
CN108986190A (en) | A kind of method and system of the virtual newscaster based on human-like persona non-in three-dimensional animation | |
WO2020077914A1 (en) | Image processing method and apparatus, and hardware apparatus | |
CN104091607A (en) | Video editing method and device based on IOS equipment | |
JP2023504030A (en) | Display method and device based on augmented reality, and storage medium | |
CN107291222A (en) | Interaction processing method, device, system and the virtual reality device of virtual reality device | |
JP2024502407A (en) | Display methods, devices, devices and storage media based on augmented reality | |
US20160239095A1 (en) | Dynamic 2D and 3D gestural interfaces for audio video players capable of uninterrupted continuity of fruition of audio video feeds | |
CN111179392A (en) | Virtual idol comprehensive live broadcast method and system based on 5G communication | |
CN104091608A (en) | Video editing method and device based on IOS equipment | |
CN115331265A (en) | Training method of posture detection model and driving method and device of digital person | |
JP7447293B2 (en) | References to Neural Network Models for Adaptation of 2D Video for Streaming to Heterogeneous Client Endpoints | |
CN110278387A (en) | A kind of data processing method and system | |
Baker | Virtual, artificial and mixed reality: New frontiers in performance | |
TW202013005A (en) | Camera module and system using the same | |
CN108960130A (en) | Video file intelligent processing method and device | |
CN110276232A (en) | A kind of data processing method based on social scene, system | |
CN115428416A (en) | Setting up and distribution of immersive media to heterogeneous client endpoints | |
CN115136595A (en) | Adaptation of 2D video for streaming to heterogeneous client endpoints |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20190924 |