CN203630822U - Virtual image and real scene combined stage interaction integrating system - Google Patents

Virtual image and real scene combined stage interaction integrating system

Info

Publication number
CN203630822U
CN203630822U (application CN201320751532.XU)
Authority
CN
China
Prior art keywords
data
performer
behavior
expression
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201320751532.XU
Other languages
Chinese (zh)
Inventor
赵松德
郭树涛
孙涛
赵煜
张红武
崔德靖
苏宏程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hengde Technology Co. Ltd.
Original Assignee
HENGDE DIGITAL WUMEI TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HENGDE DIGITAL WUMEI TECHNOLOGY Co Ltd filed Critical HENGDE DIGITAL WUMEI TECHNOLOGY Co Ltd
Priority to CN201320751532.XU priority Critical patent/CN203630822U/en
Application granted granted Critical
Publication of CN203630822U publication Critical patent/CN203630822U/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Abstract

The utility model discloses a stage interaction and integration system combining virtual images with real scenes, and a method for realizing it. The system comprises a behavior acquisition unit, a behavior learning unit, a performance intention analysis unit and a scenery unit. The behavior acquisition unit acquires training data or live data used to generate the performer's performance behaviors; the behavior learning unit uses a support vector machine classifier to learn the training data and obtain the performer's preset behaviors, and uses the live data to obtain the performer's live behaviors; the performance intention analysis unit obtains the performer's performance intention from the correspondence between the performer's current live behavior and the corresponding preset behavior; the scenery unit stores a plurality of scene frames corresponding to the performer's performance intentions and, according to the performance intention, selects the corresponding scene frame to project onto the stage. The system realizes interaction between the performer and the stage scenery, achieving a stage effect that combines virtual images with real scenes.

Description

Stage interaction and integration system combining virtual images with real scenes
Technical field
The utility model relates to stage display systems, and in particular to a stage interaction and integration system combining virtual images with real scenes.
Background technology
It is well known that the stage plays an increasingly prominent role in the effect of an artistic performance. However, existing stage display systems typically combine virtual images with the real scene by switching scene pictures manually or at fixed times. This approach cannot realize interaction between the performer and the stage set and cannot meet people's higher-level demands, while the stage effect of combining virtual images with real scenes attracts more and more attention.
Utility model content
The technical problem to be solved by the utility model is how to realize interaction between the performer and the stage set in a stage display system.
To solve the above technical problem, the utility model provides a stage interaction and integration system combining virtual images with real scenes, comprising:
a behavior acquisition unit, which acquires training data or live data used to generate the performer's performance behaviors;
a behavior learning unit, which uses a support vector machine classifier to learn the training data and obtain the performer's preset behaviors, and uses the live data to obtain the performer's live behaviors;
a performance intention analysis unit, which obtains the performer's performance intention from the correspondence between the performer's current live behavior and the corresponding preset behavior; and
a scenery unit, which stores a plurality of scenes corresponding to the performer's performance intentions and, according to the performance intention, selects the corresponding scene to present on the stage, each scene comprising the corresponding sound, light, still pictures and dynamic pictures.
In the above system, the behavior acquisition unit comprises:
a plurality of lightweight inertial navigation sensing input devices, fixed respectively on the performer's head, trunk, upper limbs and lower limbs, each of which acquires in real time the kinematic posture training data or posture live data of each of the performer's joints; and
a 3D reconstruction unit, which reconstructs the performer's three-dimensional preset model from the posture training data and the performer's three-dimensional live model from the posture live data.
The preset behaviors are preset postures and the live behaviors are live postures; the behavior learning unit creates the performer's posture training set from the three-dimensional preset model and uses it to obtain the performer's preset postures, and uses the posture live data to obtain the performer's live postures.
In the above system, the behavior acquisition unit comprises:
a 3D motion-sensing camera, which acquires the performer's expression and voice training data or expression and voice live data.
The preset behaviors are preset expressions and preset voices, and the live behaviors are live expressions and live voices; the behavior learning unit creates the performer's expression training set and voice training set from the expression and voice training data, uses them to obtain the performer's preset expressions and preset voices, and uses the expression and voice live data to obtain the performer's live expressions and live voices.
In the above system, the behavior acquisition unit comprises:
a motion capture unit, which acquires the performer's action training data or action live data, comprising the position, direction and speed data of the body's movement during the performer's motion.
The preset behaviors are preset actions and the live behaviors are live actions; the behavior learning unit creates the performer's action training set from the action training data and uses it to obtain the performer's preset actions, and uses the action live data to obtain the performer's live actions.
In the above system, the behavior acquisition unit is also provided with a correction module, which corrects the other lightweight inertial navigation sensing input devices using the known spatial relationship between one lightweight inertial navigation sensing input device and the corresponding body segment, together with a human biomechanical model.
In the above system, the behavior acquisition unit is also provided with an expression processing unit, comprising:
a calibration module, which acquires the speckle pattern data of each reference plane, the reference planes being virtual vertical planes arranged on the stage successively from near to far along the lens direction of the 3D motion-sensing camera;
a storage module, which stores the speckle pattern data of each reference plane; and
an interpolation calculation module, which obtains the performer's training expressions from the cross-correlation of the expression training data with the speckle pattern data of each reference plane, and obtains the live expressions from the expression live data.
In the above system, the behavior acquisition unit is also provided with an image calculation processing unit, comprising:
a calibration module, which calibrates the motion capture recognition unit to obtain the relative position data between it and the projected image;
an image processing module, which analyzes consecutive image frames of the performer's motion to obtain the position, direction and speed data of the body's movement in the motion capture recognition unit's image coordinate system; and
a coordinate conversion module, which converts the position, direction and speed data from the motion capture recognition unit's image coordinate system into projected position, direction and speed data in the projection unit's image coordinate system.
In the above system, the scenery unit comprises several projectors and a bright-band elimination module; the projectors project the corresponding virtual environment images and the motion images of objects in the virtual environment, called up according to the video output signal, and the bright-band elimination module eliminates the bright bands produced where the projection ranges of the projectors overlap and blend.
The utility model obtains the performer's performance intention from the correspondence between the performer's current live behavior and the preset behaviors, and then, according to the performance intention, selects the corresponding scene image to project onto the stage; the scene comprises the corresponding sound, light, still pictures and dynamic pictures, thereby realizing the stage effect of combining virtual images with real scenes.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the utility model;
Fig. 2 is a structural schematic diagram of the behavior acquisition unit in the utility model.
Detailed description of the embodiments
The utility model provides a stage interaction and integration system combining virtual images with real scenes, which calls up the corresponding sound, light, electrical effects and scenery according to the performer's live behaviors, such as posture and expression, thereby realizing real-time interaction between the performer and the stage set. The utility model is described in detail below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the stage interaction and integration system combining virtual images with real scenes provided by the utility model comprises a behavior acquisition unit 10, a behavior learning unit 20, a performance intention analysis unit 30 and a scenery unit 40.
The behavior acquisition unit 10 acquires training data or live data used to generate the performer's performance behaviors.
The behavior learning unit 20 uses a support vector machine classifier to learn the training data and obtain the performer's preset behaviors, and uses the live data to obtain the performer's live behaviors.
The performance intention analysis unit 30 obtains the performer's performance intention from the correspondence between the performer's current live behavior and the corresponding preset behavior.
The scenery unit 40 stores a plurality of scenes corresponding to the performer's performance intentions and, according to the performance intention, selects the corresponding scene to present on the stage; each scene comprises the corresponding sound, light, still pictures and dynamic pictures.
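By way of illustration only, the following is a minimal sketch of how a behavior learning unit of this kind could be trained and queried with a support vector machine classifier, assuming each behavior sample has already been reduced to a fixed-length feature vector (for example, flattened joint angles); the feature size, labels and kernel are assumptions for the example, not details specified by the utility model.

```python
# Illustrative sketch only: an SVM classifier mapping feature vectors
# (e.g. flattened joint angles) to preset behaviors. Feature layout,
# labels and kernel are assumptions, not specified by the utility model.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Training data: one row per sample, one column per joint-angle feature.
X_train = rng.normal(size=(200, 48))       # 200 samples, 48 features
y_train = rng.integers(0, 3, size=200)     # 3 preset behaviors, e.g. bow / wave / spin

clf = SVC(kernel="rbf", probability=True)  # the support vector machine classifier
clf.fit(X_train, y_train)                  # learn the preset behaviors

# At show time: classify a live feature vector as one of the preset
# behaviors; the performance intention analysis unit then maps the
# recognized behavior to a scene.
x_live = rng.normal(size=(1, 48))
behavior = clf.predict(x_live)[0]
confidence = clf.predict_proba(x_live)[0].max()
print(f"live behavior classified as preset behavior {behavior} (p={confidence:.2f})")
```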
The behavior acquisition unit in the utility model can acquire the performer's postures, expressions and actions; these are introduced in turn below.
As shown in Fig. 2, the behavior acquisition unit comprises a plurality of lightweight inertial navigation sensing input devices 101 and a 3D reconstruction unit 102.
Each lightweight inertial navigation sensing input device integrates a 3-axis acceleration sensor, a 3-axis gyroscope and a 3-axis magnetometer. The devices 101 are fixed respectively on the performer's upper limbs, lower limbs, trunk and head. In the present embodiment, 12 to 16 lightweight inertial navigation sensing input devices are used: 1 is fixed on the performer's head, 1 to 5 are distributed over the performer's trunk, 4 are fixed on the upper arms and forearms of the two upper limbs, and 6 are fixed on the thighs, shanks and feet of the two lower limbs. Each device 101 acquires in real time the kinematic posture training data or posture live data of each of the performer's joints.
The 3D reconstruction unit 102 reconstructs the performer's three-dimensional preset model from the performer's posture training data, and the performer's three-dimensional live model from the posture live data.
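As an illustrative sketch of the reconstruction step, the following assumes each joint contributes a single rotation and the segment lengths are known, and chains them by forward kinematics; the two-segment chain and planar rotations are simplifying assumptions, not the utility model's actual skeleton model.

```python
# Illustrative sketch only: reconstruct 3D joint positions from per-joint
# rotations and known segment lengths via forward kinematics. A real
# system would use a full skeleton; this two-segment chain with planar
# rotations is a simplification.
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis (the only axis used in this demo)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(joint_angles, segment_lengths, origin=np.zeros(3)):
    """Chain segments head to tail, accumulating rotations; return joint positions."""
    positions = [origin]
    R = np.eye(3)
    for theta, length in zip(joint_angles, segment_lengths):
        R = R @ rot_z(theta)  # accumulate orientation along the chain
        positions.append(positions[-1] + R @ np.array([length, 0.0, 0.0]))
    return np.array(positions)

# Shoulder -> elbow -> wrist with assumed 0.3 m segments.
print(forward_kinematics([np.pi / 4, -np.pi / 6], [0.3, 0.3]))
```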
The behavior learning unit creates the performer's posture training set from the three-dimensional preset model and uses it to obtain the performer's preset postures; it uses the posture live data to obtain the performer's live postures.
When the lightweight inertial navigation sensing input devices are attached to the performer, their initial positions relative to the body segments are unknown, and the distances between body segments are difficult to estimate by integrating acceleration values. Suitable correction must therefore be applied to determine the spatial relationship between the sensors and the body segments and the dimensions of the body. For this purpose the utility model is also provided with a correction module, which corrects the other lightweight inertial navigation sensing input devices using the known spatial relationship between one device and the corresponding body segment, together with a human biomechanical model; the device fixed on the head may be selected as the known device. Specifically, each sensor signal in the lightweight inertial navigation sensing input devices and the three-dimensional human model are described as random events, and a sensor fusion process containing prediction and correction steps is constructed to aggregate them. In the prediction step, each sensor's signal is processed by an inertial navigation system (INS) algorithm, after which the mechanical motion of the body segments is predicted using the known spatial relationship between a sensor and its body segment and the human biomechanical model. When this process runs for a long time, integrating the inertial sensor data causes drift errors owing to sensor noise, signal offset and attitude errors. To correct estimates such as direction, velocity and displacement, the sensor fusion process continuously updates them. The correction step includes these data updates, which are based on human biomechanical kinematics and mainly comprise external position and velocity constraints and the detection of the connection points of major joints and body segments; the estimation results are fed back into the INS algorithm and the body-segment motion process of the next frame.
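The prediction and correction steps described above form a recursive filter. The following sketch illustrates the idea on a single orientation angle with a complementary filter: gyroscope integration stands in for the INS prediction, and an absolute reference angle stands in for the biomechanical correction that bounds the drift error. The sensor model and gain are assumptions for the example, far simpler than the multi-segment fusion described here.

```python
# Illustrative sketch only: predict/correct fusion for one orientation
# angle. Prediction integrates the gyroscope rate (standing in for the
# INS step); correction blends toward an absolute reference (standing in
# for the biomechanical-constraint update), which bounds the drift error.
import numpy as np

def fuse(gyro_rates, ref_angles, dt=0.01, gain=0.02):
    """Complementary filter: predict by integration, correct toward the reference."""
    angle = ref_angles[0]
    estimates = []
    for rate, ref in zip(gyro_rates, ref_angles):
        angle += rate * dt              # predict: pure integration drifts over time
        angle += gain * (ref - angle)   # correct: pull the estimate toward the reference
        estimates.append(angle)
    return np.array(estimates)

# Simulated stationary sensor: true angle 0, gyro with constant bias.
# Integration alone drifts linearly; the correction holds the estimate near 0.
n = 1000
rng = np.random.default_rng(1)
gyro = np.full(n, 0.05) + rng.normal(0.0, 0.01, n)  # rad/s, biased and noisy
ref = np.zeros(n)                                   # reference angle from constraints
drift_only = np.cumsum(gyro) * 0.01
print(f"drift without correction: {drift_only[-1]:.3f} rad; "
      f"with correction: {fuse(gyro, ref)[-1]:.4f} rad")
```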
As shown in Fig. 2, the behavior acquisition unit 10 also comprises a 3D motion-sensing camera 111 and an expression processing unit 112. The 3D motion-sensing camera 111 acquires the performer's expression and voice training data or expression and voice live data. The expression processing unit 112 comprises a calibration module, a storage module and an interpolation calculation module. The calibration module acquires the speckle pattern data of each reference plane, the reference planes being virtual vertical planes arranged on the stage successively from near to far along the lens direction of the 3D motion-sensing camera; the storage module stores the speckle pattern data of each reference plane; and the interpolation calculation module obtains the performer's training expressions from the cross-correlation of the performer's expression training data with the speckle pattern data of each reference plane, and obtains the performer's live expressions from the expression live data.
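The cross-correlation step can be illustrated as follows: a captured speckle patch is compared against the stored pattern of each reference plane by normalized cross-correlation, and the best-matching plane gives the depth estimate (interpolating between the best-scoring planes refines it). The patch size, plane depths and correlation metric below are assumptions for the example.

```python
# Illustrative sketch only: normalized cross-correlation of a live
# speckle patch against each stored reference-plane pattern; the
# best-scoring plane gives the depth estimate.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

rng = np.random.default_rng(2)
plane_depths = np.array([1.0, 1.5, 2.0, 2.5])                 # metres (assumed spacing)
reference_patterns = rng.random((len(plane_depths), 64, 64))  # held by the storage module

# Simulate a live patch captured near the 2.0 m plane: its pattern plus noise.
live_patch = reference_patterns[2] + rng.normal(0.0, 0.1, (64, 64))

scores = np.array([ncc(live_patch, p) for p in reference_patterns])
print(f"best plane: {plane_depths[scores.argmax()]} m, scores: {np.round(scores, 3)}")
```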
The behavior learning unit creates the performer's expression training set and voice training set from the performer's expression and voice training data, uses the expression training set to obtain the performer's preset expressions and the voice training set to obtain the performer's preset voices, and uses the expression and voice live data to obtain the performer's live expressions and live voices.
As shown in Fig. 2, the behavior acquisition unit also comprises a motion capture unit 121 and an image calculation processing unit 122. The motion capture unit 121 acquires the performer's action training data or action live data, which comprise the position, direction and speed data of the body's movement during the performer's motion.
The image calculation processing unit 122 comprises a calibration module, an image processing module and a coordinate conversion module. The calibration module calibrates the motion capture recognition unit to obtain the relative position data between it and the projected image; the image processing module analyzes consecutive image frames of the performer's motion to obtain the position, direction and speed data of the body's movement in the motion capture recognition unit's image coordinate system; and the coordinate conversion module converts those position, direction and speed data into projected position, direction and speed data in the projection unit's image coordinate system.
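A common way to realize such a camera-to-projector conversion is a planar homography obtained during calibration. The sketch below applies an assumed 3×3 homography, standing in for the calibration module's result, to map a tracked position into projector coordinates; the matrix values are illustrative only.

```python
# Illustrative sketch only: map a tracked point from the motion-capture
# camera's image coordinates into the projector's image coordinates with
# a planar homography H. H would come from the calibration module; the
# values below are made up for the example.
import numpy as np

H = np.array([[1.2,   0.02, 30.0],
              [0.01,  1.18, 12.0],
              [1e-5,  2e-5,  1.0]])

def camera_to_projector(pt, H):
    """Apply the homography in homogeneous coordinates, then dehomogenize."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# A body position tracked at camera pixel (320, 240) lands here on the projector.
print(camera_to_projector((320.0, 240.0), H))

# Direction and speed convert the same way: map two successive positions
# and take their difference in projector coordinates.
p0 = camera_to_projector((320.0, 240.0), H)
p1 = camera_to_projector((325.0, 242.0), H)
print("projected velocity per frame:", p1 - p0)
```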
The behavior learning unit creates the performer's action training set from the performer's action training data and uses it to obtain the performer's preset actions; it uses the action live data to obtain the performer's live actions.
The scenery unit also comprises several projectors and a bright-band elimination module. The scenery unit stores virtual environment images and motion images of objects in the virtual environment corresponding to the performer's performance intentions; according to the performer's performance intention it calls the corresponding virtual environment image and object motion image and, using the coordinate conversion results obtained by the image calculation processing unit, projects them onto the corresponding positions on the stage through the corresponding projectors. The bright-band elimination module eliminates the bright bands produced where the projection ranges of the projectors overlap and blend.
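Bright-band elimination between overlapping projectors is commonly achieved by attenuating each projector across the overlap with complementary ramps so that the summed light output stays constant; the utility model does not specify the method, so the ramp shape and gamma value below are assumptions for the example.

```python
# Illustrative sketch only: complementary blending ramps for the overlap
# of two projectors. Each projector's pixels are attenuated across the
# overlap (with gamma pre-compensation) so the summed light output is
# uniform and no bright band appears.
import numpy as np

def blend_masks(width, overlap, gamma=2.2):
    """Per-column weights for the left and right projector of a two-projector wall."""
    left = np.ones(width)
    right = np.ones(width)
    ramp = np.linspace(1.0, 0.0, overlap)            # fade the left projector out...
    left[width - overlap:] = ramp ** (1.0 / gamma)   # ...with gamma pre-compensation
    right[:overlap] = ramp[::-1] ** (1.0 / gamma)    # fade the right projector in
    return left, right

left, right = blend_masks(width=1920, overlap=200)

# After the projectors' gamma response, the overlap sums to constant light:
light = left[-200:] ** 2.2 + right[:200] ** 2.2
print(np.unique(light.round(6)))  # -> [1.]
```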
The utility model is not limited to the preferred embodiment described above. Any structural change made under the teaching of the utility model, and any technical scheme identical or similar to that of the utility model, falls within its scope of protection.

Claims (8)

1. A stage interaction and integration system combining virtual images with real scenes, characterized by comprising:
a behavior acquisition unit, which acquires training data or live data used to generate the performer's performance behaviors;
a behavior learning unit, which uses a support vector machine classifier to learn the training data and obtain the performer's preset behaviors, and uses the live data to obtain the performer's live behaviors;
a performance intention analysis unit, which obtains the performer's performance intention from the correspondence between the performer's current live behavior and the corresponding preset behavior; and
a scenery unit, which stores a plurality of scenes corresponding to the performer's performance intentions and, according to the performance intention, selects the corresponding scene to present on the stage, each scene comprising the corresponding sound, light, still pictures and dynamic pictures.
2. The system as claimed in claim 1, characterized in that the behavior acquisition unit comprises:
a plurality of lightweight inertial navigation sensing input devices, fixed respectively on the performer's head, trunk, upper limbs and lower limbs, each of which acquires in real time the kinematic posture training data or posture live data of each of the performer's joints; and
a 3D reconstruction unit, which reconstructs the performer's three-dimensional preset model from the posture training data and the performer's three-dimensional live model from the posture live data;
wherein the preset behaviors are preset postures and the live behaviors are live postures, and the behavior learning unit creates the performer's posture training set from the three-dimensional preset model, uses it to obtain the performer's preset postures, and uses the posture live data to obtain the performer's live postures.
3. The system as claimed in claim 1, characterized in that the behavior acquisition unit comprises:
a 3D motion-sensing camera, which acquires the performer's expression and voice training data or expression and voice live data;
wherein the preset behaviors are preset expressions and preset voices, the live behaviors are live expressions and live voices, and the behavior learning unit creates the performer's expression and voice training sets from the expression and voice training data, uses them to obtain the performer's preset expressions and preset voices, and uses the expression and voice live data to obtain the performer's live expressions and live voices.
4. The system as claimed in claim 1, characterized in that the behavior acquisition unit comprises:
a motion capture unit, which acquires the performer's action training data or action live data, comprising the position, direction and speed data of the body's movement during the performer's motion;
wherein the preset behaviors are preset actions and the live behaviors are live actions, and the behavior learning unit creates the performer's action training set from the action training data, uses it to obtain the performer's preset actions, and uses the action live data to obtain the performer's live actions.
5. The system as claimed in claim 2, characterized in that the behavior acquisition unit is also provided with a correction module, which corrects the other lightweight inertial navigation sensing input devices using the known spatial relationship between one lightweight inertial navigation sensing input device and the corresponding body segment, together with a human biomechanical model.
6. The system as claimed in claim 3, characterized in that the behavior acquisition unit is also provided with an expression processing unit, comprising:
a calibration module, which acquires the speckle pattern data of each reference plane, the reference planes being virtual vertical planes arranged on the stage successively from near to far along the lens direction of the 3D motion-sensing camera;
a storage module, which stores the speckle pattern data of each reference plane; and
an interpolation calculation module, which obtains the performer's training expressions from the cross-correlation of the expression training data with the speckle pattern data of each reference plane, and obtains the live expressions from the expression live data.
7. The system as claimed in claim 4, characterized in that the behavior acquisition unit is also provided with an image calculation processing unit, comprising:
a calibration module, which calibrates the motion capture recognition unit to obtain the relative position data between it and the projected image;
an image processing module, which analyzes consecutive image frames of the performer's motion to obtain the position, direction and speed data of the body's movement in the motion capture recognition unit's image coordinate system; and
a coordinate conversion module, which converts the position, direction and speed data from the motion capture recognition unit's image coordinate system into projected position, direction and speed data in the projection unit's image coordinate system.
8. The system as claimed in claim 7, characterized in that
the scenery unit also comprises several projectors and a bright-band elimination module; the scenery unit stores virtual environment images and motion images of objects in the virtual environment corresponding to the performer's performance intentions, calls the corresponding virtual environment image and object motion image according to the performer's performance intention and, using the coordinate conversion results obtained by the image calculation processing unit, projects them onto the corresponding positions on the stage through the corresponding projectors; the bright-band elimination module eliminates the bright bands produced where the projection ranges of the projectors overlap and blend.
CN201320751532.XU 2013-11-25 2013-11-25 Virtual image and real scene combined stage interaction integrating system Expired - Fee Related CN203630822U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201320751532.XU CN203630822U (en) 2013-11-25 2013-11-25 Virtual image and real scene combined stage interaction integrating system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201320751532.XU CN203630822U (en) 2013-11-25 2013-11-25 Virtual image and real scene combined stage interaction integrating system

Publications (1)

Publication Number Publication Date
CN203630822U true CN203630822U (en) 2014-06-04

Family

ID=50817279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201320751532.XU Expired - Fee Related CN203630822U (en) 2013-11-25 2013-11-25 Virtual image and real scene combined stage interaction integrating system

Country Status (1)

Country Link
CN (1) CN203630822U (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103578135A (en) * 2013-11-25 2014-02-12 恒德数字舞美科技有限公司 Virtual image and real scene combined stage interaction integrating system and realizing method thereof
CN103578135B (en) * 2013-11-25 2017-01-04 恒德数字舞美科技有限公司 The mutual integrated system of stage that virtual image combines with real scene and implementation method
CN104102146B (en) * 2014-07-08 2016-09-07 苏州乐聚一堂电子科技有限公司 Virtual accompanying dancer's general-purpose control system
CN105031942A (en) * 2015-07-22 2015-11-11 浙江大丰实业股份有限公司 Stage data extraction and transmission control system
CN105031943A (en) * 2015-08-04 2015-11-11 浙江大丰实业股份有限公司 Stage self-adaption data extraction system
CN105045242A (en) * 2015-08-04 2015-11-11 浙江大丰实业股份有限公司 Stage self-adaptive multi-dimensional transmission control system
CN109841196A (en) * 2018-12-24 2019-06-04 武汉西山艺创文化有限公司 A kind of virtual idol presentation system based on transparent liquid crystal display
CN109841196B (en) * 2018-12-24 2021-09-28 武汉西山艺创文化有限公司 Virtual idol broadcasting system based on transparent liquid crystal display


Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 401, No. 578 Hao Shan Road, Qingdao Economic and Technological Development Zone, Shandong, 266000

Patentee after: Hengde Technology Co. Ltd.

Address before: Room 401, No. 578 Hao Shan Road, Qingdao Economic and Technological Development Zone, Shandong, 266000

Patentee before: HENGDE DIGITAL WUMEI TECHNOLOGY CO., LTD.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140604

Termination date: 20181125