WO2022055419A2 - Character display method and apparatus, electronic device, and storage medium - Google Patents
Character display method and apparatus, electronic device, and storage medium
- Publication number
- WO2022055419A2 (PCT/SG2021/050491)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- text
- special effect
- display
- dynamic special
- processing
- Prior art date
Classifications
- G06F40/109: Font handling; Temporal or kinetic typography
- G06F18/251: Fusion techniques of input or preprocessed data
- G06T11/60: Editing figures and text; Combining figures or text
- G06T13/20: 3D [Three Dimensional] animation
- G06T15/005: General purpose rendering architectures
- G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/006: Mixed reality
- G06T2200/24: Indexing scheme for image data processing or generation, in general, involving graphical user interfaces [GUIs]
- G06T2207/10016: Video; Image sequence
- G06T2207/20092: Interactive image processing based on input by user
- G06T2207/20104: Interactive definition of region of interest [ROI]
- G06T2207/20221: Image fusion; Image merging
- G06T2219/004: Annotating, labelling
- G06T2219/024: Multi-user, collaborative environment
Definitions
- An embodiment of the present disclosure provides a text display method, including: acquiring a real-scene shot image; acquiring text to be displayed; calling text motion trajectory data to perform dynamic special effect processing on the text to be displayed; and displaying the text after the dynamic special effect processing on the real-scene shot image.
- An embodiment of the present disclosure provides a text display device, including: a communication module, configured to acquire a real-scene shot image and the text to be displayed; a processing module, configured to call text motion trajectory data and perform dynamic special effect processing on the text to be displayed; and a display module, configured to display the text after the dynamic special effect processing on the real-scene shot image.
- An embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the text display method described in the first aspect above and its various possible designs.
- An embodiment of the present disclosure provides a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the text display method described in the first aspect above and its various possible designs is implemented.
- In the text display method, device, electronic device, and storage medium provided by the embodiments of the present disclosure, a real-scene shot image and the text to be displayed are acquired, text motion trajectory data is called to perform dynamic special effect processing on the text to be displayed, and the text after the dynamic special effect processing is displayed on the real-scene shot image. This realizes the function of displaying text with dynamic special effects in a virtual augmented reality display, makes the display effect of the text more vivid, allows the display method to be widely used in various application scenarios, and brings users a better visual sensory experience.
- FIG. 1 is a schematic diagram of a network architecture on which the present disclosure is based;
- FIG. 2 is a schematic diagram of a first scenario on which the text display method is based;
- FIG. 3 is a schematic diagram of another network architecture on which the present disclosure is based;
- FIG. 4 is a schematic diagram of a second scenario on which the text display method is based;
- FIG. 5 is a schematic flowchart of a text display method provided by an embodiment of the present disclosure;
- FIG. 6 is a schematic diagram of a first interface of the text display method provided by the present disclosure;
- FIG. 7 is a schematic diagram of a second interface of the text display method provided by the present disclosure;
- FIG. 8 is a schematic diagram of a third interface of the text display method provided by the present disclosure;
- FIG. 9 is a schematic diagram of a fourth interface of the text display method provided by the present disclosure;
- FIG. 10 is a structural block diagram of a text display device provided by an embodiment of the present disclosure;
- FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS: To make the purposes, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present disclosure.
- Augmented Reality (AR) display technology skillfully integrates virtual information with the real world. Incorporating more special effects into virtual augmented reality display technology allows application scenarios to be presented better.
- The application of text in virtual augmented reality display is an important part of virtual augmented reality display technology. In the related art, a static text display mode is generally adopted, which makes the display effect of the text relatively rigid and the display mode relatively monotonous.
- To improve on this, an embodiment of the present disclosure provides a text display method.
- FIG. 1 is a schematic diagram of a network architecture on which the disclosure is based.
- The network architecture shown in FIG. 1 may specifically include a terminal 1, a text display device 2, and a server 3.
- The terminal 1 may be a hardware device such as a user's mobile phone, a smart home device, or a tablet computer, and the text display device 2 may be a client integrated or installed on the terminal 1.
- The server 3 may be a server cluster deployed in the cloud that stores various types of text motion trajectory data.
- The text display device 2 can run on the terminal 1 and provide the terminal 1 with a display page, so that the terminal 1 presents the page provided by the text display device 2 to the user via its screen or display component.
- The text display device 2 can also use the network components of the terminal 1 to interact with the server 3 and acquire the text motion trajectory data pre-stored in the server 3.
- The terminal 1 may also cache various types of text motion trajectory data from the server 3 for ease of use.
- Alternatively, the terminal 1 may itself store various types of text motion trajectory data; by calling the text motion trajectory data, dynamic special effect processing is performed on the text to be displayed, and the text after the dynamic special effect processing is displayed on the real-scene shot image captured by the terminal 1.
- the architecture shown in FIG. 1 is applicable to various application scenarios
- FIG. 2 is a schematic diagram of a first scenario based on which the text display method is based.
- The user can activate the virtual augmented reality display function provided by the text display device and, through the terminal 1, send an operation instruction for displaying the text to be displayed to the text display device 2, so that the text display device 2 interacts with the server 3 to obtain the corresponding text motion trajectory data.
- The text display device 2 uses the text motion trajectory data to process the text to be displayed, and displays the processed text on the real-scene shot image obtained by the terminal.
- The user can perform operations such as screen recording on the processed real-scene shot image to obtain an image work with a personal style, and can also use the processed real-scene shot image as an illustration of a navigation scene displayed in virtual augmented reality, an illustration of a tourist scene displayed in virtual augmented reality, and the like.
- FIG. 3 is a schematic diagram of another network architecture on which the present disclosure is based. The network architecture shown in FIG. 3 may specifically include a plurality of terminals 1, a text display device 2, a server 3, and a photographing system 4. Different from the architecture shown in FIG. 1, the text display device 2 is integrated in the server 3.
- The photographing system 4 can interact with the server 3 to provide real-time captured images for the text display device 2 therein; the text display device 2 processes the real-scene captured images using the text display method provided by the present disclosure and sends the processed images to the plurality of terminals 1 for viewing and acquisition.
- The photographing system 4 may be constituted by a plurality of photographing devices arranged in the same photographing area, with the photographing devices capturing the area from different shooting angles.
- FIG. 4 is a schematic diagram of a second scene on which the text display method is based.
- The multi-angle real-scene shot images captured by the photographing system are processed by the text display device in the server and sent to the terminals 1 at different locations for viewing by each user.
- the following will take the structure shown in FIG. 1 as an example to further describe the text display method provided by the present disclosure.
- FIG. 5 is a schematic flowchart of a method for displaying text according to an embodiment of the present disclosure.
- The text display method provided by the embodiment of the present disclosure includes the following steps. Step 101: Acquire a real-scene shot image. Step 102: Acquire the text to be displayed.
- The text to be displayed may be acquired in various ways. In an optional implementation manner, the text to be displayed is directly acquired by receiving text information input by the user.
- In another optional implementation manner, the display device may acquire a voice input by the user and perform voice conversion processing on the voice to obtain the text to be displayed.
- In yet another optional implementation manner, the display device may acquire body information input by the user, determine the text corresponding to the body information according to a preset mapping relationship between body information and text, and use the text corresponding to the body information as the text to be displayed.
- the body information includes one or more types of information among sign language information, gesture information, and facial expression information.
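- To make the three acquisition paths concrete, the following is a minimal sketch in Python. The patent does not name any speech-recognition engine or gesture recognizer, so speech_to_text and recognize_gesture are hypothetical placeholders, and the contents of the mapping table are invented examples of a preset mapping relationship between body information and text.

```python
# Minimal sketch only: speech_to_text() and recognize_gesture() are
# hypothetical placeholders, and the mapping entries are invented.

GESTURE_TEXT_MAP = {
    "thumbs_up": "GOOD",   # preset mapping between body information and text
    "wave": "HELLO",
    "heart_hands": "LOVE",
}

def speech_to_text(audio: bytes) -> str:
    # Placeholder for a real speech-recognition call.
    return "HELLO"

def recognize_gesture(frame) -> str:
    # Placeholder for a sign-language / gesture / facial-expression classifier.
    return "thumbs_up"

def acquire_display_text(source: str, payload) -> str:
    """Return the text to be displayed from any of the three input paths."""
    if source == "keyboard":      # direct text input
        return payload
    if source == "voice":         # voice converted to text
        return speech_to_text(payload)
    if source == "gesture":       # body information mapped to text
        return GESTURE_TEXT_MAP.get(recognize_gesture(payload), "")
    raise ValueError(f"unknown text source: {source}")
```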
- Step 103: Call the text motion trajectory data and perform dynamic special effect processing on the text to be displayed.
- Step 104: Display the text after the dynamic special effect processing on the real-scene shot image.
- The execution body of the method provided in this example is the aforementioned text display device. As described above, the text display device can be installed in a terminal or in a server; whichever device it is installed in, it can receive user-triggered operations through that device, perform the corresponding processing, and send the processing result to the terminal for display.
- The solution of the present disclosure includes the steps of calling the text motion trajectory data, performing dynamic special effect processing on the text to be displayed, and displaying the text after the dynamic special effect processing on the real-scene shot image.
- FIG. 6 is a schematic diagram of a first interface of the method for displaying text provided by the present disclosure
- FIG. 7 is a schematic diagram of a second interface of the method for displaying text provided by the present disclosure.
- The text display device first interacts with the shooting component of the terminal, or with the photographing system, to obtain a real-scene shot image. The display device then acquires the text to be displayed, calls the text motion trajectory data, and performs dynamic special effect processing on the text to be displayed, as shown in FIG. 6 and FIG. 7.
- Step 103 can be implemented in the following manner. Step 1031: Receive a user's selection instruction for a dynamic special effect type. Step 1032: According to the selected dynamic special effect type, call the corresponding type of text motion trajectory data from the text motion trajectory database. Here, the dynamic special effect type refers to a text effect provided by the text display device for the user to select; different dynamic special effect types apply different dynamic special effect processing to the text, so that it presents different dynamic motion trajectories and different rendering results.
- The text motion trajectory data corresponding to different dynamic special effect types is pre-designed by developers and stored in the text motion trajectory database of the server for the text display device to call at any time.
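- As an illustration of how a selected dynamic special effect type keys into the pre-designed trajectory store, here is a hedged sketch; the effect names and coordinate values are invented, since the patent only specifies that trajectories are pre-designed and stored in a database on the server.

```python
# Invented example of a trajectory store keyed by dynamic special effect type.
# Each entry holds, per motion frame, one (x, y, z) position per character.

N_FRAMES = 30

TRAJECTORY_DB = {
    "rise": [
        [(0.0, 0.01 * k, 0.0), (0.2, 0.01 * k, 0.0)]  # two characters drift upward
        for k in range(N_FRAMES)
    ],
    "spread": [
        [(-0.01 * k, 0.0, 0.0), (0.2 + 0.01 * k, 0.0, 0.0)]  # characters move apart
        for k in range(N_FRAMES)
    ],
}

def call_trajectory_data(effect_type: str):
    """Fetch the pre-designed text motion trajectory data for the selected type."""
    return TRAJECTORY_DB[effect_type]
```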
- Step 1033: Generate a three-dimensional text model of the text to be displayed according to the selected dynamic special effect type, and obtain modeling data of the text.
- Step 1034: Process the modeling data of the text by using the obtained text motion trajectory data to obtain the text after dynamic special effect processing. Specifically, a three-dimensional text model of the text to be displayed is generated according to the selected dynamic special effect type, and modeling data of the text is obtained; the modeling data may specifically consist of the three-dimensional text coordinates of the text. Take FIG. 7 as an example, where the text to be displayed consists of two characters, referred to below as "you" and "good".
- The text motion trajectory data includes text position coordinates under different motion frames.
- For example, the text motion trajectory data includes the text position coordinates under each motion frame. In the first motion frame, the text position coordinates are [(x11, y11, z11), (x12, y12, z12)], where (x11, y11, z11) represents the position of "you" in the first motion frame and (x12, y12, z12) represents the position of "good" in the first motion frame.
- In the N-th motion frame, the text position coordinates are [(xN1, yN1, zN1), (xN2, yN2, zN2)], where (xN1, yN1, zN1) represents the position of "you" and (xN2, yN2, zN2) represents the position of "good" in the N-th motion frame. That is, for "you", the set of text position coordinates (x11, y11, z11), (x21, y21, z21), ..., (xN1, yN1, zN1) over the N motion frames forms its motion trajectory over the duration corresponding to those frames; similarly, for "good", the set of text position coordinates (x12, y12, z12), (x22, y22, z22), ..., (xN2, yN2, zN2) over the N motion frames forms its motion trajectory over the same duration.
- The text position coordinates represent the position coordinates of each individual character. In other words, under the same dynamic special effect type, the text position coordinates in the text motion trajectory data corresponding to texts with different numbers of characters also differ.
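- The per-frame layout described above can be pictured as a small data structure: indexing first by motion frame and then by character recovers exactly the [(xk1, yk1, zk1), (xk2, yk2, zk2)] notation, and slicing across frames yields one character's motion trajectory. A minimal sketch, with invented coordinate values:

```python
# trajectory[k][i] is the position of character i in motion frame k, matching
# the [(xk1, yk1, zk1), (xk2, yk2, zk2)] notation above (values invented).

trajectory = [
    [(0.0, 0.01 * k, 0.0), (0.2, 0.01 * k, 0.0)]  # frame k: "you", then "good"
    for k in range(30)                            # N = 30 motion frames
]

# Slicing across frames gives each character's own motion trajectory:
you_trajectory = [frame[0] for frame in trajectory]   # (x11,y11,z11)..(xN1,yN1,zN1)
good_trajectory = [frame[1] for frame in trajectory]  # (x12,y12,z12)..(xN2,yN2,zN2)
```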
- Step 1034: Process the modeling data of the text by using the obtained text motion trajectory data to obtain the text after dynamic special effect processing. Specifically, in this step the display device uses a preset coordinate mapping script to map the three-dimensional text coordinates of the text into the coordinate system on which the text motion trajectory data is based, ensuring that both use coordinates under the same coordinate system.
- The display device then performs coordinate alignment processing between the mapped three-dimensional text coordinates and the text position coordinates under each motion frame in the text motion trajectory data, so that the three-dimensional coordinates of the center point of each character are aligned to the corresponding text position coordinates; finally, the aligned text is taken as the text after dynamic special effect processing.
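- A sketch of the two operations just described, coordinate mapping followed by center-point alignment. The 4x4 homogeneous transform stands in for the patent's "preset coordinate mapping script", which is not further specified:

```python
import numpy as np

def map_to_trajectory_space(points: np.ndarray, transform: np.ndarray) -> np.ndarray:
    """Map (M, 3) three-dimensional text coordinates into the coordinate system
    of the trajectory data; `transform` is a 4x4 homogeneous matrix standing in
    for the unspecified 'preset coordinate mapping script'."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ transform.T)[:, :3]

def align_char_to_target(char_vertices: np.ndarray, target_position) -> np.ndarray:
    """Translate one character's vertices so its center point coincides with the
    text position coordinate prescribed for the current motion frame."""
    center = char_vertices.mean(axis=0)
    return char_vertices + (np.asarray(target_position) - center)

# Usage per motion frame k: first map, then align character i to trajectory[k][i].
```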
- In an optional implementation, the dynamic special effect processing of the text further includes performing special effect processing on the glyph shape of the text. Depending on the dynamic special effect type selected by the user, different special effect processing algorithms are used to process the glyph shape of the text.
- For example, the three-dimensional text model can also be processed with special effects, such as artistic-font processing, so that the 3D model of the text takes on a certain artistic style.
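- As one invented example of such glyph-shape special effect processing, a per-vertex warp can give the 3D text model a stylized look; the specific deformation below is an assumption, not the patent's algorithm:

```python
import numpy as np

def stylize_glyph(vertices: np.ndarray, amplitude: float = 0.05) -> np.ndarray:
    """Apply an invented artistic-style warp: a sinusoidal displacement along y
    driven by x, giving the 3D text model a wavy appearance."""
    out = vertices.copy()
    out[:, 1] += amplitude * np.sin(4.0 * np.pi * out[:, 0])
    return out
```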
- After the above processing, a step of displaying the text after the dynamic special effect processing on the real-scene shot image is also included.
- This step can be realized by using a virtual reality enhancement algorithm, such as a SLAM-based fusion algorithm: the text after the dynamic special effect processing is fused with the real-scene shot image, and the fused real-scene shot image is displayed.
- The SLAM fusion algorithm is a known algorithm model that fuses virtual information with real-scene images for display. Using the fusion algorithm together with the aligned three-dimensional text coordinates under each motion frame, the text fusion display is achieved.
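- The patent treats the SLAM fusion algorithm as a known building block, so the sketch below only illustrates the final compositing idea: given a camera pose from a tracker and the camera intrinsics, the aligned 3D text coordinates for the current motion frame are projected into the live frame and drawn over it. The pose and intrinsics are assumed inputs, not something the patent prescribes:

```python
import numpy as np

def project_points(points_world: np.ndarray, pose: np.ndarray,
                   intrinsics: np.ndarray) -> np.ndarray:
    """Project aligned 3D text points into the current frame. `pose` is a 4x4
    world-to-camera matrix assumed to come from a SLAM tracker; `intrinsics`
    is the 3x3 camera matrix."""
    homogeneous = np.hstack([points_world, np.ones((len(points_world), 1))])
    cam = (homogeneous @ pose.T)[:, :3]          # camera-space coordinates
    uv = cam[:, :2] / cam[:, 2:3]                # perspective divide
    return uv @ intrinsics[:2, :2].T + intrinsics[:2, 2]  # pixel coordinates

def composite(frame: np.ndarray, pixels: np.ndarray,
              color=(255, 255, 255)) -> np.ndarray:
    """Naively splat the projected text points onto the captured frame; a real
    renderer would rasterize the full 3D text model instead."""
    height, width = frame.shape[:2]
    for u, v in pixels.astype(int):
        if 0 <= v < height and 0 <= u < width:
            frame[v, u] = color
    return frame
```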
- the display method further includes a function of selecting the special effect display area.
- Specifically, displaying the text after the dynamic special effect processing on the real-scene shot image further includes: determining a special effect display area in the real-scene shot image according to the selected dynamic special effect type; and displaying the text after the dynamic special effect processing on the special effect display area of the real-scene shot image.
- FIG. 8 is a schematic diagram of a third interface of the text display method provided by the present disclosure. As shown in FIG. 8 , in this scene, the display device can be enabled by turning on the front camera of the terminal.
- When the dynamic special effect type selected by the user is type A, where type A is a text-glasses special effect, the display device determines the special effect display area in which the text is to be generated in the real-scene shot image, such as the area where the eyes are located on the face in the image, calls the text motion trajectory data corresponding to this special effect type to perform dynamic special effect processing on the text (such as "GOOD GIRL" input by the user), and displays the processed text in the special effect display area obtained above, such as the area where the eyes are located.
- In an optional embodiment, a related virtual object may also be added to the special effect display area, and the processed text may be displayed on the virtual object, as in the example of FIG. 8.
- FIG. 9 is a schematic diagram of a fourth interface of the text display method provided by the present disclosure.
- In the process of determining the special effect display area, the display device performs recognition processing on the face in the current real-scene shot image based on the dynamic special effect type, so as to determine the position area of the eyes in the face image, and then determines the special effect display area according to the area where the eyes are located.
- When the face moves, the special effect display area changes accordingly, yielding the schematic diagram shown in FIG. 9.
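- A sketch of deriving the eye-area special effect display region from face landmarks. The patent does not prescribe a detector; the 68-point landmark convention used here is an assumption, and the box is recomputed per frame so that the area follows the face as it moves:

```python
import numpy as np

def eye_display_area(landmarks: np.ndarray, margin: float = 1.2):
    """Derive the special effect display area from facial landmarks given as an
    (N, 2) pixel array; indices 36-47 are the eye points in the common
    68-landmark convention (an assumption, since no detector is prescribed)."""
    eyes = landmarks[36:48]
    center_x, center_y = eyes.mean(axis=0)
    width, height = (eyes.max(axis=0) - eyes.min(axis=0)) * margin
    # Recomputed every frame, so the area follows the face as it moves.
    return (center_x - width / 2, center_y - height / 2, width, height)
```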
- In this embodiment, the text display device may acquire the text to be displayed in various ways. In an optional implementation manner, the text to be displayed is directly acquired by receiving text information input by the user. In another optional implementation manner, the display device may acquire a voice input by the user and perform voice conversion processing on the voice to obtain the text to be displayed. In yet another optional implementation manner, the display device may acquire body information input by the user, determine the text corresponding to the body information according to a preset mapping relationship between body information and text, and use the text corresponding to the body information as the text to be displayed.
- the body information includes one or more types of information among sign language information, gesture information, and facial expression information.
- With the text display method provided by the embodiment of the present disclosure, a real-scene shot image is acquired, text motion trajectory data is called to perform dynamic special effect processing on the text to be displayed, and the text after the dynamic special effect processing is displayed on the real-scene shot image, thereby realizing the function of displaying text with dynamic special effects in a virtual augmented reality display. This makes the display effect of the text more vivid, and the display method can be widely used in various application scenarios to bring users a better visual sensory experience.
- FIG. 10 is a structural block diagram of a text display device provided by an embodiment of the present disclosure.
- the text display device includes: a communication module 10 , a processing module 20 and a display module 30 .
- The communication module 10 is used to obtain the real-scene shot image and the text to be displayed;
- the processing module 20 is used to call the text motion trajectory data to perform dynamic special effects processing on the text to be displayed;
- The display module 30 is used to display the text after the dynamic special effect processing on the real-scene shot image.
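- Structurally, the device can be pictured as the following skeleton; the module responsibilities mirror the description above, while the method bodies are placeholders rather than a prescribed implementation:

```python
# Structural skeleton of the three-module device; method bodies are placeholders.

class CommunicationModule:
    def acquire_inputs(self):
        frame = ...  # fetch the real-scene shot image (camera or photographing system)
        text = ...   # fetch the text to be displayed (typed, voice, or body info)
        return frame, text

class ProcessingModule:
    def apply_dynamic_effect(self, text, trajectory_data):
        # Build the 3D text model, then map and align it per motion frame.
        return text

class DisplayModule:
    def show(self, frame, effect_text):
        # Fuse the processed text with the live frame and present the result.
        pass

class TextDisplayDevice:
    def __init__(self):
        self.communication = CommunicationModule()
        self.processing = ProcessingModule()
        self.display = DisplayModule()
```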
- Optionally, the communication module 10 is configured to receive a user's selection instruction for a dynamic special effect type; the processing module 20 is further configured to determine a special effect display area in the real-scene shot image according to the selected dynamic special effect type; and the display module 30 is configured to display the text after the dynamic special effect processing on the special effect display area of the real-scene shot image.
- Optionally, the processing module 20 is further configured to perform target recognition processing on the real-scene shot image according to the selected dynamic special effect type, determine the image area where the target to be recognized is located in the real-scene shot image, and determine the special effect display area according to that image area.
- Optionally, the communication module 10 is configured to receive a user's selection instruction for a dynamic special effect type; the processing module 20 is specifically configured to call the corresponding type of text motion trajectory data from the text motion trajectory database according to the selected dynamic special effect type, generate a three-dimensional text model of the text to be displayed according to the selected dynamic special effect type to obtain modeling data of the text, and process the modeling data of the text by using the text motion trajectory data to obtain the text after dynamic special effect processing.
- Optionally, the modeling data of the text includes the three-dimensional text coordinates of the text, and the text motion trajectory data includes the text position coordinates under different motion frames; the processing module 20 is specifically configured to use a preset coordinate mapping script to map the three-dimensional text coordinates of the text into the coordinate system on which the text motion trajectory data is based, perform coordinate alignment processing between the mapped three-dimensional text coordinates and the text position coordinates under each motion frame in the text motion trajectory data, and use the aligned text as the text after dynamic special effect processing.
- Optionally, the processing module 20 is further configured to perform special effect processing on the glyph shape of the text.
- Optionally, the display module 30 is specifically configured to, based on augmented reality display technology, fuse the text after the dynamic special effect processing with the real-scene shot image, and display the fused real-scene shot image.
- the communication module 10 is further configured to acquire the voice input by the user, perform voice conversion processing on the voice, and obtain the text to be displayed.
- Optionally, the communication module 10 is further configured to acquire body information input by the user, determine the text corresponding to the body information according to the preset mapping relationship between body information and text, and use the text corresponding to the body information as the text to be displayed.
- the body information includes one or more types of information among sign language information, gesture information, and facial expression information.
- With the text display device provided by the embodiment of the present disclosure, a real-scene shot image and the text to be displayed are acquired, text motion trajectory data is called to perform dynamic special effect processing on the text to be displayed, and the text after the dynamic special effect processing is displayed on the real-scene shot image, thereby realizing the function of displaying text with dynamic special effects in a virtual augmented reality display. This makes the display effect of the text more vivid and brings users a better visual sensory experience.
- The electronic device provided in this embodiment can be used to implement the technical solutions of the foregoing method embodiments; its implementation principle and technical effect are similar and are not described here again. Referring to FIG. 11, it shows a schematic structural diagram of an electronic device 900 suitable for implementing the embodiments of the present disclosure, which may be a terminal device or a server.
- The terminal device may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP), and an in-vehicle terminal (e.g., an in-vehicle navigation terminal), as well as stationary terminals such as a digital TV and a desktop computer.
- As shown in FIG. 11, the electronic device 900 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 901, which can execute various appropriate actions and processes based on a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data necessary for the operation of the electronic device 900.
- The processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904.
- An input/output (I/O) interface 905 is also connected to the bus 904.
- Generally, the following devices can be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 907 including, for example, a screen, speakers, and vibrators; storage devices 908 including, for example, magnetic tapes and hard drives; and a communication device 909.
- Although FIG. 11 shows the electronic device 900 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
- the computer program may be downloaded and installed from the network via the communication device 909 , or from the storage device 908 , or from the ROM 902 .
- When the computer program is executed by the processing device 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- The computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- the program code embodied on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the above.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
- The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to execute the methods shown in the above embodiments.
- Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (e.g., through the Internet using an Internet service provider).
- The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure.
- In this regard, each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by combinations of dedicated hardware and computer instructions.
- the units involved in the embodiments of the present disclosure may be implemented in a software manner, and may also be implemented in a hardware manner.
- The name of a unit does not constitute a limitation on the unit itself in some cases; for example, the first obtaining unit may also be described as "a unit for obtaining at least two Internet Protocol addresses".
- the functions described herein above may be performed, at least in part, by one or more hardware logic components.
- Exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- In a first aspect, according to one or more embodiments of the present disclosure, a text display method includes: acquiring a real-scene shot image; acquiring text to be displayed; calling text motion trajectory data to perform dynamic special effect processing on the text to be displayed; and displaying the text after the dynamic special effect processing on the real-scene shot image.
- Optionally, displaying the text after the dynamic special effect processing on the real-scene shot image includes: receiving a user's selection instruction for a dynamic special effect type; determining a special effect display area in the real-scene shot image according to the selected dynamic special effect type; and displaying the text after the dynamic special effect processing on the special effect display area of the real-scene shot image.
- Optionally, determining the special effect display area in the real-scene shot image according to the selected dynamic special effect type includes: performing target recognition processing on the real-scene shot image according to the selected dynamic special effect type to determine the image area where the target to be recognized is located in the real-scene shot image; and determining the special effect display area according to the image area where the target to be recognized is located.
- Optionally, calling the text motion trajectory data and performing dynamic special effect processing on the text to be displayed includes: receiving a user's selection instruction for a dynamic special effect type; calling the corresponding type of text motion trajectory data from the text motion trajectory database according to the selected dynamic special effect type; generating a three-dimensional text model of the text to be displayed according to the selected dynamic special effect type and obtaining modeling data of the text; and processing the modeling data of the text by using the text motion trajectory data to obtain the text after the dynamic special effect processing.
- Optionally, the modeling data of the text includes the three-dimensional text coordinates of the text, and the text motion trajectory data includes the text position coordinates of the motion trajectory in different motion frames; processing the modeling data of the text by using the text motion trajectory data to obtain the text after dynamic special effect processing includes: using a preset coordinate mapping script to map the three-dimensional text coordinates of the text into the coordinate system on which the text motion trajectory data is based; performing coordinate alignment processing between the mapped three-dimensional text coordinates and the text position coordinates under each motion frame in the text motion trajectory data; and using the aligned text as the text after the dynamic special effect processing.
- the dynamic special effect processing further includes: performing special effect processing on the text shape of the text.
- Optionally, displaying the text after the dynamic special effect processing on the real-scene shot image includes: based on augmented reality display technology, fusing the text after the dynamic special effect processing with the real-scene shot image, and displaying the fused real-scene shot image.
- the method further includes: acquiring the voice input by the user, and performing voice conversion processing on the voice to obtain the text to be displayed.
- Optionally, the method further includes: acquiring body information input by the user, determining the text corresponding to the body information according to a preset mapping relationship between body information and text, and using the text corresponding to the body information as the text to be displayed.
- In a second aspect, according to one or more embodiments of the present disclosure, a text display device includes: a communication module for acquiring a real-scene shot image and the text to be displayed; a processing module for calling text motion trajectory data and performing dynamic special effect processing on the text to be displayed; and a display module configured to display the text after the dynamic special effect processing on the real-scene shot image.
- the communication module is used to receive a user's selection instruction for the type of dynamic special effect; the processing module is further used to determine the special effect display area in the real-life shot image according to the selected type of dynamic special effect; the display module is used to display The text processed by the dynamic special effect is displayed on the special effect display area of the real-life shot image.
- Optionally, the processing module is further configured to perform target recognition processing on the real-scene shot image according to the selected dynamic special effect type, determine the image area where the target to be recognized is located in the real-scene shot image, and determine the special effect display area according to that image area.
- Optionally, the communication module is used to receive a user's selection instruction for a dynamic special effect type; the processing module is specifically used to call the corresponding type of text motion trajectory data from the text motion trajectory database according to the selected dynamic special effect type, generate a three-dimensional text model of the text to be displayed according to the selected dynamic special effect type to obtain modeling data of the text, and process the modeling data of the text by using the text motion trajectory data to obtain the text after the dynamic special effect processing.
- Optionally, the modeling data of the text includes the three-dimensional text coordinates of the text, and the text motion trajectory data includes the text position coordinates under different motion frames; the processing module is specifically configured to use a preset coordinate mapping script to map the three-dimensional text coordinates of the text into the coordinate system on which the text motion trajectory data is based, perform coordinate alignment processing between the mapped three-dimensional text coordinates and the text position coordinates under each motion frame in the text motion trajectory data, and use the aligned text as the text after the dynamic special effect processing.
- the processing module is further configured to perform special effect processing on the text shape of the text.
- Optionally, the display module is specifically configured to, based on augmented reality display technology, fuse the text after the dynamic special effect processing with the real-scene shot image and display the fused real-scene shot image.
- the communication module is further configured to obtain the voice input by the user, and perform voice conversion processing on the voice to obtain the text to be displayed.
- Optionally, the communication module is further configured to acquire body information input by the user, determine the text corresponding to the body information according to the preset mapping relationship between body information and text, and use the text corresponding to the body information as the text to be displayed.
- the body information includes one or more types of information among sign language information, gesture information, and facial expression information.
- In a third aspect, according to one or more embodiments of the present disclosure, an electronic device includes: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to execute the text display method described in any preceding item.
- In a fourth aspect, according to one or more embodiments of the present disclosure, a computer-readable storage medium stores computer-executable instructions, and when a processor executes the computer-executable instructions, the text display method described in any preceding item is implemented.
- The above description is merely a preferred embodiment of the present disclosure and an illustration of the technical principles employed.
- Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above-mentioned technical features, and should also cover other technical solutions formed by any combination of the above-mentioned technical features or their equivalent features.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21867239.2A EP4170599A4 (en) | 2020-09-10 | 2021-08-23 | CHARACTER DISPLAY METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIA |
JP2023504123A JP7574400B2 (ja) | 2020-09-10 | 2021-08-23 | Character display method, apparatus, electronic device and storage medium |
US18/060,454 US11836437B2 (en) | 2020-09-10 | 2022-11-30 | Character display method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010948338.5A CN112053450B (zh) | 2020-09-10 | 2020-09-10 | Character display method and apparatus, electronic device, and storage medium |
CN202010948338.5 | 2020-09-10 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/060,454 Continuation US11836437B2 (en) | 2020-09-10 | 2022-11-30 | Character display method and apparatus, electronic device, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2022055419A2 true WO2022055419A2 (zh) | 2022-03-17 |
WO2022055419A3 WO2022055419A3 (zh) | 2022-05-05 |
Family
ID=73610437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/SG2021/050491 WO2022055419A2 (zh) | 2020-09-10 | 2021-08-23 | 文字的显示方法、装置、电子设备及存储介质 |
Country Status (4)
Country | Link |
---|---|
US (1) | US11836437B2 (zh) |
EP (1) | EP4170599A4 (zh) |
CN (1) | CN112053450B (zh) |
WO (1) | WO2022055419A2 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117676227A (zh) * | 2023-12-08 | 2024-03-08 | 腾讯科技(深圳)有限公司 | 数据处理方法及相关设备 |
Family Cites Families (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101075349A (zh) * | 2007-06-22 | 2007-11-21 | 珠海金山软件股份有限公司 | 一种在svg中表达演示动画效果的方法 |
WO2014048497A1 (en) * | 2012-09-28 | 2014-04-03 | Metaio Gmbh | Method of image processing for an augmented reality application |
CN103729878A (zh) | 2013-12-19 | 2014-04-16 | 江苏锐天信息科技有限公司 | 一种基于wpf的三维图形实现方法及三维文字实现方法 |
US10146318B2 (en) * | 2014-06-13 | 2018-12-04 | Thomas Malzbender | Techniques for using gesture recognition to effectuate character selection |
EP3317858B1 (en) * | 2015-06-30 | 2022-07-06 | Magic Leap, Inc. | Technique for more efficiently displaying text in virtual image generation system |
CN105184840A (zh) * | 2015-07-17 | 2015-12-23 | 天脉聚源(北京)科技有限公司 | 动画显示拼字的方法和装置 |
CN106100983A (zh) * | 2016-08-30 | 2016-11-09 | 黄在鑫 | 一种基于增强现实与gps定位技术的移动社交网络系统 |
US10402211B2 (en) * | 2016-10-21 | 2019-09-03 | Inno Stream Technology Co., Ltd. | Method for processing innovation-creativity data information, user equipment and cloud server |
US10914957B1 (en) * | 2017-05-30 | 2021-02-09 | Apple Inc. | Video compression methods and apparatus |
CN107590860A (zh) * | 2017-09-07 | 2018-01-16 | 快创科技(大连)有限公司 | 一种基于ar技术的ar名片数据管理系统 |
CN108337547B (zh) * | 2017-11-27 | 2020-01-14 | 腾讯科技(深圳)有限公司 | 一种文字动画实现方法、装置、终端和存储介质 |
US10565761B2 (en) * | 2017-12-07 | 2020-02-18 | Wayfair Llc | Augmented reality z-stack prioritization |
CN108022306B (zh) * | 2017-12-30 | 2021-09-21 | 华自科技股份有限公司 | 基于增强现实的场景识别方法、装置、存储介质和设备 |
CN110858903B (zh) * | 2018-08-22 | 2022-07-12 | 华为技术有限公司 | 色度块预测方法及装置 |
CN109035421A (zh) * | 2018-08-29 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | 图像处理方法、装置、设备及存储介质 |
CN110874859A (zh) * | 2018-08-30 | 2020-03-10 | 三星电子(中国)研发中心 | 一种生成动画的方法和设备 |
US11080330B2 (en) * | 2019-02-26 | 2021-08-03 | Adobe Inc. | Generation of digital content navigation data |
CN110738737A (zh) * | 2019-10-15 | 2020-01-31 | 北京市商汤科技开发有限公司 | 一种ar场景图像处理方法、装置、电子设备及存储介质 |
CN111274910B (zh) * | 2020-01-16 | 2024-01-30 | 腾讯科技(深圳)有限公司 | 场景互动方法、装置及电子设备 |
CN111311757B (zh) * | 2020-02-14 | 2023-07-18 | 惠州Tcl移动通信有限公司 | 一种场景合成方法、装置、存储介质及移动终端 |
CN111476911B (zh) * | 2020-04-08 | 2023-07-25 | Oppo广东移动通信有限公司 | 虚拟影像实现方法、装置、存储介质与终端设备 |
CN111415422B (zh) * | 2020-04-17 | 2022-03-18 | Oppo广东移动通信有限公司 | 虚拟对象调整方法、装置、存储介质与增强现实设备 |
CN111586426B (zh) * | 2020-04-30 | 2022-08-09 | 广州方硅信息技术有限公司 | 全景直播的信息展示方法、装置、设备及存储介质 |
CN111640193A (zh) * | 2020-06-05 | 2020-09-08 | 浙江商汤科技开发有限公司 | 文字处理方法、装置、计算机设备及存储介质 |
- 2020-09-10: CN application CN202010948338.5A, publication CN112053450B (status: active)
- 2021-08-23: EP application EP21867239.2A, publication EP4170599A4 (status: pending)
- 2021-08-23: PCT application PCT/SG2021/050491, publication WO2022055419A2 (status: application filing)
- 2022-11-30: US application US18/060,454, publication US11836437B2 (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN112053450A (zh) | 2020-12-08 |
WO2022055419A3 (zh) | 2022-05-05 |
US20230177253A1 (en) | 2023-06-08 |
EP4170599A2 (en) | 2023-04-26 |
US11836437B2 (en) | 2023-12-05 |
JP2023542598A (ja) | 2023-10-11 |
EP4170599A4 (en) | 2023-08-30 |
CN112053450B (zh) | 2024-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022166872A1 (zh) | 一种特效展示方法、装置、设备及介质 | |
WO2023051185A1 (zh) | 图像处理方法、装置、电子设备及存储介质 | |
WO2022100735A1 (zh) | 视频处理方法、装置、电子设备及存储介质 | |
WO2022068479A1 (zh) | 图像处理方法、装置、电子设备及计算机可读存储介质 | |
WO2022089178A1 (zh) | 视频处理方法及设备 | |
WO2020248900A1 (zh) | 全景视频的处理方法、装置及存储介质 | |
WO2023179346A1 (zh) | 特效图像处理方法、装置、电子设备及存储介质 | |
WO2022055421A1 (zh) | 基于增强现实的显示方法、设备及存储介质 | |
US12019669B2 (en) | Method, apparatus, device, readable storage medium and product for media content processing | |
WO2023103720A1 (zh) | 视频特效处理方法、装置、电子设备及程序产品 | |
CN111862349A (zh) | 虚拟画笔实现方法、装置和计算机可读存储介质 | |
WO2022132033A1 (zh) | 基于增强现实的显示方法、装置、设备及存储介质 | |
WO2023226628A1 (zh) | 图像展示方法、装置、电子设备及存储介质 | |
WO2022093112A1 (zh) | 图像合成方法、设备及存储介质 | |
WO2023121569A2 (zh) | 粒子特效渲染方法、装置、设备及存储介质 | |
WO2022088908A1 (zh) | 视频播放方法、装置、电子设备及存储介质 | |
US11836437B2 (en) | Character display method and apparatus, electronic device, and storage medium | |
WO2024051540A1 (zh) | 特效处理方法、装置、电子设备及存储介质 | |
WO2024027819A1 (zh) | 图像处理方法、装置、设备及存储介质 | |
WO2022237435A1 (zh) | 更换画面中的背景的方法、设备、存储介质及程序产品 | |
WO2022151687A1 (zh) | 合影图像生成方法、装置、设备、存储介质、计算机程序及产品 | |
JP7214926B1 (ja) | 画像処理方法、装置、電子機器及びコンピュータ読み取り可能な記憶媒体 | |
JP7574400B2 (ja) | 文字の表示方法、装置、電子機器及び記憶媒体 | |
CN112486380A (zh) | 一种显示界面的处理方法、装置、介质和电子设备 | |
CN114339356B (zh) | 视频录制方法、装置、设备及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21867239; Country of ref document: EP; Kind code of ref document: A2 |
| ENP | Entry into the national phase | Ref document number: 2023504123; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 202327004110; Country of ref document: IN |
| ENP | Entry into the national phase | Ref document number: 2021867239; Country of ref document: EP; Effective date: 20230120 |
| NENP | Non-entry into the national phase | Ref country code: DE |