CN111429585A - Image generation method and device, electronic equipment and computer readable storage medium
- Publication number
- CN111429585A CN111429585A CN202010238072.5A CN202010238072A CN111429585A CN 111429585 A CN111429585 A CN 111429585A CN 202010238072 A CN202010238072 A CN 202010238072A CN 111429585 A CN111429585 A CN 111429585A
- Authority
- CN
- China
- Prior art keywords
- information
- image
- map
- rendering
- generation method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Architecture (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiments of the present disclosure disclose an image generation method and apparatus, an electronic device, and a computer-readable storage medium. The image generation method includes the following steps: acquiring a first image from an image source; receiving first information; rendering the first information on a surface of a first object; and loading the first object into the first image to generate a second image. By receiving the first information, rendering it onto the surface of the first object, and then loading the first object into the first image, the method solves the prior-art technical problems that the effect of displaying an object in augmented reality is fixed and the interaction with the object is inflexible.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image generation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of internet technology, the form of the internet keeps changing: from the early PC era to today's smartphones, access has become ever more convenient. In the mobile internet era, mobile terminals represented by smartphones and tablet computers are increasingly widespread, mobile internet applications have gradually permeated people's daily lives, and people can enjoy the convenience of new technologies anytime and anywhere.
Augmented Reality (AR) is one such new technology. It has emerged in recent years alongside the development of virtual reality: a virtual scene is generated by computer rendering, accurately fused with the real world, and the blended scene is finally presented to the user on a video display device, greatly enriching human visual experience. Augmented reality can produce a real-time sense of immersion, letting people interact with a computer in a more natural way; it is a human-computer interaction technology that puts the human user at the center.
In current augmented reality, however, an object effect can be rendered into a real image by a user's operation only after a developer has created it in advance; the effect is therefore fixed and cannot be modified by the user. Moreover, interaction with an object in augmented reality is limited to manipulating the object, which is not flexible enough.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides an image generation method, including:
acquiring a first image from an image source;
receiving first information;
rendering the first information on a surface of a first object;
and loading the first object into the first image to generate a second image.
In a second aspect, an embodiment of the present disclosure provides an image generating apparatus, including:
the first image acquisition module is used for acquiring a first image from an image source;
the first information receiving module is used for receiving first information;
the rendering module is used for rendering the first information on the surface of the first object;
the loading module is used for loading the first object into the first image to generate a second image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the image generation methods of the preceding first aspect.
In a fourth aspect, the present disclosure provides a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions for causing a computer to execute the image generation method of any one of the foregoing first aspects.
The embodiments of the present disclosure disclose an image generation method and apparatus, an electronic device, and a computer-readable storage medium. The image generation method includes the following steps: acquiring a first image from an image source; receiving first information; rendering the first information on a surface of a first object; and loading the first object into the first image to generate a second image. By receiving the first information, rendering it onto the surface of the first object, and then loading the first object into the first image, the method solves the prior-art technical problems that the effect of displaying an object in augmented reality is fixed and the interaction with the object is inflexible.
The foregoing is merely a summary of the present disclosure, provided so that its technical means may be clearly understood; the present disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of an image generation method provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a specific example of step S103 in the image generation method according to the embodiment of the disclosure;
fig. 3 is a schematic diagram of a specific example of step S202 in the image generation method according to the embodiment of the disclosure;
fig. 4 is a schematic diagram of a specific example of step S203 in the image generation method according to the embodiment of the disclosure;
fig. 5 is a schematic diagram of a specific example of step S204 in the image generation method according to the embodiment of the disclosure;
fig. 6 is a schematic structural diagram of an embodiment of an image generating apparatus provided in an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than restrictive; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flowchart of an embodiment of an image generation method provided by an embodiment of the present disclosure. The image generation method provided by this embodiment may be executed by an image generation apparatus, which may be implemented as software or as a combination of software and hardware, and which may be integrated into some device in an image generation system, such as an image generation server or an image generation terminal device. As shown in fig. 1, the method includes the following steps:
Step S101, acquiring a first image from an image source;
in an embodiment of the present disclosure, the first image is an image frame of a video image directly acquired from an image source.
In an embodiment of the present disclosure, the image source may be a local storage space or a network storage space, and acquiring the video image from the image source accordingly means acquiring it from the local storage space or from the network storage space. In that case, the storage address of the video image is obtained first, and the video image is then read from that address. The video image contains a plurality of image frames; it may be a video or a picture with a dynamic effect, and any image with multiple frames may serve as a video image in the present disclosure.
In the present disclosure, the image source may also be an image sensor, and acquiring the video image from the image source then means capturing the video image from the image sensor. An image sensor is any device capable of acquiring images; typical image sensors are video cameras, still cameras, and the like. In this embodiment, the image sensor may be a camera on a mobile terminal, such as the front-facing or rear-facing camera of a smartphone, and the video image acquired by the camera may be displayed directly on the phone's display screen.
The first image may be an image frame of the video image, such as a frame of the video captured directly from the image sensor; a sequence of such first images constitutes the video image.
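By way of illustration only, a minimal sketch of this acquisition step is given below; it is not part of the claimed solution, and Python, the OpenCV library, and camera index 0 as the image sensor are assumptions of the sketch rather than of the disclosure.

```python
# Illustrative sketch: acquiring a "first image" as one frame of a video
# image captured from an image-sensor source (a hypothetical default camera).
import cv2

def acquire_first_image(device_index: int = 0):
    capture = cv2.VideoCapture(device_index)  # open the image sensor
    try:
        ok, frame = capture.read()            # read one image frame
        if not ok:
            raise RuntimeError("no frame available from image source")
        return frame                          # the "first image" (BGR array)
    finally:
        capture.release()
```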
Step S102, receiving first information;
In an embodiment of the present disclosure, the first information is personalized information of the user. Illustratively, the first information includes one or more of: text entered by the user, the user's name, the user's time, and a facial image of the user. It is to be understood that the above first information is only an example and does not limit the present disclosure.
Optionally, the step S102 includes:
displaying a human-computer interface in response to a first trigger condition being met;
and receiving first information input by a user through the human-computer interface.
The first trigger condition may be, for example, that the user opens the augmented reality client, clicks a predetermined position on the screen, or opens an information input interface; further examples are not described herein again. When the first trigger condition is met, a human-computer interface for the first information is displayed; the interface may be a text input box, a picture upload box, a voice prompt box, or the like. Through this interface the user can input the first information in various human-computer interaction modes, and the terminal device receives the first information input by the user through the interface.
Step S103, rendering the first information on the surface of a first object;
In this step, the first information input by the user is rendered onto the surface of the first object, forming a first object that carries the user's personalized features. The first object is a virtual object that needs to be loaded into the first image.
Optionally, in this step, the first information may be directly mixed with the surface color of the first object to obtain a new surface of the first object.
Optionally, the step S103 may further include:
step S201, obtaining a base color map of the first object;
step S202, generating a first information map according to the first information;
step S203, mixing the first information map with the base color map to obtain a rendering map;
and step S204, rendering the surface of an object to be rendered according to the rendering map to obtain the first object.
In this embodiment, the object to be rendered is a 3D object used in augmented reality. The 3D object includes a base color map, which represents the basic color of the 3D object and is rendered first when the surface of the 3D object is rendered. The base color map is a preset map and can be obtained directly from the attributes of the object to be rendered.
A first information map is generated from the first information; this map is what will be rendered onto the surface of the 3D object. Optionally, step S202 includes: generating the first information map according to the size of the base color map and the display position of the first information on the first object. Since the first information map needs to be rendered onto the surface of the 3D object, it can be given the same size as the base color map so that it maps completely onto that surface. In addition, the position of the first information on the surface of the 3D object can be set through the human-computer interface, so the corresponding first information map is generated from the size of the base color map together with the display position of the first information on the first object.
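A hedged sketch of step S202 follows, purely for illustration; Pillow (PIL) and the font file name are assumptions of the sketch, not of the disclosure.

```python
# Illustrative sketch: rasterizing user text (the "first information") into a
# transparent map the same size as the base color map, at a given display position.
from PIL import Image, ImageDraw, ImageFont

def make_info_map(base_map: Image.Image, text: str, position: tuple) -> Image.Image:
    info_map = Image.new("RGBA", base_map.size, (0, 0, 0, 0))  # fully transparent
    draw = ImageDraw.Draw(info_map)
    font = ImageFont.truetype("DejaVuSans.ttf", 48)  # hypothetical font file
    draw.text(position, text, font=font, fill=(255, 255, 255, 255))
    return info_map  # same size as the base color map, ready for blending
```

Because the map shares the base color map's size, the UV mapping that already covers the base color map covers the information map as well.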
In addition, the first information can first be preprocessed to give it any of various effects before being rendered onto the surface of the first object. For example, if the first object is a light board and the first information is text, the text can be processed to appear luminous; rendering the luminous text onto the light board then yields a realistic glowing-text effect. Accordingly, step S202 may optionally further include:
step S301, generating a second information map according to the first information;
step S302, performing first processing on the second information map to obtain the first information map, wherein the first processing gives the first information a preset effect.
In step S302, first processing is performed on the second information map to obtain the first information map. The first processing is a special-effect processing that gives the first information a preset effect. Optionally, the preset effect is associated with an attribute of the first object; if the first object itself emits light, the first information may be processed to have a light-emitting effect.
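One possible form of the first processing is sketched below for illustration only; implementing the glow as a Gaussian-blurred halo, and the blur radius, are assumptions of the sketch rather than of the disclosure.

```python
# Illustrative sketch: giving the second information map a glow effect by
# compositing the sharp text over a blurred, soft halo copy (RGBA maps assumed).
from PIL import Image, ImageFilter

def apply_glow(second_info_map: Image.Image, radius: int = 8) -> Image.Image:
    halo = second_info_map.filter(ImageFilter.GaussianBlur(radius))  # soft halo
    return Image.alpha_composite(halo, second_info_map)  # first information map with glow
```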
In the embodiment of the present disclosure, the first information map should not completely cover the base color map; otherwise the first object would lose its original appearance. Therefore, in step S203, the base color map and the first information map are mixed to obtain the rendering map used in the final rendering. Optionally, step S203 includes:
step S401, acquiring a mixing weight of the first information map;
step S402, mixing the color value of the first information map with the color value of the base color map according to the mixing weight of the first information map to obtain the rendering map.
In this embodiment, the proportions of the first information map's color and the base color map's color in the final rendering map are adjusted by a mixing weight α, where 0 < α < 1. Let x be the color value of a pixel of the first information map, y the color value of the corresponding pixel of the base color map, and z the color value of the resulting pixel of the rendering map; then z = αx + (1 - α)y, so that the rendering map carries the color information of both the first information map and the base color map.
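The blend of steps S401 and S402 can be written directly from this formula; the sketch below is illustrative only, with NumPy and a default α of 0.6 as assumptions of the sketch.

```python
# Illustrative sketch of z = alpha * x + (1 - alpha) * y per pixel,
# where x is the information-map color and y is the base-color-map color.
import numpy as np

def blend_maps(info_map: np.ndarray, base_map: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    assert 0.0 < alpha < 1.0                     # mixing weight constraint
    z = alpha * info_map.astype(np.float32) + (1.0 - alpha) * base_map.astype(np.float32)
    return z.astype(base_map.dtype)              # the rendering map
```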
In step S204, the object to be rendered is rendered according to the generated rendering map to obtain the first object. Optionally, step S204 includes:
step S501, acquiring the correspondence between pixels of the rendering map and pixels of the object to be rendered;
step S502, obtaining the rendering color of each pixel of the object to be rendered from the rendering map according to the correspondence;
step S503, setting the color value of each pixel of the object to be rendered according to its rendering color to obtain the first object.
In step S501, the correspondence is given by the UV coordinates of the pixels of the object to be rendered. For each pixel, its color value is looked up in the rendering map through its UV coordinate, and the pixel is then set to the color value found; when the color values of all pixels have been set, the first object is obtained.
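An illustrative sketch of this lookup is given below; nearest-texel sampling (no bilinear filtering or wrap modes) and NumPy are assumptions of the sketch, not of the disclosure.

```python
# Illustrative sketch of steps S501-S503: UV coordinates of the object's pixels
# index into the rendering map to fetch the color values to assign.
import numpy as np

def sample_render_map(render_map: np.ndarray, uv: np.ndarray) -> np.ndarray:
    h, w = render_map.shape[:2]
    # uv is an (N, 2) array in [0, 1]; round to the nearest texel
    px = np.clip(np.rint(uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(np.rint(uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return render_map[py, px]  # colors for the object's pixels
```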
On the basis of the first object obtained through rendering, other processing, such as rendering of material, may be further performed on the first object, so that the surface of the first object has the effect of other materials, which is not described herein again.
It is understood that the specific manner of rendering the first information on the surface of the first object is only an example, and does not limit the disclosure.
Step S104, loading the first object into the first image to generate a second image;
optionally, the step S104 includes:
acquiring loading attribute information of the first object;
and loading the first object into the first image according to the loading attribute information to generate a second image.
Acquiring the loading attribute information of the first object includes acquiring the loading position and the display angle of the first object. Illustratively, the first object is a 3D object in augmented reality and therefore needs a position and an angle when loaded into a video image. The loading position may be a plane recognized in the video image, such as a wall surface, together with coordinates on that plane, so that the 3D object can be loaded at a predetermined position on the plane whenever the plane appears in the video. The display angle also needs to be acquired; it is related to the shooting angle of the user's terminal device, and when the first object is loaded, the face of the 3D object corresponding to the shooting angle is displayed, simulating the viewing angle of actually observing the 3D object.
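For illustration, the loading attributes can be folded into a single object pose; the sketch below assumes a 4x4 model matrix with a yaw-only display angle, a simplification of the sketch rather than of the disclosure.

```python
# Illustrative sketch: anchoring the first object at a detected plane position
# (loading position) and orienting it by the current display angle.
import numpy as np

def model_matrix(anchor_xyz, yaw_radians: float) -> np.ndarray:
    c, s = np.cos(yaw_radians), np.sin(yaw_radians)
    m = np.eye(4)
    m[:3, :3] = np.array([[c, 0.0, s],
                          [0.0, 1.0, 0.0],
                          [-s, 0.0, c]])  # rotation about the vertical axis
    m[:3, 3] = anchor_xyz                 # loading position on the plane
    return m
```

Re-evaluating this matrix on every frame from the latest camera pose yields the behavior described next, in which the object's angle follows changes in the display angle of the first image.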
Optionally, the method further includes changing the angle at which the first object is loaded into the first image in response to a change in the display angle of the first image. Illustratively, the display angle of the first image is related to the shooting angle of the image sensor; when the user moves the image sensor, the loading angle of the first object changes accordingly, so that the display angle of the first object follows the angle of the image sensor in real time to simulate a real observation angle. It will be understood that the display angle of the first information on the first object changes as well.
With the above scheme, the user can generate a first object with personalized features simply by inputting information, so the effect of the first object can be generated much more flexibly.
The embodiment of the disclosure discloses an image generation method that includes the following steps: acquiring a first image from an image source; receiving first information; rendering the first information on a surface of a first object; and loading the first object into the first image to generate a second image. By receiving the first information, rendering it onto the surface of the first object, and then loading the first object into the first image, the method solves the prior-art technical problems that the effect of displaying an object in augmented reality is fixed and the interaction with the object is inflexible.
Although the steps in the above method embodiments are described in the order given, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in that order; they may also be performed in other orders, such as reversed, in parallel, or interleaved. Moreover, on the basis of the above steps, those skilled in the art may add other steps; these obvious modifications or equivalent substitutions also fall within the protection scope of the present disclosure and are not described herein again.
Fig. 6 is a schematic structural diagram of an embodiment of an image generating apparatus provided in an embodiment of the present disclosure, and as shown in fig. 6, the apparatus 600 includes: a first image acquisition module 601, a first information receiving module 602, a rendering module 603, and a loading module 604. Wherein,
a first image obtaining module 601, configured to obtain a first image from an image source;
a first information receiving module 602, configured to receive first information;
a rendering module 603 configured to render the first information on a surface of a first object;
a loading module 604 for loading the first object into the first image to generate a second image.
Further, the first information receiving module 602 is further configured to:
displaying a human-computer interface in response to the first trigger condition being met;
and receiving first information input by a user through the human-computer interface.
Further, the rendering module 603 further includes:
the base color map obtaining module is used for obtaining a base color map of the first object;
the first information map generating module is used for generating a first information map according to the first information;
the mixing module is used for mixing the first information map with the base color map to obtain a rendering map;
and the rendering submodule is used for rendering the surface of the object to be rendered according to the rendering map to obtain the first object.
Further, the first information map generating module is further configured to:
and generating the first information map according to the size of the base color map and the display position of the first information on the first object.
Further, the first information map generating module is further configured to:
generating a second information map according to the first information;
and performing first processing on the second information map to obtain the first information map, wherein the first processing gives the first information a preset effect.
Further, the mixing module is further configured to:
acquiring the mixing weight of the first information map;
and mixing the color value of the first information map with the color value of the base color map according to the mixing weight of the first information map to obtain the rendering map.
Further, the rendering sub-module is further configured to:
acquiring the correspondence between pixels of the rendering map and pixels of the object to be rendered;
obtaining the rendering color of each pixel of the object to be rendered from the rendering map according to the correspondence;
and setting the color value of each pixel of the object to be rendered according to its rendering color to obtain the first object.
Further, the loading module 604 is further configured to:
acquiring loading attribute information of the first object;
and loading the first object into the first image according to the loading attribute information to generate a second image.
Further, the loading module 604 is further configured to:
acquiring the loading position and the display angle of the first object.
Further, the image generating apparatus 600 further includes:
and the angle adjusting module is used for responding to the change of the display angle of the first image and changing the angle of the first object loaded into the first image.
The apparatus shown in fig. 6 can perform the methods of the embodiments shown in fig. 1 to fig. 5; for details not described in this embodiment, refer to the related descriptions of those embodiments. For the implementation process and technical effects of this technical solution, refer to the descriptions in the embodiments shown in fig. 1 to fig. 5, which are not repeated here.
Referring now to FIG. 7, shown is a schematic diagram of an electronic device 700 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. Various programs and data necessary for the operation of the electronic device 700 are also stored in the RAM 703. The processing device 701, the ROM 702, and the RAM 703 are connected to one another by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 707 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a first image from an image source; receive first information; render the first information on a surface of a first object; and load the first object into the first image to generate a second image.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
For example, and without limitation, exemplary types of hardware logic components that may be used include field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so forth.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an image generation method including:
acquiring a first image from an image source;
receiving first information;
rendering the first information on a surface of a first object;
and loading the first object into the first image to generate a second image.
Further, the receiving the first information includes:
displaying a human-computer interface in response to a first trigger condition being met;
and receiving first information input by a user through the human-computer interface.
Further, the rendering the first information on the surface of the first object includes:
acquiring a base color map of the first object;
generating a first information map according to the first information;
mixing the first information map with the base color map to obtain a rendering map;
and rendering the surface of the object to be rendered according to the rendering map to obtain the first object.
Further, the generating a first information map according to the first information includes:
and generating the first information map according to the size of the base color map and the display position of the first information on the first object.
Further, the generating a first information map according to the first information includes:
generating a second information map according to the first information;
and performing first processing on the second information map to obtain the first information map, wherein the first processing gives the first information a preset effect.
Further, the mixing the first information map and the base color map to obtain a rendering map includes:
acquiring the mixing weight of the first information map;
and mixing the color value of the first information map with the color value of the base color map according to the mixing weight of the first information map to obtain the rendering map.
Further, the rendering the object to be rendered according to the rendering map to obtain the first object includes:
acquiring the correspondence between pixels of the rendering map and pixels of the object to be rendered;
obtaining the rendering color of each pixel of the object to be rendered from the rendering map according to the correspondence;
and setting the color value of each pixel of the object to be rendered according to its rendering color to obtain the first object.
Further, the loading the first object into the first image to generate the second image includes:
acquiring loading attribute information of the first object;
and loading the first object into the first image according to the loading attribute information to generate a second image.
Further, the obtaining of the loading attribute information of the first object includes:
acquiring the loading position and the display angle of the first object.
Further, the method further comprises:
changing an angle of the first object loaded into the first image in response to a change in a display angle of the first image.
According to one or more embodiments of the present disclosure, there is provided an image generation apparatus including:
the first image acquisition module is used for acquiring a first image from an image source;
the first information receiving module is used for receiving first information;
the rendering module is used for rendering the first information on the surface of the first object;
the loading module is used for loading the first object into the first image to generate a second image.
Further, the first information receiving module is further configured to:
displaying a human-computer interface in response to the first trigger condition being met;
and receiving first information input by a user through the human-computer interface.
Further, the rendering module further includes:
the base color map obtaining module is used for obtaining a base color map of the first object;
the first information map generating module is used for generating a first information map according to the first information;
the mixing module is used for mixing the first information map with the base color map to obtain a rendering map;
and the rendering submodule is used for rendering the surface of the object to be rendered according to the rendering map to obtain the first object.
Further, the first information map generating module is further configured to:
and generating the first information map according to the size of the base color map and the display position of the first information on the first object.
Further, the first information map generating module is further configured to:
generating a second information map according to the first information;
and performing first processing on the second information map to obtain the first information map, wherein the first processing gives the first information a preset effect.
Further, the mixing module is further configured to:
acquiring the mixing weight of the first information map;
and mixing the color value of the first information map with the color value of the base color map according to the mixing weight of the first information map to obtain the rendering map.
Further, the rendering sub-module is further configured to:
acquiring the correspondence between pixels of the rendering map and pixels of the object to be rendered;
obtaining the rendering color of each pixel of the object to be rendered from the rendering map according to the correspondence;
and setting the color value of each pixel of the object to be rendered according to its rendering color to obtain the first object.
Further, the loading module is further configured to:
acquiring loading attribute information of the first object;
and loading the first object into the first image according to the loading attribute information to generate a second image.
Further, the loading module is further configured to:
acquiring the loading position and the display angle of the first object.
Further, the image generation apparatus further includes:
and the angle adjusting module is used for responding to the change of the display angle of the first image and changing the angle of the first object loaded into the first image.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the image generation methods of the preceding first aspect.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium characterized by storing computer instructions for causing a computer to perform the image generation method of any one of the preceding first aspects.
The foregoing description is only of the preferred embodiments of the present disclosure and an illustration of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Claims (13)
1. An image generation method, comprising:
acquiring a first image from an image source;
receiving first information;
rendering the first information on a surface of a first object;
and loading the first object into the first image to generate a second image.
2. The image generation method of claim 1, wherein the receiving first information comprises:
displaying a human-computer interface in response to a first trigger condition being met;
and receiving first information input by a user through the human-computer interface.
3. The image generation method of claim 1, wherein said rendering the first information to a surface of a first object comprises:
acquiring a base color map of the first object;
generating a first information map according to the first information;
mixing the first information map with the base color map to obtain a rendering map;
and rendering the surface of an object to be rendered according to the rendering map to obtain the first object.
4. The image generation method of claim 3, wherein the generating a first information map from the first information comprises:
and generating the first information map according to the size of the base color map and the display position of the first information on the first object.
5. The image generation method of claim 3, wherein the generating a first information map from the first information comprises:
generating a second information map according to the first information;
and performing first processing on the second information map to obtain the first information map, wherein the first processing gives the first information a preset effect.
6. The image generation method of claim 3, wherein the blending the first information map with the base color map to obtain a rendering map comprises:
acquiring the mixing weight of the first information map;
and mixing the color value of the first information map with the color value of the base color map according to the mixing weight of the first information map to obtain the rendering map.
7. The image generation method of claim 3, wherein the rendering the surface of the object to be rendered according to the rendering map to obtain the first object comprises:
acquiring the correspondence between pixels of the rendering map and pixels of the object to be rendered;
obtaining the rendering color of each pixel of the object to be rendered from the rendering map according to the correspondence;
and setting the color value of each pixel of the object to be rendered according to its rendering color to obtain the first object.
8. The image generation method of claim 1, wherein said loading the first object into the first image to generate a second image comprises:
acquiring loading attribute information of the first object;
and loading the first object into the first image according to the loading attribute information to generate a second image.
9. The image generation method of claim 8, wherein the obtaining loading attribute information of the first object comprises:
acquiring the loading position and the display angle of the first object.
10. The image generation method of claim 1, wherein the method further comprises:
changing an angle of the first object loaded into the first image in response to a change in a display angle of the first image.
11. An image generation apparatus, comprising:
the first image acquisition module is used for acquiring a first image from an image source;
the first information receiving module is used for receiving first information;
the rendering module is used for rendering the first information on the surface of the first object;
the loading module is used for loading the first object into the first image to generate a second image.
12. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor for executing the computer readable instructions such that the processor when running implements the image generation method of any of claims 1-10.
13. A non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer, cause the computer to perform the image generation method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010238072.5A CN111429585A (en) | 2020-03-30 | 2020-03-30 | Image generation method and device, electronic equipment and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010238072.5A CN111429585A (en) | 2020-03-30 | 2020-03-30 | Image generation method and device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111429585A true CN111429585A (en) | 2020-07-17 |
Family
ID=71555542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010238072.5A Pending CN111429585A (en) | 2020-03-30 | 2020-03-30 | Image generation method and device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111429585A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103959288A (en) * | 2011-11-30 | 2014-07-30 | 诺基亚公司 | Method and apparatus for WEB-based augmented reality application viewer |
CN106383587A (en) * | 2016-10-26 | 2017-02-08 | 腾讯科技(深圳)有限公司 | Augmented reality scene generation method, device and equipment |
CN106815881A (en) * | 2017-04-13 | 2017-06-09 | 腾讯科技(深圳)有限公司 | The color control method and device of a kind of actor model |
CN107705355A (en) * | 2017-09-08 | 2018-02-16 | 郭睿 | A kind of 3D human body modeling methods and device based on plurality of pictures |
US20180322703A1 (en) * | 2015-12-02 | 2018-11-08 | Devar Entertainment Limited | Method of controlling a device for generating an augmented reality environment |
CN109218630A (en) * | 2017-07-06 | 2019-01-15 | 腾讯科技(深圳)有限公司 | A kind of method for processing multimedia information and device, terminal, storage medium |
CN109696953A (en) * | 2017-10-19 | 2019-04-30 | 华为技术有限公司 | The method, apparatus and virtual reality device of virtual reality text importing |
CN110310175A (en) * | 2018-06-27 | 2019-10-08 | 北京京东尚科信息技术有限公司 | System and method for mobile augmented reality |
CN110443898A (en) * | 2019-08-12 | 2019-11-12 | 北京枭龙科技有限公司 | A kind of AR intelligent terminal target identification system and method based on deep learning |
CN110784733A (en) * | 2019-11-07 | 2020-02-11 | 广州虎牙科技有限公司 | Live broadcast data processing method and device, electronic equipment and readable storage medium |
CN110852143A (en) * | 2018-08-21 | 2020-02-28 | 脸谱公司 | Interactive text effects in augmented reality environments |
- 2020-03-30 CN CN202010238072.5A patent/CN111429585A/en active Pending
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103959288A (en) * | 2011-11-30 | 2014-07-30 | 诺基亚公司 | Method and apparatus for WEB-based augmented reality application viewer |
US20180322703A1 (en) * | 2015-12-02 | 2018-11-08 | Devar Entertainment Limited | Method of controlling a device for generating an augmented reality environment |
CN106383587A (en) * | 2016-10-26 | 2017-02-08 | 腾讯科技(深圳)有限公司 | Augmented reality scene generation method, device and equipment |
CN106815881A (en) * | 2017-04-13 | 2017-06-09 | 腾讯科技(深圳)有限公司 | The color control method and device of a kind of actor model |
CN109218630A (en) * | 2017-07-06 | 2019-01-15 | 腾讯科技(深圳)有限公司 | A kind of method for processing multimedia information and device, terminal, storage medium |
CN107705355A (en) * | 2017-09-08 | 2018-02-16 | 郭睿 | A kind of 3D human body modeling methods and device based on plurality of pictures |
CN109696953A (en) * | 2017-10-19 | 2019-04-30 | 华为技术有限公司 | The method, apparatus and virtual reality device of virtual reality text importing |
CN110310175A (en) * | 2018-06-27 | 2019-10-08 | 北京京东尚科信息技术有限公司 | System and method for mobile augmented reality |
CN110852143A (en) * | 2018-08-21 | 2020-02-28 | 脸谱公司 | Interactive text effects in augmented reality environments |
CN110443898A (en) * | 2019-08-12 | 2019-11-12 | 北京枭龙科技有限公司 | A kind of AR intelligent terminal target identification system and method based on deep learning |
CN110784733A (en) * | 2019-11-07 | 2020-02-11 | 广州虎牙科技有限公司 | Live broadcast data processing method and device, electronic equipment and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112929582A (en) | Special effect display method, device, equipment and medium | |
CN111243049B (en) | Face image processing method and device, readable medium and electronic equipment | |
CN111796825B (en) | Bullet screen drawing method, bullet screen drawing device, bullet screen drawing equipment and storage medium | |
WO2023138559A1 (en) | Virtual reality interaction method and apparatus, and device and storage medium | |
CN114842120B (en) | Image rendering processing method, device, equipment and medium | |
CN116301530A (en) | Virtual scene processing method and device, electronic equipment and storage medium | |
CN114900625A (en) | Subtitle rendering method, device, equipment and medium for virtual reality space | |
CN110070617B (en) | Data synchronization method, device and hardware device | |
CN114742856A (en) | Video processing method, device, equipment and medium | |
CN111862342B (en) | Augmented reality texture processing method and device, electronic equipment and storage medium | |
CN113961280A (en) | View display method and device, electronic equipment and computer-readable storage medium | |
CN116596748A (en) | Image stylization processing method, apparatus, device, storage medium, and program product | |
CN112231023A (en) | Information display method, device, equipment and storage medium | |
CN115988255A (en) | Special effect generation method and device, electronic equipment and storage medium | |
CN115578299A (en) | Image generation method, device, equipment and storage medium | |
CN111489428B (en) | Image generation method, device, electronic equipment and computer readable storage medium | |
CN116385469A (en) | Special effect image generation method and device, electronic equipment and storage medium | |
CN116527993A (en) | Video processing method, apparatus, electronic device, storage medium and program product | |
CN112053450B (en) | Text display method and device, electronic equipment and storage medium | |
CN114723600A (en) | Method, device, equipment, storage medium and program product for generating cosmetic special effect | |
CN111429585A (en) | Image generation method and device, electronic equipment and computer readable storage medium | |
CN113837918A (en) | Method and device for realizing rendering isolation by multiple processes | |
CN111324404B (en) | Information acquisition progress display method and device, electronic equipment and readable medium | |
CN114357348B (en) | Display method and device and electronic equipment | |
CN112395826B (en) | Text special effect processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |