CN111833459A - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN111833459A
Authority
CN
China
Prior art keywords
image
augmented reality
reality model
depth
set object
Prior art date
Legal status
Granted
Application number
CN202010662796.2A
Other languages
Chinese (zh)
Other versions
CN111833459B (en)
Inventor
李云玖
陈志立
陈怡
蒋颂晟
任龙
刘舒
Current Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Priority date: 2020-07-10
Filing date: 2020-07-10
Application filed by Beijing ByteDance Network Technology Co Ltd and ByteDance Inc
Priority to CN202010662796.2A, granted as CN111833459B
Publication of CN111833459A
Application granted
Publication of CN111833459B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure disclose an image processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a first image, identifying a set object in the first image, and acquiring an augmented reality model corresponding to the set object; acquiring a preset model motion track, where the model motion track indicates position information and depth information of the augmented reality model in each frame of the first image; and superimposing the augmented reality model onto the first image according to the position information and the depth information to obtain a second image, and displaying the second image. Embodiments of the present disclosure enable the augmented reality model to move along the model motion track with the set object as its background, enriching the image display effect, solving the problem of monotonous display effects in current shooting scenarios, and providing a novel interactive experience to improve the user experience.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Augmented reality (AR) is a technology that combines the real environment with virtual information, so that an image in which an AR model is overlaid on an image of the real world can be displayed on the screen of an intelligent terminal.
At present, video shooting on an intelligent terminal merely records images of the photographed object, and an intelligent terminal with an augmented reality function can only provide a few simple application scenarios, such as background replacement and sticker addition. The display effect is monotonous and cannot satisfy users' pursuit of novel interactive experiences.
Disclosure of Invention
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium, which can enrich the display effect of a captured image.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:
acquiring a first image, identifying a set object in the first image, and acquiring an augmented reality model corresponding to the set object;
acquiring a preset model motion track, wherein the model motion track is used for indicating position information and depth information of the augmented reality model in each frame of the first image;
and according to the position information and the depth information, overlaying the augmented reality model to the first image to obtain a second image, and displaying the second image.
In a second aspect, an embodiment of the present disclosure further provides an image processing apparatus, including:
the model acquisition module is used for acquiring a first image, identifying a set object in the first image and acquiring an augmented reality model corresponding to the set object;
the track acquisition module is used for acquiring a preset model motion track, and the model motion track is used for indicating the position information and the depth information of the augmented reality model in each frame of the first image;
and the image superposition module is used for superposing the augmented reality model to the first image according to the position information and the depth information to obtain a second image and displaying the second image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method provided by any embodiment of the present disclosure.
In a fourth aspect, the embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the image processing method as provided in any of the embodiments of the present disclosure.
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium. A set object in a first image is identified, and the augmented reality model corresponding to the set object together with a preset model motion track are acquired; the augmented reality model is superimposed onto the first image according to the position information and the depth information in the model motion track to obtain a second image, and the second image is displayed. Because the depths of field of the pixel points of the augmented reality model and of the pixel points in the first image are taken into account during superimposition, the augmented reality model can move with the set object as its background, enriching the image display effect with the motion defined by the model motion track, solving the problem of monotonous display effects in current shooting scenarios, and providing a novel interactive experience to improve the user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
fig. 2 is a flowchart of a model superimposing method in an image processing method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of another image processing method provided by the disclosed embodiments;
FIG. 4 is a flowchart of another image processing method provided by the embodiments of the present disclosure;
fig. 5 is a block diagram of an image processing apparatus provided in an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure. The method may be executed by an image processing apparatus, which may be implemented in software and/or hardware and is generally disposed in an electronic device. As shown in fig. 1, the method includes:
Step 110, acquiring a first image, identifying a set object in the first image, and acquiring an augmented reality model corresponding to the set object.
It should be noted that the electronic device in the embodiments of the present disclosure may include mobile terminals such as smartphones, notebook computers, PDAs (personal digital assistants), and PADs (tablet computers), as well as fixed terminals such as desktop computers.
In the embodiment of the present disclosure, the first image may be an image about the real world captured by a camera of the electronic device. For example, a plurality of frames of original images captured by a smartphone are taken as the first image.
An augmented reality model (which may be a three-dimensional model or a two-dimensional model, and is not limited herein) is created in advance for certain objects in the real world, and an object that has an augmented reality model may be referred to as a set object. In the embodiment of the present disclosure, a 3D model is created in advance for a landmark building and serves as the augmented reality model corresponding to that landmark building. Augmented reality models can be constructed for different objects according to actual needs, and the embodiments of the present disclosure do not limit the types of objects.
Illustratively, a first image is acquired at a set period during the duration of a shooting event; the first image is recognized, and whether the first image contains a set object is judged according to the recognition result; if so, the augmented reality model corresponding to the set object is acquired. In the embodiment of the present disclosure, the manner of obtaining the augmented reality model corresponding to the set object includes, but is not limited to: acquiring the augmented reality model corresponding to the set object from a resource library of the client, or requesting from the server the augmented reality model corresponding to the set object. For example, a resource library is built into the client downloaded to the intelligent terminal; the resource library contains some common augmented reality models, and when a new resource is available on the server, the server can issue an update notification to the client to remind it to update the built-in resource library. Optionally, if the user needs to download a new resource, the resources in the download list may be sorted according to the user's usage preferences, so that resources matching those preferences are displayed first.
In an exemplary embodiment, after the client identifies the set object in the first image, the client sends a model request to the server to obtain from the server the augmented reality model corresponding to the set object. Optionally, the downloaded augmented reality model may be cached locally for the next use.
The setting period is a preset empirical value, and the setting periods in different shooting scenes can be the same or different. The shooting scene may be a sunrise scene, a cloudy scene, a sunny scene, a daytime scene, or a dim light scene, and the embodiment of the disclosure is not particularly limited.
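The following Python sketch illustrates one plausible form of the acquisition loop in step 110, under the assumptions stated above: a frame is captured at the set period, the set object is recognized, and the model is taken from the client's built-in resource library or requested from the server and cached. The camera API and the recognize_set_object and request_model_from_server callables are hypothetical placeholders, not interfaces defined by this disclosure.

```python
import time

def acquire_model(camera, recognize_set_object, request_model_from_server,
                  local_model_library, set_period_s=0.5):
    """Capture a first image at the set period; once the set object is
    recognized, return the frame together with its augmented reality model."""
    while True:
        frame = camera.capture()                  # first image (placeholder API)
        object_id = recognize_set_object(frame)   # e.g. a landmark detector
        if object_id is None:
            time.sleep(set_period_s)              # wait for the next set period
            continue
        model = local_model_library.get(object_id)
        if model is None:                         # not in the built-in library
            model = request_model_from_server(object_id)
            local_model_library[object_id] = model  # cache for the next use
        return frame, model
```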
And step 120, acquiring a preset model motion track.
It should be noted that the model motion trajectory is used to indicate the position information and the depth information of the augmented reality model in each frame of the first image. In the embodiment of the present disclosure, the position information may be coordinate information of a pixel point in the augmented reality model, or other information indicating a position of a pixel point in the augmented reality model. The depth information may be depth information of a pixel in the augmented reality model, or other information indicating whether a pixel in the augmented reality model is a foreground pixel or a background pixel.
The model motion track is preset, and after the augmented reality model is built, the model motion track is associated with the built augmented reality model. In order to display the augmented reality model on the display screen of an electronic device, the augmented reality model is processed with a specific transformation matrix that transforms the model coordinate system into screen coordinates. The augmented reality model corresponding to each position on the preset model motion track is processed in this way, yielding the position information and depth information of the augmented reality model in each frame of the first image. The first image, captured by the camera, is likewise processed with a transformation matrix to complete the transformation from world coordinates to screen coordinates.
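A minimal sketch of this coordinate transformation, assuming a standard model-view-projection pipeline with an OpenGL-style normalized-device-coordinate convention; the matrices and the depth normalization are illustrative assumptions, not details fixed by this disclosure.

```python
import numpy as np

def model_to_screen(p_model, model_mat, view_mat, proj_mat, width, height):
    """Transform a 3-D model-space point to screen coordinates plus depth."""
    p = proj_mat @ view_mat @ model_mat @ np.append(p_model, 1.0)  # clip space
    ndc = p[:3] / p[3]                        # perspective divide -> [-1, 1]
    x_px = (ndc[0] + 1.0) * 0.5 * width       # viewport transform
    y_px = (1.0 - ndc[1]) * 0.5 * height      # flip y for screen coordinates
    depth = (ndc[2] + 1.0) * 0.5              # normalized depth in [0, 1]
    return x_px, y_px, depth
```

Applying this to each trajectory position yields, per frame, the position information (x_px, y_px) and the depth information used in the steps below.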
Optionally, when obtaining the augmented reality model from the server, the client may also download the data corresponding to the model motion track associated with the augmented reality model and store it in the built-in resource library. Alternatively, identification information of the model motion track associated with the augmented reality model is stored in the client's built-in resource library, so that when the augmented reality model needs to be used, the data corresponding to the model motion track is requested from the server according to the identification information.
Illustratively, according to the acquired augmented reality model, a model motion track corresponding to the augmented reality model is determined, and data corresponding to the model motion track is acquired.
Step 130, superimposing the augmented reality model onto the first image according to the position information and the depth information to obtain a second image.
Exemplarily, according to the position information and the depth information included in the model motion track, the superimposition region of the augmented reality model is determined in each frame of the first image, the augmented reality model is superimposed onto each superimposition region to obtain multiple frames of second images, and the second images are displayed in a set order. The set order may be the acquisition order of the first images, the generation order of the second images, or another custom order; the embodiments of the present disclosure are not limited in this respect.
Fig. 2 is a flowchart of a method for superimposing a model in an image processing method according to an embodiment of the present disclosure. As shown in fig. 2, the model superposition method includes:
and step 131, determining an overlapping area corresponding to the augmented reality model in each frame of the first image according to the position information.
For example, for each frame of the first image, the augmented reality model may have different positions in the first image according to the motion trajectory of the model. And obtaining coordinate information of each pixel point in the augmented reality model, and determining the superposition area of the augmented reality model in each frame of the first image according to the coordinate information.
Step 132, determining the depth-of-field relationship between the pixel point of the set object in each frame of the first image and the pixel point of the augmented reality model according to the depth information.
For example, for any frame of the first image, the depth-of-field information of the pixel points of the set object is acquired. Among the pixel points of the augmented reality model, target pixel points having the same coordinates as pixel points of the first image are determined, the depth-of-field information of the target pixel points is acquired, and the depth-of-field difference between each target pixel point and the set-object pixel point at the same coordinate is calculated, yielding the depth-of-field relationship between the pixel points of the set object and the target pixel points in the current first image.
Step 133, superimposing the pixel points of the augmented reality model onto the superimposition region according to the depth-of-field relationship to obtain a second image.
For example, for any frame of the first image, the augmented reality model is superimposed onto the superimposition region according to the depth-of-field difference to obtain one frame of the second image. The other frames of the first image are processed in the same manner to obtain multiple frames of second images.
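The following sketch illustrates steps 131 to 133 for a single frame, assuming the model has already been rasterized into image-aligned color and depth buffers, that the region is given as arrays of pixel coordinates, and that a smaller depth value means closer to the camera; all buffer layouts are assumptions for illustration.

```python
import numpy as np

def composite_frame(first_image, object_depth, model_rgb, model_depth, region):
    """Steps 131-133 for one frame: keep whichever pixel is nearer the camera."""
    second_image = first_image.copy()
    ys, xs = region                                   # superimposition region
    # step 132: depth-of-field difference between set-object and model pixels
    depth_diff = object_depth[ys, xs] - model_depth[ys, xs]
    in_front = depth_diff > 0                         # model nearer than object
    sel_y, sel_x = ys[in_front], xs[in_front]
    # step 133: superimpose only the model pixels that lie in front
    second_image[sel_y, sel_x] = model_rgb[sel_y, sel_x]
    return second_image
```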
In the embodiment of the present disclosure, a set object in a first image is identified, the augmented reality model corresponding to the set object and a preset model motion track are acquired, and the augmented reality model is superimposed onto the first image according to the position information and the depth information in the model motion track to obtain a second image. Because the depths of field of the pixel points of the augmented reality model and of the pixel points in the first image are taken into account during superimposition, the augmented reality model can move with the set object as its background, enriching the image display effect with the motion defined by the model motion track, solving the problem of monotonous display effects in current shooting scenarios, and providing a novel interactive experience to improve the user experience.
Fig. 3 is a flowchart of another image processing method provided in the embodiment of the present disclosure, and as shown in fig. 3, the method includes:
Step 310, acquiring a first image, identifying a set object in the first image, and acquiring an augmented reality model corresponding to the set object.
Step 320, acquiring a preset model motion track.
Step 330, determining a superimposition region corresponding to the augmented reality model in each frame of the first image according to the position information.
Step 340, determining a depth-of-field relationship between the pixel point of the set object and the pixel point of the augmented reality model in each frame of the first image according to the depth information.
Step 350, adding the pixel points of the augmented reality model to the superimposition region according to the depth-of-field relationship.
Step 360, determining a projection region of the augmented reality model on the surface of the set object.
Illustratively, the projection area of the augmented reality model on the surface of the set object is determined according to the coordinate information.
Step 370, adjusting the pixel points of the set object in the projection area according to the depth of field information of the target pixel points of the augmented reality model to obtain a second image.
For example, the depth of field of the pixel points of the set object in the projection region may be adjusted according to the depth-of-field information of the target pixel points of the augmented reality model to obtain the second image, so that the surface of the set object in the projection region deforms along with the augmented reality model. If the augmented reality model protrudes from the set object, the surface of the set object deforms correspondingly as the augmented reality model moves away from its surface. If the augmented reality model retracts into the set object, the surface of the set object deforms correspondingly as the augmented reality model approaches its surface.
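A minimal sketch of the depth adjustment in step 370, assuming image-aligned depth buffers and a boolean mask for the projection region; the blending factor that makes the surface "follow" the model is an illustrative assumption, not a value fixed by this disclosure.

```python
import numpy as np

def deform_surface(object_depth, model_depth, projection_mask, follow=0.6):
    """Blend the set object's depth toward the model depth in the projection
    region so the surface appears to deform as the model moves."""
    adjusted = object_depth.copy()
    adjusted[projection_mask] = (
        (1.0 - follow) * object_depth[projection_mask]
        + follow * model_depth[projection_mask]   # surface follows the model
    )
    return adjusted
```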
Step 380, rendering the second image to a display interface, and displaying the motion process of the augmented reality model with the set object as the background.
For example, a plurality of frames of second images are sequentially rendered to the display interface, and a video of the motion process of the augmented reality model with the set object as the background can be displayed.
In the embodiment of the present disclosure, the pixel points of the augmented reality model are added to the superimposition region, the projection region of the augmented reality model on the surface of the set object is determined, and the pixel points of the set object in the projection region are adjusted according to the depth-of-field information of the target pixel points of the augmented reality model to obtain a second image, so that the surface of the set object in the projection region deforms along with the augmented reality model, providing a novel display effect.
Fig. 4 is a flowchart of another image processing method provided by the embodiment of the present disclosure, and as shown in fig. 4, the method includes:
Step 401, acquiring a first image, identifying a set object in the first image, and acquiring an augmented reality model corresponding to the set object.
Step 402, acquiring a preset model motion track.
Step 403, determining a superimposition region corresponding to the augmented reality model in each frame of the first image according to the position information.
Step 404, determining the depth-of-field relationship between the pixel points of the set object and the pixel points of the augmented reality model in each frame of the first image according to the depth information.
Step 405, judging whether the depth-of-field relationship satisfies the set condition; if not, executing step 406; otherwise, executing step 407.
Illustratively, the depth-of-field difference is compared with a set threshold, and whether the depth-of-field relationship satisfies the set condition is judged according to the comparison result: if the depth-of-field difference is greater than the set threshold, the depth-of-field relationship satisfies the set condition; if the depth-of-field difference is less than or equal to the set threshold, the depth-of-field relationship does not satisfy the set condition.
The set threshold value can be set according to the actual application scenario. In the embodiment of the present disclosure, the set threshold may be zero, that is, when at least one surface of the augmented reality model protrudes from the set object, the corresponding pixel point of the augmented reality model is used to replace the pixel point of the overlay region in the first image, so as to present an effect that the augmented reality model blocks the overlay region in the first image.
Step 406, setting the pixel points of the augmented reality model corresponding to the superimposition region to be transparent, superimposing the pixel points of the augmented reality model onto the superimposition region to obtain a second image, and then executing step 410.
Step 407, replacing the pixel points of the superimposition region in the first image with the pixel points of the augmented reality model corresponding to the superimposition region to obtain a second image.
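A sketch of the branch in steps 405 to 407 under the example set condition (threshold zero): where the depth-of-field difference exceeds the threshold, the model pixel replaces the first-image pixel; elsewhere the model pixel is treated as transparent, so the first-image pixel is kept. Buffer and region layouts are the same assumptions as in the earlier compositing sketch.

```python
import numpy as np

def apply_set_condition(first_image, model_rgb, depth_diff, region, threshold=0.0):
    """Steps 405-407: replace where the set condition holds, keep elsewhere."""
    second_image = first_image.copy()
    ys, xs = region
    replace = depth_diff > threshold          # model protrudes: occlude the image
    second_image[ys[replace], xs[replace]] = model_rgb[ys[replace], xs[replace]]
    # where the condition fails, the model pixel is transparent, so the
    # first-image pixel is simply left unchanged
    return second_image
```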
Step 408, determining a projection region of the augmented reality model on the surface of the set object.
For example, the projected area of the augmented reality model on the surface of the set object is determined in real time during the duration of the capture event.
Step 409, acquiring texture information of the projection region to render the augmented reality model according to the texture information.
For example, for each newly determined projection region, the texture information of the set object corresponding to the projection region is obtained in real time, so as to render the augmented reality model according to the texture information obtained in real time.
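A sketch of steps 408 and 409 under the assumption that the projection region is given as a boolean mask over the first image; the upload_texture call is a placeholder for whatever rendering API is in use, not an interface defined by this disclosure.

```python
import numpy as np

def sample_projection_texture(first_image, projection_mask):
    """Crop the bounding box of the projection region to use as a texture."""
    ys, xs = np.nonzero(projection_mask)
    if ys.size == 0:
        return None                            # no projection region this frame
    return first_image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# Usage (placeholder rendering call):
# texture = sample_projection_texture(frame, mask)
# upload_texture(ar_model, texture)
```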
Step 410, rendering the second image to a display interface, and displaying the motion process of the augmented reality model with the set object as the background.
In the embodiment of the present disclosure, the projection region of the augmented reality model on the surface of the set object is determined in real time, the texture information of the set object corresponding to the projection region is acquired, the augmented reality model is rendered according to the texture information, and the second image is rendered and displayed, presenting an effect in which the augmented reality model appears to be a part of the set object and moves along the model motion track with the set object as the background.
In an exemplary embodiment, step 408 may be replaced by: when a texture acquisition event is detected, determining the projection region of the augmented reality model on the surface of the set object. It should be noted that there are many ways to trigger a texture acquisition event, and the embodiments of the present disclosure are not particularly limited in this respect. For example, a texture acquisition event is triggered when the algorithm runs stably; or when the frame-loss rate over a period of time is less than a set threshold; or when stable shooting is detected; or when a user setting operation is detected. The set threshold may be a system default, and the setting operation may be the user tapping the screen, shaking the electronic device, making a specified gesture, or the like. In the embodiment of the present disclosure, determining the projection region of the augmented reality model on the surface of the set object only when a texture acquisition event is detected, acquiring the texture information of the projection region, and rendering the augmented reality model according to that texture information can avoid rendering artifacts caused by an unstable algorithm or unstable shooting. It also enables a new interactive function, allowing the user to choose which texture is used to render the augmented reality model.
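The trigger conditions enumerated above could be combined as in the following sketch; every predicate and field name is an illustrative placeholder, since the disclosure does not fix a particular trigger implementation.

```python
def texture_event_detected(state, loss_threshold=0.05):
    """Return True if any of the enumerated trigger conditions holds."""
    return (
        state.algorithm_stable                      # algorithm runs stably
        or state.frame_loss_rate < loss_threshold   # low frame loss over a period
        or state.shooting_stable                    # stable shooting detected
        or state.user_gesture_detected              # user setting operation
    )
```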
Fig. 5 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The apparatus can be implemented by software and/or hardware, and is generally integrated in an electronic device, and enriches the display effect of a photographed image by performing the image processing method of the embodiment of the present disclosure. As shown in fig. 5, the apparatus includes:
a model obtaining module 510, configured to obtain a first image, identify a set object in the first image, and obtain an augmented reality model corresponding to the set object;
a track obtaining module 520, configured to obtain a preset model motion track, where the model motion track is used to indicate position information and depth information of the augmented reality model in each frame of the first image;
an image overlaying module 530, configured to overlay the augmented reality model onto the first image according to the position information and the depth information to obtain a second image, and display the second image.
The image processing apparatus provided in the embodiments of the present disclosure is configured to implement an image processing method, and the implementation principle and technical effect of the image processing apparatus are similar to those of the image processing method, and are not described herein again.
Fig. 6 is a block diagram of an electronic device provided by an embodiment of the present disclosure. Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in fig. 6 is only an example and should not impose any limitation on the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a first image, identifying a set object in the first image, and acquiring an augmented reality model corresponding to the set object;
acquiring a preset model motion track, wherein the model motion track is used for indicating position information and depth information of the augmented reality model in each frame of the first image;
and according to the position information and the depth information, overlaying the augmented reality model to the first image to obtain a second image, and displaying the second image.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a module does not constitute a limitation on the module itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, an image processing method is provided, in which acquiring a first image, identifying a set object in the first image, and acquiring an augmented reality model corresponding to the set object includes:
acquiring a first image according to a set period in the duration of a shooting event;
identifying the first image, and judging whether the first image contains a set object according to an identification result;
and if so, acquiring an augmented reality model corresponding to the set object.
According to one or more embodiments of the present disclosure, an image processing method is provided, in which superimposing the augmented reality model on the first image to obtain a second image according to the position information and the depth information includes:
determining an overlapping area corresponding to the augmented reality model in each frame of the first image according to the position information;
determining the depth-of-field relation between the pixel point of the set object and the pixel point of the augmented reality model in each frame of the first image according to the depth information;
and superposing the pixel points of the augmented reality model to the superposed region according to the depth-of-field relation to obtain a second image.
According to one or more embodiments of the present disclosure, an image processing method is provided, where determining, according to the depth information, a depth-of-field relationship between a pixel point of the set object and a pixel point of the augmented reality model in each frame of the first image includes:
acquiring depth-of-field information of pixel points of the set object for any frame of the first image;
determining a target pixel point in the pixels of the augmented reality model, wherein the target pixel point and the pixels of the set object have the same coordinate;
and acquiring the depth of field information of the target pixel point, and calculating the depth of field difference value of the target pixel point and the pixel point with the same coordinate of the set object to obtain the depth of field relation between the pixel point of the set object and the target pixel point in the current first image.
According to one or more embodiments of the present disclosure, the image processing method is provided, where the superimposing a pixel point of the augmented reality model to the superimposed region according to the depth-of-field relationship to obtain a second image, including:
adding the pixel points of the augmented reality model to the superposition area according to the depth-of-field relation;
determining a projection area of the augmented reality model on the surface of the set object;
and adjusting the depth of field of the pixel point of the set object in the projection area according to the depth of field information of the target pixel point to obtain a second image.
According to one or more embodiments of the present disclosure, the image processing method is provided, where the superimposing a pixel point of the augmented reality model to the superimposed region according to the depth-of-field relationship to obtain a second image, including:
judging whether the depth-of-field relation meets a set condition;
if yes, replacing the superposition area pixel points in the first image with the pixel points of the augmented reality model corresponding to the superposition area to obtain a second image;
and if not, setting the pixel points of the augmented reality model corresponding to the superposition area to be transparent, and superposing the pixel points of the augmented reality model to the superposition area to obtain a second image.
According to one or more embodiments of the present disclosure, the image processing method is provided, wherein the determining whether the depth-of-field relationship satisfies a set condition includes:
if the depth of field difference value is larger than a set threshold value, determining that the depth of field relation meets a set condition;
and if the depth of field difference value is smaller than or equal to a set threshold value, determining that the depth of field relation does not meet a set condition.
According to one or more embodiments of the present disclosure, there is provided an image processing method, after the step of superimposing the augmented reality model on the first image to obtain a second image, the method further includes:
determining a projection area of the augmented reality model on the surface of the set object;
and acquiring texture information of the projection area so as to render the augmented reality model according to the texture information.
According to one or more embodiments of the present disclosure, there is provided an image processing method, after the step of superimposing the augmented reality model on the first image to obtain a second image, the method further includes:
when a texture acquisition event is detected, determining a projection area of the augmented reality model on the surface of the set object;
and acquiring texture information of the projection area so as to render the augmented reality model according to the texture information.
According to one or more embodiments of the present disclosure, there is provided an image processing method, wherein,
and rendering the second image to a display interface, and displaying the motion process of the augmented reality model with the set object as the background.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing apparatus, wherein the model obtaining module is specifically configured to:
acquiring a first image according to a set period in the duration of a shooting event;
identifying the first image, and judging whether the first image contains a set object according to an identification result;
and if so, acquiring an augmented reality model corresponding to the set object.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus, wherein the image superimposition module includes:
the region determining submodule is used for determining a superposition region corresponding to the augmented reality model in each frame of the first image according to the position information;
the relation determining submodule is used for determining the depth-of-field relation between the pixel point of the set object in each frame of the first image and the pixel point of the augmented reality model according to the depth information;
and the model superposition submodule is used for superposing the pixel points of the augmented reality model to the superposition area according to the depth-of-field relation to obtain a second image.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus, wherein the relationship determination submodule is specifically configured to:
acquiring depth-of-field information of pixel points of the set object for any frame of the first image;
determining a target pixel point in the pixels of the augmented reality model, wherein the target pixel point and the pixels of the set object have the same coordinate;
and acquiring the depth of field information of the target pixel point, and calculating the depth of field difference value of the target pixel point and the pixel point with the same coordinate of the set object to obtain the depth of field relation between the pixel point of the set object and the target pixel point in the current first image.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus, wherein the model superposition sub-module is specifically configured to:
adding the pixel points of the augmented reality model to the superposition area according to the depth-of-field relation;
determining a projection area of the augmented reality model on the surface of the set object;
and adjusting the depth of field of the pixel point of the set object in the projection area according to the depth of field information of the target pixel point to obtain a second image.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus, wherein the model superposition sub-module is specifically configured to:
judging whether the depth-of-field relation meets a set condition;
if yes, replacing the superposition area pixel points in the first image with the pixel points of the augmented reality model corresponding to the superposition area to obtain a second image;
and if not, setting the pixel points of the augmented reality model corresponding to the superposition area to be transparent, and superposing the pixel points of the augmented reality model to the superposition area to obtain a second image.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus, wherein the determining whether the depth-of-field relationship satisfies a set condition includes:
if the depth of field difference value is larger than a set threshold value, determining that the depth of field relation meets a set condition;
and if the depth of field difference value is smaller than or equal to a set threshold value, determining that the depth of field relation does not meet a set condition.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus, further including:
a first projection region determining module, configured to determine a projection region of the augmented reality model on the surface of the set object after the step of superimposing the augmented reality model on the first image to obtain a second image;
and the first texture acquisition module is used for acquiring the texture information of the projection area so as to render the augmented reality model according to the texture information.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing apparatus, the apparatus further including:
a second projection region determining module, configured to determine, after the step of superimposing the augmented reality model on the first image to obtain a second image, a projection region of the augmented reality model on the surface of the set object when a texture acquisition event is detected;
and the second texture obtaining module is used for obtaining the texture information of the projection area so as to render the augmented reality model according to the texture information.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus, the image superimposing module being further configured to:
and rendering the second image to a display interface, and displaying the motion process of the augmented reality model with the set object as the background.
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (13)

1. An image processing method, comprising:
acquiring a first image, identifying a set object in the first image, and acquiring an augmented reality model corresponding to the set object;
acquiring a preset model motion track, wherein the model motion track is used for indicating position information and depth information of the augmented reality model in each frame of the first image;
and according to the position information and the depth information, overlaying the augmented reality model to the first image to obtain a second image, and displaying the second image.
2. The method of claim 1, wherein the acquiring a first image, identifying a set object in the first image, and acquiring an augmented reality model corresponding to the set object comprises:
acquiring a first image according to a set period in the duration of a shooting event;
identifying the first image, and judging whether the first image contains a set object according to an identification result;
and if so, acquiring an augmented reality model corresponding to the set object.
3. The method according to claim 1, wherein superimposing the augmented reality model onto the first image according to the position information and the depth information to obtain a second image comprises:
determining an overlapping area corresponding to the augmented reality model in each frame of the first image according to the position information;
determining the depth-of-field relation between the pixel point of the set object and the pixel point of the augmented reality model in each frame of the first image according to the depth information;
and superposing the pixel points of the augmented reality model to the superposed region according to the depth-of-field relation to obtain a second image.
4. The method according to claim 3, wherein the determining a depth-of-field relationship between the pixel point of the set object and the pixel point of the augmented reality model in each frame of the first image according to the depth information comprises:
acquiring depth-of-field information of pixel points of the set object for any frame of the first image;
determining a target pixel point in the pixels of the augmented reality model, wherein the target pixel point and the pixels of the set object have the same coordinate;
and acquiring the depth of field information of the target pixel point, and calculating the depth of field difference value of the target pixel point and the pixel point with the same coordinate of the set object to obtain the depth of field relation between the pixel point of the set object and the target pixel point in the current first image.
5. The method according to claim 4, wherein the superimposing, according to the depth-of-field relationship, a pixel point of the augmented reality model on the superimposed region to obtain a second image includes:
adding the pixel points of the augmented reality model to the superposition area according to the depth-of-field relation;
determining a projection area of the augmented reality model on the surface of the set object;
and adjusting the depth of field of the pixel point of the set object in the projection area according to the depth of field information of the target pixel point to obtain a second image.
6. The method of claim 3, wherein superimposing the pixels of the augmented reality model onto the overlap region according to the depth-of-field relationship to obtain the second image comprises:
determining whether the depth-of-field relationship satisfies a set condition;
if so, replacing the pixels of the overlap region in the first image with the corresponding pixels of the augmented reality model to obtain the second image; and
if not, setting the corresponding pixels of the augmented reality model to transparent and superimposing them onto the overlap region to obtain the second image.
7. The method of claim 6, wherein determining whether the depth-of-field relationship satisfies the set condition comprises:
if the depth-of-field difference is greater than a set threshold, determining that the depth-of-field relationship satisfies the set condition; and
if the depth-of-field difference is less than or equal to the set threshold, determining that the depth-of-field relationship does not satisfy the set condition.
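Illustrative note (not part of the claims): claims 6 and 7 together describe per-pixel occlusion: where the depth difference exceeds the threshold, the set object is behind the model, so the model pixel replaces the frame pixel; elsewhere the model pixel is made transparent and the object stays visible. A numpy sketch, assuming RGB frames and the hypothetical depth_relation sketch above:

import numpy as np

def composite(frame: np.ndarray, model_rgb: np.ndarray,
              depth_diff: np.ndarray, region,
              threshold: float = 0.0) -> np.ndarray:
    """Superimpose model pixels onto the overlap region of the frame."""
    second = frame.copy()
    roi = second[region]                  # view of the overlap region
    visible = depth_diff > threshold      # claim 7's set condition
    roi[visible] = model_rgb[visible]     # claim 6: replace where satisfied
    # where the condition fails, the model pixel is effectively transparent
    # and the original frame pixel is left untouched
    return second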
8. The method of claim 1, further comprising, after superimposing the augmented reality model onto the first image to obtain the second image:
determining a projection region of the augmented reality model on the surface of the set object; and
acquiring texture information of the projection region, so as to render the augmented reality model according to the texture information.
9. The method of claim 1, further comprising, after superimposing the augmented reality model onto the first image to obtain the second image:
determining, when a texture acquisition event is detected, a projection region of the augmented reality model on the surface of the set object; and
acquiring texture information of the projection region, so as to render the augmented reality model according to the texture information.
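Illustrative note (not part of the claims): claims 8 and 9 reuse the captured image as a texture source: pixels inside the projection region are sampled so the renderer can shade the model with the set object's surface appearance (claim 9 merely gates this on a detected texture acquisition event). A minimal sketch; the mask-based region and the averaging shown are assumptions:

import numpy as np

def sample_projection_texture(frame: np.ndarray,
                              projection_mask: np.ndarray) -> np.ndarray:
    """Collect the frame pixels inside the projection region as an (N, 3)
    sample a renderer could tile or average when shading the AR model."""
    return frame[projection_mask]

# e.g. a single average colour for flat-shading the model:
# mean_colour = sample_projection_texture(frame, mask).mean(axis=0)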
10. The method of any one of claims 1-9, wherein displaying the second image comprises:
rendering the second image to a display interface, and displaying the motion of the augmented reality model against the set object as the background.
11. An image processing apparatus, comprising:
a model acquisition module configured to acquire a first image, identify a set object in the first image, and acquire an augmented reality model corresponding to the set object;
a trajectory acquisition module configured to acquire a preset model motion trajectory, wherein the model motion trajectory indicates position information and depth information of the augmented reality model in each frame of the first image; and
an image superimposition module configured to superimpose, according to the position information and the depth information, the augmented reality model onto the first image to obtain a second image, and to display the second image.
12. An electronic device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of claims 1-10.
13. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method of any one of claims 1-10.
CN202010662796.2A 2020-07-10 2020-07-10 Image processing method and device, electronic equipment and storage medium Active CN111833459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010662796.2A CN111833459B (en) 2020-07-10 2020-07-10 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111833459A true CN111833459A (en) 2020-10-27
CN111833459B CN111833459B (en) 2024-04-26

Family

ID=72900408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010662796.2A Active CN111833459B (en) 2020-07-10 2020-07-10 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111833459B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112672185A * 2020-12-18 2021-04-16 Lemon Inc Augmented reality-based display method, device, equipment and storage medium
WO2022120533A1 * 2020-12-07 2022-06-16 SZ DJI Technology Co Ltd Motion trajectory display system and method, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106308946A * 2016-08-17 2017-01-11 Tsinghua University Augmented reality device applied to a stereotactic surgical robot and method thereof
CN106774937A * 2017-01-13 2017-05-31 Yulong Computer Telecommunication Scientific (Shenzhen) Co Ltd Image interaction method and device in augmented reality
CN107564089A * 2017-08-10 2018-01-09 Tencent Technology (Shenzhen) Co Ltd Three-dimensional image processing method, device, storage medium and computer equipment
CN107680164A * 2016-08-01 2018-02-09 ZTE Corp Virtual object scale adjustment method and device
CN109427096A * 2017-08-29 2019-03-05 Shenzhen Inlife-Handnet Co Ltd Automatic guide method and system based on augmented reality
CN110162258A * 2018-07-03 2019-08-23 Tencent Digital (Tianjin) Co Ltd Processing method and device for personalized scene image
CN110889890A * 2019-11-29 2020-03-17 Shenzhen SenseTime Technology Co Ltd Image processing method and device, processor, electronic device and storage medium

Also Published As

Publication number Publication date
CN111833459B (en) 2024-04-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant