WO2019119815A1 - AR service processing method, apparatus, device and computer readable storage medium - Google Patents

AR service processing method, apparatus, device and computer readable storage medium

Info

Publication number
WO2019119815A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
terminal
target object
virtual object
collected
Prior art date
Application number
PCT/CN2018/098145
Other languages
French (fr)
Chinese (zh)
Inventor
查俊莉
Original Assignee
广州市动景计算机科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州市动景计算机科技有限公司
Publication of WO2019119815A1 publication Critical patent/WO2019119815A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics

Definitions

  • the embodiments of the present invention relate to the field of computer technologies, and in particular, to an AR service processing method, apparatus, device, and computer readable storage medium.
  • AR (Augmented Reality) technology integrates real-world information (e.g., visual information, sound information) with virtual-world information.
  • With the development of AR technology, AR has been applied to many aspects of live broadcasting, for example, sending gifts to the anchor through AR, or the anchor sending red packets to fans.
  • However, existing applications of AR in live broadcasting only enhance the interaction between the anchor and the fans and make the broadcast more entertaining; the interaction pictures between the anchor and the fans are relatively uniform, every broadcast presents the same anchor interface, and personalized needs for live broadcasting cannot be met.
  • In view of this, the embodiments of the present invention provide an AR service processing method, apparatus, device, and computer readable storage medium, so as to solve the problem that the existing live broadcast picture is monotonous and cannot meet users' personalized demands for live broadcasting.
  • According to a first aspect, an AR service processing method is provided, including: acquiring a first image collected by a first terminal and a second image collected by a second terminal according to a setting identifier; acquiring a target object from the second image and drawing the target object into the first image; and displaying the drawn first image in the first terminal.
  • According to a second aspect, an AR service processing apparatus is provided, including: a first acquiring module, configured to acquire, according to a setting identifier, a first image collected by a first terminal and a second image collected by a second terminal; a drawing module, configured to acquire a target object from the second image and draw the target object into the first image; and a display module, configured to display the drawn first image in the first terminal.
  • According to a third aspect, an augmented reality (AR) service processing device is provided, including a memory, a processor, and a computer program, wherein the computer program is stored in the memory and configured to be executed by the processor to implement the AR service processing method described in the first aspect.
  • a computer readable storage medium having stored thereon a computer program
  • the computer program is executed by a processor to implement the AR service processing method as described in the first aspect.
  • According to the solution provided by the embodiments of the present invention, when the AR service is implemented, for example in a live broadcast scenario, the image collected in real time by the first terminal (such as the fan's terminal), i.e. the first image, and the image collected in real time by the second terminal (such as the anchor's terminal), i.e. the second image, are obtained according to the setting identifier; the target object (such as the anchor's figure) is then extracted from the image collected by the second terminal and drawn into the image of the first terminal.
  • Based on this solution, the first terminal can set up a private scene for the AR service (such as the fan's living room), and the target object is projected into that private scene, which may be visible only to the first terminal. The user of the first terminal can therefore set up an appropriate AR usage scenario according to his or her own preferences, which effectively solves the problem that the existing live broadcast picture is monotonous and cannot meet users' personalized demands for live broadcasting.
  • In addition, the setting identifier is related to the AR service processing of both the first terminal and the second terminal; therefore, when the setting identifier is triggered, the images collected by both terminals can be acquired. By means of the setting identifier, terminal information can be obtained and determined quickly, which speeds up the AR service and improves the user experience.
  • FIG. 1 is a flow chart showing the steps of an AR service processing method according to Embodiment 1 of the present invention.
  • FIG. 2 is a flow chart showing the steps of an AR service processing method according to Embodiment 2 of the present invention.
  • FIG. 3 is a structural block diagram of an AR service processing apparatus according to Embodiment 3 of the present invention.
  • FIG. 4 is a structural block diagram of an AR service processing apparatus according to Embodiment 4 of the present invention.
  • FIG. 5 is a schematic structural diagram of a terminal according to Embodiment 5 of the present invention.
  • FIG. 6 is a structural diagram of an augmented reality AR service processing apparatus according to an exemplary embodiment of the present invention.
  • Referring to FIG. 1, a flow chart of the steps of an AR service processing method according to Embodiment 1 of the present invention is shown.
  • Step S102 Acquire a first image collected by the first terminal and a second image collected by the second terminal according to the setting identifier.
  • the setting identifier is related to the AR service processing of the first terminal and the AR service processing of the second terminal.
  • When the setting identifier is triggered, both the image collected by the first terminal and the image collected by the second terminal are acquired.
  • the specific implementation form of the setting identifier can be appropriately set by a person skilled in the art according to actual needs, and the embodiment of the present invention does not limit this.
  • the setting identifier can be a two-dimensional code.
  • The first image collected by the first terminal and the second image collected by the second terminal are both real-time images. The first terminal and the second terminal may both be mobile terminals, may both be non-mobile terminals, or one may be a mobile terminal and the other a non-mobile terminal.
  • In a live broadcast scenario, the first terminal may be the fan's terminal, and the second terminal may be the anchor's terminal.
  • the first terminal collects the scene image of the fan side in real time, and the second terminal collects the scene image of the anchor party in real time.
  • Step S104 Acquire a target object from the second image, and draw the target object into the first image.
  • In the embodiment of the present invention, the target object may be an image object corresponding to a subject that has actions and emotional expressions, for example, the on-screen figure of the anchor in a live video.
  • The specific way of obtaining the target object from the second image may be implemented by any suitable means according to the actual situation, for example by image matting (cut-out) or by feature extraction; a hedged matting sketch follows.
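  • The patent does not prescribe a particular extraction technique, so the following is only a minimal sketch of the matting idea using OpenCV's GrabCut; the function name and the bounding-box parameter are assumptions, not part of the patent.

```python
import cv2
import numpy as np

def extract_target_matte(frame_bgr: np.ndarray, rect: tuple) -> np.ndarray:
    """Return a binary matte of the target object inside rect = (x, y, w, h)."""
    mask = np.zeros(frame_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GrabCut state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame_bgr, mask, rect, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # Definite and probable foreground pixels form the cut-out matte.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    255, 0).astype(np.uint8)
```
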
  • After the target object is acquired, it may be drawn into the first image. For example, the acquired two-dimensional target object may be superimposed directly onto the first image, or a three-dimensional model may be created for the target object through three-dimensional reconstruction and the 3D target object drawn into the first image. The specific drawing method can be chosen by a person skilled in the art according to actual needs, for example OpenGL rendering; the embodiment of the present invention does not limit this.
  • In a live broadcast scenario, the image of the anchor is extracted from the anchor's picture and drawn into the scene image on the fan's side, so that the anchor is broadcast in a scene set by the fan; alternatively, based on information extracted from the anchor's picture (such as feature information), a three-dimensional model of the anchor is generated and drawn into the scene image on the fan's side, as if the anchor were projected into a scene chosen by the fan, such as the fan's living room, thereby realizing a personalized live broadcast for the fan. A minimal compositing sketch for the 2D case follows.
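  • As a minimal sketch of the 2D drawing path (the 3D/OpenGL path is not shown), the extracted target object can be alpha-blended onto the fan-side scene image; the function and parameter names are illustrative only, and the target is assumed to fit inside the scene at the chosen position.

```python
import numpy as np

def draw_target_into_scene(scene_bgr: np.ndarray,
                           target_bgr: np.ndarray,
                           matte: np.ndarray,
                           top_left: tuple) -> np.ndarray:
    """Alpha-blend the target object (with its matte) into the first image."""
    out = scene_bgr.copy()
    h, w = target_bgr.shape[:2]
    x, y = top_left
    roi = out[y:y + h, x:x + w]                              # region to draw into
    alpha = (matte.astype(np.float32) / 255.0)[..., None]    # soft mask in [0, 1]
    roi[:] = (alpha * target_bgr + (1.0 - alpha) * roi).astype(np.uint8)
    return out
```
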
  • Step S106 Display the first image after drawing in the first terminal.
  • By default, the drawn first image is displayed only on the first terminal and is not shown on other terminals, including the second terminal, so that the first terminal user has a private live broadcast scene. Alternatively, the first terminal may share the drawn first image with the second terminal in real time, while terminals other than the first and second terminals still cannot see it; this likewise gives the first terminal user a private live broadcast scene.
  • For example, the image of the anchor is projected into the fan's living room, and the anchor broadcasting in that living room can be seen only through the fan's first terminal, or only by the fan and the anchor. The user experience is thereby further improved.
  • In this embodiment, when the AR service is implemented, for example in a live broadcast scenario, the image collected in real time by the first terminal (the first image) and the image collected in real time by the second terminal (the second image) are obtained according to the setting identifier; the target object is then extracted from the image collected by the second terminal and drawn into the image of the first terminal. Based on this, the first terminal can set up a private scene for the AR service and project the target object into it; the private scene may be visible only to the first terminal, or only to the first and second terminals. The user of the first terminal can therefore set up an appropriate AR usage scenario according to his or her own preferences, which effectively solves the problem that the existing live broadcast picture is monotonous and cannot meet users' personalized demands. In addition, because the setting identifier is related to the AR service processing of both terminals, the images collected by both terminals can be acquired when the identifier is triggered, so terminal information can be obtained and determined quickly, which speeds up the AR service and improves the user experience.
  • The AR service processing method in this embodiment may be performed by any suitable terminal device having data processing capability, including but not limited to a mobile terminal such as a tablet computer or a mobile phone, a desktop computer, or the like.
  • Referring to FIG. 2, a flow chart of the steps of an AR service processing method according to Embodiment 2 of the present invention is shown.
  • Step S202 Identify the setting identifier.
  • the setting identifier may be generated according to the user identifier of the user of the first terminal and the user identifier of the user of the second terminal.
  • the setting identifier may be a two-dimensional code. However, it is not limited to the form of the two-dimensional code, and other suitable forms are also applicable.
  • Through the setting identifier, the first terminal and the second terminal can perform the corresponding data access and processing to implement the AR service according to the data carried by the identifier.
  • In a live broadcast scenario, each fan can have a unique QR code (the setting identifier), generated from the IDs of the fan and the anchor; this exclusive QR code serves as the entrance to the live room set up by the fan. Each fan who logs in to the live platform has his or her own platform ID, and each anchor likewise has a platform ID. The fan's unique QR code for entering the live room is generated from the platform IDs of the fan and the anchor by a QR-ID generation algorithm, and the exclusive two-dimensional code is saved to the play list. Fans enter the live room they set up through this exclusive QR code entrance; when a fan enters the configured live room again, rather than for the first time, the fan can enter directly through the previously generated QR code, and all the information in that live room is the fan's saved private information. A sketch of generating such a code is given below.
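  • The QR-ID generation algorithm itself is not disclosed in the patent. The following is a minimal sketch of the idea, assuming a hash of the two platform IDs stands in for that algorithm and using the third-party `qrcode` package; all names here are illustrative.

```python
import hashlib
import qrcode  # third-party package, assumed available

def room_token(fan_id: str, anchor_id: str) -> str:
    # Deterministic token derived from both platform IDs; the real
    # QR-ID generation algorithm is not described in the patent.
    return hashlib.sha256(f"{fan_id}:{anchor_id}".encode("utf-8")).hexdigest()

def save_room_qr(fan_id: str, anchor_id: str, path: str) -> None:
    img = qrcode.make(room_token(fan_id, anchor_id))  # encode the token as a QR image
    img.save(path)  # e.g. stored alongside the fan's play list / room record

# save_room_qr("fan_123", "anchor_456", "room_entry.png")  # hypothetical IDs
```
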
  • Step S204 Receive an AR scene setting instruction, collect a scene image according to the setting instruction, and/or set a virtual object; store the scene image and/or the set virtual object corresponding to the setting identifier.
  • When the user of the first terminal uses the AR service for the first time, he or she can set a dedicated scene and/or virtual objects to generate an AR usage scenario that matches his or her own preferences. The usage scenario can then be stored in correspondence with the setting identifier so that it is used directly the next time the AR service is run. For example, when a fan enters the configured live room for the first time through the dedicated QR code, the live broadcast environment for the anchor must first be set up: the anchor can be projected into a corner of the fan's own home, and the live broadcast scene is given a name, i.e., the name of the configured live room. After the setup is complete, the anchor's image appears in the fan's home to carry out the live broadcast.
  • the fan side terminal will save the live broadcast scene settings.
  • For example, fans can arrange a cozy little room for the anchor in the live scene they set up: virtual dolls, virtual puppies, virtual Christmas hats, virtual cookware, and so on are added through AR technology. This enables personalized scene interactions between the fan and the anchor, such as the anchor walking the fan's virtual dog or cooking for the fan.
  • the specific settings (such as the ornament image and location) of the live room are saved as templates.
  • The arrangement of the configured virtual objects (such as decorations) and the AR usage scenario (such as the fan's living room) are set separately. If the fan changes the usage scene, the previous virtual object settings, such as the decoration layout, can either be restored or discarded. In this way, each fan has his or her own live broadcast scene and can use AR technology to dynamically decorate it and save it together with the anchor's image.
  • This step is optional and is executed when the AR usage scenario needs to be set. After the scenario has been set, it will continue to be used as long as no reset instruction is received; a sketch of storing the settings keyed by the identifier follows.
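  • As one way to picture the "store in correspondence with the setting identifier" step, the scene image and the virtual-object layout could be persisted as a small template keyed by the identifier token. The file layout and field names below are assumptions for illustration, not taken from the patent.

```python
import json
from pathlib import Path

def save_room_template(token: str, scene_image_path: str, virtual_objects: list,
                       store: str = "room_templates.json") -> None:
    """Persist the fan's scene settings keyed by the setting identifier."""
    path = Path(store)
    templates = json.loads(path.read_text()) if path.exists() else {}
    templates[token] = {"scene_image": scene_image_path,
                        "virtual_objects": virtual_objects}
    path.write_text(json.dumps(templates, indent=2))

def load_room_template(token: str, store: str = "room_templates.json"):
    path = Path(store)
    if not path.exists():
        return None                       # the fan has not set a scene yet
    return json.loads(path.read_text()).get(token)

# save_room_template(token, "living_room.png",
#                    [{"type": "puppy", "pos": [320, 420]}])  # illustrative fields
```
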
  • Step S206: Trigger the first terminal to collect a scene image according to the setting identifier; determine whether the collected scene image is consistent with the stored scene image; if yes, execute step S208; if not, execute step S210.
  • When the setting identifier is recognized and the usage scenario is entered, an image of the current scene is collected and compared with the previously stored scene image to determine whether the scenes are the same.
  • If the first terminal is not entering the configured live room for the first time, it actively prompts, according to the scene name, that the matching scene will be used for the live broadcast. If the user wants to change the live scene, the match can be discarded and the usage scene reset, after which the normal live broadcast can begin.
  • Whether the collected scene image is consistent with the stored scene image can be determined by any suitable image similarity matching algorithm, such as SIFT (scale-invariant feature transform) feature matching or the optical flow method; a SIFT-based sketch follows.
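  • A minimal sketch of the SIFT-based consistency check using OpenCV is given below; the ratio and match-count thresholds are arbitrary illustrative values.

```python
import cv2

def scenes_match(live_gray, stored_gray, ratio=0.75, min_good=30) -> bool:
    """Decide whether the freshly collected scene matches the stored scene image."""
    sift = cv2.SIFT_create()
    _, des_live = sift.detectAndCompute(live_gray, None)
    _, des_stored = sift.detectAndCompute(stored_gray, None)
    if des_live is None or des_stored is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_live, des_stored, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
    return len(good) >= min_good
```
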
  • Step S208: If the collected scene image is consistent with the stored scene image, instruct the first terminal to continue collecting the scene image, use the collected scene image as the first image, and acquire the second image collected by the second terminal. Then perform step S212.
  • the first image collected by the first terminal is a real-time image of the scene where the first terminal is located
  • the second image collected by the second terminal is a real-time image of the scene where the second terminal is located.
  • Optionally, the first video stream collected in real time by the first terminal and the second video stream collected in real time by the second terminal are obtained according to the setting identifier; the first image is then taken from the first video stream and the second image from the second video stream.
  • An image acquisition device such as a camera continuously captures images and transmits them to the target end in the form of a video stream. Because a video stream consists of consecutive images, every frame of the stream can be acquired and processed, or frames can be sampled and processed at intervals, as shown in the sketch below.
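  • A hedged sketch of pulling frames out of such a stream with OpenCV follows; the stream URL is a placeholder, and the actual transport used by the live platform is not specified in the patent.

```python
import cv2

def frames(stream_url: str):
    """Yield consecutive frames from a live video stream (first or second terminal)."""
    cap = cv2.VideoCapture(stream_url)   # e.g. an RTMP/HTTP URL (placeholder)
    try:
        while True:
            ok, frame_bgr = cap.read()
            if not ok:
                break
            yield frame_bgr              # process every frame, or skip some
    finally:
        cap.release()

# for first_image in frames("rtmp://example/fan_stream"):  # hypothetical URL
#     ...
```
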
  • Step S210 Send a prompt message to the first terminal to prompt whether to perform AR scene resetting; if yes, go to step S204; if no, go to step S212.
  • The first terminal actively matches the usage scenario according to the scene name. If the match is unsuccessful, a prompt message asks the fan whether to replace the live broadcast scene: if the fan wants to replace it, the live scene is reset and the normal live broadcast can then begin; if the fan does not change the scene, an error prompt may be shown.
  • Step S212 Acquire a target object from the second image, and draw the target object into the first image.
  • the two-dimensional image of the target object may be drawn into the first image, or the three-dimensional image of the target object may be drawn into the first image after the target object is three-dimensionally reconstructed.
  • If the first terminal has configured the AR usage scenario and virtual objects have been set in it, the information of the configured virtual objects first needs to be acquired; the virtual objects are then obtained according to that information and drawn onto the first image. After that, the target object can be acquired from the second image and drawn into the first image in which the virtual objects have been drawn.
  • In one implementation, the first terminal performs target object detection on the second image and acquires the target object according to the detection result.
  • the method for detecting the target object can be implemented by any suitable method according to the actual situation in the art, which is not limited by the embodiment of the present invention.
  • Alternatively, the first terminal may receive data of the target object from a server and obtain the target object from that data; in this manner, detection and extraction of the target object are performed by the server, and the first terminal only receives the extracted data.
  • the target object is drawn into the first image.
  • If the fan using the first terminal has additionally placed virtual objects in the scene through scene setting, both the virtual objects and the target object need to be drawn into the first image.
  • In this case, the display position of the target object may be determined according to the position information of the virtual object in the first image; the target object is obtained from the second image and drawn at that display position in the first image.
  • To this end, the position of the virtual object in the first image is detected to obtain its location information, and the display position of the target object is then determined from it. For example, if the virtual object is a virtual puppy, once its position in the first image is known, the image of the target object (such as the anchor) can be drawn at any appropriate position around the virtual puppy, and so on; one simple placement rule is sketched below.
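  • One simple way to turn the virtual object's detected position into a display position for the target object is sketched below; the layout rule (place the target beside the virtual object and clamp it inside the frame) is only an assumption.

```python
def place_target_next_to(virtual_box, target_size, scene_size, gap=20):
    """Pick a display position (top-left x, y) for the target object.

    virtual_box: (x, y, w, h) of the detected virtual object in the first image
    target_size: (w, h) of the target object to draw
    scene_size:  (w, h) of the first image
    """
    vx, vy, vw, vh = virtual_box
    tw, th = target_size
    sw, sh = scene_size
    x = vx + vw + gap                       # try the right-hand side first
    if x + tw > sw:                         # fall back to the left side
        x = max(0, vx - gap - tw)
    y = max(0, min(vy + vh - th, sh - th))  # align bottoms, clamp inside the image
    return x, y
```
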
  • In addition, a triggering operation on a virtual object in the first image may be received, and an operation instruction is sent to the second terminal according to the trigger, instructing the user corresponding to the target object to perform the corresponding operation. For example, taking a virtual puppy as the virtual object and assuming that, in the current image frame, the puppy sits on the sofa of the fan's living room: when the fan taps the virtual puppy, the first terminal sends an instruction to the second terminal indicating that the puppy has been triggered. The instruction may carry operation information telling the anchor to perform a corresponding action, such as sitting down and stroking the puppy; alternatively, the instruction may only notify the anchor that the virtual puppy has been triggered, in which case the second terminal selects an operation for the virtual puppy from the operations stored on a server or locally on the second terminal. A possible message format is sketched below.
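  • The patent does not define a wire format for this instruction; the small JSON message below is only an assumed illustration, and the field names are hypothetical.

```python
import json
from typing import Optional

def build_trigger_instruction(object_id: str,
                              suggested_action: Optional[str] = None) -> str:
    """Message sent from the first terminal to the second terminal on a tap."""
    msg = {"type": "virtual_object_triggered", "object_id": object_id}
    if suggested_action is not None:
        msg["action"] = suggested_action      # e.g. "sit_and_stroke_puppy"
    return json.dumps(msg)

# payload = build_trigger_instruction("puppy_1", "sit_and_stroke_puppy")
# The transport (e.g. an existing messaging channel of the live platform) is not
# specified in the patent.
```
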
  • After receiving the image frame sent by the second terminal, the fan's first terminal extracts the anchor's image, detects it to determine its display position, and renders it at that position in the current image, producing a picture of the anchor sitting on the sofa in the fan's living room and stroking the virtual puppy.
  • the fan may also invite the anchor to perform corresponding interactive operations by voice or text.
  • the target object may also be acquired from the second image, and the target object is subjected to motion and/or expression detection; the virtual object is updated according to the detection result; and the target object and the updated virtual object are drawn into the first image.
  • the update to the virtual object may include at least one of the following: an image of the virtual object, a placement of the virtual object, and a presentation level of the virtual object.
  • Continuing the virtual puppy example: the anchor performs the sitting and stroking action, the anchor's second terminal captures images of this action and transmits them to the fan's first terminal, and the first terminal extracts the anchor's image from the frames and detects the anchor's motion and expression. When the detected motion is a stroking motion and the expression is a smile, the virtual puppy's image is updated accordingly, for example from lying down to crouching and looking up at the anchor. Likewise, the virtual puppy may be updated from sitting on the sofa to jumping down in front of the sofa, that is, its image is updated from a crouching pose to a standing pose and its display position is updated from the original position on the sofa to a position in front of the sofa. The display level of the virtual puppy may also be updated from the same layer as the anchor's image to a layer below it, so that the part of the puppy occluded by the anchor's image is not displayed while the unoccluded part remains visible, which is closer to a real-life scene. A minimal update rule is sketched below.
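  • The sketch below shows the update step in miniature: the detected action and expression drive the virtual object's image, display position, and display level. The state names and the particular mapping are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VirtualObject:
    sprite: str                  # which asset to render, e.g. "lying", "standing"
    position: Tuple[int, int]    # placement in the first image
    layer: int                   # rendering order relative to the target object

def update_puppy(puppy: VirtualObject, action: str, expression: str) -> VirtualObject:
    # Example rule: a detected stroking motion with a smile makes the puppy
    # stand up in front of the sofa and render below the anchor's image.
    if action == "stroke" and expression == "smile":
        puppy.sprite = "standing"
        puppy.position = (puppy.position[0], puppy.position[1] + 40)  # hop off the sofa
        puppy.layer = -1   # below the anchor, so occluded parts are hidden
    return puppy
```
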
  • In this way, an interactive AR picture of the anchor helping the fan walk the dog is generated.
  • the first terminal may reconstruct the three-dimensional image of the anchor by using a dynamic character three-dimensional reconstruction technology, and then the three-dimensional image is drawn into the first image.
  • Three-dimensional reconstruction is the mathematical process and computer technique of recovering the three-dimensional information (shape, etc.) of an object from two-dimensional images; it includes image acquisition, preprocessing, point cloud stitching, and feature analysis. Through 3D reconstruction, the fan's first terminal can show essentially the same live picture of the anchor as in the anchor's own broadcast scene.
  • The anchor's second terminal collects real-time video frames of the anchor; video compression is applied to the captured frames to extract the key information so that it can be transmitted efficiently over the network, keeping the amount of transmitted data small and the transmission fast. When the transmission frame rate reaches 60 fps or more, the network transmission delay is imperceptible to the user; no stuttering or lag is felt, and the anchor's normal live state can be seen in real time.
  • the interaction between the anchor and the fans can also be initiated by the anchor.
  • For example, the anchor prompts the fan to perform the dog-walking action, and the fan adds a 3D rendered animation of the virtual puppy at the specified location in the AR scene interface, thereby completing the interactive operation with the anchor.
  • The real-time AR interaction scene may also be captured as a screenshot by the first terminal for sharing.
  • In addition, the real-time image of the first terminal may be displayed in the live interface of the anchor's second terminal, for example in a small window or in full screen, which is not limited in the embodiment of the present invention.
  • Through the above process, the fan's live broadcast entrance is a dedicated two-dimensional code that other people cannot use, so the fan's private information is protected; the live room set up by the fan is exclusive (other fans only share the anchor's ordinary broadcast picture); in the live room set up by the fan, the anchor's image can be placed freely through AR technology, and the room can be dynamically decorated and permanently saved (for example, the anchor is placed in the same corner of the fan's home each time); and the live room is dynamically changeable, so the scene can be reconfigured flexibly as needed and saved as a template for reuse. As a result, fans' individual needs for live broadcasting are largely satisfied and the interaction is improved.
  • In this embodiment, when the AR service is implemented, for example in a live broadcast scenario, the image collected in real time by the first terminal (the first image) and the image collected in real time by the second terminal (the second image) are obtained according to the setting identifier; the target object is then extracted from the image collected by the second terminal and drawn into the image of the first terminal. Based on this, the first terminal can set up a private scene for the AR service and project the target object into it; the private scene may be visible only to the first terminal, or only to the first and second terminals. The user of the first terminal can therefore set up an appropriate AR usage scenario according to his or her own preferences, which effectively solves the problem that the existing live broadcast picture is monotonous and cannot meet users' personalized demands. In addition, because the setting identifier is related to the AR service processing of both terminals, the images collected by both terminals can be acquired when the identifier is triggered, so terminal information can be obtained and determined quickly, which speeds up the AR service and improves the user experience.
  • The AR service processing method in this embodiment may be performed by any suitable terminal device having data processing capability, including but not limited to a mobile terminal such as a tablet computer or a mobile phone, a desktop computer, or the like.
  • Referring to FIG. 3, a structural block diagram of an AR service processing apparatus according to Embodiment 3 of the present invention is shown.
  • The AR service processing apparatus of this embodiment includes: a first acquiring module 302, configured to acquire, according to the setting identifier, a first image collected by the first terminal and a second image collected by the second terminal; a drawing module 304, configured to acquire the target object from the second image and draw the target object into the first image; and a display module 306, configured to display the drawn first image in the first terminal.
  • the AR service processing apparatus in this embodiment is used to implement the corresponding AR service processing method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, and details are not described herein again.
  • Referring to FIG. 4, a structural block diagram of an AR service processing apparatus according to Embodiment 4 of the present invention is shown.
  • The AR service processing apparatus of this embodiment includes: a first acquiring module 402, configured to acquire, according to the setting identifier, a first image collected by the first terminal and a second image collected by the second terminal; a drawing module 404, configured to acquire the target object from the second image and draw the target object into the first image; and a display module 406, configured to display the drawn first image in the first terminal.
  • the setting identifier is generated according to a user identifier of a user of the first terminal and a user identifier of a user of the second terminal.
  • the setting identifier is a two-dimensional code.
  • Optionally, the AR service processing apparatus of this embodiment further includes a second acquiring module 408, configured to acquire information of the configured virtual object before the drawing module 404 acquires the target object from the second image, and to acquire the virtual object according to that information and draw it onto the first image; the drawing module 404 is then configured to acquire the target object from the second image and draw it into the first image in which the virtual object has been drawn.
  • the drawing module 404 is configured to determine a display position of the target object according to the position information of the virtual object in the first image; acquire the target object from the second image, and draw the target object to the display position in the first image .
  • Optionally, the AR service processing apparatus of this embodiment further includes an instruction module 410, configured to receive a trigger operation on the virtual object, send an operation instruction to the second terminal according to the trigger operation, and instruct the user corresponding to the target object to perform the operation corresponding to the instruction.
  • Optionally, the drawing module 404 includes: a detecting module 4042, configured to acquire the target object from the second image and perform action and/or expression detection on it; an updating module 4044, configured to update the virtual object according to the detection result; and a rendering module 4046, configured to draw the target object and the updated virtual object into the first image.
  • the updating module 4044 is configured to perform at least one of the following updates on the virtual object according to the detection result: an image of the virtual object, a display position of the virtual object, and a display level of the virtual object.
  • Optionally, the AR service processing apparatus of this embodiment further includes a setting module 412, configured to, before the first acquiring module 402 acquires the first image collected by the first terminal and the second image collected by the second terminal according to the setting identifier, receive an AR scene setting instruction, collect a scene image and/or set a virtual object according to the instruction, and store the scene image and/or the configured virtual object in correspondence with the setting identifier.
  • Optionally, the first acquiring module 402 is configured to trigger the first terminal to collect a scene image according to the setting identifier and determine whether the collected scene image is consistent with the stored scene image; if they are consistent, the first terminal is instructed to continue collecting the scene image, the collected scene image is used as the first image, and the second image collected by the second terminal is acquired.
  • the drawing module 404 is configured to perform target object detection on the second image, acquire the target object according to the detection result, or acquire the target object according to the received data of the target object; and draw the target object into the first image.
  • the first obtaining module 402 is configured to acquire, according to the setting identifier, the first video stream collected by the first terminal in real time and the second video stream collected by the second terminal in real time; and acquiring the first image from the first video stream, The second image is acquired in the second video stream.
  • the AR service processing apparatus in this embodiment is used to implement the corresponding AR service processing method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, and details are not described herein again.
  • Referring to FIG. 5, a schematic structural diagram of a terminal according to Embodiment 5 of the present invention is shown. The embodiment of the present invention does not limit the specific implementation of the terminal.
  • the terminal may include a processor 502, a communications interface 504, a memory 506, and a communication bus 508.
  • Processor 502, communication interface 504, and memory 506 complete communication with one another via communication bus 508.
  • the communication interface 504 is configured to communicate with other terminals or servers.
  • the processor 502 is configured to execute the program 510, and specifically, the related steps in the foregoing AR service processing method embodiment.
  • program 510 can include program code, the program code including computer operating instructions.
  • the processor 502 may be a central processing unit CPU, or an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
  • the one or more processors included in the terminal may be the same type of processor, such as one or more CPUs; or may be different types of processors, such as one or more CPUs and one or more ASICs.
  • the memory 506 is configured to store the program 510.
  • Memory 506 may include high speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
  • The program 510 may be specifically configured to cause the processor 502 to: obtain the first image collected by the first terminal and the second image collected by the second terminal according to the setting identifier; acquire the target object from the second image and draw the target object into the first image; and display the drawn first image in the first terminal.
  • the setting identifier is generated according to a user identifier of a user of the first terminal and a user identifier of a user of the second terminal.
  • the setting identifier is a two-dimensional code.
  • In an optional implementation, the program 510 is further configured to cause the processor 502, before acquiring the target object from the second image, to acquire information of the configured virtual object, acquire the virtual object according to that information and draw it onto the first image, and then acquire the target object from the second image and draw it into the first image in which the virtual object has been drawn.
  • In an optional implementation, when acquiring the target object from the second image and drawing it into the first image in which the virtual object is drawn, the program 510 is further configured to cause the processor 502 to determine the display position of the target object according to the position information of the virtual object in the first image, acquire the target object from the second image, and draw the target object at that display position in the first image.
  • In an optional implementation, the program 510 is further configured to cause the processor 502 to receive a trigger operation on the virtual object and send an operation instruction to the second terminal according to the trigger operation, instructing the user corresponding to the target object to perform the corresponding operation.
  • In an optional implementation, when acquiring the target object from the second image and drawing it into the first image in which the virtual object is drawn, the program 510 is further configured to cause the processor 502 to acquire the target object from the second image and perform motion and/or expression detection on it, update the virtual object according to the detection result, and draw the target object and the updated virtual object into the first image.
  • In an optional implementation, when updating the virtual object according to the detection result, the program 510 is further configured to cause the processor 502 to update at least one of the following: the image of the virtual object, the display position of the virtual object, and the display level of the virtual object.
  • In an optional implementation, before acquiring the first image collected by the first terminal and the second image collected by the second terminal according to the setting identifier, the program 510 is further configured to cause the processor 502 to receive an AR scene setting instruction, collect a scene image and/or set a virtual object according to the instruction, and store the scene image and/or the configured virtual object in correspondence with the setting identifier.
  • In an optional implementation, when acquiring the first image collected by the first terminal and the second image collected by the second terminal according to the setting identifier, the program 510 is further configured to cause the processor 502 to trigger the first terminal to collect a scene image according to the setting identifier and determine whether the collected scene image is consistent with the stored scene image; if they are consistent, the first terminal is instructed to continue collecting the scene image, the collected scene image is used as the first image, and the second image collected by the second terminal is acquired.
  • In an optional implementation, when acquiring the target object from the second image, the program 510 is further configured to cause the processor 502 to perform target object detection on the second image and acquire the target object according to the detection result, or to acquire the target object according to received data of the target object.
  • In an optional implementation, when acquiring the first image collected by the first terminal and the second image collected by the second terminal according to the setting identifier, the program 510 is further configured to cause the processor 502 to acquire, according to the setting identifier, the first video stream collected in real time by the first terminal and the second video stream collected in real time by the second terminal, obtain the first image from the first video stream, and obtain the second image from the second video stream.
  • the terminal in this embodiment can be used as the first terminal.
  • When the AR service is implemented, for example in a live broadcast scenario, the image collected in real time by the first terminal (such as the fan's terminal), i.e. the first image, and the image collected in real time by the second terminal (such as the anchor's terminal), i.e. the second image, are obtained according to the setting identifier; the target object (such as the anchor's figure) is then extracted from the image collected by the second terminal and drawn into the image of the first terminal. Based on this, the first terminal can set up a private scene for the AR service (such as the fan's living room) and project the target object into it, and the private scene may be visible only to the first terminal. The user of the first terminal can therefore set up an appropriate AR usage scenario according to his or her own preferences, which effectively solves the problem that the existing live broadcast picture is monotonous and cannot meet users' personalized demands. In addition, because the setting identifier is related to the AR service processing of both terminals, the images collected by both terminals can be acquired when the identifier is triggered, so terminal information can be obtained and determined quickly, which speeds up the AR service and improves the user experience.
  • FIG. 6 is a structural diagram of an augmented reality AR service processing apparatus according to an exemplary embodiment of the present invention.
  • As shown in FIG. 6, the augmented reality AR service processing device includes a memory, a processor, and a computer program, wherein the computer program is stored in the memory and configured to be executed by the processor to implement any of the augmented reality AR service processing methods described above.
  • the embodiment further provides a computer readable storage medium on which a computer program is stored.
  • the computer program is executed by a processor to implement any of the augmented reality AR service processing methods described above.
  • The above method according to the embodiments of the present invention may be implemented in hardware or firmware, or as software or computer code that is stored in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code that is originally stored in a remote recording medium or non-transitory machine-readable medium, downloaded over a network, and stored in a local recording medium, so that the methods described herein can be processed by software stored on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA.
  • As will be understood, a computer, processor, microprocessor controller, or programmable hardware includes a storage component (e.g., RAM, ROM, flash memory) that can store or receive software or computer code; the AR service processing method described herein is implemented when that software or computer code is accessed and executed by the computer, processor, or hardware. Moreover, when a general-purpose computer accesses code for implementing the AR service processing method shown herein, execution of the code converts the general-purpose computer into a special-purpose computer for executing that method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application provide an AR service processing method, an apparatus, a device and a computer readable storage medium. The AR service processing method comprises: acquiring, according to a provided identifier, a first image acquired by a first terminal and a second image acquired by a second terminal; acquiring a target from the second image, and drawing the target into the first image; and displaying, in the first terminal, the drawn first image. In the embodiments of the present invention, a user may set up an appropriate AR service usage scenario according to his/her own preferences, effectively solving the problem of the existing live broadcast picture being monotonous and unable to satisfy users' personalized requirements for live broadcast.

Description

AR service processing method, apparatus, device and computer readable storage medium

Technical Field

The embodiments of the present invention relate to the field of computer technologies, and in particular, to an AR service processing method, apparatus, device, and computer readable storage medium.

Background

AR (Augmented Reality) is a technology that "seamlessly" integrates real-world information and virtual-world information: information that is otherwise hard to experience within a certain range of time and space in the real world (e.g., visual information, sound information) is simulated and then superimposed onto real information, so that the real environment and virtual objects are overlaid onto the same picture or space in real time.

With the development of AR technology, AR has been applied to many aspects of live broadcasting, for example, sending gifts to the anchor through AR, or the anchor sending red packets to fans. However, existing applications of AR in live broadcasting only enhance the interaction between the anchor and the fans and make the broadcast more entertaining; the interaction pictures between the anchor and the fans are relatively uniform, every broadcast presents the same anchor interface, and personalized needs for live broadcasting cannot be met.
Summary of the Invention

In view of this, the embodiments of the present invention provide an AR service processing method, apparatus, device, and computer readable storage medium, so as to solve the problem that the existing live broadcast picture is monotonous and cannot meet users' personalized demands for live broadcasting.

According to a first aspect of the embodiments of the present invention, an AR service processing method is provided, including: acquiring a first image collected by a first terminal and a second image collected by a second terminal according to a setting identifier; acquiring a target object from the second image and drawing the target object into the first image; and displaying the drawn first image in the first terminal.

According to a second aspect of the embodiments of the present invention, an AR service processing apparatus is provided, including: a first acquiring module, configured to acquire, according to a setting identifier, a first image collected by a first terminal and a second image collected by a second terminal; a drawing module, configured to acquire a target object from the second image and draw the target object into the first image; and a display module, configured to display the drawn first image in the first terminal.

According to a third aspect of the embodiments of the present invention, an augmented reality (AR) service processing device is provided, including:

a memory;

a processor; and

a computer program;

wherein the computer program is stored in the memory and configured to be executed by the processor to implement the AR service processing method described in the first aspect.

According to a fourth aspect of the embodiments of the present invention, a computer readable storage medium is provided, on which a computer program is stored, the computer program being executed by a processor to implement the AR service processing method described in the first aspect.

According to the solution provided by the embodiments of the present invention, when the AR service is implemented, for example in a live broadcast scenario, the image collected in real time by the first terminal (such as the fan's terminal), i.e. the first image, and the image collected in real time by the second terminal (such as the anchor's terminal), i.e. the second image, are obtained according to the setting identifier; the target object (such as the anchor's figure) is then extracted from the image collected by the second terminal and drawn into the image of the first terminal. Based on this solution, the first terminal can set up a private scene for the AR service (such as the fan's living room) and project the target object into it, and the private scene may be visible only to the first terminal. The user of the first terminal can therefore set up an appropriate AR usage scenario according to his or her own preferences, which effectively solves the problem that the existing live broadcast picture is monotonous and cannot meet users' personalized demands for live broadcasting.

In addition, the setting identifier is related to the AR service processing of both the first terminal and the second terminal; therefore, when the setting identifier is triggered, the images collected by both terminals can be acquired. By means of the setting identifier, terminal information can be obtained and determined quickly, which speeds up the AR service and improves the user experience.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings based on these drawings.

FIG. 1 is a flow chart of the steps of an AR service processing method according to Embodiment 1 of the present invention;

FIG. 2 is a flow chart of the steps of an AR service processing method according to Embodiment 2 of the present invention;

FIG. 3 is a structural block diagram of an AR service processing apparatus according to Embodiment 3 of the present invention;

FIG. 4 is a structural block diagram of an AR service processing apparatus according to Embodiment 4 of the present invention;

FIG. 5 is a schematic structural diagram of a terminal according to Embodiment 5 of the present invention;

FIG. 6 is a structural diagram of an augmented reality AR service processing device according to an exemplary embodiment of the present invention.
具体实施方式Detailed ways
In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention shall fall within the scope of protection of the embodiments of the present invention.
The specific implementation of the embodiments of the present invention is further described below with reference to the accompanying drawings.
Embodiment 1
Referring to FIG. 1, a flowchart of the steps of an AR service processing method according to Embodiment 1 of the present invention is shown.
The AR service processing method of this embodiment includes the following steps:
Step S102: acquire, according to a set identifier, a first image collected by a first terminal and a second image collected by a second terminal.
The set identifier is related to both the AR service processing of the first terminal and the AR service processing of the second terminal; when the set identifier is triggered, the image collected by the first terminal and the image collected by the second terminal are both acquired. The specific form of the set identifier can be chosen by those skilled in the art according to actual needs, and the embodiments of the present invention impose no limitation on it; for example, the set identifier may be a two-dimensional code (QR code).
The first image collected by the first terminal and the second image collected by the second terminal are both real-time images. The first terminal and the second terminal may both be mobile terminals, may both be non-mobile terminals, or one may be a mobile terminal and the other a non-mobile terminal.
In a live-broadcast scenario, the first terminal may be the fan's terminal and the second terminal may be the anchor's terminal. The first terminal collects the scene image on the fan's side in real time, and the second terminal collects the scene image on the anchor's side in real time.
Step S104: acquire a target object from the second image, and draw the target object into the first image.
In the embodiments of the present invention, the target object may be the image object corresponding to a subject that expresses actions and emotions, for example, the figure of the anchor as it appears in the live video.
The specific manner of acquiring the target object from the second image can be implemented by those skilled in the art in any suitable way according to the actual situation, for example by matting (cutting the subject out of the frame) or by feature extraction.
After the target object is acquired, it can be drawn into the first image. For example, the acquired two-dimensional target object can be directly superimposed onto the first image; alternatively, a three-dimensional model can be created for the target object by three-dimensional reconstruction, and the three-dimensional target object can then be drawn into the first image. The specific drawing method can be implemented by those skilled in the art using any suitable technique according to actual needs, such as OpenGL, and the embodiments of the present invention impose no limitation on it.
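By way of illustration only, the following Python sketch shows one possible way to superimpose a two-dimensional target object, carried as an RGBA patch with a soft matte, onto the first image using NumPy alpha blending; the function and parameter names are assumptions and not part of this embodiment, and an OpenGL pipeline could equally be used as noted above.

```python
# Minimal sketch: alpha-composite an extracted target object onto the first image.
# Assumes the target object comes with an alpha channel (e.g. from matting).
import numpy as np

def draw_target_into_frame(first_image, target_rgba, top_left=(0, 0)):
    """Overlay an RGBA target patch onto a BGR/RGB frame at the given position."""
    x, y = top_left
    h, w = target_rgba.shape[:2]
    roi = first_image[y:y + h, x:x + w].astype(np.float32)

    rgb = target_rgba[:, :, :3].astype(np.float32)
    alpha = target_rgba[:, :, 3:4].astype(np.float32) / 255.0   # soft matte in [0, 1]

    blended = alpha * rgb + (1.0 - alpha) * roi                  # per-pixel blend
    out = first_image.copy()
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out
```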
For example, in a live broadcast, the anchor's figure is extracted from the anchor's picture and drawn into the scene image on the fan's side, so that the anchor gives a personalized live broadcast in the scene set by the fan; alternatively, a three-dimensional model of the anchor is generated from information (such as feature information) extracted from the anchor's figure in the anchor's picture, and the three-dimensional model is then drawn into the scene image on the fan's side, as if the anchor were projected into a scene set at the fan's home, such as the living room, thereby realizing a personalized live broadcast for the fan.
Step S106: display the drawn first image in the first terminal.
It should be noted that, unless configured otherwise, by default the drawn first image is displayed only in the first terminal, and no other terminal, including the second terminal, displays it, so as to provide a private live-broadcast scene for the user of the first terminal. However, this is not a limitation: the first terminal may also share the first image with the second terminal in real time, while terminals other than the first terminal and the second terminal still do not display it, which likewise provides a private live-broadcast scene for the user of the first terminal.
For example, in a live-broadcast scenario, the anchor's figure is projected into the fan's living room, and the anchor can be seen broadcasting in the fan's living room only through the fan's first terminal, or only by the fan and the anchor. This further improves the user experience.
By means of this embodiment, when the AR service is implemented, for example in a live-broadcast scenario, the image collected in real time by the first terminal (i.e. the first image) and the image collected in real time by the second terminal (i.e. the second image) are acquired according to the set identifier; the target object is then extracted from the image collected by the second terminal and drawn into the image of the first terminal. Based on this solution, the first terminal can set up a private scene for the AR service and project the target object into that scene, and the private scene may be visible only to the first terminal, or only to the first terminal and the second terminal. The user of the first terminal can therefore set an AR service usage scenario according to his or her own preferences, which effectively solves the problem that existing live-broadcast pictures are uniform and cannot meet users' personalized demands for live broadcasting.
In addition, the set identifier is related to both the AR service processing of the first terminal and the AR service processing of the second terminal; therefore, when the set identifier is triggered, the image collected by the first terminal and the image collected by the second terminal can both be acquired. By means of the set identifier, terminal information can be quickly obtained and determined, which speeds up AR service implementation and improves the user experience.
The AR service processing method of this embodiment can be executed by any suitable terminal device with data processing capability, including but not limited to mobile terminals such as tablet computers and mobile phones, as well as desktop computers.
Embodiment 2
Referring to FIG. 2, a flowchart of the steps of an AR service processing method according to Embodiment 2 of the present invention is shown.
The AR service processing method of this embodiment includes the following steps:
Step S202: recognize the set identifier.
The set identifier may be generated from the user identifier of the user of the first terminal and the user identifier of the user of the second terminal. Optionally, the set identifier may be a two-dimensional code, but it is not limited to the form of a two-dimensional code, and other suitable forms are equally applicable.
By recognizing the set identifier, the corresponding data of the first terminal and the second terminal can be accessed and processed, so that the AR service is implemented based on the data of the two terminals.
For example, in a live broadcast, each fan can have an exclusive two-dimensional code (i.e. the set identifier), generated from the IDs of the fan and the anchor, which serves as the fan's entrance to the live room that the fan has set up. Each fan logs in to the live broadcast with the fan's own platform ID information, and each anchor also has platform ID information. When the fan clicks into the live room belonging to the fan for the first time, the exclusive two-dimensional code by which the fan enters that live room is generated from the platform ID information of the fan and the anchor by a two-dimensional-code generation algorithm, and the exclusive two-dimensional code is saved to a playlist. The fan enters the live room set by the fan through the exclusive two-dimensional-code entrance. When the fan enters the set live room on any subsequent occasion, the fan can enter directly through the generated exclusive two-dimensional code, and all the information in the live room is saved as the fan's private information.
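By way of illustration, one possible way to derive a fan's exclusive two-dimensional code from the two platform IDs is sketched below; the hashing scheme, the URI format and the use of the `qrcode` package are assumptions chosen for the example and are not prescribed by this embodiment.

```python
# Illustrative sketch: derive a per-fan "set identifier" from the fan and anchor
# platform IDs and render it as a QR code image that can be saved to the playlist.
import hashlib
import qrcode

def make_set_identifier(fan_id: str, anchor_id: str) -> str:
    payload = f"{fan_id}:{anchor_id}"
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def make_qr_image(fan_id: str, anchor_id: str):
    token = make_set_identifier(fan_id, anchor_id)
    return qrcode.make(f"arlive://room/{token}")   # returns a PIL image

# Example (hypothetical IDs):
# img = make_qr_image("fan_123", "anchor_456")
# img.save("room_entry.png")
```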
Step S204: receive an AR scene setting instruction, and collect a scene image and/or set a virtual object according to the setting instruction; store the scene image and/or the set virtual object in correspondence with the set identifier.
When using the AR service for the first time, the user of the first terminal can set up his or her own exclusive scene and/or virtual objects, so as to generate a usage scenario of the AR service that matches his or her preferences. Further, the usage scenario can be stored in correspondence with the set identifier, so that it can be used directly the next time the AR service is performed.
For example, in a live-broadcast scenario, when the fan enters the set live room through the exclusive two-dimensional code for the first time, the anchor's broadcasting environment must first be set. Using AR technology, the anchor can be projected into a corner of the fan's own home, and the live scene name, i.e. the name of the set live room, is specified. After the setup is completed, the anchor's figure appears at the fan's home to carry out the live broadcast. The fan's terminal saves this live-scene setting.
During a normal live broadcast, the fan can arrange a cosy little room for the anchor in the live scene he or she has set, for example adding a virtual doll, a virtual puppy, a virtual Christmas hat or virtual cookware through AR technology. Through these settings, personalized scene interaction between the fan and the anchor is achieved, for example the anchor walking the fan's dog or the anchor cooking for the fan.
Each time the fan exits the set live room, the specific settings of the live room (for example, the decoration images and their positions) are saved as a template. The arrangement of the set virtual objects (such as decorations) and the AR usage scene itself (such as the fan's living room) are configured separately: if the fan changes the usage scene, the fan can choose to restore the virtual object settings, such as the decoration arrangement, or choose to discard the previous virtual object settings.
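A minimal sketch of persisting such a room template is given below, assuming a simple JSON layout; the field names and file format are illustrative only and are not specified by this embodiment.

```python
# Illustrative sketch: save and reload a room's decoration layout as a template,
# kept separate from the captured scene image itself.
import json

def save_room_template(path, scene_name, decorations):
    """decorations: list of dicts such as {"asset": "virtual_dog", "pos": [120, 340], "layer": 1}."""
    template = {"scene_name": scene_name, "decorations": decorations}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(template, f, ensure_ascii=False, indent=2)

def load_room_template(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```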
Through the usage-scene settings, each fan has his or her own live scene and can, through AR technology, dynamically decorate the presentation of the anchor and save the result.
It should be noted that this step is optional and is executed when the usage scenario of the AR service needs to be set. After the usage scenario has been set, it will continue to be used unless a resetting instruction is received.
Step S206: trigger the first terminal to collect a scene image according to the set identifier, and determine whether the collected scene image is consistent with the stored scene image; if they are consistent, perform step S208; if not, perform step S210.
When the user of the first terminal has set a usage scenario for the AR service, each time the AR service is used, the set identifier is recognized, the usage scenario is entered, an image of the usage scene is collected, and the collected scene image is compared with the previously stored scene image to determine whether the scenes are the same.
For example, when the fan enters the set live room through the first terminal on an occasion other than the first, the first terminal actively prompts, according to the scene name, that the matching scene be used for placing the live broadcast. If the fan wishes to change the live scene, the match can be abandoned and the usage scene reset; afterwards, normal live placement can proceed.
Whether the collected scene image is consistent with the stored scene image can be determined by any suitable image similarity matching algorithm, such as SIFT (Scale-Invariant Feature Transform) or an optical-flow method.
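By way of illustration, one possible consistency check based on SIFT feature matching, one of the algorithms named above, is sketched below using OpenCV; the match count threshold and ratio value are assumptions chosen for the example.

```python
# Sketch: decide whether the freshly captured scene matches the stored one
# using SIFT keypoints and Lowe's ratio test.
import cv2

def scenes_match(captured_gray, stored_gray, min_good_matches=40):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(captured_gray, None)
    kp2, des2 = sift.detectAndCompute(stored_gray, None)
    if des1 is None or des2 is None:
        return False

    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # ratio test
    return len(good) >= min_good_matches
```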
Step S208: if the collected scene image is consistent with the stored scene image, instruct the first terminal to continue collecting scene images, take the collected scene image as the first image, and acquire the second image collected by the second terminal. Then, perform step S212.
The first image collected by the first terminal is a real-time image of the scene where the first terminal is located, and the second image collected by the second terminal is a real-time image of the scene where the second terminal is located.
In general, a first video stream collected in real time by the first terminal and a second video stream collected in real time by the second terminal can be acquired according to the set identifier; the first image is then obtained from the first video stream, and the second image from the second video stream.
For a terminal device, once an image collection apparatus such as a camera is started, images are collected continuously and transmitted to the target end, that is, as a video stream. In other words, a video stream consists of consecutive images; therefore, every frame of the video stream can be acquired and processed, or, alternatively, frames can be acquired and processed at intervals (for example every other frame).
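A minimal sketch of pulling paired frames from the two real-time streams and processing every frame, or every other frame, is given below; the stream sources and the processing callback are placeholders, not part of this embodiment.

```python
# Sketch: read paired frames from the first-terminal and second-terminal streams
# and hand each pair to a processing callback (e.g. extract + draw + display).
import cv2

def run_ar_loop(first_stream_url, second_stream_url, process_pair, frame_step=1):
    cap1 = cv2.VideoCapture(first_stream_url)   # first terminal (fan-side scene)
    cap2 = cv2.VideoCapture(second_stream_url)  # second terminal (anchor side)
    idx = 0
    while True:
        ok1, first_image = cap1.read()
        ok2, second_image = cap2.read()
        if not (ok1 and ok2):
            break
        if idx % frame_step == 0:               # frame_step=2 processes every other frame
            process_pair(first_image, second_image)
        idx += 1
    cap1.release()
    cap2.release()
```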
Step S210: send prompt information to the first terminal, asking whether the AR scene should be reset; if yes, go to step S204; if no, perform step S212.
For example, when the fan enters the set live room through the first terminal on an occasion other than the first, the first terminal actively matches the usage scene according to the scene name. If the match is unsuccessful, a prompt message asks the fan whether to change the live scene. If the fan wishes to change the live scene, the live scene is reset; afterwards, normal live placement can proceed. If the fan does not change the live scene, an error can be prompted.
Step S212: acquire the target object from the second image, and draw the target object into the first image.
After the target object is acquired, a two-dimensional image of the target object can be drawn into the first image, or the target object can be three-dimensionally reconstructed and a three-dimensional image of the target object drawn into the first image.
It should be noted that, if the first terminal has set the usage scenario of the AR service and virtual objects have been set in the usage scenario, then before the target object is acquired from the second image, information of the set virtual objects also needs to be acquired; according to the information of the virtual objects, the virtual objects are obtained and drawn onto the first image. On this basis, when the target object is acquired from the second image and drawn into the first image, the target object can be acquired from the second image and drawn into the first image on which the virtual objects have been drawn.
Optionally, when acquiring the target object from the second image, in one feasible manner the first terminal performs target object detection on the second image and acquires the target object according to the detection result; the detection method can be implemented by those skilled in the art in any suitable way according to the actual situation, and the embodiments of the present invention impose no limitation on it. In another feasible manner, the first terminal acquires data of the target object from a server and obtains the target object from the received data; that is, in this manner, the detection and extraction of the target object are performed by the server, and the first terminal obtains the target object simply by fetching the detected and extracted data from the server.
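The two manners can be sketched as follows, assuming a person-segmentation callable for the local case and a simple server payload for the remote case; both the model interface and the payload format are assumptions made only for illustration.

```python
# Sketch of the two ways of obtaining the target object named above:
# (a) local detection/segmentation on the second image, (b) pre-extracted data from a server.
import numpy as np

def extract_target_locally(second_image, segmenter):
    """segmenter: any person-segmentation callable returning an H x W float mask in [0, 1]."""
    mask = segmenter(second_image)
    alpha = (mask * 255).astype(np.uint8)
    return np.dstack([second_image, alpha])      # target object carried as RGBA

def extract_target_from_server(payload):
    """payload: e.g. {"rgb": H x W x 3 uint8 array, "mask": H x W uint8 array} sent by the server."""
    return np.dstack([payload["rgb"], payload["mask"]])
```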
After the target object is acquired, it is drawn into the first image. As described above, if the fan using the first terminal has also set virtual objects in the scene through the scene settings, both the virtual objects in the scene and the target object need to be drawn into the first image.
In one feasible manner of the actual drawing, the display position of the target object can be determined according to the position information of the virtual object in the first image; the target object is then acquired from the second image and drawn at that display position in the first image. In this case, position detection needs to be performed on the virtual object in the first image to obtain its position information, and the display position of the target object is then determined from that position information. For example, if the virtual object in the first image is a virtual puppy, then after the position of the virtual puppy in the first image has been determined, it can be decided to draw the target object, such as the anchor's figure, at any suitable position around the virtual puppy, and so on.
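By way of illustration, one possible rule for deriving the target object's display position from the virtual object's bounding box is sketched below; the offset rule and the names are assumptions, since this embodiment leaves the exact placement policy open.

```python
# Sketch: place the target object (e.g. the anchor's figure) just beside the
# virtual object (e.g. the virtual puppy), clamped to the frame boundaries.
def placement_from_virtual_object(virtual_bbox, target_size, frame_size, gap=20):
    """virtual_bbox: (x, y, w, h); target_size: (tw, th); frame_size: (W, H)."""
    x, y, w, h = virtual_bbox
    tw, th = target_size
    W, H = frame_size
    tx = min(max(x + w + gap, 0), W - tw)    # to the right of the virtual object
    ty = min(max(y + h - th, 0), H - th)     # bottoms roughly aligned
    return tx, ty
```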
To further improve the user experience, optionally, a trigger operation on the virtual object in the first image can also be received, and an operation instruction is sent to the second terminal according to the trigger operation, instructing the user corresponding to the target object to perform an operation corresponding to the operation instruction. For example, taking a virtual puppy as the virtual object, suppose the virtual puppy is located on the sofa in the fan's living room in the current image frame. When the fan taps the virtual puppy, the first terminal sends the second terminal an instruction indicating that the virtual puppy has been triggered. The instruction may carry operation information telling the anchor what to do, such as sitting down and petting the puppy; alternatively, the instruction may merely notify the anchor that the virtual puppy has been triggered, and the second terminal where the anchor is located may select an operation for the virtual puppy from the operations corresponding to the virtual puppy that are stored on a server or locally on the second terminal. After receiving the image frames sent by the second terminal, the fan's first terminal extracts the anchor's figure from them and detects it, determines the display position of the anchor's figure, and renders the anchor's figure at that display position in the current image, forming an image of the anchor sitting on the sofa in the fan's living room petting the virtual puppy. The method is not limited to sending an operation instruction to the second terminal according to the trigger operation; in actual use, the fan can also invite the anchor to perform the corresponding interactive operation by voice or text.
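A minimal sketch of the message the first terminal might send when a virtual object is triggered is given below; the JSON schema and the transport function are assumptions made for illustration and are not fixed by this embodiment.

```python
# Sketch: build the operation instruction sent from the first terminal to the
# second terminal when a virtual object (e.g. the virtual puppy) is tapped.
import json
import time

def build_operation_instruction(virtual_object_id, suggested_action=None):
    msg = {
        "type": "virtual_object_triggered",
        "object_id": virtual_object_id,             # e.g. "virtual_dog"
        "timestamp": time.time(),
    }
    if suggested_action is not None:
        msg["suggested_action"] = suggested_action  # e.g. "sit_and_pet"
    return json.dumps(msg)

# Hypothetical usage, assuming some transport exists:
# send_to_second_terminal(build_operation_instruction("virtual_dog", "sit_and_pet"))
```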
Optionally, the target object can also be acquired from the second image and action and/or expression detection performed on it; the virtual object is then updated according to the detection result, and the target object and the updated virtual object are drawn into the first image. The update of the virtual object may include at least one of the following: the virtual object's appearance, the virtual object's display position, and the virtual object's display layer. By performing action and/or expression detection on the target object and adjusting the virtual object accordingly, the interaction between the target object and the virtual object becomes richer and more varied, further improving the user experience.
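A minimal sketch of this update logic, covering the three kinds of update named above (appearance, display position, display layer), is given below; the detector labels and the state fields are assumptions for illustration and correspond to the examples in the following paragraphs.

```python
# Sketch: update a virtual object's state from the detected action/expression
# of the target object (the anchor). The state table is illustrative only.
def update_virtual_object(virtual_obj, action, expression):
    """virtual_obj: dict with keys 'pose', 'pos', 'layer'; action/expression: detector labels."""
    if action == "petting" and expression == "smile":
        virtual_obj["pose"] = "sitting_looking_up"        # appearance change
    elif action == "stand_up":
        virtual_obj["pose"] = "standing"
        virtual_obj["pos"] = "floor_in_front_of_sofa"     # display-position change
    elif action == "step_forward":
        virtual_obj["layer"] = 0                          # display-layer change: render below
                                                          # the anchor so occluded parts are hidden
    return virtual_obj
```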
For example, continuing the example in which the anchor performs the corresponding operation of sitting on the fan's sofa and petting the puppy: the anchor's second terminal collects images of the anchor sitting down and making a petting motion and transmits them to the fan's first terminal; the first terminal extracts the anchor's figure from each image and detects the anchor's action and expression in it; upon detecting that the anchor's action is a petting motion and the expression is a smile, the virtual puppy's appearance is correspondingly updated from lying down to sitting up and looking at the anchor.
As another example, if the anchor's action changes from sitting to standing up, then after this action is detected, the virtual puppy is correspondingly updated from sitting on the sofa to jumping onto the floor in front of the sofa; that is, the virtual puppy's appearance is updated from a sitting figure to a standing figure, and its display position is updated from the corresponding place on the sofa to a place in front of the sofa.
As a further example, if the anchor's action changes from standing to stepping towards the virtual puppy, then after this action is detected, the virtual puppy's display layer is correspondingly updated from the same layer as the anchor's figure to a layer below the anchor's figure, generating an image in which the part of the virtual puppy occluded by the anchor's figure is not displayed while the unoccluded part is still displayed, which is closer to a real-life scene.
As described above, when the consecutive image frames are combined into a video stream, an interactive AR experience of the anchor walking the fan's dog is produced.
In addition, optionally, the first terminal can use dynamic-figure three-dimensional reconstruction technology to reconstruct a three-dimensional figure of the anchor and then draw that three-dimensional figure into the first image. Three-dimensional reconstruction is the mathematical process and computer technology of recovering the three-dimensional information (shape, etc.) of an object from two-dimensional images, and includes image acquisition, preprocessing, point-cloud registration and feature analysis. Through three-dimensional reconstruction, the fan's first terminal can show the same live situation as the anchor has in the anchor's own live scene.
For example, real-time video image frames of the anchor are collected by the anchor's second terminal, and the collected video image frames are passed through video compression, which extracts the key information so that it can be propagated efficiently over the network; in this way very little data travels over the network and it travels quickly. When the transmission frame rate reaches 60 fps or above, the network transmission delay is imperceptible to the user, no stuttering or lag is felt, and the anchor's normal live state can be seen in real time.
In addition, the interaction between the anchor and the fan can also be initiated by the anchor: the anchor prompts the fan that a dog-walking action is about to be performed, and the fan can then add a 3D rendered animation of the virtual puppy at a specified position in the scene interface through AR, so as to interact with the anchor.
Further, during the live broadcast, the real-time AR interaction scene can be captured via a screenshot on the first terminal and shared.
In order for the anchor's actions and/or expressions to be carried out more precisely, the real-time image of the first terminal can also be displayed in the live interface of the anchor's second terminal; for example, a small window showing the first terminal's real-time image can be displayed in the live interface of the second terminal, or a full-screen display can be used, and the embodiments of the present invention impose no limitation on this.
It can be seen that when the solution of this embodiment is applied to a live-broadcast scenario, the fan's entrance to the live broadcast is an exclusive two-dimensional code that other people cannot use, which protects the fan's private information; the live room set by the fan is exclusive (other fans can only share the anchor's ordinary live picture); in the live room set by the fan, the anchor's figure can be placed anywhere through AR technology and can be dynamically decorated and permanently saved (for example, the anchor is placed in a certain corner of the fan's home every time); and the live room set by the fan is dynamically variable, the scene can be set flexibly as needed, and every configuration is saved as a template for the fan to reuse, making the implementation more flexible. The fan's personalized demands for the live broadcast are thus met to a great extent, with better interactivity.
By means of this embodiment, when the AR service is implemented, for example in a live-broadcast scenario, the image collected in real time by the first terminal (i.e. the first image) and the image collected in real time by the second terminal (i.e. the second image) are acquired according to the set identifier; the target object is then extracted from the image collected by the second terminal and drawn into the image of the first terminal. Based on this solution, the first terminal can set up a private scene for the AR service and project the target object into that scene, and the private scene may be visible only to the first terminal, or only to the first terminal and the second terminal. The user of the first terminal can therefore set an AR service usage scenario according to his or her own preferences, which effectively solves the problem that existing live-broadcast pictures are uniform and cannot meet users' personalized demands for live broadcasting.
In addition, the set identifier is related to both the AR service processing of the first terminal and the AR service processing of the second terminal; therefore, when the set identifier is triggered, the image collected by the first terminal and the image collected by the second terminal can both be acquired. By means of the set identifier, terminal information can be quickly obtained and determined, which speeds up AR service implementation and improves the user experience.
The AR service processing method of this embodiment can be executed by any suitable terminal device with data processing capability, including but not limited to mobile terminals such as tablet computers and mobile phones, as well as desktop computers.
Embodiment 3
Referring to FIG. 3, a structural block diagram of an AR service processing apparatus according to Embodiment 3 of the present invention is shown.
The AR service processing apparatus of this embodiment includes: a first acquisition module 302, configured to acquire, according to a set identifier, a first image collected by a first terminal and a second image collected by a second terminal; a drawing module 304, configured to acquire a target object from the second image and draw the target object into the first image; and a display module 306, configured to display the drawn first image in the first terminal.
The AR service processing apparatus of this embodiment is used to implement the corresponding AR service processing methods of the foregoing method embodiments and has the beneficial effects of the corresponding method embodiments, which are not repeated here.
Embodiment 4
Referring to FIG. 4, a structural block diagram of an AR service processing apparatus according to Embodiment 4 of the present invention is shown.
The AR service processing apparatus of this embodiment includes: a first acquisition module 402, configured to acquire, according to a set identifier, a first image collected by a first terminal and a second image collected by a second terminal; a drawing module 404, configured to acquire a target object from the second image and draw the target object into the first image; and a display module 406, configured to display the drawn first image in the first terminal.
Optionally, the set identifier is generated from the user identifier of the user of the first terminal and the user identifier of the user of the second terminal.
Optionally, the set identifier is a two-dimensional code.
Optionally, the AR service processing apparatus of this embodiment further includes: a second acquisition module 408, configured to acquire information of the set virtual object before the drawing module 404 acquires the target object from the second image, and to acquire the virtual object according to the information of the virtual object and draw the virtual object onto the first image; the drawing module 404 is configured to acquire the target object from the second image and draw the target object into the first image on which the virtual object has been drawn.
Optionally, the drawing module 404 is configured to determine the display position of the target object according to the position information of the virtual object in the first image, acquire the target object from the second image, and draw the target object at the display position in the first image.
Optionally, the AR service processing apparatus of this embodiment further includes: an instruction module 410, configured to receive a trigger operation on the virtual object and, according to the trigger operation, send an operation instruction to the second terminal instructing the user corresponding to the target object to perform an operation corresponding to the operation instruction.
Optionally, the drawing module 404 includes: a detection module 4042, configured to acquire the target object from the second image and perform action and/or expression detection on the target object; an update module 4044, configured to update the virtual object according to the detection result; and a post-update drawing module 4046, configured to draw the target object and the updated virtual object into the first image.
Optionally, the update module 4044 is configured to perform at least one of the following updates on the virtual object according to the detection result: the virtual object's appearance, the virtual object's display position, and the virtual object's display layer.
Optionally, the AR service processing apparatus of this embodiment further includes: a setting module 412, configured to receive an AR scene setting instruction before the first acquisition module 402 acquires, according to the set identifier, the first image collected by the first terminal and the second image collected by the second terminal, to collect a scene image and/or set a virtual object according to the setting instruction, and to store the scene image and/or the set virtual object in correspondence with the set identifier.
Optionally, the first acquisition module 402 is configured to trigger the first terminal to collect a scene image according to the set identifier, to determine whether the collected scene image is consistent with the stored scene image, and, if they are consistent, to instruct the first terminal to continue collecting scene images, take the collected scene image as the first image, and acquire the second image collected by the second terminal.
Optionally, the drawing module 404 is configured to perform target object detection on the second image and acquire the target object according to the detection result, or to acquire the target object according to received data of the target object, and to draw the target object into the first image.
Optionally, the first acquisition module 402 is configured to acquire, according to the set identifier, a first video stream collected in real time by the first terminal and a second video stream collected in real time by the second terminal, to obtain the first image from the first video stream, and to obtain the second image from the second video stream.
The AR service processing apparatus of this embodiment is used to implement the corresponding AR service processing methods of the foregoing method embodiments and has the beneficial effects of the corresponding method embodiments, which are not repeated here.
Embodiment 5
Referring to FIG. 5, a schematic structural diagram of a terminal according to Embodiment 5 of the present invention is shown; the specific embodiments of the present invention do not limit the specific implementation of the terminal.
As shown in FIG. 5, the terminal may include: a processor 502, a communications interface 504, a memory 506, and a communication bus 508.
Specifically:
The processor 502, the communications interface 504 and the memory 506 communicate with one another via the communication bus 508.
The communications interface 504 is configured to communicate with other terminals or servers.
The processor 502 is configured to execute a program 510, and may specifically perform the relevant steps of the foregoing AR service processing method embodiments.
Specifically, the program 510 may include program code, and the program code includes computer operation instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the terminal may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is configured to store the program 510. The memory 506 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory.
The program 510 may specifically be configured to cause the processor 502 to perform the following operations: acquiring, according to a set identifier, a first image collected by a first terminal and a second image collected by a second terminal; acquiring a target object from the second image and drawing the target object into the first image; and displaying the drawn first image in the first terminal.
In an optional implementation, the set identifier is generated from the user identifier of the user of the first terminal and the user identifier of the user of the second terminal.
In an optional implementation, the set identifier is a two-dimensional code.
In an optional implementation, the program 510 is further configured to cause the processor 502 to acquire information of the set virtual object before acquiring the target object from the second image, to acquire the virtual object according to the information of the virtual object and draw the virtual object onto the first image, and to acquire the target object from the second image and draw the target object into the first image on which the virtual object has been drawn.
In an optional implementation, the program 510 is further configured to cause the processor 502, when acquiring the target object from the second image and drawing the target object into the first image on which the virtual object has been drawn, to determine the display position of the target object according to the position information of the virtual object in the first image, to acquire the target object from the second image, and to draw the target object at the display position in the first image.
In an optional implementation, the program 510 is further configured to cause the processor 502 to receive a trigger operation on the virtual object and, according to the trigger operation, to send an operation instruction to the second terminal instructing the user corresponding to the target object to perform an operation corresponding to the operation instruction.
In an optional implementation, the program 510 is further configured to cause the processor 502, when acquiring the target object from the second image and drawing the target object into the first image on which the virtual object has been drawn, to acquire the target object from the second image and perform action and/or expression detection on the target object, to update the virtual object according to the detection result, and to draw the target object and the updated virtual object into the first image.
In an optional implementation, the program 510 is further configured to cause the processor 502, when updating the virtual object according to the detection result, to perform at least one of the following updates on the virtual object: the virtual object's appearance, the virtual object's display position, and the virtual object's display layer.
In an optional implementation, the program 510 is further configured to cause the processor 502, before acquiring, according to the set identifier, the first image collected by the first terminal and the second image collected by the second terminal, to receive an AR scene setting instruction, to collect a scene image and/or set a virtual object according to the setting instruction, and to store the scene image and/or the set virtual object in correspondence with the set identifier.
In an optional implementation, the program 510 is further configured to cause the processor 502, when acquiring, according to the set identifier, the first image collected by the first terminal and the second image collected by the second terminal, to trigger the first terminal to collect a scene image according to the set identifier, to determine whether the collected scene image is consistent with the stored scene image, and, if they are consistent, to instruct the first terminal to continue collecting scene images, take the collected scene image as the first image, and acquire the second image collected by the second terminal.
In an optional implementation, the program 510 is further configured to cause the processor 502, when acquiring the target object from the second image, to perform target object detection on the second image and acquire the target object according to the detection result, or to acquire the target object according to received data of the target object.
In an optional implementation, the program 510 is further configured to cause the processor 502, when acquiring, according to the set identifier, the first image collected by the first terminal and the second image collected by the second terminal, to acquire, according to the set identifier, a first video stream collected in real time by the first terminal and a second video stream collected in real time by the second terminal, to obtain the first image from the first video stream, and to obtain the second image from the second video stream.
For the specific implementation of the steps in the program 510, reference may be made to the corresponding steps and the corresponding descriptions of the units in the foregoing AR service processing method embodiments, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding process descriptions in the foregoing method embodiments, which are not repeated here.
It can be seen that the terminal of this embodiment can be used as the first terminal.
By means of this embodiment, when the AR service is implemented, for example in a live-broadcast scenario, the image collected in real time by the first terminal (such as the fan's terminal), i.e. the first image, and the image collected in real time by the second terminal (such as the anchor's terminal), i.e. the second image, are acquired according to the set identifier; the target object (such as the anchor's figure) is then extracted from the image collected by the second terminal and drawn into the image of the first terminal. Based on this solution, the first terminal can set up a private scene for the AR service (such as the fan's living room) and project the target object into that scene, and the private scene may be visible only to the first terminal. The user of the first terminal can therefore set an AR service usage scenario according to his or her own preferences, which effectively solves the problem that existing live-broadcast pictures are uniform and cannot meet users' personalized demands for live broadcasting.
In addition, the set identifier is related to both the AR service processing of the first terminal and the AR service processing of the second terminal; therefore, when the set identifier is triggered, the image collected by the first terminal and the image collected by the second terminal can both be acquired. By means of the set identifier, terminal information can be quickly obtained and determined, which speeds up AR service implementation and improves the user experience.
FIG. 6 is a structural diagram of an augmented reality (AR) service processing device according to an exemplary embodiment of the present invention.
As shown in FIG. 6, the augmented reality AR service processing device provided by this embodiment includes:
a memory 61;
a processor 62; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement any one of the augmented reality AR service processing methods described above.
This embodiment further provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement any one of the augmented reality AR service processing methods described above.
It should be pointed out that, according to implementation needs, the components/steps described in the embodiments of the present invention can be split into more components/steps, and two or more components/steps or partial operations of components/steps can also be combined into new components/steps to achieve the purposes of the embodiments of the present invention.
The above methods according to the embodiments of the present invention can be implemented in hardware or firmware, or implemented as software or computer code that can be stored in a recording medium (such as a CD-ROM, a RAM, a floppy disk, a hard disk or a magneto-optical disk), or implemented as computer code downloaded over a network, originally stored in a remote recording medium or a non-transitory machine-readable medium and to be stored in a local recording medium, so that the methods described herein can be processed by such software, stored on a recording medium, using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware (such as an ASIC or FPGA). It can be understood that a computer, a processor, a microprocessor controller or programmable hardware includes storage components (for example, RAM, ROM, flash memory and the like) that can store or receive software or computer code, and that when the software or computer code is accessed and executed by the computer, processor or hardware, the AR service processing methods described herein are implemented. In addition, when a general-purpose computer accesses code for implementing the AR service processing methods shown herein, the execution of the code converts the general-purpose computer into a dedicated computer for executing the AR service processing methods shown herein.
Those of ordinary skill in the art can appreciate that the units and method steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled persons can use different methods to implement the described functions for each specific application, but such implementations should not be considered to be beyond the scope of the embodiments of the present invention.
The above implementations are only used to illustrate the embodiments of the present invention and do not limit them; those of ordinary skill in the relevant technical field can also make various changes and variations without departing from the spirit and scope of the embodiments of the present invention; therefore, all equivalent technical solutions also fall within the scope of the embodiments of the present invention, and the patent protection scope of the embodiments of the present invention shall be defined by the claims.

Claims (26)

1. An augmented reality (AR) service processing method, comprising:
    acquiring, according to a set identifier, a first image collected by a first terminal and a second image collected by a second terminal;
    acquiring a target object from the second image, and drawing the target object into the first image;
    displaying the drawn first image in the first terminal.
2. The method according to claim 1, wherein the set identifier is generated from a user identifier of a user of the first terminal and a user identifier of a user of the second terminal.
3. The method according to claim 2, wherein the set identifier is a two-dimensional code.
4. The method according to any one of claims 1 to 3, wherein,
    before acquiring the target object from the second image, the method further comprises: acquiring information of a set virtual object; and acquiring the virtual object according to the information of the virtual object and drawing the virtual object onto the first image;
    the acquiring a target object from the second image and drawing the target object into the first image comprises: acquiring the target object from the second image, and drawing the target object into the first image on which the virtual object has been drawn.
  5. 根据权利要求4所述的方法,其中,从所述第二图像中获取目标对象,并将所述目标对象绘制至绘制有所述虚拟对象的第一图像中,包括:The method of claim 4, wherein acquiring the target object from the second image and rendering the target object into the first image in which the virtual object is drawn comprises:
    根据所述第一图像中的所述虚拟对象的位置信息,确定所述目标对象的展示位置;Determining a display position of the target object according to location information of the virtual object in the first image;
    从所述第二图像中获取目标对象,并将所述目标对象绘制至所述第一图像中的所述展示位置。A target object is obtained from the second image and the target object is drawn to the display location in the first image.
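The placement step of claim 5 could, purely as a sketch, look like the following. The convention that the virtual object's position information is an (x, y, width, height) box and that the target is anchored beside it is an assumption for illustration.

```python
# Sketch of claim 5: derive the target object's display position from the
# position information of the virtual object already drawn in the first image.
def display_position_from_virtual(virtual_box, target_size, image_size):
    vx, vy, vw, vh = virtual_box   # assumed (x, y, w, h) of the virtual object
    tw, th = target_size           # width/height of the target object
    img_w, img_h = image_size

    # Example convention: place the target immediately to the right of the
    # virtual object, vertically aligned with it, clamped to the image bounds.
    x = min(vx + vw, img_w - tw)
    y = min(vy, img_h - th)
    return max(x, 0), max(y, 0)
```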
  6. The method according to claim 4, wherein the method further comprises:
    receiving a trigger operation on the virtual object, and sending an operation instruction to the second terminal according to the trigger operation, to instruct a user corresponding to the target object to perform an operation corresponding to the operation instruction.
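For claim 6, the trigger-to-instruction path could be sketched as below. The JSON message shape and the use of a send callable (for example, a WebSocket-style sender) are assumptions made only for illustration.

```python
# Sketch of claim 6: map a trigger operation on the virtual object to an
# operation instruction delivered to the second terminal.
import json

def on_virtual_object_trigger(trigger, send_to_second_terminal):
    # 'trigger' describes what the first terminal's user did to the virtual
    # object; the instruction tells the second terminal's user what to do.
    instruction = {
        "type": "operation_instruction",
        "virtual_object_id": trigger["object_id"],  # assumed field name
        "action_requested": trigger["action"],      # e.g. "wave", "jump"
    }
    send_to_second_terminal(json.dumps(instruction))
```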
  7. The method according to claim 4, wherein the acquiring the target object from the second image and drawing the target object into the first image in which the virtual object is drawn comprises:
    acquiring the target object from the second image, and performing action and/or expression detection on the target object;
    updating the virtual object according to a detection result; and
    drawing the target object and the updated virtual object into the first image.
  8. The method according to claim 7, wherein the updating the virtual object according to the detection result comprises:
    performing at least one of the following updates on the virtual object: an image of the virtual object, a display position of the virtual object, and a display layer of the virtual object.
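Claims 7 and 8 can be illustrated with the following sketch. The detection result format and the mapping from a detected expression or action to a virtual-object update are invented purely for the example.

```python
# Sketch of claims 7-8: update the virtual object's image, display position
# or display layer according to an action/expression detection result.
def update_virtual_object(virtual_obj, detection):
    # detection is assumed to look like {"expression": "smile", "action": "raise_hand"}.
    if detection.get("expression") == "smile":
        virtual_obj["image"] = "crown_gold.png"            # change its appearance
    if detection.get("action") == "raise_hand":
        x, y = virtual_obj["position"]
        virtual_obj["position"] = (x, y - 40)              # move it upward
        virtual_obj["layer"] = 2                           # draw above the target
    return virtual_obj
```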
  9. The method according to any one of claims 1 to 3, wherein before the acquiring, according to the setting identifier, the first image collected by the first terminal and the second image collected by the second terminal, the method further comprises:
    receiving an AR scene setting instruction, and collecting a scene image and/or setting a virtual object according to the setting instruction; and
    storing the scene image and/or the set virtual object in correspondence with the setting identifier.
  10. The method according to claim 9, wherein the acquiring, according to the setting identifier, the first image collected by the first terminal and the second image collected by the second terminal comprises:
    triggering, according to the setting identifier, the first terminal to collect a scene image;
    determining whether the collected scene image is consistent with the stored scene image; and
    if consistent, instructing the first terminal to continue collecting scene images, using a collected scene image as the first image, and acquiring the second image collected by the second terminal.
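One possible reading of the consistency check in claim 10 is a simple image-similarity comparison between the freshly collected scene image and the stored one. The ORB-feature approach and the thresholds below are assumptions for illustration, not claimed specifics.

```python
# Sketch of claim 10's check: decide whether the collected scene image is
# consistent with the stored scene image using ORB feature matching.
import cv2

def scenes_consistent(collected_bgr, stored_bgr, min_good_matches=30):
    orb = cv2.ORB_create()
    _, des1 = orb.detectAndCompute(cv2.cvtColor(collected_bgr, cv2.COLOR_BGR2GRAY), None)
    _, des2 = orb.detectAndCompute(cv2.cvtColor(stored_bgr, cv2.COLOR_BGR2GRAY), None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # Treat the scenes as "consistent" when enough descriptors match closely.
    good = [m for m in matches if m.distance < 40]
    return len(good) >= min_good_matches
```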
  11. The method according to any one of claims 1 to 3, wherein the acquiring a target object from the second image comprises:
    performing target object detection on the second image, and acquiring the target object according to a detection result;
    or,
    acquiring the target object according to received data of the target object.
  12. The method according to any one of claims 1 to 3, wherein the acquiring, according to the setting identifier, the first image collected by the first terminal and the second image collected by the second terminal comprises:
    acquiring, according to the setting identifier, a first video stream collected by the first terminal in real time and a second video stream collected by the second terminal in real time; and
    acquiring the first image from the first video stream, and acquiring the second image from the second video stream.
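Claim 12's per-frame extraction from two real-time streams could be sketched as follows. The stream URLs keyed by the setting identifier are a hypothetical lookup, and OpenCV's VideoCapture is only one of many possible ways to read a live stream.

```python
# Sketch of claim 12: read one frame from each of two live video streams
# resolved from the setting identifier (the lookup itself is hypothetical).
import cv2

def grab_frames(stream_urls):
    # stream_urls: {"first": <url of first terminal's stream>,
    #               "second": <url of second terminal's stream>}
    cap1 = cv2.VideoCapture(stream_urls["first"])
    cap2 = cv2.VideoCapture(stream_urls["second"])
    try:
        ok1, first_image = cap1.read()
        ok2, second_image = cap2.read()
        if not (ok1 and ok2):
            raise RuntimeError("failed to read a frame from one of the streams")
        return first_image, second_image
    finally:
        cap1.release()
        cap2.release()
```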
  13. An augmented reality (AR) service processing apparatus, comprising:
    a first acquiring module, configured to acquire, according to a setting identifier, a first image collected by a first terminal and a second image collected by a second terminal;
    a drawing module, configured to acquire a target object from the second image, and draw the target object into the first image; and
    a display module, configured to display the drawn first image on the first terminal.
  14. The apparatus according to claim 13, wherein the setting identifier is generated according to a user identifier of a user of the first terminal and a user identifier of a user of the second terminal.
  15. The apparatus according to claim 14, wherein the setting identifier is a two-dimensional code.
  16. The apparatus according to any one of claims 13 to 15, wherein:
    the apparatus further comprises a second acquiring module, configured to: before the drawing module acquires the target object from the second image, acquire information of a set virtual object; and acquire the virtual object according to the information of the virtual object, and draw the virtual object onto the first image; and
    the drawing module is configured to acquire the target object from the second image, and draw the target object into the first image in which the virtual object is drawn.
  17. The apparatus according to claim 16, wherein the drawing module is configured to: determine a display position of the target object according to position information of the virtual object in the first image; and acquire the target object from the second image, and draw the target object to the display position in the first image.
  18. The apparatus according to claim 16, wherein the apparatus further comprises:
    an instruction module, configured to receive a trigger operation on the virtual object, and send an operation instruction to the second terminal according to the trigger operation, to instruct a user corresponding to the target object to perform an operation corresponding to the operation instruction.
  19. The apparatus according to claim 16, wherein the drawing module comprises:
    a detection module, configured to acquire the target object from the second image, and perform action and/or expression detection on the target object;
    an update module, configured to update the virtual object according to a detection result; and
    a post-update drawing module, configured to draw the target object and the updated virtual object into the first image.
  20. The apparatus according to claim 19, wherein the update module is configured to perform, according to the detection result, at least one of the following updates on the virtual object: an image of the virtual object, a display position of the virtual object, and a display layer of the virtual object.
  21. The apparatus according to any one of claims 13 to 15, wherein the apparatus further comprises:
    a setting module, configured to: before the first image collected by the first terminal and the second image collected by the second terminal are acquired according to the setting identifier, receive an AR scene setting instruction, and collect a scene image and/or set a virtual object according to the setting instruction; and store the scene image and/or the set virtual object in correspondence with the setting identifier.
  22. The apparatus according to claim 21, wherein the first acquiring module is configured to: trigger, according to the setting identifier, the first terminal to collect a scene image; determine whether the collected scene image is consistent with the stored scene image; and if consistent, instruct the first terminal to continue collecting scene images, use a collected scene image as the first image, and acquire the second image collected by the second terminal.
  23. The apparatus according to any one of claims 13 to 15, wherein the drawing module is configured to: perform target object detection on the second image and acquire the target object according to a detection result, or acquire the target object according to received data of the target object; and draw the target object into the first image.
  24. The apparatus according to any one of claims 13 to 15, wherein the first acquiring module is configured to: acquire, according to the setting identifier, a first video stream collected by the first terminal in real time and a second video stream collected by the second terminal in real time; and acquire the first image from the first video stream, and acquire the second image from the second video stream.
  25. An augmented reality (AR) service processing device, comprising:
    a memory;
    a processor; and
    a computer program;
    wherein the computer program is stored in the memory and is configured to be executed by the processor to implement the method according to any one of claims 1 to 12.
  26. A computer-readable storage medium, having a computer program stored thereon,
    wherein the computer program is executed by a processor to implement the method according to any one of claims 1 to 12.
PCT/CN2018/098145 2017-12-18 2018-08-01 Ar service processing method, apparatus, device and computer readable storage medium WO2019119815A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711366753.4A CN108134945B (en) 2017-12-18 2017-12-18 AR service processing method, AR service processing device and terminal
CN201711366753.4 2017-12-18

Publications (1)

Publication Number Publication Date
WO2019119815A1 true WO2019119815A1 (en) 2019-06-27

Family

ID=62390459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/098145 WO2019119815A1 (en) 2017-12-18 2018-08-01 Ar service processing method, apparatus, device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN108134945B (en)
WO (1) WO2019119815A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108134945B (en) * 2017-12-18 2021-03-19 阿里巴巴(中国)有限公司 AR service processing method, AR service processing device and terminal
CN111935489B (en) * 2019-05-13 2023-08-04 阿里巴巴集团控股有限公司 Network live broadcast method, information display method and device, live broadcast server and terminal equipment
US20220279234A1 (en) * 2019-11-07 2022-09-01 Guangzhou Huya Technology Co., Ltd. Live stream display method and apparatus, electronic device, and readable storage medium
CN111263178A (en) * 2020-02-20 2020-06-09 广州虎牙科技有限公司 Live broadcast method, device, user side and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930284A (en) * 2009-06-23 2010-12-29 腾讯科技(深圳)有限公司 Method, device and system for implementing interaction between video and virtual network scene
US20130215292A1 (en) * 2012-02-19 2013-08-22 Udacity, Inc. System and method for combining computer-based educational content recording and video-based educational content recording
CN106464773A (en) * 2014-03-20 2017-02-22 2Mee有限公司 Augmented reality apparatus and method
US20170076498A1 (en) * 2015-09-10 2017-03-16 Nbcuniversal Media, Llc System and method for presenting content within virtual reality environment
CN106792228A (en) * 2016-12-12 2017-05-31 福建星网视易信息系统有限公司 A kind of living broadcast interactive method and system
CN106792214A (en) * 2016-12-12 2017-05-31 福建凯米网络科技有限公司 A kind of living broadcast interactive method and system based on digital audio-video place
CN108134945A (en) * 2017-12-18 2018-06-08 广州市动景计算机科技有限公司 AR method for processing business, device and terminal

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867744B (en) * 2009-04-20 2013-10-16 Tcl集团股份有限公司 TV set having electronic drawing function and realizing method thereof
WO2011152841A1 (en) * 2010-06-01 2011-12-08 Hewlett-Packard Development Company, L.P. Replacement of a person or object in an image
CN107347166B (en) * 2016-08-19 2020-03-03 北京市商汤科技开发有限公司 Video image processing method and device and terminal equipment
CN106789991B (en) * 2016-12-09 2021-06-22 福建星网视易信息系统有限公司 Multi-person interactive network live broadcast method and system based on virtual scene
CN106803921A (en) * 2017-03-20 2017-06-06 深圳市丰巨泰科电子有限公司 Instant audio/video communication means and device based on AR technologies
CN107124658B (en) * 2017-05-02 2019-10-11 北京小米移动软件有限公司 Net cast method and device

Also Published As

Publication number Publication date
CN108134945B (en) 2021-03-19
CN108134945A (en) 2018-06-08

Similar Documents

Publication Publication Date Title
WO2019119815A1 (en) Ar service processing method, apparatus, device and computer readable storage medium
US11830118B2 (en) Virtual clothing try-on
WO2018072652A1 (en) Video processing method, video processing device, and storage medium
WO2019109828A1 (en) Ar service processing method, device, server, mobile terminal, and storage medium
JP2017531950A (en) Method and apparatus for constructing a shooting template database and providing shooting recommendation information
CN109242940B (en) Method and device for generating three-dimensional dynamic image
WO2019114328A1 (en) Augmented reality-based video processing method and device thereof
KR20150011008A (en) Augmented reality interaction implementation method and system
JP7342366B2 (en) Avatar generation system, avatar generation method, and program
CN108983974B (en) AR scene processing method, device, equipment and computer-readable storage medium
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
CN110545442A (en) live broadcast interaction method and device, electronic equipment and readable storage medium
WO2023030550A1 (en) Data generation method, image processing method, apparatuses, device, and storage medium
WO2022048373A1 (en) Image processing method, mobile terminal, and storage medium
US20230419497A1 (en) Whole body segmentation
US20240096040A1 (en) Real-time upper-body garment exchange
US20240013463A1 (en) Applying animated 3d avatar in ar experiences
KR101749104B1 (en) System and method for advertisement using 3d model
US9699123B2 (en) Methods, systems, and non-transitory machine-readable medium for incorporating a series of images resident on a user device into an existing web browser session
WO2019105100A1 (en) Augmented reality processing method and apparatus, and electronic device
US20230196602A1 (en) Real-time garment exchange
WO2023121896A1 (en) Real-time motion and appearance transfer
WO2023076909A1 (en) Point and clean
US11127218B2 (en) Method and apparatus for creating augmented reality content
CN105426039A (en) Method and apparatus for pushing approach image

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18890991; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 18890991; Country of ref document: EP; Kind code of ref document: A1)