WO2019205284A1 - AR imaging method and apparatus - Google Patents

AR imaging method and apparatus

Info

Publication number
WO2019205284A1
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
limb
model
augmented reality
target
Prior art date
Application number
PCT/CN2018/094077
Other languages
English (en)
French (fr)
Inventor
李建亿
Original Assignee
太平洋未来科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 太平洋未来科技(深圳)有限公司
Publication of WO2019205284A1 publication Critical patent/WO2019205284A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an AR imaging method and apparatus.
  • AR (Augmented Reality) is a technology developed from virtual reality. It superimposes real-world scenes and virtual scenes in real time, providing users with more realistic scenes and further enhancing user immersion.
  • In the related art, the video provider generally guides the user in how to perform an augmented reality interaction.
  • When the user encounters a video of interest, if the video does not provide augmented reality content, the user cannot interact with the video through augmented reality or through his or her own avatar, which degrades the user experience and limits the integration of augmented reality with other technologies.
  • the AR imaging method, device and electronic device provided by the embodiments of the present invention are used to solve at least the above problems in the related art.
  • An embodiment of the present invention provides an AR imaging method, which is applied to an augmented reality device, and includes:
  • the avatar library includes a plurality of avatar model files, and each avatar model file includes a limb model subfile and a facial features model subfile; each limb model in the limb model subfile carries corresponding limb feature information, and each facial features model in the facial features model subfile carries corresponding facial features information.
  • generating an avatar matching the character image according to the facial features information and the limb feature information includes: determining a target avatar; in the model file corresponding to the target avatar, searching for a matching target facial features model according to the facial features information and a matching target limb model according to the limb feature information; and combining the target facial features model and the target limb model to obtain an avatar matching the character image.
  • the facial features information includes angle information of the facial features or feature name information of the facial features
  • the limb characteristic information includes angle information of the limb or feature name information of the limb.
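The avatar library described above can be pictured as nested model files whose entries carry the feature information used for matching. The following Python sketch is purely illustrative: the library layout, model names, and angle encoding are assumptions for the example, not part of the disclosure.

```python
# Purely illustrative sketch of the avatar library lookup; the file layout,
# model names, and angle encoding are assumptions, not part of the disclosure.

avatar_library = {
    "cartoon_cat": {  # one avatar model file
        "limb_models": {  # limb model subfile: each model carries limb feature info
            "left_arm_raised_45": {"limb": "left_arm", "angle": 45},
            "left_arm_flat": {"limb": "left_arm", "angle": 0},
        },
        "facial_models": {  # facial features model subfile
            "mouth_smile": {"feature": "mouth", "name": "smile"},
            "mouth_toothy": {"feature": "mouth", "name": "toothy"},
        },
    },
}

def find_target_limb_model(avatar_id, limb, angle):
    """Return the limb model whose carried angle is nearest the detected angle."""
    models = avatar_library[avatar_id]["limb_models"]
    candidates = [(name, m) for name, m in models.items() if m["limb"] == limb]
    return min(candidates, key=lambda nm: abs(nm[1]["angle"] - angle))[0]
```

A detected left arm at 40 degrees would match the carried 45-degree model, since that is the nearest angle information in the subfile.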
  • a video is captured by a camera of a mobile terminal, the target video frame being from a video captured by the camera;
  • the camera includes a lens, an autofocus voice coil motor, an image sensor, and a micro memory alloy optical image stabilization device; the lens is fixed on the autofocus voice coil motor, the image sensor converts the optical scene acquired by the lens into image data, and the autofocus voice coil motor is mounted on the micro memory alloy optical image stabilization device; the processor of the mobile terminal drives the micro memory alloy optical image stabilization device to act, according to the lens shake data detected by the gyroscope, so as to achieve shake compensation for the lens;
  • the micro memory alloy optical image stabilization device includes a movable plate and a substrate, the auto focus voice coil motor is mounted on the movable plate, the substrate has a size larger than the movable plate, and the movable plate is mounted on the substrate
  • a plurality of movable supports are disposed between the movable plate and the substrate; side walls are provided on the periphery of the four sides of the substrate, and a notch is formed in the middle of each side wall; a micro switch is installed in each notch, and a movable member of the micro switch can open or close the notch under the instruction of the processor; the side of the movable member facing the movable plate is provided with a strip disposed along the width direction of the movable member
  • the substrate is provided with a temperature control circuit connected to the electrical contact
  • the processor controls opening and closing of the temperature control circuit according to a lens shake direction detected by the gyroscope
  • an elastic member is disposed at the middle of each of the four sides of the movable plate
  • the elastic member is a spring.
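The shake-compensation logic implied by these paragraphs — the gyroscope detects a shake direction, and the processor opens or closes temperature control circuits so the heated memory alloy strips deform and reposition the movable plate — can be sketched as a simple control rule. The circuit naming and the opposite-side mapping below are assumptions for illustration only, not the patent's specification.

```python
# Illustrative control sketch only: circuit names, and the rule that the strip
# opposite the shake direction is energized, are assumptions about the mechanism.

circuits = {"up": False, "down": False, "left": False, "right": False}
OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}

def compensate(shake_direction):
    """Close only the temperature control circuit opposite the detected shake,
    so the heated memory alloy strip pulls the movable plate back."""
    for side in circuits:
        circuits[side] = (side == OPPOSITE[shake_direction])
    return circuits
```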
  • the mobile terminal is mounted on a bracket, the bracket includes a mounting seat, a support shaft, and three support frames hinged on the support shaft;
  • the mounting base includes a first mounting plate and a second mounting plate that are perpendicular to each other, both for mounting the camera; the support shaft is mounted vertically on the bottom surface of the first mounting plate; the end of the support shaft away from the bottom surface of the mounting seat is provided with a circumferential surface whose radial dimension is larger than that of the support shaft; the three support frames are mounted on the support shaft from top to bottom, and the horizontal projections of every two deployed support frames are at an angle to each other; the support shaft is a telescopic rod member and includes a tube body connected to the mounting seat and a rod body partially retractable into the tube body; the portion of the rod that extends into the tube body includes a first section, a second section, a third section, and a fourth section that are sequentially hinged, the first section being coupled to the tube body;
  • the end of the first section adjacent to the second section is provided with a mounting groove, and a locking member is hinged in the mounting groove;
  • a mounting groove is disposed at the end of the second section adjacent to the third section, and a locking member is hinged in this mounting groove; the end of the third section adjacent to the second section is provided with a locking hole detachably engaged with the locking member; the end of the third section adjacent to the fourth section is provided with a mounting groove in which a locking member is hinged, and the end of the fourth section adjacent to the third section is provided with a locking hole detachably engaged with the locking member.
  • each of the support frames is further connected with a distance adjusting device
  • the distance adjusting device comprises a bearing ring mounted on the bottom of the support frame, a rotating ring connected to the bearing ring, a tube body, a screw, a threaded sleeve, and a support rod; one end of the tube body is provided with a plug, part of the screw is installed in the tube body through the plug, and the plug is provided with an internal thread adapted to the screw; the other part of the screw is connected to the rotating ring; one end of the threaded sleeve is installed in the tube body and is screwed onto the screw, while the other end protrudes outside the tube body and is fixedly connected to the support rod; the inner wall of the tube body is provided with a protrusion, and the outer side wall of the threaded sleeve is provided with a slide rail adapted to the protrusion along its length direction
  • a still further aspect of the embodiments of the present invention provides an AR imaging method, which is applied to a server, and includes:
  • the processing includes: determining whether there is a target object in the video frame image that is close to the avatar's gesture or expression; and if so, replacing the target object with the avatar.
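The determine-then-replace processing just described can be sketched in a few lines. The similarity measure and the 0.8 threshold below are illustrative assumptions; the patent does not specify how closeness of gesture or expression is scored.

```python
# Minimal sketch of the replacement decision; the similarity measure and the
# 0.8 threshold are illustrative assumptions.

def pose_similarity(a, b):
    """Toy similarity: 1.0 when arm angles match, falling off linearly to 0."""
    return max(0.0, 1.0 - abs(a["arm_angle"] - b["arm_angle"]) / 90.0)

def process_frame(frame_objects, avatar, threshold=0.8):
    """Replace the first object close enough to the avatar's pose; otherwise
    return the frame unchanged."""
    out = list(frame_objects)
    for i, obj in enumerate(out):
        if pose_similarity(obj, avatar) >= threshold:
            out[i] = avatar
            break
    return out
```

If no object in the frame is close enough, the frame passes through unmodified, matching the "if present" condition in the text.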
  • the camera includes a lens, an autofocus voice coil motor, an image sensor, and a micro memory alloy optical image stabilization device; the lens is fixed on the autofocus voice coil motor, the image sensor converts the optical scene acquired by the lens into image data, and the autofocus voice coil motor is mounted on the micro memory alloy optical image stabilization device; the processor of the mobile terminal drives the micro memory alloy optical image stabilization device to act, according to the lens shake data detected by the gyroscope, so as to achieve shake compensation for the lens;
  • the micro memory alloy optical image stabilization device includes a movable plate and a substrate, the auto focus voice coil motor is mounted on the movable plate, the substrate has a size larger than the movable plate, and the movable plate is mounted on the substrate
  • a plurality of movable supports are disposed between the movable plate and the substrate; side walls are provided on the periphery of the four sides of the substrate, and a notch is formed in the middle of each side wall; a micro switch is installed in each notch, and a movable member of the micro switch can open or close the notch under the instruction of the processor; the side of the movable member facing the movable plate is provided with a strip disposed along the width direction of the movable member
  • the substrate is provided with a temperature control circuit connected to the electrical contact
  • the processor controls opening and closing of the temperature control circuit according to a lens shake direction detected by the gyroscope
  • an elastic member is disposed at the middle of each of the four sides of the movable plate
  • the elastic member is a spring.
  • the mobile terminal is mounted on a bracket, the bracket includes a mounting seat, a support shaft, and three support frames hinged on the support shaft;
  • the mounting base includes a first mounting plate and a second mounting plate that are perpendicular to each other, both for mounting the camera; the support shaft is mounted vertically on the bottom surface of the first mounting plate; the end of the support shaft away from the bottom surface of the mounting seat is provided with a circumferential surface whose radial dimension is larger than that of the support shaft; the three support frames are mounted on the support shaft from top to bottom, and the horizontal projections of every two deployed support frames are at an angle to each other; the support shaft is a telescopic rod member and includes a tube body connected to the mounting seat and a rod body partially retractable into the tube body; the portion of the rod that extends into the tube body includes a first section, a second section, a third section, and a fourth section that are sequentially hinged, the first section being coupled to the tube body;
  • the end of the first section adjacent to the second section is provided with a mounting groove, and a locking member is hinged in the mounting groove;
  • a mounting groove is disposed at the end of the second section adjacent to the third section, and a locking member is hinged in this mounting groove; the end of the third section adjacent to the second section is provided with a locking hole detachably engaged with the locking member; the end of the third section adjacent to the fourth section is provided with a mounting groove in which a locking member is hinged, and the end of the fourth section adjacent to the third section is provided with a locking hole detachably engaged with the locking member.
  • each of the support frames is further connected with a distance adjusting device
  • the distance adjusting device comprises a bearing ring mounted on the bottom of the support frame, a rotating ring connected to the bearing ring, a tube body, a screw, a threaded sleeve, and a support rod; one end of the tube body is provided with a plug, part of the screw is installed in the tube body through the plug, and the plug is provided with an internal thread adapted to the screw; the other part of the screw is connected to the rotating ring; one end of the threaded sleeve is installed in the tube body and is screwed onto the screw, while the other end protrudes outside the tube body and is fixedly connected to the support rod; the inner wall of the tube body is provided with a protrusion, and the outer side wall of the threaded sleeve is provided with a slide rail adapted to the protrusion along its length direction
  • Another aspect of the present invention provides an AR imaging apparatus for use in an augmented reality device, including:
  • an acquisition module configured to collect a character image and identify facial feature points and limb feature points in the character image; a determining module configured to determine the facial features information corresponding to the facial feature points by using a preset first model, and the limb feature information corresponding to the limb feature points by using a preset second model; a generating module configured to generate, in the avatar library, an avatar matching the character image according to the facial features information and the limb feature information; and a sending module configured to send the avatar and the identifier of the augmented reality device to the server, so that the server adds the avatar to a target video frame for augmented reality processing and transmits the processed target video frame to the augmented reality device; wherein the target video frame is from a video captured by a mobile terminal.
  • the avatar library includes a plurality of avatar model files, and each avatar model file includes a limb model subfile and a facial features model subfile; each limb model in the limb model subfile carries corresponding limb feature information, and each facial features model in the facial features model subfile carries corresponding facial features information.
  • the generating module includes: a determining unit configured to determine a target avatar; a searching unit configured to search, in the model file corresponding to the target avatar, for a matching target facial features model according to the facial features information and for a matching target limb model according to the limb feature information; and a combining unit configured to combine the target facial features model and the target limb model to obtain an avatar matching the character image.
  • a further aspect of the present invention provides an AR imaging apparatus, which is applied to a server, and includes:
  • a first receiving module configured to receive an avatar sent by the augmented reality device and an identifier of the augmented reality device
  • a second receiving module configured to receive a video frame image captured by the mobile terminal and an identifier of the mobile terminal; a determining module configured to determine, according to a pre-established correspondence table between identifiers of augmented reality devices and identifiers of mobile terminals, the target video frame image corresponding to the avatar; and a processing module configured to process the target video frame image according to the avatar and send the processed target video frame image to the augmented reality device.
  • processing module includes:
  • a determining unit configured to determine whether there is a target object in the video frame image that is close to the avatar's gesture or expression; and a replacement unit configured to replace the target object with the avatar when such a target object exists.
  • a still further aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor;
  • the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the augmented-reality-device-side AR imaging methods of the embodiments of the present invention.
  • a further aspect of the embodiments of the present invention provides a server, including: at least one processor; and a memory communicatively coupled to the at least one processor;
  • the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the above-described server-side AR imaging methods of the embodiments of the present invention.
  • the AR imaging method, apparatus, and electronic device provided by the embodiments of the present invention can determine an augmented reality scene based on the user's own will and generate an avatar corresponding to the user, so that the avatar is placed in the video currently played by the mobile terminal, enhancing the user's immersion and the fun of interaction.
  • FIG. 1 is a flowchart of an AR imaging method according to an embodiment of the present invention
  • FIG. 2 is a specific flowchart of step S103 in the AR imaging method according to an embodiment of the present invention
  • FIG. 3 is a flowchart of an AR imaging method according to an embodiment of the present invention.
  • FIG. 4 is a specific flowchart of step S303 in the AR imaging method according to an embodiment of the present invention.
  • FIG. 5 is a structural diagram of an AR imaging apparatus according to an embodiment of the present invention.
  • FIG. 6 is a structural diagram of an AR imaging apparatus according to an embodiment of the present invention.
  • FIG. 7 is a structural diagram of an AR imaging apparatus according to an embodiment of the present invention.
  • FIG. 8 is a structural diagram of an AR imaging apparatus according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing the hardware structure of an electronic device for performing an AR imaging method provided by an embodiment of the method of the present invention.
  • FIG. 10 is a structural diagram of a camera according to an embodiment of the present invention.
  • FIG. 11 is a structural diagram of a micro memory alloy optical image stabilization device according to an embodiment of the present invention.
  • FIG. 12 is a structural diagram showing an operation state of a micro memory alloy optical image stabilization device according to an embodiment of the present invention.
  • Figure 13 is a structural view of a bracket according to an embodiment of the present invention.
  • Figure 14 is a structural view of a support shaft according to an embodiment of the present invention.
  • FIG. 15 is a structural diagram of a distance adjusting device according to an embodiment of the present invention.
  • First, the application scenario of the embodiments of the present invention is introduced.
  • A user wears an augmented reality device to watch a video captured by a mobile terminal.
  • When the user sees a video segment of interest, the user can perform an augmented reality interaction with the segment; that is, the user's avatar is added to the video clip so that the avatar is placed in the currently playing video.
  • Specifically, the augmented reality device collects the user's character image, generates an avatar matching the character image, and sends it to the server; the server acquires the video segment, performs augmented reality processing on the frame images of the video segment using the avatar, and returns the processed video frame images to the augmented reality device, thereby enabling the user to see the interaction with the video through the augmented reality device.
  • the augmented reality device in the embodiments of the present invention may include glasses and a helmet wearable by a user.
  • the augmented reality device is provided with a human face image capturing component.
  • the facial image capturing component faces the user's face at a preset distance from it; that is, it is not in direct contact with the user's face.
  • the mobile terminal in the embodiment of the present invention includes but is not limited to a mobile phone, a tablet computer, and the like.
  • FIG. 1 is a flowchart of an AR imaging method according to an embodiment of the present invention. As shown in FIG. 1 , an AR imaging method provided by an embodiment of the present invention is applied to an augmented reality device, including:
  • S101 Collect a character image, and identify a facial feature point and a limb feature point in the character image.
  • the augmented reality device can collect the user's character image through its facial image acquisition component.
  • The character image may be collected in response to the user's operation instruction to the augmented reality device, that is, an instruction indicating that the user desires to perform an augmented reality interaction with the currently played video content.
  • the augmented reality device can recognize and mark the facial feature points and limb feature points in the character image.
  • the specific identification method may be a pre-trained model or a pre-set feature point identification rule, which is not limited herein.
  • Limb feature points can include: head, neck, hand, arm, torso, leg, and foot.
  • the facial feature points may include: eyebrows, eyes, mouth, and the like.
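As a rough illustration of the recognition-and-marking step above, a detector (a pre-trained model or preset rule set, as the text notes) could return named landmarks that are then split into the two groups just listed. The detector interface and landmark coordinates here are assumptions for the sketch.

```python
# Rough illustration of the recognition-and-marking step; the detector
# interface and coordinate format are assumptions (a real system would use a
# pre-trained model or preset identification rules, as noted above).

FACIAL_POINTS = ("eyebrows", "eyes", "mouth")
LIMB_POINTS = ("head", "neck", "hand", "arm", "torso", "leg", "foot")

def mark_feature_points(image, detector):
    """Run a detector over the image and split its named landmarks into
    facial feature points and limb feature points."""
    landmarks = detector(image)  # e.g. {"mouth": (x, y), "arm": (x, y), ...}
    facial = {k: v for k, v in landmarks.items() if k in FACIAL_POINTS}
    limbs = {k: v for k, v in landmarks.items() if k in LIMB_POINTS}
    return facial, limbs
```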
  • S102 Determine a facial feature information corresponding to the facial feature point by using a preset first model, and determine a limb feature information corresponding to the limb feature point by using a preset second model.
  • the preset first model and the second model are models for determining facial features information and limb feature information based on facial feature points and limb feature points.
  • the preset first model and second model may be implemented by using an existing algorithm, such as a convolutional neural network; the specific implementation process belongs to common technical means of those skilled in the art and is not described again here.
  • the first model takes as input a character image with the facial feature points identified and outputs facial features information; the second model takes as input a character image with the limb feature points identified and outputs limb feature information.
  • the facial features information may include angle information of the facial features or feature name information of the facial features
  • the limb feature information may include angle information of the limbs or feature name information of the limbs. That is, the first model can determine the angle of each facial feature in the character image through the identified facial feature points (such as a 15-degree mouth angle, 5-degree eyebrows, etc.), or directly obtain the feature name information of the facial features according to the identified facial feature points (for example, a smiling mouth, a toothy mouth, stretched eyebrows, etc.).
  • the second model can determine the angle of each limb in the character image through the identified limb feature points (such as the left arm at 45 degrees, the right leg at 20 degrees, etc.), or directly obtain the feature name information of each limb according to the identified limb feature points (such as arms raised horizontally, toes lifted, head turned to the left, etc.).
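One plausible way such angle information could be derived from identified feature points is the joint angle at a limb's middle point (for example, the elbow angle from shoulder, elbow, and wrist points). The coordinates in the example are made up; the patent leaves the computation to the preset models.

```python
# One plausible way angle information could be derived from identified feature
# points: the joint angle at a limb's middle point. Coordinates are made up.
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
```

With shoulder (0, 0), elbow (1, 0), and wrist (1, 1), the computed elbow angle is 90 degrees, which would then be matched against the angle information carried by the limb models.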
  • the avatar may refer to a game character, a cartoon character, an anime character, etc., and the present invention is not limited thereto.
  • the avatar library includes a plurality of avatar model files, and each avatar model file includes a limb model subfile and a facial features model subfile; each limb model in the limb model subfile carries corresponding limb feature information, and each facial features model in the facial features model subfile carries corresponding facial features information.
  • the limb model subfile may include a model for each limb, where each limb model includes models for a plurality of different angles of the limb, or models carrying a plurality of different limb feature name information; taking the left arm as an example, the left arm model may include a 45-degree left arm model, a 90-degree left arm model, and a 30-degree left arm model, or a left-arm-raised-flat model and a left-arm-bent-inward model.
  • the facial features model subfile may include a model for each facial feature, where each facial feature model includes models for a plurality of different angles, or models carrying a plurality of different facial feature name information; the examples are similar to those of the limb models and are not repeated here.
  • step S103 may include the following sub-steps:
  • S1031. Determine a target avatar. This step selects an avatar as the target avatar for the character image.
  • The target avatar may be selected automatically by analyzing the user's historical behavior data or by analyzing the character image, or the user may set the target avatar in advance; the present invention is not limited herein.
  • S1032 Searching, in the model file corresponding to the target avatar, a matching target facial features model according to the facial features information, and searching for a matching target limb model according to the limb feature information.
  • Through the facial features information, a matching target facial features model is found (that is, eyes, a nose, a mouth, etc. matching those in the character image are found); through the limb feature information, a matching target limb model is found (that is, arms, legs, a torso, etc. matching the character's motion in the character image are found).
  • The target facial features models are combined to obtain an avatar expression matching the character's expression in the character image; the target limb models are combined to obtain an avatar action matching the character's motion in the character image. Finally, the two are combined to obtain an avatar matching the character image.
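The combination step just described can be pictured as merging the matched models into a single avatar description. The field names below are illustrative assumptions, not a format the patent defines.

```python
# Minimal sketch of the combination step: matched facial feature models and
# matched limb models merged into one avatar description. Field names are
# illustrative assumptions.

def combine_avatar(target_facial_models, target_limb_models):
    """Assemble an avatar matching the character image from the matched
    facial features models (expression) and limb models (pose)."""
    return {
        "expression": dict(target_facial_models),  # e.g. {"mouth": "smile"}
        "pose": dict(target_limb_models),          # e.g. {"left_arm": 45}
    }
```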
  • the avatar sent to the server should also carry the identity of the augmented reality device to determine which augmented reality device the avatar came from.
  • an operation instruction may be sent to the mobile terminal, where the operation instruction may be double-clicking the screen of the mobile terminal, long-pressing the screen of the mobile terminal, or triggering a preset operation of the mobile terminal; the present invention is not limited herein.
  • the mobile terminal transmits the video frame image of the currently played video and the identifier of the mobile terminal to the server in response to the user's operation instruction.
  • a correspondence table between augmented reality device identifiers and mobile terminal identifiers is pre-stored in the server; that is, the augmented reality device and the mobile terminal having an interaction relationship can be determined by using the correspondence table.
  • the server searches for a video frame image corresponding to the received avatar according to the correspondence table, and determines the video frame image as the target video frame image. Then, the server adds the avatar to the target video frame, performs augmented reality processing on the target video frame, and sends the enhanced reality processed video frame image to the augmented reality device.
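The server-side lookup just described can be sketched as a dictionary keyed by the pre-established correspondence table. The identifier formats and frame labels below are made up for illustration.

```python
# Sketch of the server-side lookup; identifier formats and frame labels are
# made up for illustration.

# pre-established: augmented reality device id -> mobile terminal id
correspondence = {"ar-001": "phone-100"}
# latest frame received from each mobile terminal, keyed by its identifier
frames = {"phone-100": "frame-42"}

def target_frame_for(ar_device_id):
    """Find the target video frame for an avatar via the correspondence table."""
    return frames[correspondence[ar_device_id]]
```

The avatar arriving with identifier "ar-001" would thus be composited into the frame most recently received from "phone-100".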
  • After performing the augmented reality processing on the video frame image as above, the server transmits the processed video frame image to the augmented reality device. After the user's operation instruction is received, the above steps are repeated to perform augmented reality processing on each video frame image, so the user wearing the target augmented reality device can see his or her avatar appear in the played video, completing the augmented reality interaction.
  • The AR imaging method provided by the embodiment of the present invention can determine an augmented reality scene based on the user's own will and generate an avatar corresponding to the user, so that the avatar is placed in the video played by the current mobile terminal, enhancing the user's immersion and the fun of interaction.
  • FIG. 3 is a flowchart of an AR imaging method according to an embodiment of the present invention. As shown in FIG. 3, the AR imaging method provided by the embodiment of the present invention includes:
  • S301 Receive an avatar sent by the augmented reality device and an identifier of the augmented reality device.
  • S302. Receive a video frame image captured by the mobile terminal and an identifier of the mobile terminal.
  • the avatar sent to the server should also carry the identifier of the augmented reality device to determine which augmented reality device the avatar comes from.
  • an operation instruction may be sent to the mobile terminal, where the operation instruction may be double-clicking the screen of the mobile terminal, long-pressing the screen of the mobile terminal, or triggering a preset operation of the mobile terminal; the present invention is not limited herein.
  • the mobile terminal transmits the video frame image of the currently played video and the identifier of the mobile terminal to the server in response to the user's operation instruction.
  • S303. Determine, according to a pre-established correspondence table between the identifier of the augmented reality device and the identifier of the mobile terminal, the target video frame image corresponding to the avatar.
  • A correspondence table between augmented reality device identifiers and mobile terminal identifiers is pre-stored in the server; that is, the augmented reality device and the mobile terminal that have an interaction relationship can be determined from this table. The server searches for the video frame image corresponding to the received avatar according to the correspondence table and determines it as the target video frame image.
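The lookup described above can be sketched as a simple two-level mapping. This is an illustrative sketch only; the dictionary names (`ar_to_terminal`, `latest_frames`) and the identifier strings are assumptions, not values from the patent.

```python
# Hypothetical correspondence table pre-stored on the server: it pairs
# each augmented reality device identifier with the identifier of the
# mobile terminal it interacts with.
ar_to_terminal = {"ar-001": "phone-A", "ar-002": "phone-B"}

# Latest video frame image received from each mobile terminal.
latest_frames = {"phone-A": "frame_A_0042", "phone-B": "frame_B_0017"}

def find_target_frame(ar_device_id):
    """Map an AR device to its paired mobile terminal and return that
    terminal's most recent frame as the target video frame image."""
    terminal_id = ar_to_terminal.get(ar_device_id)
    if terminal_id is None:
        return None  # no interaction relationship recorded
    return latest_frames.get(terminal_id)

print(find_target_frame("ar-001"))  # prints "frame_A_0042"
```

A real server would of course key full frame buffers rather than frame names, but the correspondence-table step is the same dictionary lookup.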
  • S304. Process the target video frame image according to the avatar, and send the processed target video frame image to the augmented reality device.
  • the server adds the avatar to the target video frame, performs augmented reality processing on the target video frame, and sends the enhanced reality processed video frame image to the augmented reality device.
  • this step may include the following sub-steps:
  • S3041. Determine whether there is a target object in the video frame image whose pose or expression is close to that of the avatar.
  • The user may wish to replace a certain target character in the video with himself or herself, thereby completing an augmented reality interaction with the video. To do so, the user may make an action or expression similar to that of the target character and issue an instruction to the augmented reality device; the augmented reality device then captures an image of the person containing the action or expression corresponding to the instruction and generates the avatar according to the method of the embodiment shown in FIG. 1 or FIG. 2.
  • The server can compare the pose or expression of the avatar with the pose or expression of each character appearing in the video frame image. The specific comparison method may be to calculate the coincidence degree of the limb posture or of the expression, or to input the characters appearing in each video frame image together with the avatar into a pre-trained model that outputs their coincidence degree. When the coincidence degree between a character in the video frame image and the avatar is greater than a preset threshold, that character is the target object close to the avatar, and step S3042 is performed.
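The thresholded coincidence comparison above can be sketched as follows. The pose representation (a list of joint angles in degrees), the normalisation by 180°, and the 0.8 threshold are all assumptions made for illustration; the patent does not fix a concrete metric.

```python
# Illustrative coincidence-degree comparison for step S3041 (assumed
# metric: one minus the mean absolute angular difference, normalised).

def pose_coincidence(pose_a, pose_b):
    """Coincidence degree in [0, 1] between two joint-angle poses."""
    diffs = [abs(a - b) / 180.0 for a, b in zip(pose_a, pose_b)]
    return 1.0 - sum(diffs) / len(diffs)

def find_target_object(avatar_pose, characters, threshold=0.8):
    """Return the first character whose pose coincidence with the
    avatar exceeds the preset threshold, or None if there is none."""
    for name, pose in characters.items():
        if pose_coincidence(avatar_pose, pose) > threshold:
            return name
    return None

avatar = [90, 45, 10, 170]
characters = {"extra": [10, 120, 90, 30], "hero": [92, 44, 12, 168]}
print(find_target_object(avatar, characters))  # prints "hero"
```

An expression comparison, or the pre-trained model mentioned in the text, would simply replace `pose_coincidence` while the thresholding logic stays the same.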
  • If no such target object exists, the server can find a target position matching the avatar in the video frame image according to the avatar expression and/or the avatar posture. For example, if the video frame image shows the sea and the avatar expression is a smile, a place on the beach can be the target position matching the avatar. The pixels corresponding to the target position are then replaced with the pixels of the avatar, completing the augmented reality processing of the video frame image.
  • S3042. If such a target object exists, the server replaces the pixels corresponding to the target object with the pixels of the avatar, thereby completing the augmented reality processing of the video frame image.
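The pixel replacement can be sketched minimally as a masked copy. Frames are plain nested lists of placeholder pixel values here, purely for illustration; a real implementation would operate on decoded image buffers.

```python
# Minimal sketch of the replacement in S3042: every pixel of the frame
# that belongs to the target object (marked 1 in the mask) is replaced
# with the corresponding avatar pixel.

def replace_target(frame, mask, avatar):
    """Return a copy of `frame` where mask==1 pixels come from `avatar`."""
    return [
        [avatar[y][x] if mask[y][x] else frame[y][x]
         for x in range(len(frame[0]))]
        for y in range(len(frame))
    ]

frame  = [["sea", "sea"], ["actor", "actor"]]
mask   = [[0, 0], [1, 1]]          # target object occupies bottom row
avatar = [["-", "-"], ["me", "me"]]
print(replace_target(frame, mask, avatar))
# prints [['sea', 'sea'], ['me', 'me']]
```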
  • After performing augmented reality processing on a video frame image as described above, the server transmits the processed video frame image to the augmented reality device. Once the user's operation instruction has been received, the above steps are repeated for each frame of the video, so the user wearing the target augmented reality device sees his or her avatar appear in the played video, completing the augmented reality interaction.
  • The AR imaging method provided by this embodiment of the present invention lets the user decide which scene to augment and generates an avatar corresponding to the user, placing that avatar in the video currently played by the mobile terminal, thereby enhancing the user's immersion and the fun of interaction.
  • FIG. 5 and FIG. 6 are structural diagrams of an AR imaging apparatus according to an embodiment of the present invention.
  • The device specifically includes: a collection module 1000, a determination module 2000, a generation module 3000, and a sending module 4000. Among them:
  • The collection module 1000 is configured to collect a character image and identify the facial feature points and limb feature points in the character image; the determination module 2000 is configured to determine, by using a preset first model, the facial feature information corresponding to the facial feature points, and to determine, by using a preset second model, the limb feature information corresponding to the limb feature points; the generation module 3000 is configured to generate, in the avatar database, an avatar matching the character image according to the facial feature information and the limb feature information; the sending module 4000 is configured to send the avatar and the identifier of the augmented reality device to the server, so that the server adds the avatar to the target video frame for augmented reality processing and sends the processed target video frame to the augmented reality device, where the target video frame comes from a video captured by the mobile terminal.
  • The avatar library includes model files of a plurality of avatars; the model file of each avatar includes a limb model subfile and a facial feature model subfile. Each limb model in the limb model subfile carries corresponding limb feature information, and each facial feature model in the facial feature model subfile carries corresponding facial feature information.
  • The generation module 3000 includes: a determining unit 310, a searching unit 320, and a combining unit 330. Among them, the determining unit 310 is configured to determine a target avatar; the searching unit 320 is configured to search, in the model file corresponding to the target avatar, for a matching target facial feature model according to the facial feature information and for a matching target limb model according to the limb feature information; the combining unit 330 is configured to combine the target facial feature model and the target limb model to obtain the avatar matching the character image.
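The determine–search–combine flow above can be sketched with the avatar library modelled as nested dictionaries. All names and keys (`avatar_library`, `cartoon_cat`, the mesh/rig strings) are hypothetical placeholders, not part of the patent.

```python
# Hypothetical avatar library: one model file per avatar, each holding a
# facial-feature-model subfile and a limb-model subfile keyed by the
# feature information each model carries.
avatar_library = {
    "cartoon_cat": {
        "facial": {"smile": "cat_smile_mesh", "frown": "cat_frown_mesh"},
        "limb":   {"wave":  "cat_wave_rig",   "stand": "cat_stand_rig"},
    }
}

def generate_avatar(target, facial_info, limb_info):
    """Look up the target avatar's model file, pick the facial-feature
    model and limb model matching the extracted feature information,
    and combine them into one avatar."""
    model_file = avatar_library[target]
    face = model_file["facial"][facial_info]   # target facial feature model
    limb = model_file["limb"][limb_info]       # target limb model
    return {"face": face, "limb": limb}        # combined avatar

print(generate_avatar("cartoon_cat", "smile", "wave"))
# prints {'face': 'cat_smile_mesh', 'limb': 'cat_wave_rig'}
```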
  • The AR imaging device provided by this embodiment of the present invention is specifically configured to perform the method provided by the embodiments shown in FIG. 1 and FIG. 2; its implementation principle, method, and functional use are similar to those of the embodiments shown in FIG. 1 and FIG. 2 and are not repeated here.
  • FIG. 7 and FIG. 8 are structural diagrams of an AR imaging apparatus according to an embodiment of the present invention.
  • The device specifically includes: a first receiving module 50, a second receiving module 60, a determining module 70, and a processing module 80. Among them:
  • The first receiving module 50 is configured to receive an avatar sent by the augmented reality device and the identifier of the augmented reality device; the second receiving module 60 is configured to receive a video frame image captured by the mobile terminal and the identifier of the mobile terminal; the determining module 70 is configured to determine, according to a pre-established correspondence table between the identifier of the augmented reality device and the identifier of the mobile terminal, the target video frame image corresponding to the avatar; the processing module 80 is configured to process the target video frame image according to the avatar and to send the processed target video frame image to the augmented reality device.
  • the processing module 80 includes: a determining unit 810 and a replacing unit 820.
  • The determining unit 810 is configured to determine whether there is a target object in the video frame image whose pose or expression is close to that of the avatar; the replacing unit 820 is configured to replace the target object with the avatar when a target object close to the avatar pose or expression exists.
  • The AR imaging device provided by this embodiment of the present invention is specifically configured to perform the method provided by the embodiments shown in FIG. 3 and FIG. 4; its implementation principle, method, and functional use are similar to those of the embodiments shown in FIG. 3 and FIG. 4 and are not repeated here.
  • The AR imaging device of the embodiments of the present invention can be installed independently in an electronic device as a software or hardware functional unit, or can be integrated in a processor as one of its functional modules, to perform the AR imaging method of the embodiments of the present invention.
  • FIG. 9 is a schematic diagram showing the hardware structure of an electronic device for performing an AR imaging method provided by an embodiment of the method of the present invention.
  • The electronic device includes: one or more processors 910 and a memory 920; one processor 910 is taken as an example in FIG. 9. The apparatus for performing the AR imaging method may further include an input device 930 and an output device 940. The processor 910, the memory 920, the input device 930, and the output device 940 may be connected by a bus or by other means; connection by a bus is taken as an example in FIG. 9.
  • The memory 920, as a non-volatile computer readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as those corresponding to the AR imaging method in the embodiments of the present invention. The processor 910 executes the non-volatile software programs, instructions, and modules stored in the memory 920 to perform the various functional applications and data processing of the server, that is, to implement the AR imaging method. The memory 920 may include a program storage area and a data storage area; the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the AR imaging device provided by the embodiments of the present invention, and the like.
  • The memory 920 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 920 may optionally include memory remotely disposed relative to the processor 910; such remote memory can be connected to the AR imaging device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • Input device 930 can receive input numeric or character information and generate key signal inputs related to user settings and function control of the AR imaging device.
  • Input device 930 can include a device such as a press module.
  • The one or more modules are stored in the memory 920 and, when executed by the one or more processors 910, perform the AR imaging method.
  • The device embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
  • Embodiments of the present invention provide a non-transitory computer readable storage medium storing computer-executable instructions which, when executed by an electronic device, cause the electronic device to perform the AR imaging method in any of the above method embodiments.
  • An embodiment of the present invention provides a computer program product, wherein the computer program product comprises a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by an electronic device, cause the electronic device to perform the AR imaging method in any of the above method embodiments.
  • This embodiment provides a mobile terminal camera with better anti-shake performance; the pictures or videos obtained by this camera are clearer than those of an ordinary camera and can better satisfy the user's demand for high quality. When the video acquired by the camera of this embodiment is used in the AR imaging method or apparatus of the above embodiments, the augmented reality effect is better.
  • The structure of the existing mobile terminal camera (the mobile terminal being, for example, a mobile phone or a video camera), including the lens 1, the auto-focus voice coil motor 2, and the image sensor 3, is known in the art and is therefore not described here.
  • A micro memory alloy optical anti-shake device is used because most existing anti-shake devices drive the lens in a magnetic field by the Lorentz force generated by an energized coil. To achieve optical image stabilization, the lens must be driven in at least two directions, which means that multiple coils need to be arranged; this poses challenges for miniaturizing the overall structure and makes the device easily disturbed by external magnetic fields, degrading the anti-shake effect.
  • Some prior art achieves the stretching and shortening of a memory alloy wire through temperature changes, thereby moving the auto-focus voice coil motor to realize lens shake compensation: the control chip of the micro memory alloy optical anti-shake actuator changes the drive signal to change the temperature of the memory alloy wire, thereby controlling its elongation and contraction, and the position and moving distance of the actuator are calculated from the resistance of the memory alloy wire.
  • However, the applicant has found that, due to the randomness and uncertainty of jitter, this structure alone cannot accurately compensate the lens under multiple jitters, because a shape memory alloy whose temperature has risen needs a certain time to cool down. The above technical solution can compensate the lens for jitter in a first direction, but when jitter in a second direction occurs, the memory alloy wire cannot deform in an instant, so compensation easily comes too late; accurate lens shake compensation for multiple jitters and continuous jitters in different directions cannot be achieved. This results in poor quality of the acquired image, so the camera structure needs to be improved.
  • The camera of this embodiment includes a lens 1, an auto-focus voice coil motor 2, an image sensor 3, and a micro memory alloy optical anti-shake device 4. The lens 1 is fixed on the auto-focus voice coil motor 2; the image sensor 3 transmits the image acquired by the lens 1 to the identification module 100; the auto-focus voice coil motor 2 is mounted on the micro memory alloy optical anti-shake device 4; and the internal processor of the mobile terminal drives the micro memory alloy optical anti-shake device 4 according to the lens shake detected by a gyroscope (not shown) inside the mobile terminal, realizing shake compensation of the lens.
  • The improvements of the micro memory alloy optical anti-shake device are as follows:
  • The micro memory alloy optical anti-shake device comprises a movable plate 5 and a substrate 6, both of which are rectangular plate-shaped members. The auto-focus voice coil motor 2 is mounted on the movable plate 5. The size of the substrate 6 is larger than that of the movable plate 5; the movable plate 5 is mounted on the substrate 6, and a plurality of movable supports 7 are disposed between the movable plate 5 and the substrate 6. The movable supports 7 are, specifically, balls disposed in grooves at the four corners of the substrate 6 to facilitate movement of the movable plate 5 on the substrate 6. The substrate 6 has four side walls around it, and a notch 8 is disposed in the central portion of each side wall. A micro switch 9 is mounted at each notch 8, and the movable member 10 of the micro switch 9 can open or close the notch under the instruction of the processing module. The side of the movable member 10 facing the movable plate 5 is provided with a strip-shaped electrical contact 11 arranged along the width direction of the movable member 10, and the substrate 6 is provided with a temperature control circuit (not shown) connected to the electrical contact 11; the processing module controls the opening and closing of the temperature control circuit according to the lens shake direction detected by the gyroscope. A shape memory alloy wire 12 is disposed in the middle of each of the four sides of the movable plate 5; one end of each shape memory alloy wire 12 is fixedly connected to the movable plate 5, and the other end is in sliding fit with the electrical contact 11. An elastic member 13 for resetting is disposed between each of the inner side walls around the substrate 6 and the movable plate 5.
  • The working process of the micro memory alloy optical anti-shake device of this embodiment is described in detail below with reference to the above structure, taking two jitters in opposite directions as an example. When the lens shakes in a first direction, the gyroscope feeds the detected shake direction and distance back to the processor; the processor calculates the elongation of the shape memory alloy wire needed to compensate for the jitter and drives the corresponding temperature control circuit to heat that shape memory alloy wire. The wire elongates and drives the movable plate to move in the direction that compensates for the first-direction jitter, while the shape memory alloy wire symmetrical to it does not change; however, the movable member connected with that other wire opens the corresponding notch, so that the other wire can protrude out of the notch, pushed by the movable plate. At this time, the elastic members near the two shape memory alloy wires are respectively stretched and compressed (as shown in FIG. 12). After the wire moves to the specified position, its resistance is fed back to the processor, and the position can be corrected by comparing the deviation of the resistance value from the target value.
  • When the second jitter occurs, the processor first closes the notch via the movable member abutting the other shape memory alloy wire, and opens the movable member abutting the wire that is in the extended state; opening that movable member facilitates the protrusion of the extended wire, and the elastic deformation of the two elastic members ensures rapid resetting of the movable plate. The processor then calculates the elongation of the other shape memory alloy wire needed to compensate for the second jitter and drives the corresponding temperature control circuit to heat it; the other wire elongates and drives the movable plate to move in the direction that compensates for the second-direction jitter. Since the notch abutting the extended wire is open, it does not hinder the movement of the movable plate driven by the other wire. Owing to the opening speed of the movable members and the resetting action of the elastic members, the micro memory alloy optical anti-shake device of this embodiment can compensate accurately when multiple jitters occur, and its effect is far superior to micro memory alloy optical anti-shake devices in the prior art.
  • The above describes only a simple case of two jitters. When jitter in other directions occurs, two adjacent shape memory alloy wires can be elongated together to compensate; the basic working process is the same in principle as described above and is not repeated here. The detection feedback of the shape memory alloy resistance and the detection feedback of the gyroscope are both prior art and are not described here.
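The resistance feedback loop mentioned above can be sketched numerically. Assume, purely for illustration, a linear relation between the shape memory alloy wire's elongation and its electrical resistance; the constants below are made up and not from the patent.

```python
# Illustrative sketch of position feedback via wire resistance: the
# processor converts a commanded elongation into a target resistance,
# then recovers the position error from the measured resistance.

R0 = 10.0          # ohms at zero elongation (assumed)
OHMS_PER_MM = 0.5  # resistance change per mm of elongation (assumed)

def target_resistance(elongation_mm):
    """Resistance the wire should show at the commanded elongation."""
    return R0 + OHMS_PER_MM * elongation_mm

def position_error_mm(measured_ohms, elongation_mm):
    """Deviation of the actual wire position from the commanded one,
    recovered from the measured resistance."""
    return (measured_ohms - target_resistance(elongation_mm)) / OHMS_PER_MM

# Command 2 mm of elongation; the driver then measures 10.8 ohms.
print(target_resistance(2.0))                  # prints 11.0
print(round(position_error_mm(10.8, 2.0), 3))  # prints -0.4 (wire is short)
```

The sign of the error tells the processor whether to heat the wire further or let it cool, which is the correction-by-deviation step described in the working process above.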
  • In some embodiments, the mobile terminal is a camera, and the camera can be mounted on a bracket. The bracket of an existing camera has the following defects during use: 1. Existing camera brackets are all supported by a tripod, but on ground with large unevenness the tripod structure cannot guarantee that the bracket mount is level; the mount shakes or tilts easily, which can adversely affect shooting. 2. An existing bracket cannot be used as a shoulder camera bracket; it has a single structure and function, and a shoulder bracket must be provided separately when shoulder-mounted shooting is required.
  • The bracket of this embodiment includes a mounting seat 14, a support shaft 15, and three support frames 16 hinged on the support shaft. The mounting seat 14 includes a first mounting plate 141 and a second mounting plate 142 that are perpendicular to each other; both the first mounting plate 141 and the second mounting plate 142 can be used to mount the camera. The support shaft 15 is vertically mounted on the bottom surface of the first mounting plate 141, and the bottom end of the support shaft 15 away from the mounting seat 14 is provided with a circumferential surface 17 whose radial dimension is slightly larger than that of the support shaft. The three support frames 16 are mounted on the support shaft from top to bottom, and the horizontal projections of every two deployed support frames 16 form an included angle. When the bracket needs to be erected on uneven ground, the circumferential surface 17 is first laid flat on the ground, and the bracket is leveled by opening the three retractable support frames and adjusting their positions; thus the bracket can be erected quickly even on uneven ground, adapting to various terrains and keeping the mounting seat horizontal.
  • The support shaft 15 of this embodiment is also a telescopic rod member, comprising a tube body 151 connected to the mounting seat 14 and a rod body 152 that can partially retract into the tube body 151. The portion of the rod body 152 that extends into the tube body includes a first segment 1521, a second segment 1522, a third segment 1523, and a fourth segment 1524 that are hinged in sequence, the first segment 1521 being connected to the tube body 151. A mounting slot 18 is provided in the end of the first segment 1521 adjacent to the second segment 1522, and a locking member 19 is hinged in the mounting slot 18; the end of the second segment 1522 adjacent to the first segment 1521 is provided with a locking hole 20 detachably engaged with the locking member 19. Likewise, the end of the second segment 1522 adjacent to the third segment 1523 is provided with a mounting slot 18 in which a locking member 19 is hinged, and the end of the third segment 1523 adjacent to the second segment 1522 is provided with a locking hole 20 detachably engaged with the locking member 19; the end of the third segment 1523 adjacent to the fourth segment 1524 is provided with a mounting slot 18 in which a locking member 19 is hinged, and the end of the fourth segment 1524 adjacent to the third segment 1523 is provided with a locking hole 20 detachably engaged with the locking member 19. When not in use, each locking member 19 can be hidden in its mounting slot 18; when needed, the locking member is rotated out and locked into the locking hole 20. The locking member 19 may be a strip with a protrusion matched to the size of the locking hole 20; when the protrusion is pressed fully into the locking hole 20, the positions of the two adjacent segments (for example, the first segment and the second segment) are fixed and relative rotation is prevented. Through the cooperation of the first segment 1521, the second segment 1522, the third segment 1523, and the fourth segment 1524, a bent support structure (for example, one suitable for shoulder-mounted shooting, as noted above) can be formed, with the relative positions of the segments fixed by the locking members 19. A soft material can also be provided at the bottom of this structure.
  • The applicant has also found that, in most telescopic support frames, the telescopic portion is stretched by hand to adjust its length; the distance is uncontrollable and highly random, so adjustment is often inconvenient, especially when the telescopic length needs to be finely adjusted. The applicant therefore also optimized the structure of the support frame 16. As shown in FIG. 15, the bottom end of each support frame 16 of this embodiment is further connected with a distance adjusting device 21. The distance adjusting device 21 includes a bearing ring 211 mounted on the bottom of the support frame 16, a rotating ring 212 connected to the bearing ring 211, a tube body 213, a screw 214, a threaded sleeve 215, and a support rod 216. One end of the tube body 213 is provided with a plug 217; the screw 214 is partially installed in the tube body 213 through the plug 217, and the plug 217 is provided with an internal thread adapted to the screw 214. The other end of the screw 214 is connected to the rotating ring 212. One end of the threaded sleeve 215 is mounted in the tube body 213 and is screwed onto the screw 214; the other end of the threaded sleeve 215 extends out of the tube body and is fixedly connected to the support rod 216. The inner wall of the threaded sleeve 215 is provided with a protrusion 218, and the outer side wall of the threaded sleeve 215 is provided, along its length direction, with a slideway 219 adapted to the protrusion. The tube body 213 includes an adjacent first portion 2131 and second portion 2132; the inner diameter of the first portion 2131 is smaller than the inner diameter of the second portion 2132, and the plug 217 is disposed on the outer end of the second portion 2132. The end of the threaded sleeve 215 near the screw 214 is provided with a limiting end 2151 whose outer diameter is larger than the inner diameter of the first portion. By rotating the rotating ring 212, the screw 214 is rotated in the tube body 213 and the rotational tendency is transmitted to the threaded sleeve 215; because of the cooperation of the protrusion 218 and the slideway 219, the threaded sleeve 215 cannot rotate, so the rotational force is converted into outward linear movement, which drives the support rod 216 to move. The length of the bottom end of the support frame is thereby finely adjusted, making it convenient for the user to level the bracket and its mounting seat and providing a good foundation for subsequent shooting.
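The fine-adjustment principle above reduces to screw kinematics: since the threaded sleeve cannot rotate, each full turn of the rotating ring advances the sleeve (and the support rod) by exactly one thread pitch. The 0.75 mm pitch below is an assumed value for illustration; the patent specifies no pitch.

```python
# Sketch of the rotation-to-translation conversion performed by the
# distance adjusting device 21 (screw 214 + non-rotating sleeve 215).

PITCH_MM = 0.75  # assumed thread pitch of screw 214

def support_rod_travel(turns):
    """Linear travel of support rod 216 for a given number of turns of
    rotating ring 212 (positive = extend, negative = retract)."""
    return turns * PITCH_MM

print(support_rod_travel(2))     # prints 1.5   (two full turns extend 1.5 mm)
print(support_rod_travel(-0.5))  # prints -0.375 (half a turn back retracts)
```

A fine pitch is what makes millimetre-scale leveling practical, in contrast to the hand-stretched telescopic frames criticized above.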
  • A machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals), etc. The computer software product comprises instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods of the various embodiments, or portions thereof.

Abstract

An embodiment of the present invention provides an AR imaging method and apparatus, including: collecting a character image and identifying facial feature points and limb feature points in the character image; determining, by using a first model, facial feature information corresponding to the facial feature points, and determining, by using a second model, limb feature information corresponding to the limb feature points; generating, in an avatar database, an avatar matching the character image according to the facial feature information and the limb feature information; and sending the avatar and the identifier of the augmented reality device to the server, so that the server adds the avatar to a target video frame for augmented reality processing and sends the processed target video frame to the augmented reality device.

Description

AR Imaging Method and Apparatus

Technical Field

The present invention relates to the technical field of image processing, and in particular to an AR imaging method and apparatus.

Background

AR (Augmented Reality) is an improved technology based on virtual reality. It superimposes a real scene and a virtual scene in real time, providing the user with a more realistic scene and further enhancing the user's sense of immersion.

However, in the process of implementing the present invention, the inventor found that in the prior art it is generally the video provider that instructs the user how to perform augmented reality interaction. When the user encounters a video of interest that has no augmented reality function, the user can neither interact with that video in augmented reality nor interact with it through his or her own avatar, which degrades the user experience and limits the integrated development of augmented reality with other technologies.

Summary of the Invention

The AR imaging method, apparatus, and electronic device provided by the embodiments of the present invention are intended to solve at least the above problems in the related art.

One aspect of the embodiments of the present invention provides an AR imaging method applied to an augmented reality device, including:

collecting a character image and identifying facial feature points and limb feature points in the character image; determining, by using a preset first model, facial feature information corresponding to the facial feature points, and determining, by using a preset second model, limb feature information corresponding to the limb feature points; generating, in an avatar database, an avatar matching the character image according to the facial feature information and the limb feature information; and sending the avatar and the identifier of the augmented reality device to the server, so that the server adds the avatar to a target video frame for augmented reality processing and sends the processed target video frame to the augmented reality device, wherein the target video frame comes from a video captured by a mobile terminal.

Further, the avatar library includes model files of a plurality of avatars; the model file of each avatar includes a limb model subfile and a facial feature model subfile, each limb model in the limb model subfile carries corresponding limb feature information, and each facial feature model in the facial feature model subfile carries corresponding facial feature information.

Further, generating, in the avatar database, the avatar matching the character image according to the facial feature information and the limb feature information includes: determining a target avatar; searching, in the model file corresponding to the target avatar, for a matching target facial feature model according to the facial feature information, and for a matching target limb model according to the limb feature information; and combining the target facial feature model and the target limb model to obtain the avatar matching the character image.

Further, the facial feature information includes angle information of the facial features or feature name information of the facial features, and the limb feature information includes angle information of the limbs or feature name information of the limbs.
Further,

in the AR imaging method, a video is captured by the camera of the mobile terminal, and the target video frame comes from the video captured by the camera;

the camera includes a lens, an auto-focus voice coil motor, an image sensor, and a micro memory alloy optical anti-shake device; the lens is fixed on the auto-focus voice coil motor; the image sensor converts the optical scene acquired by the lens into image data; the auto-focus voice coil motor is mounted on the micro memory alloy optical anti-shake device; and the processor of the mobile terminal drives the micro memory alloy optical anti-shake device according to lens shake data detected by a gyroscope, realizing shake compensation of the lens;

the micro memory alloy optical anti-shake device includes a movable plate and a substrate; the auto-focus voice coil motor is mounted on the movable plate; the size of the substrate is larger than that of the movable plate; the movable plate is mounted on the substrate, with a plurality of movable supports disposed between the movable plate and the substrate; the substrate has four side walls around it, a notch is disposed in the middle of each side wall, a micro switch is mounted at the notch, and the movable member of the micro switch can open or close the notch under the instruction of the processor; the side of the movable member facing the movable plate is provided with a strip-shaped electrical contact arranged along the width direction of the movable member; the substrate is provided with a temperature control circuit connected to the electrical contact, and the processor controls the opening and closing of the temperature control circuit according to the lens shake direction detected by the gyroscope; a shape memory alloy wire is disposed in the middle of each of the four sides of the movable plate, one end of the shape memory alloy wire being fixedly connected to the movable plate and the other end being in sliding fit with the electrical contact; an elastic member is disposed between each of the inner side walls around the substrate and the movable plate; when one temperature control circuit on the substrate is connected, the shape memory alloy wire connected to that circuit elongates, and at the same time the movable member of the micro switch away from that shape memory alloy wire opens its notch, the elastic member on the same side as that shape memory alloy wire contracts, and the elastic member away from that shape memory alloy wire elongates.

Further, the elastic member is a spring.

Further, the mobile terminal is mounted on a bracket, and the bracket includes a mounting seat, a support shaft, and three support frames hinged on the support shaft;

the mounting seat includes a first mounting plate and a second mounting plate that are perpendicular to each other, both of which can be used to mount the camera; the support shaft is vertically mounted on the bottom surface of the first mounting plate; the bottom end of the support shaft away from the mounting seat is provided with a circumferential surface whose radial dimension is larger than that of the support shaft; the three support frames are mounted on the support shaft from top to bottom, and the horizontal projections of every two deployed support frames form an included angle; the support shaft is a telescopic rod member, which includes a tube body connected to the mounting seat and a rod body that can partially retract into the tube body; the portion of the rod body extending into the tube body includes a first segment, a second segment, a third segment, and a fourth segment that are hinged in sequence, the first segment being connected to the tube body; the end of the first segment adjacent to the second segment is provided with a mounting slot in which a locking member is hinged, and the end of the second segment adjacent to the first segment is provided with a locking hole detachably engaged with the locking member; the end of the second segment adjacent to the third segment is provided with a mounting slot in which a locking member is hinged, and the end of the third segment adjacent to the second segment is provided with a locking hole detachably engaged with the locking member; the end of the third segment adjacent to the fourth segment is provided with a mounting slot in which a locking member is hinged, and the end of the fourth segment adjacent to the third segment is provided with a locking hole detachably engaged with the locking member.

Further, the bottom end of each support frame is further connected with a distance adjusting device; the distance adjusting device includes a bearing ring mounted on the bottom of the support frame, a rotating ring connected to the bearing ring, a tube body, a screw, a threaded sleeve, and a support rod; one end of the tube body is provided with a plug, the screw is partially installed in the tube body through the plug, and the plug is provided with an internal thread adapted to the screw; the other part of the screw is connected to the rotating ring; one end of the threaded sleeve is mounted in the tube body and is screwed onto the screw, and the other end of the threaded sleeve extends out of the tube body and is fixedly connected to the support rod; the inner wall of the threaded sleeve is provided with a protrusion, and the outer side wall of the threaded sleeve is provided, along its length direction, with a slideway adapted to the protrusion; the tube body includes an adjacent first portion and second portion, the inner diameter of the first portion being smaller than the inner diameter of the second portion; the plug is disposed on the outer end of the second portion, and the end of the threaded sleeve near the screw is provided with a limiting end whose outer diameter is larger than the inner diameter of the first portion.
本发明实施例又一方面提供了一种AR成像方法,应用于服务器,包括:
接收增强现实设备发送的虚拟形象和所述增强现实设备的标识;接收移动终端拍摄的视频帧图像和所述移动终端的标识;根据预先建立的增强现实设备的标识与所述移动终端的标识的对应关系表,确定与所述虚拟形象对应的目标视频帧图像;根据所述虚拟形象对所述目标视频帧图像进行处理,并将处理后的所述目标视频帧图像发送至所述增强现实设备。
进一步地,所述根据所述虚拟形象对所述目标视频帧图像进行处理,包括:判断所述视频帧图像中是否存在与所述虚拟形象姿态或表情相近的目标对象;若存在,则将所述目标对象替换成所述虚拟形象。
进一步的,
通过移动终端的摄像头来拍摄视频帧图像;
所述摄像头包括镜头、自动聚焦音圈马达、图像传感器以及微型记忆合金光学防抖器,所述镜头固装在所述自动聚焦音圈马达上,所述图像传感器将所述镜头获取的光学场景转换为图像数据,所述自动聚焦音圈马达安装在所述微型记忆合金光学防抖器上,移动终端的处理器根据陀螺仪检测到的镜头抖动数据驱动所述微型记忆合金光学防抖器的动作,实现镜头的抖动补偿;
所述微型记忆合金光学防抖器包括活动板和基板，所述自动聚焦音圈马达安装在所述活动板上，所述基板的尺寸大于所述活动板，所述活动板安装在所述基板上，所述活动板和所述基板之间设有多个活动支撑，所述基板的四周具有四个侧壁，每个所述侧壁的中部设有一缺口，所述缺口处安装有微动开关，所述微动开关的活动件可以在所述处理器的指令下打开或封闭所述缺口，所述活动件靠近所述活动板的侧面设有沿所述活动件宽度方向布设的条形的电触点，所述基板设有与所述电触点相连接的温控电路，所述处理器根据陀螺仪检测到的镜头抖动方向控制所述温控电路的开闭，所述活动板的四个侧边的中部均设有形状记忆合金丝，所述形状记忆合金丝一端与所述活动板固定连接，另一端与所述电触点滑动配合，所述基板的四周的内侧壁与所述活动板之间均设有弹性件，当所述基板上的一个温控电路连通时，与该电路相连接的形状记忆合金丝伸长，同时，远离该形状记忆合金丝的微动开关的活动件打开所述缺口，与该形状记忆合金丝同侧的弹性件收缩，远离该形状记忆合金丝的弹性件伸长。
进一步的,所述弹性件为弹簧。
进一步的,所述移动终端安装于支架上,所述支架包括安装座、支撑轴、三个铰装在所述支撑轴上的支撑架;
所述安装座包括相互垂直的第一安装板和第二安装板,所述第一安装板和第二安装板均可用于安装所述摄像机,所述支撑轴垂直安装在所述第一安装板的底面,所述支撑轴远离所述安装座的底端设有径向尺寸大于所述支撑轴的圆周面,三个所述支撑架由上至下安装在所述支撑轴上,且每两个所述支撑架展开后的水平投影呈一夹角,所述支撑轴为伸缩杆件,其包括与所述安装座相连接的管体和部分可收缩至所述管体内的杆体,所述杆体伸入所述管体的部分包括依次铰接的第一段、第二段、第三段和第四段,所述第一段与所述管体相连接,所述第一段靠近所述第二段的端部设有安装槽,所述安装槽内铰接有锁止件,所述第二段靠近所述第一段的端部设有与锁止件可拆卸配合的锁止孔,所述第二段靠近所述第三段的端部设有安装槽,所述安装槽内铰接有锁止件,所述第三段靠近所述第二段的端部设有与锁止件可拆卸配合的锁止孔,所述第三段靠近所述第四段的端部设有安装槽,所述安装槽内铰接有锁止件,所述第四段靠近所述第三段的端部设有与锁止件可拆卸配合的锁止孔。
进一步的，每个所述支撑架的底端还连接有调距装置，所述调距装置包括安装在所述支撑架底部的轴承圈、与所述轴承圈相连接的转动环、管体、螺杆、螺套及支撑杆，所述管体的一端设有封堵，所述螺杆部分通过所述封堵安装在所述管体内，所述封堵设有与所述螺杆相适配的内螺纹，所述螺杆另一部分与所述转动环相连接，所述螺套一端安装在所述管体内并与所述螺杆螺纹连接，所述螺套的另一端伸出所述管体外并与所述支撑杆固定连接，所述管体的内壁设有一凸起，所述螺套的外侧壁沿其长度方向设有与所述凸起相适配的滑道，所述管体包括相邻的第一部分和第二部分，所述第一部分的内径小于所述第二部分的内径，所述封堵设置在所述第二部分的外端上，所述螺套靠近所述螺杆的端部设有外径大于所述第一部分内径的限位端。
本发明实施例的另一方面提供了一种AR成像装置,应用于增强现实设备,包括:
采集模块，用于采集人物图像，并标识所述人物图像中的五官特征点和肢体特征点；确定模块，用于利用预设的第一模型确定所述五官特征点对应的五官特征信息，利用预设的第二模型确定所述肢体特征点对应的肢体特征信息；生成模块，用于在虚拟形象数据库中，根据所述五官特征信息和所述肢体特征信息确定与所述人物图像匹配的虚拟形象；发送模块，用于将所述虚拟形象和所述增强现实设备的标识发送至所述服务器，以使所述服务器将所述虚拟形象添加至目标视频帧中进行增强现实处理，并将处理后的所述目标视频帧发送至所述增强现实设备；其中，所述目标视频帧来自移动终端拍摄的视频。
进一步地,所述虚拟形象库中包括多个虚拟形象的模型文件,每个虚拟形象的模型文件中包括肢体模型子文件和五官模型子文件,所述肢体模型子文件中的每个肢体模型携带有对应的肢体特征信息,所述五官模型子文件中的每个五官模型携带有对应的五官特征信息。
进一步地,所述生成模块包括:确定单元,用于确定目标虚拟形象;查找单元,用于在所述目标虚拟形象对应的模型文件中,根据所述五官特征信息查找匹配的目标五官模型,根据所述肢体特征信息查找匹配的目标肢体模型;组合单元,用于将所述目标五官模型和所述目标肢体模型进行组合,得到与人物图像匹配的虚拟形象。
本发明实施例的再一方面提供了一种AR成像装置,应用于服务器,包括:
第一接收模块,用于接收增强现实设备发送的虚拟形象和所述增强现实设备的标识;第二接收模块,用于接收移动终端拍摄的视频帧图像和所述移动终端的标识;确定模块,用于根据预先建立的增强现实设备的标识与所述移动终端的标识的对应关系表,确定与所述虚拟形象对应的目标视频帧图像;处理模块,用于根据所述虚拟形象对所述目标视频帧图像进行处理,并将处理后的所述目标视频帧图像发送至所述增强现实设备。
进一步地,所述处理模块包括:
判断单元,用于判断所述视频帧图像中是否存在与所述虚拟形象姿态或表情相近的目标对象;替换单元,用于存在与所述虚拟形象姿态或表情相近的目标对象时,则将所述目标对象替换成所述虚拟形象。
本发明实施例的又一方面提供一种电子设备,包括:至少一个处理器;以及,与所述至少一个处理器通信连接的存储器;其中,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行本发明实施例上述增强设备侧的任一项AR成像方法。
本发明实施例的再一方面提供一种服务器,包括:至少一个处理器;以及,与所述至少一个处理器通信连接的存储器;其中,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行本发明实施例上述服务器侧的任一项AR成像方法。
由以上技术方案可见，本发明实施例提供的AR成像方法、装置及电子设备，能够基于用户自身的意愿确定增强现实场景，同时能够生成用户对应的虚拟形象，使该虚拟形象置身于当前移动终端播放的视频中，增强用户的沉浸感和交互的趣味性。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明实施例中记载的一些实施例,对于本领域普通技术人员来讲,还可以根据这些附图获得其他的附图。
图1为本发明一个实施例提供的AR成像方法流程图;
图2为本发明一个实施例提供的AR成像方法中步骤S103的具体流程图;
图3为本发明一个实施例提供的AR成像方法流程图;
图4为本发明一个实施例提供的AR成像方法中步骤S303的具体流程图;
图5为本发明一个实施例提供的AR成像装置结构图;
图6为本发明一个实施例提供的AR成像装置结构图;
图7为本发明一个实施例提供的AR成像装置结构图;
图8为本发明一个实施例提供的AR成像装置结构图;
图9为执行本发明方法实施例提供的AR成像方法的电子设备的硬件结构示意图;
图10为本发明一个实施例提供的摄像头的结构图;
图11为本发明一个实施例提供的微型记忆合金光学防抖器的结构图;
图12为本发明一个实施例提供的微型记忆合金光学防抖器的一种工作状态结构图;
图13为本发明一个实施例提供的支架结构图;
图14为本发明一个实施例提供的支撑轴结构图;
图15为本发明一个实施例提供的调距装置结构图。
具体实施方式
为了使本领域的人员更好地理解本发明实施例中的技术方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅是本发明实施例一部分实施例,而不是全部的实施例。基于本发明实施例中的实施例,本领域普通技术人员所获得的所有其他实施例,都应当属于本发明实施例保护的范围。
首先介绍一下本发明实施例的应用场景，当用户佩戴增强现实设备观看移动终端中自己拍摄的视频时，若看到自己感兴趣的视频片段，可以与该片段进行增强现实的交互，即将用户的虚拟形象添加到该视频片段上，使该虚拟形象置身于当前播放的视频中。具体地，可以利用增强现实设备采集自己的人物头像，并处理得到与该人物图像相匹配的虚拟形象并发送至服务器，同时服务器获取该视频片段，利用该虚拟形象对该视频片段的帧图像进行增强现实处理，将处理好的视频帧图像返回至增强现实设备，从而使用户通过增强现实设备看到自己与视频的交互。
本发明实施例中的增强现实设备可以包括可被用户佩戴的眼镜和头盔。该增强现实设备中设置有人脸图像采集组件，当该设备主体被用户佩戴时，该人脸图像采集组件朝向用户脸部，且与该用户脸部存在预设距离，即不直接与用户的脸部接触。本发明实施例中的移动终端包括但不限于手机、平板电脑等。
下面结合附图,对本发明的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互结合。
图1为本发明实施例提供的AR成像方法流程图。如图1所示,本发明实施例提供的AR成像方法,应用于增强现实设备,包括:
S101,采集人物图像,并标识所述人物图像中的五官特征点和肢体特征点。
增强现实设备可以通过其设置的人脸图像采集组件来采集用户的人物图像。对人物图像的采集时机可以是响应于用户对增强设备的指令,即该指令为用户希望与当前播放的视频内容进行增强现实交互时,对增强设备的操作指令。
在采集到人物图像时,增强设备可以识别人物图像中的五官特征点和肢体特征点并进行标记。具体地识别方法可以是通过预先训练的模型、或者通过预先设定的特征点识别规则,本发明在此不做限定。肢体特征点可以包括:头部、脖子、手部、胳臂、躯干、腿部、以及脚部等。五官特征点可以包括:眉毛、眼睛、嘴巴等。
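上述特征点的标识与分组步骤可用如下Python草图示意（其中的类别名称与输入数据结构均为说明用的假设，并非本发明的具体实现；实际的特征点可由预先训练的模型或预设规则输出）：

```python
# 示意性草图：将识别出的特征点按五官/肢体分组并标识
FACIAL_KEYS = {"eyebrow", "eye", "nose", "mouth"}                    # 五官特征点类别（假设）
LIMB_KEYS = {"head", "neck", "hand", "arm", "torso", "leg", "foot"}  # 肢体特征点类别（假设）

def tag_feature_points(landmarks):
    """landmarks 为 [(名称, (x, y)), ...]，返回按五官/肢体分组后的标识结果。"""
    tagged = {"facial": {}, "limb": {}}
    for name, point in landmarks:
        if name in FACIAL_KEYS:
            tagged["facial"][name] = point
        elif name in LIMB_KEYS:
            tagged["limb"][name] = point
    return tagged
```

分组后的五官特征点与肢体特征点即可分别作为第一模型和第二模型的输入。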
S102,利用预设的第一模型确定所述五官特征点对应的五官特征信息,利用预设的第二模型确定所述肢体特征点对应的肢体特征信息。
预设的第一模型和第二模型，是用来根据五官特征点和肢体特征点来确定五官特征信息和肢体特征信息的模型。该预设的第一模型和第二模型可以采用例如卷积神经网络算法等现有算法实现，具体实现过程属于本领域技术人员的常用技术手段，此处不再赘述。其中，第一模型输入标识了五官特征点的人物图像，输出五官特征信息；第二模型输入标识了肢体特征点的人物图像，输出肢体特征信息。
可选地，五官特征信息可以包括五官的角度信息或五官的特征名称信息，肢体特征信息可以包括肢体的角度信息或肢体的特征名称信息。即第一模型能够通过标识的五官特征点确定人物图像中五官的角度（比如嘴角上扬15度，眉毛下弯5度等），或，直接根据标识的五官特征点得到五官的特征名称信息（比如嘴巴是微笑的、嘴巴是露齿大笑的、眉毛是舒展的等等）。相对应的，第二模型能够通过标识的肢体特征点确定人物图像中各肢体的角度（比如左臂向上45度，右腿向前迈开20度等），或，直接根据标识的肢体特征点得到各肢体的特征名称信息（比如双臂水平举起、脚尖抬起、头向左歪等等）。
S103,在虚拟形象数据库中,根据所述五官特征信息和所述肢体特征信息生成与所述人物图像匹配的虚拟形象。
虚拟形象可以指游戏角色、卡通人物、动漫人物等,本发明在此不做限制。其中,虚拟形象库中包括多个虚拟形象的模型文件,每个虚拟形象的模型文件中包括肢体模型子文件和五官模型子文件,肢体模型子文件中的每个肢体模型携带有对应的肢体特征信息,所述五官模型子文件中的每个五官模型携带有对应的五官特征信息。也就是说,肢体模型子文件中可以包括各肢体模型,其中每个肢体模型包括该肢体多个不同角度信息的肢体模型或包括多个不同肢体特征信息的肢体模型,以左臂为例,左臂模型中可以包括左臂上扬45度模型,左臂前伸90度模型,左臂后摆30度模型,或,包括左臂平举模型,左臂向内弯曲模型等。对应的,五官模型子文件中可以包括五官模型,其中每个五官模型包括多个不同角度信息的五官模型或包括多个不同五官特征信息的五官模型,例子与肢体模型相似,在此不做赘述。
具体地,如图2所示,步骤S103可以包括如下子步骤:
S1031,确定目标虚拟形象。
本步骤是为人物图像选定虚拟形象作为目标虚拟形象。可以是通过对用户历史行为数据的分析,自动为其选定目标虚拟形象;也可以是通过对人物图像的分析,自动为其选定目标虚拟形象;还可以是用户预先自主设定的,本发明在此不做限制。
S1032,在所述目标虚拟形象对应的模型文件中,根据所述五官特征信息查找匹配的目标五官模型,根据所述肢体特征信息查找匹配的目标肢体模型。
通过步骤S102中确定的五官特征信息,在目标虚拟形象的五官模型子文件中,根据每个模型携带的五官特征信息,查找到匹配的五官模型(即查找到与人物图像匹配的眼睛、鼻子、嘴等)。通过步骤S102中确定的肢体特征信息,在目标虚拟形象的肢体模型子文件中,根据每个模型携带的肢体特征信息,查找到匹配的肢体模型(即查找到与人物图像中人物动作匹配的胳臂、腿、躯干等)。
S1033,将所述目标五官模型和所述目标肢体模型进行组合,得到与人物图像匹配的虚拟形象。
由于人物的表情是通过五官反映出来的，将各目标五官模型组合在一起，即得到与人物图像中人物表情相匹配的虚拟人物表情；人物的动作是通过肢体反映出来的，将各目标肢体模型组合在一起，即得到与人物图像中人物动作相匹配的虚拟人物动作。最后，将两者结合在一起，即得到与人物图像相匹配的虚拟形象。
S104,将所述虚拟形象和所述增强现实设备的标识发送至所述服务器,以使所述服务器将所述虚拟形象添加至目标视频帧中进行增强现实处理,并将处理后的所述目标视频帧发送至所述增强现实设备。
由于想要进行增强现实的设备可能有多个，因此向服务器发送虚拟形象的同时，还应携带有该增强现实设备的标识，用来确定该虚拟形象来自哪一个增强现实设备。
此外,用户想要和当前播放的视频画面进行增强现实的交互时,可以向移动终端发出操作指令,该操作指令可以是双击移动终端的屏幕、长按移动终端的屏幕或触发移动终端的预设位置等,本发明在此不做限定。移动终端响应于用户的操作指令,将当前播放视频的视频帧图像和该移动终端的标识发送至服务器。
服务器中预先存储有增强现实设备标识与移动终端标识的对应关系表，即通过该对应关系表可以确定存在交互关系的增强现实设备和移动终端。服务器根据该对应关系表，来查找与接收到的虚拟形象对应的视频帧图像，将该视频帧图像确定为目标视频帧图像。然后，服务器将所述虚拟形象添加至目标视频帧中，对目标视频帧进行增强现实处理，将增强现实处理后的所述视频帧图像发送至所述增强现实设备中。
在通过上述对视频帧图像进行增强现实处理后,服务器将增强现实处理后的所述视频帧图像发送至所述增强现实设备。由于接收到用户的操作指令后,会不断重复上述步骤来对每帧视频帧图像进行增强现实的处理,因此佩戴该目标增强现实设备的用户即可以看到自己的虚拟形象出现在播放的视频中,完成增强现实的交互。
本发明实施例提供的AR成像方法,能够基于用户自身的意愿确定增强现实场景,同时能够生成用户对应的虚拟形象,使该虚拟形象置身于当前移动终端播放的视频中,增强用户的沉浸感和交互的趣味性。
图3为本发明实施例提供的AR成像方法流程图。如图3所示,本发明实施例提供的AR成像方法,包括:
S301,接收增强现实设备发送的虚拟形象和所述增强现实设备的标识。
具体地,虚拟形象的生成过程如图1、图2所示的实施例所述,本发明在此不做赘述。
S302,接收移动终端拍摄的视频帧图像和所述移动终端的标识。
由于想要进行增强现实的设备可能有多个，因此向服务器发送虚拟形象的同时，还应携带有该增强现实设备的标识，用来确定该虚拟形象来自哪一个增强现实设备。
此外,用户想要和当前播放的视频画面进行增强现实的交互时,可以向移动终端发出操作指令,该操作指令可以是双击移动终端的屏幕、长按移动终端的屏幕或触发移动终端的预设位置等,本发明在此不做限定。移动终端响应于用户的操作指令,将当前播放视频的视频帧图像和该移动终端的标识发送至服务器。
S303,根据预先建立的增强现实设备的标识与所述移动终端的标识的对应关系表,确定与所述虚拟形象对应的目标视频帧图像。
服务器中预先存储有增强现实设备标识与移动终端标识的对应关系表，即通过该对应关系表可以确定存在交互关系的增强现实设备和移动终端。服务器根据该对应关系表，来查找与接收到的虚拟形象对应的视频帧图像，将该视频帧图像确定为目标视频帧图像。
S304,根据所述虚拟形象对所述目标视频帧图像进行处理,并将处理后的所述目标视频帧图像发送至所述增强现实设备。
服务器将所述虚拟形象添加至目标视频帧中,对目标视频帧进行增强现实处理,将增强现实处理后的所述视频帧图像发送至所述增强现实设备中。
可选地,如图4所示,本步骤可以包括如下子步骤:
S3041,判断所述视频帧图像中是否存在与所述虚拟形象姿态或表情相近的目标对象。
具体地，用户在观看通过移动终端拍摄的视频时，可能希望把视频中的某个目标人物替换成自己，从而完成与视频的增强现实交互，此时，用户会做出和该目标人物相近的动作或与该目标人物相近的表情，并给增强现实设备发出指令，增强现实设备随即响应该指令采集包括该动作或该表情的人物图像，并根据图1或图2所示实施例的方法进行虚拟形象的生成。服务器可以将虚拟形象的姿态或表情与视频帧图像中出现的人物的姿态或表情进行对比。具体的对比方式可以是计算肢体姿态的重合度或表情的重合度，也可以通过预先训练的模型，将每个视频帧图像中出现的人物和虚拟形象输入该模型中，输出两者的重合度。当视频帧图像中某个人物与虚拟形象的重合度大于预设的阈值时，则说明该人物为与虚拟形象相近的目标对象。此时，则执行步骤S3042。
进一步地,当重合度小于预设的阈值时,说明用户并没有想要替换的目标人物。此时服务器可以根据虚拟形象的表情和/或虚拟形象的姿态在视频帧图像上找到与其匹配的目标位置,例如,视频帧图像为大海,虚拟形象的表情为微笑,则可以把海边沙滩的某一处作为与该虚拟形象匹配的目标位置。然后,将该目标位置对应的像素点替换成虚拟形象的像素点,从而完成视频帧图像的增强现实的处理。
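上述重合度阈值判断可用如下Python草图示意（其中 overlap 的计算方式为假设的简化实现，按各肢体角度差归一化，实际可由预先训练的模型输出重合度）：

```python
def select_target_object(avatar_pose, candidates, threshold=0.8):
    """在视频帧中比较各人物与虚拟形象的姿态重合度，
    返回重合度不低于阈值且最高的目标对象；若均低于阈值则返回 None，
    对应于转为按场景在帧图像上选取目标位置的情形。"""
    def overlap(a, b):
        # 简化的重合度：角度差越小重合度越高（假设的计算方式）
        diffs = [abs(a[k] - b[k]) for k in a if k in b]
        if not diffs:
            return 0.0
        return max(0.0, 1.0 - sum(diffs) / (len(diffs) * 180.0))
    best, best_score = None, threshold
    for obj_id, pose in candidates.items():
        score = overlap(avatar_pose, pose)
        if score >= best_score:
            best, best_score = obj_id, score
    return best
```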
S3042,将所述目标对象替换成所述虚拟形象。
服务器可以将该目标对象对应的像素点替换成虚拟形象的像素点,从而完成视频帧图像的增强现实的处理。
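像素点替换的过程可用如下Python草图示意（帧图像以二维像素阵列表示，mask 标记目标对象所占像素，均为说明用的假设输入）：

```python
def replace_pixels(frame, mask, avatar):
    """将目标对象对应的像素点替换成虚拟形象的像素点。
    frame、avatar 为同尺寸二维像素阵列，mask 中为 True 的位置属于目标对象。"""
    return [
        [avatar[y][x] if mask[y][x] else frame[y][x]
         for x in range(len(frame[0]))]
        for y in range(len(frame))
    ]
```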
在通过上述对视频帧图像进行增强现实处理后,服务器将增强现实处理后的所述视频帧图像发送至所述增强现实设备。由于接收到用户的操作指令后,会不断重复上述步骤来对每帧视频帧图像进行增强现实的处理,因此佩戴该目标增强现实设备的用户即可以看到自己的虚拟形象出现在播放的视频中,完成增强现实的交互。
本发明实施例提供的AR成像方法，能够基于用户自身的意愿确定增强现实场景，同时能够生成用户对应的虚拟形象，使该虚拟形象置身于当前移动终端播放的视频中，增强用户的沉浸感和交互的趣味性。
图5、图6为本发明实施例提供的AR成像装置结构图。如图5、图6所示,该装置具体包括:采集模块1000,确定模块2000,生成模块3000和发送模块4000。其中,
所述采集模块1000,用于采集人物图像,并标识所述人物图像中的五官特征点和肢体特征点;所述确定模块2000,用于利用预设的第一模型确定所述五官特征点对应的五官特征信息,利用预设的第二模型确定所述肢体特征点对应的肢体特征信息;所述生成模块3000,用于在虚拟形象数据库中,根据所述五官特征信息和所述肢体特征信息确定与所述人物图像匹配的虚拟形象;所述发送模块4000,用于将所述虚拟形象和所述增强现实设备的标识发送至所述服务器,以使所述服务器将所述虚拟形象添加至目标视频帧中进行增强现实处理,并将处理后的所述目标视频帧发送至所述增强现实设备;其中,所述目标视频帧来自移动终端拍摄的视频。
进一步地,所述虚拟形象库中包括多个虚拟形象的模型文件,每个虚拟形象的模型文件中包括肢体模型子文件和五官模型子文件,所述肢体模型子文件中的每个肢体模型携带有对应的肢体特征信息,所述五官模型子文件中的每个五官模型携带有对应的五官特征信息。
进一步地,所述生成模块3000包括:确定单元310,查找单元320,组合单元330。其中,
所述确定单元310,用于确定目标虚拟形象;所述查找单元320,用于在所述目标虚拟形象对应的模型文件中,根据所述五官特征信息查找匹配的目标五官模型,根据所述肢体特征信息查找匹配的目标肢体模型;所述组合单元330,用于将所述目标五官模型和所述目标肢体模型进行组合,得到与人物图像匹配的虚拟形象。
本发明实施例提供的AR成像装置具体用于执行图1、图2所示实施例提供的所述方法,其实现原理、方法和功能用途等与图1、图2所示实施例类似,在此不再赘述。
图7、图8为本发明实施例提供的AR成像装置结构图。如图7、图8所示,该装置具体包括:第一接收模块50,第二接收模块60,确定模块70和处理模块80。其中,
所述第一接收模块50，用于接收增强现实设备发送的虚拟形象和所述增强现实设备的标识；所述第二接收模块60，用于接收移动终端拍摄的视频帧图像和所述移动终端的标识；所述确定模块70，用于根据预先建立的增强现实设备的标识与所述移动终端的标识的对应关系表，确定与所述虚拟形象对应的目标视频帧图像；所述处理模块80，用于根据所述虚拟形象对所述目标视频帧图像进行处理，并将处理后的所述目标视频帧图像发送至所述增强现实设备。
进一步地,所述处理模块80包括:判断单元810和替换单元820。其中,所述判断单元810,用于判断所述视频帧图像中是否存在与所述虚拟形象姿态或表情相近的目标对象;所述替换单元820,用于存在与所述虚拟形象姿态或表情相近的目标对象时,则将所述目标对象替换成所述虚拟形象。
本发明实施例提供的AR成像装置具体用于执行图3、图4所示实施例提供的所述方法,其实现原理、方法和功能用途等与图3、图4所示实施例类似,在此不再赘述。
上述这些本发明实施例的AR成像装置可以作为其中一个软件或者硬件功能单元,独立设置在上述电子设备中,也可以作为整合在处理器中的其中一个功能模块,执行本发明实施例的AR成像方法。
图9为执行本发明方法实施例提供的AR成像方法的电子设备的硬件结构示意图。根据图9所示,该电子设备包括:
一个或多个处理器910以及存储器920,图9中以一个处理器910为例。
执行所述的AR成像方法的设备还可以包括：输入装置930和输出装置940。
处理器910、存储器920、输入装置930和输出装置940可以通过总线或者其他方式连接,图9中以通过总线连接为例。
存储器920作为一种非易失性计算机可读存储介质,可用于存储非易失性软件程序、非易失性计算机可执行程序以及模块,如本发明实施例中的所述AR成像方法对应的程序指令/模块。处理器910通过运行存储在存储器920中的非易失性软件程序、指令以及模块,从而执行服务器的各种功能应用以及数据处理,即实现所述AR成像方法。
存储器920可以包括存储程序区和存储数据区，其中，存储程序区可存储操作装置、至少一个功能所需要的应用程序；存储数据区可存储根据本发明实施例提供的AR成像装置的使用所创建的数据等。此外，存储器920可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实施例中，存储器920可选包括相对于处理器910远程设置的存储器，这些远程存储器可以通过网络连接至所述AR成像装置。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
输入装置930可接收输入的数字或字符信息,以及产生与AR成像装置的用户设置以及功能控制有关的键信号输入。输入装置930可包括按压模组等设备。
所述一个或者多个模块存储在所述存储器920中,当被所述一个或者多个处理器910执行时,执行所述AR成像方法。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性的劳动的情况下,即可以理解并实施。
本发明实施例提供了一种非暂态计算机可读存储介质，所述计算机存储介质存储有计算机可执行指令，其中，当所述计算机可执行指令被电子设备执行时，使所述电子设备执行上述任意方法实施例中的AR成像方法。
本发明实施例提供了一种计算机程序产品,其中,所述计算机程序产品包括存储在非暂态计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,其中,当所述程序指令被电子设备执行时,使所述电子设备执行上述任意方法实施例中的AR成像方法。
在另一实施例中,为了使上述实施例所述的AR成像的方法或装置获得更高的AR成像质量,有必要令移动终端输出质量更高的视频数据,因此本实施例提供了一种具有更好防抖性能的移动终端的摄像头,通过该摄像头获取的图片或视频相比于普通摄像头更加清晰,更能满足用户的高质量需求。特别是本实施例中的摄像头获取的视频用于上述实施例中的AR成像方法或装置时,增强现实的效果更佳。
具体的，现有的移动终端摄像头（移动终端为手机或摄像机等）包括镜头1、自动聚焦音圈马达2、图像传感器3，这些均为本领域技术人员公知的现有技术，因此这里不过多描述。通常采用微型记忆合金光学防抖器是因为现有的防抖器大多由通电线圈在磁场中产生洛伦兹力驱动镜头移动，而要实现光学防抖，需要在至少两个方向上驱动镜头，这意味着需要布置多个线圈，会给整体结构的微型化带来一定挑战，而且容易受外界磁场干扰，进而影响防抖效果，一些现有技术通过温度变化实现记忆合金丝的拉伸和缩短，以此拉动自动聚焦音圈马达移动，实现镜头的抖动补偿，微型记忆合金光学防抖致动器的控制芯片可以控制驱动信号的变化来改变记忆合金丝的温度，以此控制记忆合金丝的伸长和缩短，并且根据记忆合金丝的电阻来计算致动器的位置和移动距离。当微型记忆合金光学防抖致动器上移动到指定位置后反馈记忆合金丝此时的电阻，通过比较这个电阻值与目标值的偏差，可以校正微型记忆合金光学防抖致动器上的移动偏差。但是申请人发现，由于抖动的随机性和不确定性，仅仅依靠上述技术方案的结构是无法在多次抖动发生的情况下对镜头进行精确补偿的，这是由于形状记忆合金的升温和降温均需要一定的时间，当抖动向第一方向发生时，上述技术方案可以实现镜头对第一方向抖动的补偿，但是当随之而来的第二方向的抖动发生时，由于记忆合金丝来不及在瞬间变形，因此容易造成补偿不及时，无法精准实现对多次抖动和不同方向的连续抖动的镜头抖动补偿，这导致了获取的图片质量不佳，因此需要对摄像头或摄像机的结构进行改进。
如图10所示,本实施例的所述摄像头包括镜头1、自动聚焦音圈马达2、图像传感器3以及微型记忆合金光学防抖器4,所述镜头1固装在所述自动聚焦音圈马达2上,所述图像传感器3将所述镜头1获取的图像传输至所述识别模块100,所述自动聚焦音圈马达2安装在所述微型记忆合金光学防抖器4上,所述移动终端内部处理器根据移动终端内部陀螺仪(图中未示出)检测到的镜头抖动驱动所述微型记忆合金光学防抖器4的动作,实现镜头的抖动补偿;
结合附图11所示,对所述微型记忆合金光学防抖器的改进之处介绍如下:
所述微型记忆合金光学防抖器包括活动板5和基板6,活动板5和基板6均为矩形板状件,所述自动聚焦音圈马达2安装在所述活动板5上,所述基板6的尺寸大于所述活动板5的尺寸,所述活动板5安装在所述基板6上,所述活动板5和所述基板6之间设有多个活动支撑7,所述活动支撑7具体为设置在所述基板6四个角处凹槽内的滚珠,便于活动板5在基板6上的移动,所述基板6的四周具有四个侧壁,每个所述侧壁的中部均设有一缺口8,所述缺口8处安装有微动开关9,所述微动开关9的活动件10可以在所述处理模块的指令下打开或封闭所述缺口,所述活动件10靠近所述活动板5的侧面设有沿所述活动件10宽度方向布设的条形的电触点11,所述基板6设有与所述电触点11相连接的温控电路(图中未示出),所述处理模块可以根据陀螺仪检测到的镜头抖动方向控制所述温控电路的开闭,所述活动板5的四个侧边的中部均设有形状记忆合金丝12,所述形状记忆合金丝12一端与所述活动板5固定连接,另一端与所述电触点11滑动配合,所述基板6的四周的内侧壁与所述活动板5之间均设有用于复位的弹性件13,具体的,本实施例的所述弹性件优选为微型的弹簧。
下面结合上述结构对本实施例的微型记忆合金光学防抖器的工作过程进行详细的描述：以镜头两次方向相反的抖动为例，当镜头发生向第一方向抖动时，陀螺仪将检测到的镜头抖动方向和距离反馈给所述处理器，处理器计算出需要控制可以补偿该抖动的形状记忆合金丝的伸长量，并驱动相应的温控电路对该形状记忆合金丝进行升温，该形状记忆合金丝伸长并带动活动板向可补偿第一方向抖动的方向运动，与此同时与该形状记忆合金丝相对称的另一形状记忆合金丝没有变化，但是与该另一形状记忆合金丝相连接的活动件会打开与其对应的缺口，便于所述另一形状记忆合金丝在活动板的带动下向缺口外伸出，此时，两个形状记忆合金丝附近的弹性件分别拉伸和压缩（如图12所示），当微型记忆合金光学防抖致动器上移动到指定位置后反馈该形状记忆合金丝的电阻，通过比较这个电阻值与目标值的偏差，可以校正微型记忆合金光学防抖致动器上的移动偏差；而当第二次抖动发生时，处理器首先通过与另一形状记忆合金丝相抵接的活动件关闭缺口，并且打开与处于伸长状态的该形状记忆合金丝相抵接的活动件，与另一形状记忆合金丝相抵接的活动件的转动可以推动另一形状记忆合金丝复位，与处于伸长状态的该形状记忆合金丝相抵接的活动件的打开可以便于伸长状态的形状记忆合金丝伸出，并且在上述的两个弹性件的弹性作用下可以保证活动板迅速复位，同时处理器再次计算出需要控制可以补偿第二次抖动的形状记忆合金丝的伸长量，并驱动相应的温控电路对另一形状记忆合金丝进行升温，另一形状记忆合金丝伸长并带动活动板向可补偿第二方向抖动的方向运动，由于在先伸长的形状记忆合金丝处的缺口打开，因此不会影响另一形状记忆合金丝带动活动板运动，而由于活动件的打开速度和弹簧的复位作用，因此在发生多次抖动时，本实施例的微型记忆合金光学防抖器均可做出精准的补偿，其效果远远优于现有技术中的微型记忆合金光学防抖器。
当然上述仅仅为简单的两次抖动,当发生多次抖动时,或者抖动的方向并非往复运动时,可以通过驱动两个相邻的形状记忆合金丝伸长以补偿抖动,其基础工作过程与上述描述原理相同,这里不过多赘述,另外关于形状记忆合金电阻的检测反馈、陀螺仪的检测反馈等均为现有技术,这里也不做赘述。
另一实施例中，移动终端为摄像机，所述摄像机可以安装于支架上，但是申请人在使用过程中发现，现有的摄像机的支架具有以下缺陷：1、现有的摄像机支架均采用三脚架支撑，但是三脚架结构在地面不平整存在较大凹凸不平的位置进行安装时无法保证支架安装座的水平，容易发生抖动或者倾斜，对拍摄容易产生不良的影响；2、现有的支架无法作为肩扛式摄影机支架，结构和功能单一，在需要肩扛拍摄时必须单独配备肩扛式摄影机支架。
因此，申请人对支架结构进行改进，如图13和14所示，本实施例的所述支架包括安装座14、支撑轴15、三个铰装在所述支撑轴上的支撑架16；所述安装座14包括相互垂直的第一安装板141和第二安装板142，所述第一安装板141和第二安装板142均可用于安装所述摄像机，所述支撑轴15垂直安装在所述第一安装板141的底面，所述支撑轴15远离所述安装座14的底端设有径向尺寸略大于所述支撑轴的圆周面17，三个所述支撑架16由上至下安装在所述支撑轴15上，且每两个所述支撑架16展开后的水平投影呈一倾角，上述结构在进行支架的架设时，首先将圆周面17架设在凹凸不平的平面较平整的一小块区域，再通过打开并调整三个可伸缩的支撑架的位置实现支架的架设平整，因此即使是凹凸不平的地面也能迅速将支架架设平整，适应各种地形，保证安装座处于水平状态。
更有利的，本实施例的所述支撑轴15也是伸缩杆件，其包括与所述安装座14相连接的管体151和部分可收缩至所述管体151内的杆体152，所述杆体152伸入所述管体的部分包括依次铰接的第一段1521、第二段1522、第三段1523和第四段1524，所述第一段1521与所述管体151相连接，所述第一段1521靠近所述第二段1522的端部设有安装槽18，所述安装槽18内铰接有锁止件19，所述第二段1522靠近所述第一段1521的端部设有与锁止件19可拆卸配合的锁止孔20，同理，所述第二段1522靠近所述第三段1523的端部设有安装槽18，所述安装槽18内铰接有锁止件19，所述第三段1523靠近所述第二段1522的端部设有与锁止件19可拆卸配合的锁止孔20，所述第三段1523靠近所述第四段1524的端部设有安装槽18，所述安装槽18内铰接有锁止件19，所述第四段1524靠近所述第三段1523的端部设有与锁止件19可拆卸配合的锁止孔20，所述锁止件可以隐藏在安装槽内，当需要使用锁止件时可以通过转动锁止件，将锁止件扣合在所述锁止孔20上，具体的，所述锁止件19可以是具有一个凸起的条形件，该凸起与所述锁止孔20的大小尺寸相适配，将凸起压紧在锁止孔20内完成相邻两个段（例如第一段和第二段）位置的固定，防止相对转动，而通过第一段1521、第二段1522、第三段1523和第四段1524的配合可以将该部分形成一
Figure PCTCN2018094077-appb-000001
形结构，并且通过锁止件19固定各个段的相对位置，还可以在该结构的底部设有软质材料，当需要将支架作为肩扛式摄像机支架时，该部分放置在用户的肩部，通过把持三个支撑架中的一个作为肩扛式支架的手持部，可以快速的实现由固定式支架到肩扛式支架的切换，十分方便。
另外，申请人还发现，可伸缩的支撑架大多通过人力拉出伸缩部分以实现伸缩长度的调节，但是该距离不可控制，随机性较大，因此常常出现调节不便的问题，特别是需要将伸缩长度部分微调时，往往不容易实现，因此申请人还对支撑架16的结构进行优化，结合附图15所示，本实施例的每个所述支撑架16的底端还连接有调距装置21，所述调距装置21包括安装在所述支撑架16底部的轴承圈211、与所述轴承圈211相连接的转动环212、管体213、螺杆214、螺套215及支撑杆216，所述管体213的一端设有封堵217，所述螺杆214部分通过所述封堵217安装在所述管体213内，所述封堵217设有与所述螺杆214相适配的内螺纹，所述螺杆214另一部分与所述转动环212相连接，所述螺套215一端安装在所述管体213内并与所述螺杆214螺纹连接，所述螺套215的另一端伸出所述管体213外并与所述支撑杆216固定连接，所述管体213的内壁设有一凸起218，所述螺套215的外侧壁沿其长度方向设有与所述凸起相适配的滑道219，所述管体213包括相邻的第一部分2131和第二部分2132，所述第一部分2131的内径小于所述第二部分2132的内径，所述封堵217设置在所述第二部分2132的外端上，所述螺套215靠近所述螺杆214的端部设有外径大于所述第一部分内径的限位端2151，通过转动所述转动环212带动螺杆214在管体213内转动，并将转动趋势传递给所述螺套215，而由于螺套受凸起218和滑道219的配合影响，无法转动，因此将转动力化为向外的直线移动，进而带动支撑杆216运动，实现支撑架底端的长度微调节，便于用户架平支架及其安装座，为后续的拍摄工作提供良好的基础保障。
通过以上的实施方式的描述，本领域的技术人员可以清楚地了解到各实施方式可借助软件加必需的通用硬件平台的方式来实现，当然也可以通过硬件。基于这样的理解，上述技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品可以存储在计算机可读存储介质中，所述计算机可读记录介质包括用于以机器（例如计算机）可读的形式存储或传送信息的任何机制。例如，机器可读介质包括只读存储器（ROM）、随机存取存储器（RAM）、磁盘存储介质、光存储介质、闪速存储介质、电、光、声或其他形式的传播信号（例如，载波、红外信号、数字信号等）等，该计算机软件产品包括若干指令用以使得一台计算机设备（可以是个人计算机，服务器，或者网络设备等）执行各个实施例或者实施例的某些部分所述的方法。
最后应说明的是:以上实施例仅用以说明本发明实施例的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (10)

  1. 一种AR成像方法,应用于增强现实设备,其特征在于,包括:
    采集人物图像,并标识所述人物图像中的五官特征点和肢体特征点;
    利用预设的第一模型确定所述五官特征点对应的五官特征信息,利用预设的第二模型确定所述肢体特征点对应的肢体特征信息;
    在虚拟形象数据库中,根据所述五官特征信息和所述肢体特征信息生成与所述人物图像匹配的虚拟形象;
    将所述虚拟形象和所述增强现实设备的标识发送至所述服务器,以使所述服务器将所述虚拟形象添加至目标视频帧中进行增强现实处理,并将处理后的所述目标视频帧发送至所述增强现实设备;
    其中,所述目标视频帧来自移动终端拍摄的视频。
  2. 根据权利要求1所述的方法,其特征在于,所述虚拟形象库中包括多个虚拟形象的模型文件,每个虚拟形象的模型文件中包括肢体模型子文件和五官模型子文件,所述肢体模型子文件中的每个肢体模型携带有对应的肢体特征信息,所述五官模型子文件中的每个五官模型携带有对应的五官特征信息。
  3. 根据权利要求2所述的方法,其特征在于,所述在虚拟形象数据库中,根据所述五官特征信息和所述肢体特征信息生成与所述人物图像匹配的虚拟形象,包括:
    确定目标虚拟形象;
    在所述目标虚拟形象对应的模型文件中,根据所述五官特征信息查找匹配的目标五官模型,根据所述肢体特征信息查找匹配的目标肢体模型;
    将所述目标五官模型和所述目标肢体模型进行组合,得到与人物图像匹配的虚拟形象。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述五官特征信息包括五官的角度信息或五官的特征名称信息,所述肢体特征信息包括肢体的角度信息或肢体的特征名称信息。
  5. 一种AR成像方法,应用于服务器,其特征在于,包括:
    接收增强现实设备发送的虚拟形象和所述增强现实设备的标识;
    接收移动终端拍摄的视频帧图像和所述移动终端的标识;
    根据预先建立的增强现实设备的标识与所述移动终端的标识的对应关系表,确定与所述虚拟形象对应的目标视频帧图像;
    根据所述虚拟形象对所述目标视频帧图像进行处理,并将处理后的所述目标视频帧图像发送至所述增强现实设备。
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述虚拟形象对所述目标视频帧图像进行处理,包括:
    判断所述视频帧图像中是否存在与所述虚拟形象姿态或表情相近的目标对象;
    若存在,则将所述目标对象替换成所述虚拟形象。
  7. 一种AR成像装置,应用于增强现实设备,其特征在于,包括:
    采集模块,用于采集人物图像,并标识所述人物图像中的五官特征点和肢体特征点;
    确定模块,用于利用预设的第一模型确定所述五官特征点对应的五官特征信息,利用预设的第二模型确定所述肢体特征点对应的肢体特征信息;
    生成模块,用于在虚拟形象数据库中,根据所述五官特征信息和所述肢体特征信息确定与所述人物图像匹配的虚拟形象;
    发送模块,用于将所述虚拟形象和所述增强现实设备的标识发送至所述服务器,以使所述服务器将所述虚拟形象添加至目标视频帧中进行增强现实处理,并将处理后的所述目标视频帧发送至所述增强现实设备;其中,所述目标视频帧来自移动终端拍摄的视频。
  8. 根据权利要求7所述的装置,其特征在于,所述虚拟形象库中包括多个虚拟形象的模型文件,每个虚拟形象的模型文件中包括肢体模型子文件和五官模型子文件,所述肢体模型子文件中的每个肢体模型携带有对应的肢体特征信息,所述五官模型子文件中的每个五官模型携带有对应的五官特征信息。
  9. 根据权利要求8所述的装置，其特征在于，所述生成模块包括：
    确定单元,用于确定目标虚拟形象;
    查找单元,用于在所述目标虚拟形象对应的模型文件中,根据所述五官特征信息查找匹配的目标五官模型,根据所述肢体特征信息查找匹配的目标肢体模型;
    组合单元,用于将所述目标五官模型和所述目标肢体模型进行组合,得到与人物图像匹配的虚拟形象。
  10. 一种AR成像装置,应用于服务器,其特征在于,包括:
    第一接收模块,用于接收增强现实设备发送的虚拟形象和所述增强现实设备的标识;
    第二接收模块,用于接收移动终端拍摄的视频帧图像和所述移动终端的标识;
    确定模块,用于根据预先建立的增强现实设备的标识与所述移动终端的标识的对应关系表,确定与所述虚拟形象对应的目标视频帧图像;
    处理模块,用于根据所述虚拟形象对所述目标视频帧图像进行处理,并将处理后的所述目标视频帧图像发送至所述增强现实设备。
PCT/CN2018/094077 2018-04-23 2018-07-02 Ar成像方法和装置 WO2019205284A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810366133.9A CN108614638B (zh) 2018-04-23 2018-04-23 Ar成像方法和装置
CN201810366133.9 2018-04-23

Publications (1)

Publication Number Publication Date
WO2019205284A1 true WO2019205284A1 (zh) 2019-10-31

Family

ID=63660625

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/094077 WO2019205284A1 (zh) 2018-04-23 2018-07-02 Ar成像方法和装置

Country Status (2)

Country Link
CN (1) CN108614638B (zh)
WO (1) WO2019205284A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113366489A (zh) * 2018-11-07 2021-09-07 脸谱公司 检测增强现实目标
EP4050561A4 (en) * 2020-09-09 2023-07-05 Beijing Zitiao Network Technology Co., Ltd. AUGMENTED REALITY-BASED DISPLAY METHOD, DEVICE, AND STORAGE MEDIA

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740476B (zh) * 2018-12-25 2021-08-20 北京琳云信息科技有限责任公司 即时通讯方法、装置和服务器
CN112511815B (zh) * 2019-12-05 2022-01-21 中兴通讯股份有限公司 图像或视频生成方法及装置
CN113126746A (zh) * 2019-12-31 2021-07-16 中移(成都)信息通信科技有限公司 一种虚拟对象模型控制方法、系统及计算机可读存储介质
CN111104927B (zh) * 2019-12-31 2024-03-22 维沃移动通信有限公司 一种目标人物的信息获取方法及电子设备
CN111583355B (zh) * 2020-05-09 2024-01-23 维沃移动通信有限公司 面部形象生成方法、装置、电子设备及可读存储介质
CN111638794A (zh) * 2020-06-04 2020-09-08 上海商汤智能科技有限公司 一种虚拟文物的显示控制方法及装置
CN111694431A (zh) * 2020-06-09 2020-09-22 浙江商汤科技开发有限公司 一种人物形象生成的方法及装置
JP7427786B2 (ja) 2021-02-09 2024-02-05 北京字跳▲網▼絡技▲術▼有限公司 拡張現実に基づく表示方法、機器、記憶媒体及びプログラム製品
CN113507573A (zh) * 2021-08-13 2021-10-15 维沃移动通信(杭州)有限公司 视频生成方法、视频生成装置、电子设备和可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140247199A1 (en) * 2011-03-21 2014-09-04 HJ Laboratories, LLC Providing augmented reality based on third party information
CN105608745A (zh) * 2015-12-21 2016-05-25 大连新锐天地传媒有限公司 应用于图像或视频的ar显示系统
CN106803921A (zh) * 2017-03-20 2017-06-06 深圳市丰巨泰科电子有限公司 基于ar技术的即时音视频通信方法及装置
CN107248195A (zh) * 2017-05-31 2017-10-13 珠海金山网络游戏科技有限公司 一种增强现实的主播方法、装置和系统

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140176661A1 (en) * 2012-12-21 2014-06-26 G. Anthony Reina System and method for surgical telementoring and training with virtualized telestration and haptic holograms, including metadata tagging, encapsulation and saving multi-modal streaming medical imagery together with multi-dimensional [4-d] virtual mesh and multi-sensory annotation in standard file formats used for digital imaging and communications in medicine (dicom)
CN103179437A (zh) * 2013-03-15 2013-06-26 苏州跨界软件科技有限公司 虚拟人物视频录制及播放系统和方法
CN105657294A (zh) * 2016-03-09 2016-06-08 北京奇虎科技有限公司 在移动终端上呈现虚拟特效的方法及装置
CN106127167B (zh) * 2016-06-28 2019-06-25 Oppo广东移动通信有限公司 一种增强现实中目标对象的识别方法、装置及移动终端
US10482662B2 (en) * 2016-06-30 2019-11-19 Intel Corporation Systems and methods for mixed reality transitions
CN106775198A (zh) * 2016-11-15 2017-05-31 捷开通讯(深圳)有限公司 一种基于混合现实技术实现陪伴的方法及装置
CN106993195A (zh) * 2017-03-24 2017-07-28 广州创幻数码科技有限公司 虚拟人物角色直播方法及系统
CN107613310B (zh) * 2017-09-08 2020-08-04 广州华多网络科技有限公司 一种直播方法、装置及电子设备
CN107749075B (zh) * 2017-10-26 2021-02-12 太平洋未来科技(深圳)有限公司 视频中虚拟对象光影效果的生成方法和装置
CN107728787B (zh) * 2017-10-30 2020-07-07 太平洋未来科技(深圳)有限公司 全景视频中的信息显示方法和装置
CN107749076B (zh) * 2017-11-01 2021-04-20 太平洋未来科技(深圳)有限公司 增强现实场景中生成现实光照的方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140247199A1 (en) * 2011-03-21 2014-09-04 HJ Laboratories, LLC Providing augmented reality based on third party information
CN105608745A (zh) * 2015-12-21 2016-05-25 大连新锐天地传媒有限公司 应用于图像或视频的ar显示系统
CN106803921A (zh) * 2017-03-20 2017-06-06 深圳市丰巨泰科电子有限公司 基于ar技术的即时音视频通信方法及装置
CN107248195A (zh) * 2017-05-31 2017-10-13 珠海金山网络游戏科技有限公司 一种增强现实的主播方法、装置和系统


Also Published As

Publication number Publication date
CN108614638A (zh) 2018-10-02
CN108614638B (zh) 2020-07-07

Similar Documents

Publication Publication Date Title
WO2019205284A1 (zh) Ar成像方法和装置
CN108596827B (zh) 三维人脸模型生成方法、装置及电子设备
CN108377398B (zh) 基于红外的ar成像方法、系统、及电子设备
US9838597B2 (en) Imaging device, imaging method, and program
CN109151340B (zh) 视频处理方法、装置及电子设备
RU2679316C1 (ru) Способ и устройство для воспроизведения видеоконтента из любого местоположения и с любого момента времени
US9479736B1 (en) Rendered audiovisual communication
JP5395956B2 (ja) 情報処理システムおよび情報処理方法
WO2019200718A1 (zh) 图像处理方法、装置及电子设备
US20160088286A1 (en) Method and system for an automatic sensing, analysis, composition and direction of a 3d space, scene, object, and equipment
US8514285B2 (en) Image processing apparatus, image processing method and program
WO2019200720A1 (zh) 基于图像处理的环境光补偿方法、装置及电子设备
CN109285216B (zh) 基于遮挡图像生成三维人脸图像方法、装置及电子设备
WO2020037681A1 (zh) 视频生成方法、装置及电子设备
JP2014143545A (ja) 撮影機器
WO2020056689A1 (zh) 一种ar成像方法、装置及电子设备
WO2020056691A1 (zh) 一种交互对象的生成方法、装置及电子设备
WO2020056692A1 (zh) 一种信息交互方法、装置及电子设备
WO2016090759A1 (zh) 一种智能拍照的方法和装置
Chu et al. Design of a motion-based gestural menu-selection interface for a self-portrait camera
CN109447924B (zh) 一种图片合成方法、装置及电子设备
WO2021026782A1 (zh) 手持云台的控制方法、控制装置、手持云台及存储介质
CN107426522B (zh) 基于虚拟现实设备的视频方法和系统
CN107250895A (zh) 头戴式显示设备及其摄像头的调节方法
TWI746463B (zh) 虛擬實境頭戴式裝置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18916523

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.03.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18916523

Country of ref document: EP

Kind code of ref document: A1