WO2019205283A1 - Infrared-based AR imaging method, system and electronic device - Google Patents

Infrared-based AR imaging method, system and electronic device

Info

Publication number
WO2019205283A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
augmented reality
server
target
reality device
Prior art date
Application number
PCT/CN2018/094074
Other languages
English (en)
French (fr)
Inventor
李建亿
Original Assignee
太平洋未来科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 太平洋未来科技(深圳)有限公司
Publication of WO2019205283A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/172 Classification, e.g. identification

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an infrared-based AR imaging method, system, and electronic device.
  • Augmented reality (AR) is an improvement on virtual reality. It can superimpose virtual scenes on real-world scenes in real time, providing users with more realistic scenes and further enhancing immersion.
  • Currently, the video provider generally guides the user in how to perform an augmented reality interaction. When the user encounters a video of interest, if that video does not offer augmented reality, the user cannot interact with it through augmented reality, which reduces the user experience and limits the technological development of augmented reality.
  • the infrared-based AR imaging method, system and electronic device provided by the embodiments of the present invention are used to at least solve the above problems in the related art.
  • An embodiment of the present invention provides an infrared-based AR imaging method, including:
  • the mobile terminal detects an infrared signal sent by the augmented reality device and determines a corresponding augmented reality device identifier according to the infrared signal; in response to an operation instruction of the user, the mobile terminal sends the video frame image of the currently played video and the augmented reality device identifier to the server.
  • the method further includes: the augmented reality device collects the face image and sends the face image to the server; the server determines the target face image from the face image according to the augmented reality device identifier.
  • the method further includes: the server acquires feature information from the target face image and generates a first 3D face image matching the target face image according to the feature information; the server lights the first 3D face image to obtain a second 3D face image; and the server performs light and shadow processing on the target face image according to the first 3D face image and the second 3D face image.
  • the method further includes: the mobile terminal determining, according to the infrared signal, first target location information in the video, and transmitting the first target location information to the server.
  • the augmented reality processing specifically includes: the server adding the target face image to the video frame image according to the first target location information.
  • alternatively, the augmented reality processing specifically includes: the server analyzing the video frame image to determine second target location information, and the server adding the target face image to the video frame image according to the second target location information.
  • the video is captured by a camera of the mobile terminal;
  • the camera includes a lens, an autofocus voice coil motor, an image sensor, and a micro memory alloy optical image stabilization device; the lens is fixed on the autofocus voice coil motor, the image sensor converts the optical scene acquired by the lens into image data, the autofocus voice coil motor is mounted on the micro memory alloy optical image stabilization device, and the processor of the mobile terminal drives the micro memory alloy optical image stabilization device according to the lens shake data detected by the gyroscope, thereby achieving shake compensation for the lens;
  • the micro memory alloy optical image stabilization device includes a movable plate and a substrate; the autofocus voice coil motor is mounted on the movable plate, the substrate is larger in size than the movable plate, and the movable plate is mounted on the substrate;
  • a plurality of movable supports are disposed between the movable plate and the substrate; four side walls are provided on the periphery of the substrate, a notch is formed in the middle portion of each side wall, and a micro switch is installed in each notch; the movable member of the micro switch can open or close the notch under the instruction of the processor, and the side of the movable member near the movable plate is provided with a strip-shaped electrical contact disposed along the width direction of the movable member;
  • the substrate is provided with a temperature control circuit connected to the electrical contact, and the processor controls the opening and closing of the temperature control circuit according to the lens shake direction detected by the gyroscope;
  • the middle of each of the four sides of the movable plate is connected to the substrate by an elastic member, and the elastic member is a spring.
  • the mobile terminal can be mounted on a bracket; the bracket includes a mounting base, a support shaft, and three support frames hinged on the support shaft;
  • the mounting base includes a first mounting plate and a second mounting plate that are perpendicular to each other, both of which are used for mounting the mobile terminal; the support shaft is vertically mounted on the bottom surface of the first mounting plate, and the end of the support shaft away from the mounting base is provided with a circumferential surface whose radial dimension is larger than that of the support shaft; the three support frames are mounted on the support shaft from top to bottom, with the horizontal projections of every two support frames at an angle; the support shaft is a telescopic rod member comprising a tube body connected to the mounting base and a rod body partially retractable into the tube body, and the portion of the rod body that extends into the tube body includes a first section, a second section, a third section, and a fourth section that are sequentially hinged, the first section being coupled to the tube body;
  • the end of the first section near the second section is provided with a mounting groove, a locking member is hinged in the mounting groove, and the end of the second section near the first section is provided with a locking hole detachably engaged with the locking member;
  • the end of the second section near the third section is provided with a mounting groove in which a locking member is hinged; the end of the third section near the second section is provided with a locking hole detachably engaged with the locking member; the end of the third section near the fourth section is provided with a mounting groove in which a locking member is hinged; and the end of the fourth section near the third section is provided with a locking hole detachably engaged with the locking member.
  • each of the support frames is further connected with a distance adjusting device;
  • the distance adjusting device comprises a bearing ring mounted on the bottom of the support frame, a rotating ring connected to the bearing ring, a tube body, a screw, a threaded sleeve, and a support rod; one end of the tube body is provided with a plug, part of the screw is installed in the tube body through the plug, and the plug is provided with an internal thread adapted to the screw; the other part of the screw is connected to the rotating ring; one end of the threaded sleeve is installed in the tube body and is screwed onto the screw, while the other end of the threaded sleeve protrudes outside the tube body and is fixedly connected to the support rod; the inner wall of the tube body is provided with a protrusion, the outer side wall of the threaded sleeve is provided with a slide rail adapted to the protrusion along its length direction, and the tube body includes adjacent
  • an infrared-based AR imaging system including: a mobile terminal, an augmented reality device, and a server;
  • the mobile terminal includes a detection module and a first sending module; the detection module is configured to detect an infrared signal sent by the augmented reality device and determine a corresponding augmented reality device identifier according to the infrared signal; the first sending module is configured to transmit, to the server, a video frame image of the currently played video and the augmented reality device identifier in response to an operation instruction of the user, wherein the video is a video captured by the user through the mobile terminal;
  • the augmented reality device includes an acquisition module, and the collection module is configured to collect a face image and send the face image to the server;
  • the server includes a first determining module, a processing module, and a second sending module; the first determining module is configured to determine a target face image from the face image according to the augmented reality device identifier; the processing module is configured to perform augmented reality processing on the video frame image according to the target face image; and the second sending module is configured to send the augmented reality processed video frame image to the target augmented reality device corresponding to the augmented reality device identifier.
  • the mobile terminal further includes a second determining module configured to determine first target location information in the video according to the infrared signal, and the first sending module is further configured to send the first target location information to the server.
  • the processing module is configured to add the face image to the video frame image according to the first target location information.
  • the camera of the mobile terminal includes a lens, an autofocus voice coil motor, an image sensor, and a micro memory alloy optical image stabilization device; the lens is fixed on the autofocus voice coil motor, the image sensor converts the optical scene acquired by the lens into image data, the autofocus voice coil motor is mounted on the micro memory alloy optical image stabilization device, and the processor of the mobile terminal drives the micro memory alloy optical image stabilization device according to the lens shake data detected by the gyroscope, thereby achieving shake compensation for the lens;
  • the micro memory alloy optical image stabilization device includes a movable plate and a substrate; the autofocus voice coil motor is mounted on the movable plate, the substrate is larger in size than the movable plate, and the movable plate is mounted on the substrate;
  • a plurality of movable supports are disposed between the movable plate and the substrate; four side walls are provided on the periphery of the substrate, a notch is formed in the middle portion of each side wall, and a micro switch is installed in each notch; the movable member of the micro switch can open or close the notch under the instruction of the processor, and the side of the movable member near the movable plate is provided with a strip-shaped electrical contact disposed along the width direction of the movable member;
  • the substrate is provided with a temperature control circuit connected to the electrical contact, and the processor controls the opening and closing of the temperature control circuit according to the lens shake direction detected by the gyroscope;
  • the middle of each of the four sides of the movable plate is connected to the substrate by an elastic member, and the elastic member is a spring.
  • the mobile terminal can be mounted on a bracket; the bracket includes a mounting base, a support shaft, and three support frames hinged on the support shaft;
  • the mounting base includes a first mounting plate and a second mounting plate that are perpendicular to each other, both of which are used for mounting the mobile terminal; the support shaft is vertically mounted on the bottom surface of the first mounting plate, and the end of the support shaft away from the mounting base is provided with a circumferential surface whose radial dimension is larger than that of the support shaft; the three support frames are mounted on the support shaft from top to bottom, with the horizontal projections of every two support frames at an angle; the support shaft is a telescopic rod member comprising a tube body connected to the mounting base and a rod body partially retractable into the tube body, and the portion of the rod body that extends into the tube body includes a first section, a second section, a third section, and a fourth section that are sequentially hinged, the first section being coupled to the tube body;
  • the end of the first section near the second section is provided with a mounting groove, a locking member is hinged in the mounting groove, and the end of the second section near the first section is provided with a locking hole detachably engaged with the locking member;
  • the end of the second section near the third section is provided with a mounting groove in which a locking member is hinged; the end of the third section near the second section is provided with a locking hole detachably engaged with the locking member; the end of the third section near the fourth section is provided with a mounting groove in which a locking member is hinged; and the end of the fourth section near the third section is provided with a locking hole detachably engaged with the locking member.
  • each of the support frames is further connected with a distance adjusting device;
  • the distance adjusting device comprises a bearing ring mounted on the bottom of the support frame, a rotating ring connected to the bearing ring, a tube body, a screw, a threaded sleeve, and a support rod; one end of the tube body is provided with a plug, part of the screw is installed in the tube body through the plug, and the plug is provided with an internal thread adapted to the screw; the other part of the screw is connected to the rotating ring; one end of the threaded sleeve is installed in the tube body and is screwed onto the screw, while the other end of the threaded sleeve protrudes outside the tube body and is fixedly connected to the support rod; the inner wall of the tube body is provided with a protrusion, the outer side wall of the threaded sleeve is provided with a slide rail adapted to the protrusion along its length direction, and the tube body includes adjacent
  • a still further aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the infrared-based AR imaging methods described above.
  • a further aspect of the embodiments of the present invention provides a server, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the infrared-based AR imaging methods described above.
  • the infrared-based AR imaging method, system, and electronic device provided by the embodiments of the present invention can determine an augmented reality scene based on the user's own will, so that the user is placed in the currently played video, thereby enhancing the user's immersion and experience.
  • FIG. 1 is a flowchart of an infrared-based AR imaging method according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of an infrared-based AR imaging method according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of a method for performing light and shadow processing on a target face image according to an embodiment of the present invention;
  • FIG. 4 is a structural diagram of an infrared-based AR imaging system according to an embodiment of the present invention;
  • FIG. 5 is a structural diagram of an infrared-based AR imaging system according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of the hardware structure of an electronic device for performing an infrared-based AR imaging method according to an embodiment of the present invention;
  • FIG. 7 is a structural diagram of a camera according to an embodiment of the present invention;
  • FIG. 8 is a structural diagram of a micro memory alloy optical image stabilization device according to an embodiment of the present invention;
  • FIG. 9 is a structural diagram showing an operating state of a micro memory alloy optical image stabilization device according to an embodiment of the present invention;
  • FIG. 10 is a structural diagram of a bracket according to an embodiment of the present invention;
  • FIG. 11 is a structural diagram of a support shaft according to an embodiment of the present invention;
  • FIG. 12 is a structural diagram of a distance adjusting device according to an embodiment of the present invention.
  • the augmented reality device in embodiments of the present invention may include glasses or a helmet wearable by a user.
  • the augmented reality device is provided with a face image capturing component.
  • the face image capturing component faces the user's face at a preset distance, that is, it does not directly contact the user's face.
  • the mobile terminal in the embodiments of the present invention includes but is not limited to a mobile phone, a tablet computer, and the like.
  • FIG. 1 is a flowchart of an infrared-based AR imaging method according to an embodiment of the present invention. As shown in FIG. 1 , an infrared-based AR imaging method provided by an embodiment of the present invention includes:
  • the mobile terminal detects an infrared signal sent by the augmented reality device, and determines a corresponding augmented reality device identifier according to the infrared signal.
  • the user captures one or more videos in advance using the mobile terminal. When the user wears the augmented reality device while a video plays on the mobile terminal, the augmented reality device may emit an infrared signal that uniquely identifies the augmented reality device.
  • after detecting the infrared signal, the mobile terminal can obtain the identifier of the corresponding augmented reality device from the signal, so that the target augmented reality device with which interaction is to be established can be determined by the identifier.
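The identifier-recovery step above can be sketched as follows. The patent does not specify the infrared modulation scheme, so the pulse-duration encoding, the threshold, and all names here are illustrative assumptions:

```python
# Hypothetical sketch: recovering a device identifier from a pulsed
# infrared signal. Pulse durations (ms) are thresholded into bits and
# the bit string is read as the identifier.

LONG_PULSE_MS = 1.2  # assumed threshold separating a "1" pulse from a "0"

def decode_ir_identifier(pulse_durations_ms):
    """Map a sequence of IR pulse durations to an integer device ID."""
    bits = ['1' if d >= LONG_PULSE_MS else '0' for d in pulse_durations_ms]
    return int(''.join(bits), 2)

# A burst of 8 pulses encoding one identifier byte (0b01001101).
pulses = [0.5, 1.5, 0.5, 0.5, 1.5, 1.5, 0.5, 1.5]
device_id = decode_ir_identifier(pulses)
print(device_id)  # 77
```

Once decoded, the identifier is what the mobile terminal forwards to the server alongside the video frame image.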
  • in response to an operation instruction of the user, the mobile terminal sends a video frame image of the currently played video and the augmented reality device identifier to a server, so that the server can send the augmented-reality-processed video frame image to the target augmented reality device corresponding to the augmented reality device identifier.
  • when the user wishes to interact, an operation instruction may be issued to the mobile terminal; the operation instruction may be double-clicking the screen of the mobile terminal, long-pressing the screen of the mobile terminal, or otherwise triggering the mobile terminal, and the present invention is not limited herein.
  • before performing step S103, the method further includes: the augmented reality device collecting the face image and sending the face image to the server.
  • the augmented reality device collects the face image of the user through its built-in face image collection component and transmits the face image to the server. It should be noted that the face image may be collected according to the user's instruction, within a preset time after the user puts on the augmented reality device, or in response to the user operating the mobile terminal; the invention is not limited herein.
  • the face image sent to the server carries the augmented reality device identifier corresponding to the augmented reality device, which is used to determine which augmented reality device the face image comes from.
  • in step S103, the server determines a target face image from the face image according to the received augmented reality device identifier. Specifically, the server searches the stored face images according to the augmented reality device identifier detected by the mobile terminal, and when there is a face image matching the augmented reality device identifier, determines that face image as the target face image.
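The matching in step S103 amounts to a keyed lookup: stored face images are indexed by the identifier they arrived with, and the identifier detected by the mobile terminal selects the target. A minimal sketch, with the storage layout and all names assumed for illustration:

```python
# Server-side store: augmented reality device identifier -> face image.
stored_faces = {}

def store_face(device_id, face_image):
    """Keep a collected face image under its device identifier."""
    stored_faces[device_id] = face_image

def find_target_face(detected_device_id):
    """Return the matching face image, or None if no device matches."""
    return stored_faces.get(detected_device_id)

store_face(77, b"<face image from device 77>")
assert find_target_face(77) == b"<face image from device 77>"
assert find_target_face(42) is None  # no face collected for this device
```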
  • the server performs augmented reality processing on the received video frame image according to the target facial image, and sends the enhanced reality processed video frame image to the target augmented reality device corresponding to the augmented reality device identifier.
  • before performing the augmented reality processing, the method further includes performing light and shadow processing on the target face image so that the facial features appear more stereoscopic; this specifically includes the following steps:
  • the server acquires feature information from the target face image, and generates a first 3D face image that matches the target face image according to the feature information.
  • the feature information includes, but is not limited to, location information, face size, and depth information of key points in the facial features in the target face image.
  • the target face image may be input into a face feature information recognition model, which outputs the key point coordinates of the facial features of the target face image.
  • the coordinate system can take the lower left corner of the target face image as the origin, with the rightward direction as the positive X-axis, the upward direction as the positive Y-axis, and coordinate values measured in pixels.
  • the face size is obtained from these coordinates, and the depth information of the face can also be output by the model.
  • the face model data file may be preloaded to establish a standard 3D face model, and the standard 3D face model is processed according to the feature information to generate a first 3D face image that matches the target face image.
  • the standard 3D face model is globally scaled according to the size of the target face image in the feature information, so that the distance between the uppermost vertex of the forehead and the lowermost vertex of the chin equals the distance between the two boundary lines perpendicular to the Y-axis determined in step S1031, and the distance between the two ears equals the distance between the two boundary lines perpendicular to the X-axis.
  • texture mapping is then completed using the face image identified in step S1031: the model is texture-mapped according to the face information to generate the first 3D face image that matches the target face image; the texture values may be the three primary RGB colors of each pixel.
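The global scaling step above can be sketched as follows. The vertex data, measured face distances, and the choice of a mean depth scale are illustrative assumptions, not the patent's actual model data:

```python
# Sketch: scale a standard 3D face model so its forehead-to-chin extent
# matches the measured face height and its ear-to-ear extent matches
# the measured face width (both in pixels of the target image).

def scale_model(vertices, model_height, model_width,
                target_height, target_width):
    """Scale (x, y, z) vertices so the model matches the measured face."""
    sy = target_height / model_height  # forehead-to-chin ratio
    sx = target_width / model_width    # ear-to-ear ratio
    sz = (sx + sy) / 2                 # assumed: depth follows the mean scale
    return [(x * sx, y * sy, z * sz) for (x, y, z) in vertices]

verts = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5)]  # toy stand-in for model data
scaled = scale_model(verts, model_height=2.0, model_width=1.0,
                     target_height=180.0, target_width=120.0)
print(scaled)  # [(0.0, 0.0, 0.0), (120.0, 180.0, 52.5)]
```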
  • the server performs light processing on the first 3D face image to obtain a second 3D face image.
  • the illumination rendering parameters may be determined by analyzing the illumination information in the video frame image, so that the target face image matches the video frame image; the first 3D face image is then lit in combination with an illumination model, applying natural light to the first 3D face image to simulate the light and shadow of real light striking a person's face.
  • the specific illumination model may be of multiple types, and the invention is not limited herein.
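One common illumination model that could fill this role is simple Lambertian (diffuse) shading: brightness is proportional to the dot product of the surface normal and the light direction, clamped at zero so surfaces facing away from the light fall into shadow. This is an illustrative choice; the patent does not fix a specific model:

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert_shade(normal, light_dir, albedo=1.0, ambient=0.1):
    """Diffuse intensity for one surface point of the 3D face."""
    n, l = normalize(normal), normalize(light_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))  # clamp at 0
    return ambient + albedo * diffuse

# A point facing the light is bright; one facing away keeps only ambient.
print(lambert_shade((0, 0, 1), (0, 0, 1)))   # 1.1
print(lambert_shade((0, 0, -1), (0, 0, 1)))  # 0.1
```

Evaluating this per vertex (or per pixel) over the first 3D face image yields the lit second 3D face image.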
  • the server performs light and shadow processing on the target face image according to the first 3D face image and the second 3D face image.
  • after the second 3D face image is lit with natural light, places struck by the light are brightened while places the light cannot reach form shadows; therefore, for a pixel at the same coordinate position, there is a texture difference between the second 3D face image and the first 3D face image. According to this difference, the texture is strengthened or weakened at the corresponding pixel position of the face image, so that the face image acquires the same lighting effect as the second 3D face image.
  • that is, the face image may be subjected to light and shadow processing according to the texture differences between the first 3D image and the second 3D image.
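The transfer step can be sketched per pixel: the lit-minus-unlit texture difference is added to the original face image, so shadows darken it and highlights brighten it. Single-channel 0–255 intensities and the `strength` weighting are illustrative assumptions:

```python
# Sketch: apply the texture difference between the lit render (second
# 3D face image) and the unlit render (first 3D face image) to the
# original face image, pixel by pixel.

def apply_light_and_shadow(face, unlit, lit, strength=1.0):
    """Add the lit-minus-unlit texture difference to the face image."""
    out = []
    for f, u, l in zip(face, unlit, lit):
        delta = (l - u) * strength  # positive: highlight, negative: shadow
        out.append(max(0, min(255, round(f + delta))))  # clamp to 0-255
    return out

face  = [100, 100, 100, 100]
unlit = [120, 120, 120, 120]
lit   = [160, 120, 120,  80]  # highlight, unchanged, unchanged, shadow
print(apply_light_and_shadow(face, unlit, lit))  # [140, 100, 100, 60]
```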
  • in this way, the facial features of the collected face image are made more three-dimensional and layered, enhancing the user's interest in the interaction.
  • the mobile terminal may further determine first target location information in the video according to the infrared signal, and send the first target location information to the server along with the video frame image of the currently played video and the augmented reality device identifier.
  • the augmented reality device may include one infrared device installed at its center, or multiple infrared devices evenly distributed around it; the infrared light emitted by the infrared device can be transmitted directly onto the screen of the mobile terminal, and the mobile terminal can determine the coordinate position on the screen according to the incident position of the infrared light.
  • the lower left corner of the mobile terminal screen may be used as the coordinate origin, with the direction along the screen edge to the right as the positive X-axis and the direction along the other edge upward as the positive Y-axis; other coordinate systems may also be established.
  • when there is a single infrared device, its incident position is also the focus position of the user's eyes, indicating that the target point at which the current user wishes to perform the augmented reality interaction is at that focus position in the video (for example, the user may want to replace the person at the focus position with himself, or add his face image to the image at that position); therefore the coordinate position of the infrared light on the screen can be used directly as the first target location information.
  • when there are multiple infrared devices, the average of their incident positions is the focus position of the user's eyes, indicating the target point at which the current user wishes to perform the augmented reality interaction in the video.
  • the focus position can therefore be obtained by averaging the coordinate positions of the multiple infrared beams on the screen, and the averaged coordinate position is the first target location information.
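The multi-emitter case above reduces to averaging the on-screen incidence coordinates. A minimal sketch in the bottom-left-origin, pixel-unit coordinate system described earlier; the point values are illustrative:

```python
# Sketch: average the (x, y) screen coordinates at which each infrared
# beam arrives to obtain the first target location information.

def first_target_location(ir_points):
    """Average the (x, y) incidence points of the infrared beams."""
    n = len(ir_points)
    x = sum(p[0] for p in ir_points) / n
    y = sum(p[1] for p in ir_points) / n
    return (x, y)

# Four emitters spaced around the device, hitting the screen here:
points = [(300, 500), (340, 500), (300, 540), (340, 540)]
print(first_target_location(points))  # (320.0, 520.0)
```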
  • after determining the first target location, the server adds the target face image to the first target location in the video frame image according to the first target location information.
  • specifically, the server may analyze the image corresponding to the first target location information: when the corresponding image is a face image, the face image may be replaced with the target face image; when the corresponding image is not a face image, the target face image may be adjusted in angle so that it matches the nearby scenery, and the non-face image is then replaced with the adjusted target face image.
  • the replacement method may be pixel replacement or other methods conventional in the art.
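Pixel replacement can be sketched as copying the target face image into the frame at the target location, clipped to the frame bounds. Row-major single-channel images and all names are illustrative stand-ins for the server's actual method:

```python
# Sketch: overwrite frame pixels with the face image placed with its
# top-left corner at (top, left). Out-of-bounds pixels are skipped.

def paste_face(frame, frame_w, face, face_w, top, left):
    """Return a copy of frame with the face patch pasted in."""
    frame = list(frame)
    face_h = len(face) // face_w
    frame_h = len(frame) // frame_w
    for r in range(face_h):
        for c in range(face_w):
            fr, fc = top + r, left + c
            if 0 <= fr < frame_h and 0 <= fc < frame_w:  # clip to frame
                frame[fr * frame_w + fc] = face[r * face_w + c]
    return frame

frame = [0] * 16    # 4x4 blank frame
face = [9, 9, 9, 9] # 2x2 face patch
print(paste_face(frame, 4, face, 2, 1, 1))
# [0, 0, 0, 0, 0, 9, 9, 0, 0, 9, 9, 0, 0, 0, 0, 0]
```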
  • alternatively, the target location information may be determined without relying on the position detected by the mobile terminal.
  • in this case the augmented reality processing specifically includes: the server analyzing the video frame image to determine second target location information, and the server adding the target face image to the video frame image according to the second target location information.
  • specifically, the server can determine the target object with which interaction is needed by analyzing the video frame image, for example, replacing a certain character in the video, or placing the user in a certain scene; the server then determines the position coordinates of the target object in the video as the second target location information, and adds the target face image to the video frame image according to the second target location information.
  • the specific addition method is as described above and will not be repeated herein.
  • after the video frame image is subjected to augmented reality processing by any of the foregoing methods, the server sends the processed video frame image to the target augmented reality device corresponding to the augmented reality device identifier. After receiving the user's operation instruction, the above steps are repeated to perform augmented reality processing on each video frame image, so the user wearing the target augmented reality device sees himself appear in the played video, completing the augmented reality interaction.
  • The infrared-based AR imaging method provided by the embodiment of the present invention can determine an augmented reality scene according to the user's own wishes, placing the user in the currently played video and enhancing the user's sense of immersion and experience.
  • FIG. 3 is a flowchart of an infrared-based AR imaging method according to an embodiment of the present invention. As shown in FIG. 3, this embodiment is a specific implementation of the embodiments shown in FIG. 1 and FIG. 2; therefore, the specific implementation methods and beneficial effects of the steps in those embodiments are not described again.
  • The infrared-based AR imaging method provided by this embodiment specifically includes:
  • The mobile terminal detects an infrared signal emitted by the augmented reality device and determines the corresponding augmented reality device identifier from the infrared signal.
  • The augmented reality device collects a face image and sends the face image to the server.
  • In response to a user operation instruction, the mobile terminal sends a video frame image of the currently played video and the augmented reality device identifier to the server.
  • The server determines a target face image from the face image according to the augmented reality device identifier.
  • The server performs augmented reality processing on the video frame image according to the target face image, and sends the processed video frame image to the target augmented reality device corresponding to the augmented reality device identifier.
  • The infrared-based AR imaging method provided by the embodiment of the present invention can determine an augmented reality scene according to the user's own wishes, placing the user in the currently played video and enhancing the user's sense of immersion and experience.
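The message flow of the steps above can be sketched as follows. The class and identifier names are hypothetical, and string concatenation stands in for the actual AR compositing:

```python
class Server:
    """Toy stand-in for the server side of the flow."""

    def __init__(self):
        self.faces = {}  # augmented reality device identifier -> face image

    def receive_face(self, device_id, face_image):
        # The AR device uploads its collected face image with its identifier.
        self.faces[device_id] = face_image

    def process_frame(self, frame, device_id):
        # Pick the target face by identifier, composite, route back.
        face = self.faces.get(device_id)
        if face is None:
            return frame, None
        return frame + " + " + face, device_id


server = Server()
server.receive_face("AR-01", "face(AR-01)")
processed, target_device = server.process_frame("frame#1", "AR-01")
```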
  • FIG. 4 and FIG. 5 are structural diagrams of an infrared-based AR imaging system according to an embodiment of the present invention. As shown in FIG. 4 and FIG. 5, the system specifically includes a mobile terminal 10, an augmented reality device 20, and a server 30, wherein:
  • The mobile terminal 10 includes a detection module 110 and a first sending module 120.
  • The detection module 110 is configured to detect an infrared signal emitted by the augmented reality device and determine the corresponding augmented reality device identifier from the infrared signal.
  • The first sending module 120 is configured to send, in response to a user operation instruction, a video frame image of the currently played video and the augmented reality device identifier to the server; the video is a video captured by the user through the mobile terminal.
  • The augmented reality device 20 includes a collection module 210 configured to collect a face image and send the face image to the server.
  • The server 30 includes a first determining module 310, a processing module 320, and a second sending module 330.
  • The first determining module 310 is configured to determine a target face image from the face image according to the augmented reality device identifier; the processing module 320 is configured to perform augmented reality processing on the video frame image according to the target face image; and the second sending module 330 is configured to send the processed video frame image to the target augmented reality device corresponding to the augmented reality device identifier.
  • The mobile terminal 10 further includes a second determining module 130 configured to determine first target location information in the video according to the infrared signal; the first sending module 120 is further configured to send the first target location information to the server.
  • The processing module 320 is configured to add the face image into the video frame image according to the first target location information.
  • The server 30 further includes an analysis module (not shown in the figure) configured to analyze the video frame image to determine second target location information; the processing module 320 adds the target face image into the video frame image according to the second target location information.
  • The server further includes a target image processing module (not shown in the figure) configured to: obtain feature information from the target face image and generate, from the feature information, a first 3D face image matching the target face image; perform lighting processing on the first 3D face image to obtain a second 3D face image; and perform light-and-shadow processing on the target face image according to the first 3D face image and the second 3D face image.
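A common way to realize such lighting of a 3D face model is Lambertian shading. The sketch below is a plain `n·l` model with assumed inputs, not the patent's specific algorithm:

```python
import numpy as np

def lambert_shade(normals, light_dir, albedo=1.0):
    """Per-pixel Lambertian shading, albedo * max(0, n . l).
    `normals` is an (H, W, 3) array of unit surface normals from the
    3D face model; `light_dir` is the chosen light direction."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return albedo * np.clip(normals @ l, 0.0, None)
```

Comparing the shading under the original and the new light directions gives the light-and-shadow adjustment to apply to the 2D face image.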
  • The infrared-based AR imaging system provided by the embodiment of the present invention is specifically configured to perform the methods provided by the embodiments shown in FIG. 1 to FIG. 3; its implementation principles, methods, and functional uses are similar to those of the embodiments shown in FIG. 1 to FIG. 3 and are not repeated here.
  • The infrared-based AR imaging system of the embodiments of the present invention may be installed in the electronic device as a separate software or hardware functional unit, or may be implemented as a functional module integrated in the processor, to execute the infrared-based AR imaging method of the embodiments of the present invention.
  • FIG. 6 is a schematic diagram showing the hardware structure of an electronic device for performing an infrared-based AR imaging method provided by an embodiment of the method of the present invention.
  • The electronic device includes: one or more processors 610 and a memory 620; one processor 610 is taken as an example in FIG. 6.
  • The apparatus for performing the infrared-based AR imaging method described above may further include an input device 630 and an output device 640.
  • The processor 610, the memory 620, the input device 630, and the output device 640 may be connected by a bus or in other ways; a bus connection is taken as an example in FIG. 6.
  • The memory 620 is a non-volatile computer-readable storage medium for storing non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the infrared-based AR imaging method in the embodiments of the present invention.
  • The processor 610 performs the various functional applications and data processing of the server by running the non-volatile software programs, instructions, and modules stored in the memory 620, i.e., implements the infrared-based AR imaging method.
  • The memory 620 may include a program storage area and a data storage area; the program storage area may store an operating system and an application required for at least one function, and the data storage area may store data created by use of the infrared-based AR imaging system according to the embodiments of the present invention, and the like.
  • The memory 620 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • The memory 620 may optionally include memory remotely located relative to the processor 610, which may be connected to the infrared-based AR imaging system over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The input device 630 can receive input numeric or character information and generate key signal inputs related to user settings and function control of the infrared-based AR imaging system.
  • The input device 630 may include a device such as a pressing module.
  • The one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the infrared-based AR imaging method.
  • The electronic device of the embodiments of the invention exists in various forms, including but not limited to:
  • Mobile communication devices: these devices are characterized by mobile communication functions and are mainly aimed at providing voice and data communication. Such terminals include smart phones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
  • Ultra-mobile personal computer devices: this type of device belongs to the category of personal computers, has computing and processing functions, and generally also has mobile Internet access. Such terminals include PDAs, MIDs, and UMPC devices, such as the iPad.
  • Portable entertainment devices: these devices can display and play multimedia content. Such devices include audio and video players (such as the iPod), handheld game consoles, e-books, smart toys, and portable car navigation devices.
  • Modules described as separate components may or may not be physically separate, and components displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
  • Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions, wherein when the computer-executable instructions are executed by an electronic device, the electronic device is caused to perform the infrared-based AR imaging method in any of the above method embodiments.
  • An embodiment of the present invention provides a computer program product, wherein the computer program product comprises a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by an electronic device, cause the electronic device to perform the infrared-based AR imaging method in any of the above method embodiments.
  • To improve the quality of the video collected by the mobile terminal, this embodiment provides a mobile terminal camera with better anti-shake performance. The pictures and videos obtained by this camera are clearer than those of ordinary cameras and can better meet the needs of users.
  • The existing mobile terminal camera (the mobile terminal being a mobile phone, digital video camera, or the like) includes a lens 1, an autofocus voice coil motor 2, and an image sensor 3, which are known in the art and thus not described here.
  • A micro memory alloy optical anti-shake device is used because most existing anti-shake devices use an energized coil to generate a Lorentz force in a magnetic field to drive the lens; to achieve optical image stabilization, the lens needs to be driven in at least two directions, which means multiple coils need to be arranged. This poses challenges to the miniaturization of the overall structure and is easily disturbed by external magnetic fields, affecting the anti-shake effect.
  • Some prior art achieves stretching and shortening of a memory alloy wire through temperature changes, thereby moving the autofocus voice coil motor to realize lens shake compensation: the control chip of the micro memory alloy optical anti-shake actuator can vary the drive signal to change the temperature of the memory alloy wire, thereby controlling its elongation and shortening, and the position and moving distance of the actuator are calculated from the resistance of the memory alloy wire.
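The resistance-based position feedback can be sketched as a linear resistance-to-displacement model plus a proportional correction of the heating drive. All constants below are hypothetical placeholders, since the patent gives no numeric values:

```python
def wire_position(resistance_ohm, r_ref=10.0, gain=-2.0):
    """Estimate actuator displacement from SMA wire resistance with a
    linear model; the wire's resistance varies with its temperature and
    length, and the sign and scale of `gain` are illustrative only."""
    return gain * (resistance_ohm - r_ref)

def heating_correction(target_pos, resistance_ohm, kp=0.5):
    """Proportional adjustment of the drive (heating) signal from the
    deviation between the measured and target positions."""
    return kp * (target_pos - wire_position(resistance_ohm))
```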
  • The applicant has found that, due to the randomness and uncertainty of jitter, the above structure alone cannot accurately compensate the lens under repeated jitter, because the shape memory alloy needs a certain time to cool down after its temperature rises. The above technical solution can compensate the lens for jitter in a first direction, but when jitter in a second direction occurs, the memory alloy wire cannot deform in an instant, so compensation is easily untimely; lens shake compensation for multiple jitters and continuous jitter in different directions cannot be achieved accurately, resulting in poor quality of the acquired images. The camera structure therefore needs to be improved.
  • The camera of this embodiment includes a lens 1, an autofocus voice coil motor 2, an image sensor 3, and a micro memory alloy optical image stabilization device 4. The lens 1 is fixed on the autofocus voice coil motor 2; the image sensor 3 transmits the image acquired by the lens 1 to the identification module 100; the autofocus voice coil motor 2 is mounted on the micro memory alloy optical image stabilization device 4; and the internal processor of the mobile terminal drives the micro memory alloy optical anti-shake device 4 according to the lens shake detected by a gyroscope (not shown) inside the mobile terminal, realizing shake compensation of the lens.
  • The improvement of the micro memory alloy optical anti-shake device is as follows:
  • The micro memory alloy optical image stabilization device comprises a movable plate 5 and a substrate 6, both rectangular plate-shaped members. The autofocus voice coil motor 2 is mounted on the movable plate 5. The substrate 6 is larger than the movable plate 5; the movable plate 5 is mounted on the substrate 6, and a plurality of movable supports 7 are disposed between them. The movable supports 7 are specifically balls disposed in grooves at the four corners of the substrate 6 to facilitate movement of the movable plate 5 on the substrate 6.
  • The substrate 6 has four surrounding side walls; the central portion of each side wall is provided with a notch 8 in which a microswitch 9 is mounted. The movable member 10 of the microswitch 9 can open or close the notch under the instruction of the processing module. The side of the movable member 10 facing the movable plate 5 is provided with strip-shaped electrical contacts 11 arranged along the width direction of the movable member 10, and the substrate 6 is provided with a temperature control circuit (not shown) connected to the electrical contacts 11; the processing module can control the opening and closing of the temperature control circuit according to the lens shake direction detected by the gyroscope.
  • A shape memory alloy wire 12 is disposed in the middle of each of the four sides of the movable plate 5; one end of each shape memory alloy wire 12 is fixedly connected to the movable plate 5 and the other end is in sliding engagement with the electrical contacts 11. An elastic member 13 for resetting is disposed between each of the surrounding inner side walls of the substrate 6 and the movable plate 5.
  • The working process of the micro memory alloy optical image stabilization device of this embodiment is described in detail below with reference to the above structure, taking two successive shakes in opposite directions as an example. When the lens shakes in a first direction, the gyroscope feeds the detected shake direction and distance back to the processor; the processor calculates the elongation of the shape memory alloy wire needed to compensate the shake and drives the corresponding temperature control circuit to heat that wire. The wire elongates and drives the movable plate in the direction that compensates the first-direction shake, while the opposite, symmetrical shape memory alloy wire does not change; however, the movable member connected to that other wire opens the corresponding notch, so that the other wire can protrude out of the notch as the movable plate moves, and the elastic members near the two wires are respectively stretched and compressed (as shown in FIG. 9). After the movable plate moves to the specified position, the resistance of the shape memory alloy wire is fed back to the control chip of the micro memory alloy optical anti-shake actuator, which can correct the position by comparing the deviation of the resistance value from the target value.
  • When the second shake occurs, the processor first closes the notch via the movable member abutting the other shape memory alloy wire, and opens the movable member abutting the wire that is in the extended state; opening that movable member allows the extended wire to protrude, and the elastic deformation of the two elastic members ensures rapid resetting of the movable plate. The processor then calculates the elongation of the other shape memory alloy wire needed to compensate the second shake and drives the corresponding temperature control circuit to heat it; the other wire elongates and drives the movable plate in the direction that compensates the second-direction shake. Because the notch next to the first wire is open, it does not hinder the movement of the movable plate driven by the other wire. Owing to the opening speed of the movable members and the resetting action of the springs, the micro memory alloy optical image stabilization device of this embodiment can compensate accurately when multiple shakes occur, far surpassing prior-art micro memory alloy optical anti-shake devices.
  • The above describes only a simple case of two shakes. When a shake direction lies between two adjacent shape memory alloy wires, both wires can be elongated to compensate the shake; the basic working process is as described above and follows the same principle, so it is not repeated here.
  • The detection feedback of the shape memory alloy resistance and the detection feedback of the gyroscope are prior art and are not described here.
  • Sometimes the mobile terminal needs to be mounted on a camera bracket for video or image acquisition, but the applicant has found in use that existing camera brackets have the following defects:
  • Existing camera brackets are supported by a tripod, but a tripod structure cannot ensure that the bracket mount is level when the ground is significantly uneven; it is easily shaken or tilted, which can adversely affect shooting.
  • An existing bracket cannot be used as a shoulder-mounted camera bracket; its structure and function are singular, and when shoulder-mounted shooting is required, a separate shoulder-mounted camera bracket must be provided.
  • The bracket of this embodiment includes a mounting seat 14, a support shaft 15, and three support frames 16 hinged on the support shaft.
  • The mounting seat 14 includes a first mounting plate 141 and a second mounting plate 142 perpendicular to each other, both of which can be used to mount the camera. The support shaft 15 is mounted vertically on the bottom surface of the first mounting plate 141; the bottom end of the support shaft 15 away from the mounting seat 14 is provided with a circumferential surface 17 whose radial dimension is slightly larger than that of the support shaft. The three support frames 16 are mounted on the support shaft from top to bottom, and every two deployed support frames 16 form an angle.
  • When erecting the bracket on uneven ground, the circumferential surface 17 is first laid flat on the ground, and the bracket is leveled by opening and adjusting the positions of the three retractable support frames; thus the bracket can be erected quickly even on uneven ground, adapting to various terrains and ensuring that the mounting seat is horizontal.
  • The support shaft 15 of this embodiment is also a telescopic member, including a tube body 151 connected to the mounting seat 14 and a rod body 152 partially retractable into the tube body 151. The portion of the rod body 152 extending into the tube body includes a first segment 1521, a second segment 1522, a third segment 1523, and a fourth segment 1524 hinged in sequence, the first segment 1521 being connected to the tube body 151.
  • A mounting slot 18 is formed in the end of the first segment 1521 adjacent to the second segment 1522, and a locking member 19 is hinged in the mounting slot 18; the end of the second segment 1522 adjacent to the first segment 1521 is provided with a locking hole 20 detachably engaging the locking member 19. Likewise, the end of the second segment 1522 adjacent to the third segment 1523 is provided with a mounting slot 18 in which a locking member 19 is hinged, and the end of the third segment 1523 adjacent to the second segment 1522 is provided with a locking hole 20 detachably engaging the locking member 19; the end of the third segment 1523 adjacent to the fourth segment 1524 is provided with a mounting slot 18 in which a locking member 19 is hinged, and the end of the fourth segment 1524 adjacent to the third segment 1523 is provided with a locking hole 20 detachably engaging the locking member 19.
  • The locking member can be hidden in the mounting slot; when needed, it can be rotated out and locked into the locking hole.
  • The locking member 19 may be a strip with a protrusion adapted to the size of the locking hole; pressing the protrusion into the locking hole fixes the positions of the two adjacent segments (for example, the first segment and the second segment) and prevents their relative rotation. Through the cooperation of the first segment 1521, the second segment 1522, the third segment 1523, and the fourth segment 1524, a shoulder-mounted support structure can be formed, with the relative positions of the segments fixed by the locking members 19. A soft material can also be provided at the bottom of the structure.
  • The applicant has also found that, for most telescopic support frames, the telescopic portion is stretched by hand to adjust the telescopic length; the distance is uncontrollable and largely random, so adjustment is often inconvenient, especially when only a small change in the telescopic length is needed. Therefore, the applicant has also optimized the structure of the support frame 16. As shown in FIG. 12, the bottom end of each support frame 16 of this embodiment is further connected to a distance adjustment device 21, which includes a bearing ring 211 mounted on the bottom of the support frame 16, a rotating ring 212 connected to the bearing ring 211, a tube body 213, a screw 214, a threaded sleeve 215, and a support rod 216.
  • One end of the tube body 213 is provided with a plug 217, through which part of the screw 214 is mounted in the tube body 213; the plug 217 is provided with an internal thread matching the screw 214, and the other end of the screw 214 is connected to the rotating ring 212. One end of the threaded sleeve 215 is mounted in the tube body 213 and threaded onto the screw 214; the other end of the threaded sleeve 215 extends out of the tube body and is fixedly connected to the support rod 216. The inner wall of the tube body 213 is provided with a protrusion 218, and the outer side wall of the threaded sleeve 215 is provided along its length with a slideway 219 adapted to the protrusion. The tube body 213 includes an adjacent first portion 2131 and second portion 2132, the inner diameter of the first portion being smaller than that of the second portion 2132; the plug 217 is disposed on the outer end of the second portion 2132, and the end of the threaded sleeve 215 near the screw 214 is provided with a limiting end 2151 whose outer diameter is larger than the inner diameter of the first portion.
  • By turning the rotating ring 212, the screw 214 is rotated in the tube body 213 and the rotational tendency is transmitted to the threaded sleeve 215; because of the cooperation of the protrusion 218 and the slideway 219, the sleeve cannot rotate, so the rotation is converted into outward linear movement, driving the support rod 216 and finely adjusting the length of the bottom end of the support frame. This makes it convenient for the user to level the bracket and its mounting seat, providing a good foundation for subsequent shooting.
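For a single-start thread, the sleeve's axial travel per revolution of the rotating ring equals the thread pitch, so the fine adjustment can be quantified as below (the 0.5 mm pitch is an assumed figure, not from the patent):

```python
def linear_travel_mm(revolutions, pitch_mm=0.5):
    """Axial travel of the threaded sleeve (and support rod) produced
    by turning the rotating ring `revolutions` times."""
    return revolutions * pitch_mm
```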
  • A machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals). The computer software product includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments or portions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention provide an infrared-based AR imaging method, system, and electronic device, including: a mobile terminal detects an infrared signal emitted by an augmented reality device and determines the corresponding augmented reality device identifier from the infrared signal; the augmented reality device collects a face image and sends the face image to the server; in response to a user operation instruction, the mobile terminal sends a video frame image of the currently played video and the augmented reality device identifier to the server, so that the server determines a target face image from the face image according to the augmented reality device identifier, performs augmented reality processing on the video frame image according to the target face image, and sends the processed video frame image to the target augmented reality device corresponding to the augmented reality device identifier.

Description

Infrared-based AR imaging method, system, and electronic device — Technical Field
The present invention relates to the field of image processing technology, and in particular to an infrared-based AR imaging method, system, and electronic device.
Background Art
AR (Augmented Reality) is an improved technology based on virtual reality that can superimpose real and virtual scenes in real time, providing the user with more realistic scenes and further enhancing the user's sense of immersion.
However, in the course of implementing the present invention, the inventor found that in the prior art it is generally the video provider that guides the user in how to perform augmented reality interaction. When a user encounters a video of interest, if that video has no augmented reality function, the user cannot interact with it in augmented reality, which degrades the user experience and limits the development of augmented reality technology.
Summary of the Invention
The infrared-based AR imaging method, system, and electronic device provided by the embodiments of the present invention are intended to solve at least the above problems in the related art.
One aspect of the embodiments of the present invention provides an infrared-based AR imaging method, including:
a mobile terminal detects an infrared signal emitted by an augmented reality device and determines the corresponding augmented reality device identifier from the infrared signal; in response to a user operation instruction, the mobile terminal sends a video frame image of the currently played video and the augmented reality device identifier to a server, so that the server sends the augmented-reality-processed video frame image to the target augmented reality device corresponding to the augmented reality device identifier; the video is a video captured by the user through the mobile terminal.
Further, the method also includes: the augmented reality device collects a face image and sends the face image to the server; the server determines a target face image from the face image according to the augmented reality device identifier.
Further, the method also includes: the server obtains feature information from the target face image and generates, from the feature information, a first 3D face image matching the target face image; the server performs lighting processing on the first 3D face image to obtain a second 3D face image; the server performs light-and-shadow processing on the target face image according to the first 3D face image and the second 3D face image.
Further, the method also includes: the mobile terminal determines first target position information in the video according to the infrared signal and sends the first target position information to the server.
Further, the augmented reality processing specifically includes: the server adds the target face image into the video frame image according to the first target position information.
Further, the augmented reality processing specifically includes: the server analyzes the video frame image to determine second target position information; the server adds the target face image into the video frame image according to the second target position information.
Further, in the infrared-based AR imaging method, the video is captured by a camera of the mobile terminal.
The camera includes a lens, an autofocus voice coil motor, an image sensor, and a micro memory alloy optical image stabilizer. The lens is fixed on the autofocus voice coil motor; the image sensor converts the optical scene acquired by the lens into image data; the autofocus voice coil motor is mounted on the micro memory alloy optical image stabilizer; and the processor of the mobile terminal drives the micro memory alloy optical image stabilizer according to the lens shake data detected by a gyroscope, realizing shake compensation of the lens.
The micro memory alloy optical image stabilizer includes a movable plate and a substrate. The autofocus voice coil motor is mounted on the movable plate; the substrate is larger than the movable plate, and the movable plate is mounted on the substrate with a plurality of movable supports between them. The substrate has four surrounding side walls, the middle of each side wall is provided with a notch, and a microswitch is mounted at each notch; the movable member of the microswitch can open or close the notch under instruction of the processor. The side of the movable member facing the movable plate is provided with strip-shaped electrical contacts arranged along the width direction of the movable member; the substrate is provided with a temperature control circuit connected to the electrical contacts, and the processor opens and closes the temperature control circuit according to the lens shake direction detected by the gyroscope. The middle of each of the four sides of the movable plate is provided with a shape memory alloy wire, one end of which is fixedly connected to the movable plate and the other end of which is in sliding engagement with the electrical contacts; an elastic member is arranged between the movable plate and each of the surrounding inner side walls of the substrate. When one temperature control circuit on the substrate is switched on, the shape memory alloy wire connected to that circuit elongates; at the same time, the movable member of the microswitch away from that wire opens its notch, the elastic member on the same side as that wire contracts, and the elastic member away from that wire elongates.
Further, the elastic member is a spring.
Further, the mobile terminal can be mounted on a bracket; the bracket includes a mounting seat, a support shaft, and three support frames hinged on the support shaft.
The mounting seat includes a first mounting plate and a second mounting plate perpendicular to each other, both of which can be used to mount the mobile terminal. The support shaft is mounted vertically on the bottom surface of the first mounting plate; the bottom end of the support shaft away from the mounting seat is provided with a circumferential surface whose radial dimension is larger than that of the support shaft. The three support frames are mounted on the support shaft from top to bottom, and the horizontal projections of every two deployed support frames form an angle. The support shaft is a telescopic member including a tube connected to the mounting seat and a rod partially retractable into the tube; the portion of the rod extending into the tube includes a first, a second, a third, and a fourth segment hinged in sequence, with the first segment connected to the tube. The end of the first segment adjacent to the second segment is provided with a mounting slot in which a locking member is hinged, and the end of the second segment adjacent to the first segment is provided with a locking hole detachably engaging the locking member; the end of the second segment adjacent to the third segment is provided with a mounting slot in which a locking member is hinged, and the end of the third segment adjacent to the second segment is provided with a locking hole detachably engaging the locking member; the end of the third segment adjacent to the fourth segment is provided with a mounting slot in which a locking member is hinged, and the end of the fourth segment adjacent to the third segment is provided with a locking hole detachably engaging the locking member.
Further, the bottom end of each support frame is also connected to a distance adjustment device, which includes a bearing ring mounted on the bottom of the support frame, a rotating ring connected to the bearing ring, a tube, a screw, a threaded sleeve, and a support rod. One end of the tube is provided with a plug, through which part of the screw is mounted in the tube, the plug being provided with an internal thread matching the screw; the other part of the screw is connected to the rotating ring. One end of the threaded sleeve is mounted in the tube and threaded onto the screw; the other end extends out of the tube and is fixedly connected to the support rod. The inner wall of the tube is provided with a protrusion, and the outer side wall of the threaded sleeve is provided along its length with a slideway adapted to the protrusion. The tube includes adjacent first and second portions, the inner diameter of the first portion being smaller than that of the second portion; the plug is arranged on the outer end of the second portion, and the end of the threaded sleeve near the screw is provided with a limiting end whose outer diameter is larger than the inner diameter of the first portion.
Another aspect of the embodiments of the present invention provides an infrared-based AR imaging system, including a mobile terminal, an augmented reality device, and a server, wherein:
the mobile terminal includes a detection module and a first sending module; the detection module is configured to detect an infrared signal emitted by the augmented reality device and determine the corresponding augmented reality device identifier from the infrared signal; the first sending module is configured to send, in response to a user operation instruction, a video frame image of the currently played video and the augmented reality device identifier to the server, the video being a video captured by the user through the mobile terminal;
the augmented reality device includes a collection module configured to collect a face image and send the face image to the server;
the server includes a first determining module, a processing module, and a second sending module; the first determining module is configured to determine a target face image from the face image according to the augmented reality device identifier; the processing module is configured to perform augmented reality processing on the video frame image according to the target face image; the second sending module is configured to send the augmented-reality-processed video frame image to the target augmented reality device corresponding to the augmented reality device identifier.
Further, the mobile terminal also includes a second determining module configured to determine first target position information in the video according to the infrared signal, and the first sending module is further configured to send the first target position information to the server.
Further, the processing module is configured to add the face image into the video frame image according to the first target position information.
Further, the camera of the mobile terminal includes a lens, an autofocus voice coil motor, an image sensor, and a micro memory alloy optical image stabilizer. The lens is fixed on the autofocus voice coil motor; the image sensor converts the optical scene acquired by the lens into image data; the autofocus voice coil motor is mounted on the micro memory alloy optical image stabilizer; and the processor of the mobile terminal drives the micro memory alloy optical image stabilizer according to the lens shake data detected by a gyroscope, realizing shake compensation of the lens.
The micro memory alloy optical image stabilizer includes a movable plate and a substrate. The autofocus voice coil motor is mounted on the movable plate; the substrate is larger than the movable plate, and the movable plate is mounted on the substrate with a plurality of movable supports between them. The substrate has four surrounding side walls, the middle of each side wall is provided with a notch, and a microswitch is mounted at each notch; the movable member of the microswitch can open or close the notch under instruction of the processor. The side of the movable member facing the movable plate is provided with strip-shaped electrical contacts arranged along the width direction of the movable member; the substrate is provided with a temperature control circuit connected to the electrical contacts, and the processor opens and closes the temperature control circuit according to the lens shake direction detected by the gyroscope. The middle of each of the four sides of the movable plate is provided with a shape memory alloy wire, one end of which is fixedly connected to the movable plate and the other end of which is in sliding engagement with the electrical contacts; an elastic member is arranged between the movable plate and each of the surrounding inner side walls of the substrate. When one temperature control circuit on the substrate is switched on, the shape memory alloy wire connected to that circuit elongates; at the same time, the movable member of the microswitch away from that wire opens its notch, the elastic member on the same side as that wire contracts, and the elastic member away from that wire elongates.
Further, the elastic member is a spring.
Further, the mobile terminal can be mounted on a bracket; the bracket includes a mounting seat, a support shaft, and three support frames hinged on the support shaft.
The mounting seat includes a first mounting plate and a second mounting plate perpendicular to each other, both of which can be used to mount the mobile terminal. The support shaft is mounted vertically on the bottom surface of the first mounting plate; the bottom end of the support shaft away from the mounting seat is provided with a circumferential surface whose radial dimension is larger than that of the support shaft. The three support frames are mounted on the support shaft from top to bottom, and the horizontal projections of every two deployed support frames form an angle. The support shaft is a telescopic member including a tube connected to the mounting seat and a rod partially retractable into the tube; the portion of the rod extending into the tube includes a first, a second, a third, and a fourth segment hinged in sequence, with the first segment connected to the tube. The end of the first segment adjacent to the second segment is provided with a mounting slot in which a locking member is hinged, and the end of the second segment adjacent to the first segment is provided with a locking hole detachably engaging the locking member; the end of the second segment adjacent to the third segment is provided with a mounting slot in which a locking member is hinged, and the end of the third segment adjacent to the second segment is provided with a locking hole detachably engaging the locking member; the end of the third segment adjacent to the fourth segment is provided with a mounting slot in which a locking member is hinged, and the end of the fourth segment adjacent to the third segment is provided with a locking hole detachably engaging the locking member.
Further, the bottom end of each support frame is also connected to a distance adjustment device, which includes a bearing ring mounted on the bottom of the support frame, a rotating ring connected to the bearing ring, a tube, a screw, a threaded sleeve, and a support rod. One end of the tube is provided with a plug, through which part of the screw is mounted in the tube, the plug being provided with an internal thread matching the screw; the other part of the screw is connected to the rotating ring. One end of the threaded sleeve is mounted in the tube and threaded onto the screw; the other end extends out of the tube and is fixedly connected to the support rod. The inner wall of the tube is provided with a protrusion, and the outer side wall of the threaded sleeve is provided along its length with a slideway adapted to the protrusion. The tube includes adjacent first and second portions, the inner diameter of the first portion being smaller than that of the second portion; the plug is arranged on the outer end of the second portion, and the end of the threaded sleeve near the screw is provided with a limiting end whose outer diameter is larger than the inner diameter of the first portion.
Yet another aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the infrared-based AR imaging methods of the embodiments of the present invention described above.
A further aspect of the embodiments of the present invention provides a server, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the infrared-based AR imaging methods of the embodiments of the present invention described above.
As can be seen from the above technical solutions, the infrared-based AR imaging method, system, and electronic device provided by the embodiments of the present invention can determine an augmented reality scene according to the user's own wishes, placing the user in the currently played video and enhancing the user's sense of immersion and experience.
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明实施例中记载的一些实施例,对于本领域普通技术人员来讲,还可以根据这些附图获得其他的附图。
图1为本发明一个实施例提供的基于红外的AR成像方法流程图;
图2为本发明一个实施例提供的基于红外的AR成像方法流程图;
图3为本发明一个实施例提供的对目标人脸图像进行光影处理的方法流程图;
图4为本发明一个实施例提供的基于红外的AR成像系统结构图;
图5为本发明一个实施例提供的基于红外的AR成像系统结构图;
图6为执行本发明方法实施例提供的基于红外的AR成像方法的电子设备的硬件结构示意图;
图7为本发明一个实施例提供的摄像头的结构图;
图8为本发明一个实施例提供的微型记忆合金光学防抖器的结构图;
图9为本发明一个实施例提供的微型记忆合金光学防抖器的一种工作状态结构图;
图10为本发明一个实施例提供的支架结构图;
图11为本发明一个实施例提供的支撑轴结构图;
图12为本发明一个实施例提供的调距装置结构图。
具体实施方式
为了使本领域的人员更好地理解本发明实施例中的技术方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅是本发明实施例一部分实施例,而不是全部的实施例。基于本发明实施例中的实施例,本领域普通技术人员所获得的所有其他实施例,都应当属于本发明实施例保护的范围。
本发明实施例中的增强现实设备可以包括可被用户佩戴的眼镜和头盔。该增强现实设备中设置有人脸图像采集组件,当该设备主体被用户佩戴时,该人脸图像采集组件朝向用户脸部,且与该用户脸部存在预设距离,即不直接与用户的脸部接触。本发明实施例中的移动终端包括但不限于手机、平板电脑等。
下面结合附图,对本发明的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互结合。
图1为本发明实施例提供的基于红外的AR成像方法流程图。如图1所示,本发明实施例提供的基于红外的AR成像方法,包括:
S101,移动终端检测增强现实设备发出的红外信号,并根据所述红外信号确定对应的增强现实设备标识。
具体地,用户预先利用移动终端拍摄了一个或多个视频,当用户佩戴增强现实设备在移动终端播放所述视频时,该增强现实设备可以发出红外信号,该红外信号能够唯一标识该增强现实设备。移动终端检测到该红外信号后,能够从该红外信号中获取其对应的增强现实设备的标识,从而能够通过该标识确定希望与其建立交互的目标增强现实设备。
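作为便于理解的示意(并非本发明限定的实现),从红外信号中解析增强现实设备标识的过程可用如下Python伪码表示。其中假设采用简化的脉宽编码协议,函数名与阈值均为说明性假设,实际协议(载波频率、位数、校验等)由具体设备约定:

```python
def decode_device_id(pulse_widths_us, threshold_us=1000):
    """从解调后的红外脉冲宽度序列中解析设备标识(示意)。

    假设的简化协议: 取前 8 个脉冲, 脉宽大于阈值记为 1, 否则记为 0,
    所得 8 位二进制数即为增强现实设备标识。
    """
    bits = ''.join('1' if w > threshold_us else '0' for w in pulse_widths_us[:8])
    return int(bits, 2)
```

移动终端解码得到标识后,即可据此确定希望与其建立交互的目标增强现实设备。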
S103,所述移动终端响应于用户的操作指令,将当前播放视频的视频帧图像和所述增强现实设备标识发送至服务器,以使所述服务器将增强现实处理后的所述视频帧图像发送至所述增强现实设备标识对应的目标增强现实设备。
具体地,当用户想要和当前播放的视频画面进行增强现实的交互时,可以向移动终端发出操作指令,该操作指令可以是双击移动终端的屏幕、长按移动终端的屏幕或触发移动终端的预设位置等,本发明在此不做限定。
可选地,在进行步骤S103之前,还包括:增强现实设备采集人脸图像,并将所述人脸图像发送至所述服务器。
增强现实设备通过其设置的人脸图像采集组件来采集用户的人脸图像,并将所述人脸图像发送至服务器。需要说明的是,增强现实设备可以是根据用户的指令进行采集,也可以是在用户佩戴该增强现实设备后的预设时间内采集,还可以是在响应于用户对移动终端的操作指令时采集,本发明在此不做限定。
需要说明的是,由于想要进行增强现实交互的设备可能有多个,因此向服务器发送的该人脸图像中携带有该增强现实设备对应的增强现实设备标识,用来确定该人脸图像来自哪一个增强现实设备。
在步骤S103中,服务器根据接收到的所述增强现实设备标识从所述人脸图像中确定目标人脸图像。具体地,服务器根据移动终端检测到的增强现实设备标识,来查找存储的人脸图像,当存在与该增强现实设备标识相匹配的人脸图像时,将该人脸图像确定为目标人脸图像。
然后,服务器根据所述目标人脸图像对接收到的视频帧图像进行增强现实处理,将增强现实处理后的所述视频帧图像发送至所述增强现实设备标识对应的目标增强现实设备。
可选地,如图3所示,在进行所述增强现实处理之前,所述方法还包括对所述目标人脸图像进行光影处理,从而使人脸的五官更加立体,具体包括如下步骤:
S1031,服务器从所述目标人脸图像中获取特征信息,根据所述特征信息生成与所述目标人脸图像匹配的第一3D人脸图像。
具体地,特征信息包括但不限于目标人脸图像中五官关键点的位置信息、人脸尺寸和深度信息。可以将目标人脸图像输入人脸特征信息识别模型,输出目标人脸图像五官的关键点坐标。坐标系可以以目标人脸图像的左下角为原点,向右方向为X轴正方向,向上方向为Y轴正方向,坐标值以像素点的个数进行计量。根据上述关键点坐标进行整合得到人脸尺寸。此外,还可以通过该模型输出人脸的深度信息。
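例如,由五官关键点坐标整合得到人脸尺寸的过程可以用如下示意性代码表示(坐标系与上文一致,以图像左下角为原点;函数名为说明性假设):

```python
def face_size(keypoints):
    """由关键点坐标的包围范围整合出人脸尺寸(示意)。

    keypoints: 五官关键点的像素坐标列表 [(x, y), ...]。
    返回 (宽, 高), 即垂直于 X/Y 轴的两对边界线之间的距离。
    """
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    return max(xs) - min(xs), max(ys) - min(ys)
```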
进一步地,可以预先加载人脸模型数据文件,建立标准3D人脸模型,并依据上述特征信息对标准3D人脸模型进行处理,生成与所述目标人脸图像匹配的第一3D人脸图像。
可选地,首先,根据所述特征信息中目标人脸图像的尺寸对所述标准3D人脸模型进行整体缩放,使额头最上顶点与下巴最下顶点的距离等于步骤S1031中垂直于Y轴的两条边界线之间的距离,使两个耳朵之间的距离等于步骤S1031中垂直于X轴的两条边界线之间的距离。其次,根据步骤S1031中得到的各关键点的坐标位置,确定标准3D人脸模型中眉毛、眼睛、鼻子、嘴及脸庞的位置及轮廓,使它们的形状和位置与其在目标人脸图像中的位置及轮廓相一致。最后,根据上述各关键点的坐标位置,利用步骤S1031中识别得到的人脸图像完成贴图,并根据上述人脸信息对所述模型进行纹理映射,生成与目标人脸图像匹配的第一3D人脸图像。可选地,所述纹理可以为每个像素的三原色RGB。
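上述整体缩放步骤可以示意如下:依据目标人脸的宽(耳间距)与高(额头最上顶点至下巴最下顶点)相对于标准3D人脸模型的比例,对模型顶点进行缩放。以下为假设性示例,Z方向缩放系数取X、Y系数的均值仅为近似处理,并非本发明的限定:

```python
def scale_model(vertices, model_size, face_size_px):
    """按目标人脸尺寸对标准3D人脸模型顶点整体缩放(示意)。

    vertices: 模型顶点 [(x, y, z), ...]
    model_size / face_size_px: (宽, 高), 分别为模型尺寸与目标人脸像素尺寸。
    """
    sx = face_size_px[0] / model_size[0]   # 耳间距对应 X 方向
    sy = face_size_px[1] / model_size[1]   # 额头-下巴对应 Y 方向
    sz = (sx + sy) / 2                     # 深度方向按均值近似
    return [(x * sx, y * sy, z * sz) for x, y, z in vertices]
```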
S1032,服务器对所述第一3D人脸图像进行打光处理,得到第二3D人脸图像。
本步骤中,可以通过分析视频帧图像中的光照信息来确定光照渲染参数,以使得目标人脸图像与视频帧图像相匹配,再结合光照模型对所述第一3D人脸图像进行打光处理,在所述第一3D人脸图像上打上自然光线,从而模拟出现实光线打在人脸上的光亮和阴影。具体的光照模型包括多种类型,本发明在此不做限定。
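打光处理中最简单的一种光照模型是朗伯(Lambert)漫反射模型,以下给出示意性实现,仅作为众多可选光照模型之一,并非本发明的限定实现:

```python
import math

def lambert_shade(normal, light_dir, ambient=0.2):
    """简化的朗伯光照: 亮度 = 环境光 + 漫反射分量(示意)。

    normal: 顶点法向量; light_dir: 指向光源的方向向量。
    返回 0~1 之间的亮度系数。
    """
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    n, l = unit(normal), unit(light_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))  # 背光面漫反射为 0
    return min(1.0, ambient + (1.0 - ambient) * diffuse)
```

正对光源的面亮度最高,背对光源的面仅保留环境光,由此在第一3D人脸图像上形成光亮与阴影。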
S1033,服务器根据所述第一3D人脸图像和所述第二3D人脸图像,对所述目标人脸图像进行光影处理。
依据步骤S1032打好自然光线的第二3D人脸图像,其光线打亮的地方会提亮、光线打不到的地方会形成阴影,因此对于同一坐标位置的像素点,其在第二3D人脸图像与第一3D人脸图像中会存在纹理差值。按该差值,在人脸图像中所述像素点的对应位置加强或减弱其纹理,使得人脸图像产生和第二3D人脸图像一样的打光效果。即本步骤中,可以根据所述第一3D人脸图像和所述第二3D人脸图像中的纹理差异,对所述人脸图像进行光影处理。
通过上述步骤,能在提亮光线的基础上,使采集到的人脸图像五官更立体、更有层次感,提高用户的交互兴趣。
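上述按纹理差值修正人脸图像的过程可以用如下示意性代码表示(以灰度图为例,实际可对RGB三通道分别处理;函数名为说明性假设):

```python
def relight_face(photo, unlit, lit):
    """将 "打光后 - 打光前" 的纹理差值叠加到人脸图像上(示意)。

    photo: 原始人脸图像; unlit: 第一3D人脸图像渲染结果;
    lit: 打光后的第二3D人脸图像渲染结果。
    三者均为同尺寸灰度图(二维列表, 取值 0-255), 结果截断到合法范围。
    """
    h, w = len(photo), len(photo[0])
    return [[max(0, min(255, photo[i][j] + (lit[i][j] - unlit[i][j])))
             for j in range(w)] for i in range(h)]
```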
进一步地,移动终端在检测增强现实设备发出的红外信号,并根据所述红外信号确定对应的增强现实设备标识之后,还可以根据所述红外信号确定所述视频中的第一目标位置信息,将所述第一目标位置信息连同当前播放视频的视频帧图像和所述增强现实设备标识,一起发送至所述服务器。
具体地,增强现实设备的红外装置可以包括一个,安装在该增强现实设备的中心处;也可以包括多个,均匀分布在该增强现实设备四周上,红外装置发出的红外光会直射到移动终端的屏幕上,移动终端可以根据该红外光的入射位置确定其在屏幕上的坐标位置。可选地,可以将移动终端左下角作为坐标原点,原点沿移动终端边缘向右为X轴的正方向,沿移动终端边缘向上为Y轴的正方向,当然,也可以有其他建立坐标系的方法。
当红外装置为一个时,由于其设置在增强现实设备的中心处,该位置也就是用户双眼的聚焦位置,说明当前用户希望进行增强现实交互的目标点就在视频中的该聚焦位置(可能是用户想要将该聚焦位置的人物替换成自己,也可能是用户希望将自己的人脸图像添加到该位置的图像上),因此可以直接将该红外光在屏幕上的坐标位置作为第一目标位置信息。
当红外装置为多个时,由于其均匀分布在增强现实设备的四周,该多个红外装置的平均位置也就是用户双眼的聚焦位置,说明当前用户希望进行增强现实交互的目标点就在视频中的该聚焦位置,因此可以对该多束红外光在屏幕上的坐标位置求平均值,该平均值的坐标位置即为第一目标位置信息。
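对多束红外光坐标求平均以确定聚焦位置的计算可示意如下(单个红外装置时即退化为该点自身坐标;函数名为说明性假设):

```python
def focus_point(ir_points):
    """对红外光在屏幕上的坐标取平均, 得到第一目标位置(示意)。

    ir_points: 各束红外光的屏幕坐标 [(x, y), ...]。
    """
    n = len(ir_points)
    return (sum(x for x, _ in ir_points) / n,
            sum(y for _, y in ir_points) / n)
```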
确定了第一目标位置后,所述服务器根据该第一目标位置信息,将目标人脸图像添加至所述视频帧图像中的第一目标位置上。具体地,服务器可以对该第一目标位置信息对应的图像进行分析,当对应的图像为人脸图像时,可以将该人脸图像替换成目标人脸图像;当对应的图像非人脸图像时,可以对该目标人脸图像进行角度的调整,以使其能够与该图像附近的风景等相匹配,再将该非人脸图像替换成调整后的目标人脸图像。可选地,替换方法可以是像素替换,也可以是其他本领域惯用方法。
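其中的像素替换可示意如下:将目标人脸图像逐像素写入视频帧图像中第一目标位置处的区域(以灰度图为例,越界部分忽略;函数名为说明性假设,实际可采用本领域其他惯用的替换或融合方法):

```python
def paste_face(frame, face, top_left):
    """将目标人脸图像按像素替换写入视频帧图像(示意)。

    frame / face: 二维灰度图; top_left: 目标位置 (行, 列)。
    """
    r0, c0 = top_left
    for i, row in enumerate(face):
        for j, v in enumerate(row):
            r, c = r0 + i, c0 + j
            if 0 <= r < len(frame) and 0 <= c < len(frame[0]):
                frame[r][c] = v
    return frame
```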
作为本实施例的可选实施方式,也可以不通过移动终端的定位来确定目标位置信息。此时,增强现实处理具体包括:服务器对所述视频帧图像进行分析,确定第二目标位置信息;服务器根据所述第二目标位置信息,将所述目标人脸图像添加至所述视频帧图像中。
在本实施方式中,服务器可以通过对视频帧图像的分析,确定希望进行交互的目标对象,例如,把视频中的某个人物替换成自己,或者将自己设置在某个景物旁等,服务器将该目标对象在所述视频中的位置坐标确定为第二目标位置信息,并根据所述第二目标位置信息,将所述目标人脸图像添加至所述视频帧图像中。具体的添加方法如上所述,在此不做赘述。
在通过上述任一方法对视频帧图像进行增强现实处理后,服务器将增强现实处理后的所述视频帧图像发送至所述增强现实设备标识对应的目标增强现实设备。由于接收到用户的操作指令后,会不断重复上述步骤来对每帧视频帧图像进行增强现实处理,因此佩戴该目标增强现实设备的用户即可看到自己出现在播放的视频中,完成增强现实的交互。
本发明实施例提供的基于红外的AR成像方法,能够基于用户自身的意愿确定增强现实场景,使用户置身于当前播放的视频中,增强用户的沉浸感和体验感。
图2为本发明实施例提供的基于红外的AR成像方法流程图。如图2所示,本实施例为前述实施例的具体实现方案,因此不再赘述前述实施例中各步骤的具体实现方法和有益效果,本发明实施例提供的基于红外的AR成像方法,具体包括:
S301,移动终端检测增强现实设备发出的红外信号,并根据所述红外信号确定对应的增强现实设备标识。
S302,增强现实设备采集人脸图像,并将所述人脸图像发送至所述服务器。
S303,所述移动终端响应于用户的操作指令,将当前播放视频的视频帧图像和所述增强现实设备标识发送至服务器。
S304,所述服务器根据所述增强现实设备标识从所述人脸图像中确定目标人脸图像。
S305,所述服务器根据所述目标人脸图像对所述视频帧图像进行增强现实处理,并将增强现实处理后的所述视频帧图像发送至所述增强现实设备标识对应的目标增强现实设备。
本发明实施例提供的基于红外的AR成像方法,能够基于用户自身的意愿确定增强现实场景,使用户置身于当前播放的视频中,增强用户的沉浸感和体验感。
图4、图5为本发明实施例提供的基于红外的AR成像系统结构图。如图4、图5所示,该系统具体包括:移动终端10,增强现实设备20和服务器30。其中,
移动终端10包括检测模块110和第一发送模块120,检测模块110用于检测增强现实设备发出的红外信号,并根据所述红外信号确定对应的增强现实设备标识;第一发送模块120用于响应于用户的操作指令,将当前播放视频的视频帧图像和所述增强现实设备标识发送至所述服务器;其中,所述视频为用户通过所述移动终端拍摄的视频;
增强现实设备20包括采集模块210,采集模块210用于采集人脸图像,并将所述人脸图像发送至所述服务器;
服务器30包括第一确定模块310、处理模块320和第二发送模块330,第一确定模块310用于根据所述增强现实设备标识从所述人脸图像中确定目标人脸图像;处理模块320用于根据所述目标人脸图像对所述视频帧图像进行增强现实处理;第二发送模块330用于将增强现实处理后的所述视频帧图像发送至所述增强现实设备标识对应的目标增强现实设备。
进一步地,移动终端10还包括第二确定模块130,第二确定模块130用于根据所述红外信号确定所述视频中的第一目标位置信息,第一发送模块120还用于将所述第一目标位置信息发送至所述服务器。
进一步地,处理模块320用于根据所述第一目标位置信息,将所述人脸图像添加至所述视频帧图像中。
进一步地,服务器30还包括分析模块,图中未示出,所述分析模块用于对所述视频帧图像进行分析,确定第二目标位置信息;处理模块320根据所述第二目标位置信息,将所述目标人脸图像添加至所述视频帧图像中。
进一步地,服务器还包括目标图像处理模块,图中未示出,所述目标图像处理模块用于从所述目标人脸图像中获取特征信息,根据所述特征信息生成与所述目标人脸图像匹配的第一3D人脸图像;对所述第一3D人脸图像进行打光处理,得到第二3D人脸图像;根据所述第一3D人脸图像和所述第二3D人脸图像,对所述目标人脸图像进行光影处理。
本发明实施例提供的基于红外的AR成像系统具体用于执行图1至图3所示实施例提供的所述方法,其实现原理、方法和功能用途等与图1至图3所示实施例类似,在此不再赘述。
上述这些本发明实施例的基于红外的AR成像系统可以作为其中一个软件或者硬件功能单元,独立设置在上述电子设备中,也可以作为整合在处理器中的其中一个功能模块,执行本发明实施例的基于红外的AR成像方法。
图6为执行本发明方法实施例提供的基于红外的AR成像方法的电子设备的硬件结构示意图。根据图6所示,该电子设备包括:
一个或多个处理器610以及存储器620,图6中以一个处理器610为例。
执行所述的基于红外的AR成像方法的设备还可以包括:输入装置630和输出装置640。
处理器610、存储器620、输入装置630和输出装置640可以通过总线或者其他方式连接,图6中以通过总线连接为例。
存储器620作为一种非易失性计算机可读存储介质,可用于存储非易失性软件程序、非易失性计算机可执行程序以及模块,如本发明实施例中的所述基于红外的AR成像方法对应的程序指令/模块。处理器610通过运行存储在存储器620中的非易失性软件程序、指令以及模块,从而执行服务器的各种功能应用以及数据处理,即实现所述基于红外的AR成像方法。
存储器620可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储根据本发明实施例提供的基于红外的AR成像系统的使用所创建的数据等。此外,存储器620可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件或其他非易失性固态存储器件。在一些实施例中,存储器620可选包括相对于处理器610远程设置的存储器,这些远程存储器可以通过网络连接至所述基于红外的AR成像系统。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
输入装置630可接收输入的数字或字符信息,以及产生与基于红外的AR成像系统的用户设置以及功能控制有关的键信号输入。输入装置630可包括按压模组等设备。
所述一个或者多个模块存储在所述存储器620中,当被所述一个或者多个处理器610执行时,执行所述基于红外的AR成像方法。
本发明实施例的电子设备以多种形式存在,包括但不限于:
(1)移动通信设备:这类设备的特点是具备移动通信功能,并且以提供话音、数据通信为主要目标。这类终端包括:智能手机(例如iPhone)、多媒体手机、功能性手机,以及低端手机等。
(2)超移动个人计算机设备:这类设备属于个人计算机的范畴,有计算和处理功能,一般也具备移动上网特性。这类终端包括:PDA、MID和UMPC设备等,例如iPad。
(3)便携式娱乐设备:这类设备可以显示和播放多媒体内容。该类设备包括:音频、视频播放器(例如iPod),掌上游戏机,电子书,以及智能玩具和便携式车载导航设备。
以上所描述的系统实施例仅仅是示意性的,其中所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性的劳动的情况下,即可以理解并实施。
本发明实施例提供了一种非暂态计算机可读存储介质,所述计算机存储介质存储有计算机可执行指令,其中,当所述计算机可执行指令被电子设备执行时,使所述电子设备执行上述任意方法实施例中的基于红外的AR成像方法。
本发明实施例提供了一种计算机程序产品,其中,所述计算机程序产品包括存储在非暂态计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,其中,当所述程序指令被电子设备执行时,使所述电子设备执行上述任意方法实施例中的基于红外的AR成像方法。
在另一实施例中,为了增强AR成像效果,专门对移动终端采集的视频效果进行提升,本实施例为此提供了一种具有更好防抖性能的移动终端的摄像头,通过该摄像头获取的图片、视频相比于普通摄像头更加清晰,更能满足用户的需求。特别是本实施例中的摄像头获取的视频用于上述实施例中的AR成像方法时,效果更佳。
具体的,现有的移动终端摄像头(移动终端为手机或数码摄像机等)包括镜头1、自动聚焦音圈马达2、图像传感器3,这些均为本领域技术人员公知的现有技术,因此这里不过多描述。通常采用微型记忆合金光学防抖器,是因为现有的防抖器大多由通电线圈在磁场中产生洛伦兹力驱动镜头移动,而要实现光学防抖,需要在至少两个方向上驱动镜头,这意味着需要布置多个线圈,会给整体结构的微型化带来一定挑战,而且容易受外界磁场干扰,进而影响防抖效果。一些现有技术通过温度变化实现记忆合金丝的拉伸和缩短,以此拉动自动聚焦音圈马达移动,实现镜头的抖动补偿:微型记忆合金光学防抖致动器的控制芯片可以控制驱动信号的变化来改变记忆合金丝的温度,以此控制记忆合金丝的伸长和缩短,并且根据记忆合金丝的电阻来计算致动器的位置和移动距离;当微型记忆合金光学防抖致动器移动到指定位置后,反馈记忆合金丝此时的电阻,通过比较这个电阻值与目标值的偏差,可以校正微型记忆合金光学防抖致动器的移动偏差。但是申请人发现,由于抖动的随机性和不确定性,仅仅依靠上述技术方案的结构,是无法在多次抖动发生的情况下对镜头进行精确补偿的,这是由于形状记忆合金的升温和降温均需要一定的时间:当抖动向第一方向发生时,上述技术方案可以实现镜头对第一方向抖动的补偿,但是当随之而来的第二方向的抖动发生时,由于记忆合金丝来不及在瞬间变形,因此容易造成补偿不及时,无法精准实现对多次抖动和不同方向的连续抖动的镜头抖动补偿,这导致获取的图片质量不佳,因此需要对摄像头或摄像机的结构进行改进。
如图7所示,本实施例的所述摄像头包括镜头1、自动聚焦音圈马达2、图像传感器3以及微型记忆合金光学防抖器4,所述镜头1固装在所述自动聚焦音圈马达2上,所述图像传感器3将所述镜头1获取的光学场景转换为图像数据,所述自动聚焦音圈马达2安装在所述微型记忆合金光学防抖器4上,所述移动终端内部处理器根据移动终端内部陀螺仪(图中未示出)检测到的镜头抖动数据驱动所述微型记忆合金光学防抖器4的动作,实现镜头的抖动补偿;
结合附图8所示,对所述微型记忆合金光学防抖器的改进之处介绍如下:
所述微型记忆合金光学防抖器包括活动板5和基板6,活动板5和基板6均为矩形板状件,所述自动聚焦音圈马达2安装在所述活动板5上,所述基板6的尺寸大于所述活动板5的尺寸,所述活动板5安装在所述基板6上,所述活动板5和所述基板6之间设有多个活动支撑7,所述活动支撑7具体为设置在所述基板6四个角处凹槽内的滚珠,便于活动板5在基板6上的移动,所述基板6的四周具有四个侧壁,每个所述侧壁的中部均设有一缺口8,所述缺口8处安装有微动开关9,所述微动开关9的活动件10可以在所述处理器的指令下打开或封闭所述缺口,所述活动件10靠近所述活动板5的侧面设有沿所述活动件10宽度方向布设的条形的电触点11,所述基板6设有与所述电触点11相连接的温控电路(图中未示出),所述处理器可以根据陀螺仪检测到的镜头抖动方向控制所述温控电路的开闭,所述活动板5的四个侧边的中部均设有形状记忆合金丝12,所述形状记忆合金丝12一端与所述活动板5固定连接,另一端与所述电触点11滑动配合,所述基板6的四周的内侧壁与所述活动板5之间均设有用于复位的弹性件13,具体的,本实施例的所述弹性件优选为微型的弹簧。
下面结合上述结构对本实施例的微型记忆合金光学防抖器的工作过程进行详细的描述:以镜头两次方向相反的抖动为例,当镜头发生向第一方向的抖动时,陀螺仪将检测到的镜头抖动方向和距离反馈给所述处理器,处理器计算出需要控制的、可以补偿该抖动的形状记忆合金丝的伸长量,并驱动相应的温控电路对该形状记忆合金丝进行升温,该形状记忆合金丝伸长并带动活动板向可补偿第一方向抖动的方向运动;与此同时,与该形状记忆合金丝相对称的另一形状记忆合金丝没有变化,但是与该另一形状记忆合金丝相连接的活动件会打开与其对应的缺口,便于所述另一形状记忆合金丝在活动板的带动下向缺口外伸出,此时,两个形状记忆合金丝附近的弹性件分别拉伸和压缩(如图9所示)。当微型记忆合金光学防抖致动器移动到指定位置后,反馈该形状记忆合金丝的电阻,通过比较这个电阻值与目标值的偏差,可以校正微型记忆合金光学防抖致动器的移动偏差。而当第二次抖动发生时,处理器首先通过与另一形状记忆合金丝相抵接的活动件关闭缺口,并且打开与处于伸长状态的该形状记忆合金丝相抵接的活动件,与另一形状记忆合金丝相抵接的活动件的转动可以推动另一形状记忆合金丝复位,与处于伸长状态的该形状记忆合金丝相抵接的活动件的打开可以便于伸长状态的形状记忆合金丝伸出,并且在上述的两个弹性件的弹性作用下可以保证活动板迅速复位;同时处理器再次计算出需要控制的、可以补偿第二次抖动的形状记忆合金丝的伸长量,并驱动相应的温控电路对另一形状记忆合金丝进行升温,另一形状记忆合金丝伸长并带动活动板向可补偿第二方向抖动的方向运动。由于在先伸长的形状记忆合金丝处的缺口打开,因此不会影响另一形状记忆合金丝带动活动板运动;而由于活动件的打开速度和弹簧的复位作用,因此在发生多次抖动时,本实施例的微型记忆合金光学防抖器均可做出精准的补偿,其效果远远优于现有技术中的微型记忆合金光学防抖器。
当然上述仅仅为简单的两次抖动,当发生多次抖动时,或者抖动的方向并非往复运动时,可以通过驱动两个相邻的形状记忆合金丝伸长以补偿抖动,其基础工作过程与上述描述原理相同,这里不过多赘述,另外关于形状记忆合金电阻的检测反馈、陀螺仪的检测反馈等均为现有技术,这里也不做赘述。
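上述依据电阻反馈校正移动偏差的闭环过程,可以用如下示意性伪码表示。其中 read_r、heat 等接口均为说明性假设,实际由控制芯片的驱动电路实现:

```python
def settle_to_target(target_r, read_r, heat, max_iter=10, tol=0.5):
    """电阻反馈闭环(示意): 偏差超出容差时继续调整加热量。

    target_r: 目标电阻值(对应目标伸长量);
    read_r: 读取形状记忆合金丝当前电阻的回调;
    heat: 按偏差加热/降温的回调; tol: 允许的电阻偏差。
    在容差内返回 True, 超过最大迭代次数仍未收敛返回 False。
    """
    for _ in range(max_iter):
        err = target_r - read_r()
        if abs(err) <= tol:
            return True
        heat(err)
    return False
```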
另一实施例中,某些特殊场合需要将移动终端安装于摄像机支架上进行视频或图像的采集,但是申请人在使用过程中发现,现有的摄像机支架具有以下缺陷:1、现有的摄像机支架均采用三脚架支撑,但是三脚架结构在地面不平整、存在较大凹凸的位置进行安装时,无法保证支架安装座的水平,容易发生抖动或者倾斜,对拍摄容易产生不良的影响;2、现有的支架无法作为肩抗式摄影机支架,结构和功能单一,在需要肩抗拍摄时必须单独配备肩抗式摄影机支架。
因此,申请人对支架结构进行改进,如图10和11所示,本实施例的所述支架包括安装座14、支撑轴15、三个铰装在所述支撑轴上的支撑架16;所述安装座14包括相互垂直的第一安装板141和第二安装板142,所述第一安装板141和第二安装板142均可用于安装所述移动终端,所述支撑轴15垂直安装在所述第一安装板141的底面,所述支撑轴15远离所述安装座14的底端设有径向尺寸略大于所述支撑轴的圆周面17,三个所述支撑架16由上至下安装在所述支撑轴15上,且每两个所述支撑架16展开后的水平投影呈一夹角。上述结构在进行支架的架设时,首先将圆周面17架设在凹凸不平的平面中较平整的一小块区域,再通过打开并调整三个可伸缩的支撑架的位置实现支架的架设平整,因此即使是凹凸不平的地面也能迅速将支架架设平整,适应各种地形,保证安装座处于水平状态。
更有利的,本实施例的所述支撑轴15也是伸缩杆件,其包括与所述安装座14相连接的管体151和部分可收缩至所述管体151内的杆体152,所述杆体152伸入所述管体的部分包括依次铰接的第一段1521、第二段1522、第三段1523和第四段1524,所述第一段1521与所述管体151相连接,所述第一段1521靠近所述第二段1522的端部设有安装槽18,所述安装槽18内铰接有锁止件19,所述第二段1522靠近所述第一段1521的端部设有与锁止件19可拆卸配合的锁止孔20,同理,所述第二段1522靠近所述第三段1523的端部设有安装槽18,所述安装槽18内铰接有锁止件19,所述第三段1523靠近所述第二段1522的端部设有与锁止件19可拆卸配合的锁止孔20,所述第三段1523靠近所述第四段1524的端部设有安装槽18,所述安装槽18内铰接有锁止件19,所述第四段1524靠近所述第三段1523的端部设有与锁止件19可拆卸配合的锁止孔20,所述锁止件可以隐藏在安装槽内,当需要使用锁止件时可以通过转动锁止件,将锁止件扣合在所述锁止孔上,具体的,所述锁止件19可以是具有一个凸起的条形件,该凸起与所述锁止孔的大小尺寸相适配,将凸起压紧在锁止孔内即可完成相邻两个段(例如第一段和第二段)位置的固定,防止相对转动。而通过第一段1521、第二段1522、第三段1523和第四段1524的配合,可以将该部分弯折形成一定形状的结构,并且通过锁止件19固定各个段的相对位置,还可以在该结构的底部设有软质材料,当需要将支架作为肩抗式摄像机支架时,该部分放置在用户的肩部,通过把持三个支撑架中的一个作为肩抗式支架的手持部,可以快速地实现由固定式支架到肩抗式支架的切换,十分方便。
另外,申请人还发现,可伸缩的支撑架大多通过人力拉出伸缩部分以实现伸缩长度的调节,但是该距离不可控制,随机性较大,因此常常出现调节不便的问题,特别是需要将伸缩长度部分微调时,往往不容易实现,因此申请人还对支撑架16的结构进行优化。结合附图12所示,本实施例的每个所述支撑架16的底端还连接有调距装置21,所述调距装置21包括安装在所述支撑架16底部的轴承圈211、与所述轴承圈211相连接的转动环212、管体213、螺杆214、螺套215及支撑杆216,所述管体213的一端设有封堵217,所述螺杆214部分通过所述封堵217安装在所述管体213内,所述封堵217设有与所述螺杆214相适配的内螺纹,所述螺杆214另一部分与所述转动环212相连接,所述螺套215一端安装在所述管体213内并与所述螺杆214螺纹连接,所述螺套215的另一端伸出所述管体213外并与所述支撑杆216固定连接,所述管体213的内壁设有一凸起218,所述螺套215的外侧壁沿其长度方向设有与所述凸起相适配的滑道219,所述管体213包括相邻的第一部分2131和第二部分2132,所述第一部分2131的内径小于所述第二部分2132的内径,所述封堵217设置在所述第二部分2132的外端上,所述螺套215靠近所述螺杆214的端部设有外径大于所述第一部分内径的限位端2151。通过转动所述转动环212带动螺杆214在管体213内转动,并将转动趋势传递给所述螺套215,而由于螺套受凸起218和滑道219的配合限制无法转动,因此将转动力转化为向外的直线移动,进而带动支撑杆216运动,实现支撑架底端长度的微调节,便于用户架平支架及其安装座,为后续的拍摄工作提供良好的基础保障。
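由于螺套受凸起与滑道的配合限制而不能转动,转动环每转一圈,螺套即沿轴向移动一个螺距,因此微调行程可按如下示意换算(仅为便于理解的说明性计算):

```python
def sleeve_travel(turns, pitch_mm):
    """螺旋副微调行程(示意): 行程 = 圈数 x 螺距。

    turns: 转动环转过的圈数; pitch_mm: 螺杆螺距(毫米)。
    """
    return turns * pitch_mm
```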
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到各实施方式可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件实现。基于这样的理解,上述技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质中,所述计算机可读存储介质包括用于以机器(例如计算机)可读的形式存储或传送信息的任何机制。例如,机器可读介质包括只读存储器(ROM)、随机存取存储器(RAM)、磁盘存储介质、光存储介质、闪速存储介质、电、光、声或其他形式的传播信号(例如,载波、红外信号、数字信号等)等,该计算机软件产品包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器或者网络设备等)执行各个实施例或者实施例的某些部分所述的方法。
最后应说明的是:以上实施例仅用以说明本发明实施例的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (10)

  1. 一种基于红外的AR成像方法,其特征在于,包括:
    移动终端检测增强现实设备发出的红外信号,并根据所述红外信号确定对应的增强现实设备标识;
    所述移动终端响应于用户的操作指令,将当前播放视频的视频帧图像和所述增强现实设备标识发送至服务器,以使所述服务器将增强现实处理后的所述视频帧图像发送至所述增强现实设备标识对应的目标增强现实设备;
    其中,所述视频为用户通过所述移动终端拍摄的视频。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    增强现实设备采集人脸图像,并将所述人脸图像发送至所述服务器;
    所述服务器根据所述增强现实设备标识从所述人脸图像中确定目标人脸图像。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    所述服务器从所述目标人脸图像中获取特征信息,根据所述特征信息生成与所述目标人脸图像匹配的第一3D人脸图像;
    所述服务器对所述第一3D人脸图像进行打光处理,得到第二3D人脸图像;
    所述服务器根据所述第一3D人脸图像和所述第二3D人脸图像,对所述目标人脸图像进行光影处理。
  4. 根据权利要求2或3所述的方法,其特征在于,所述方法还包括:
    所述移动终端根据所述红外信号确定所述视频中的第一目标位置信息,将所述第一目标位置信息发送至所述服务器。
  5. 根据权利要求4所述的方法,其特征在于,所述增强现实处理具体包括:
    所述服务器根据所述第一目标位置信息,将所述目标人脸图像添加至所述视频帧图像中。
  6. 根据权利要求2或3所述的方法,其特征在于,所述增强现实处理具体包括:
    所述服务器对所述视频帧图像进行分析,确定第二目标位置信息;
    所述服务器根据所述第二目标位置信息,将所述目标人脸图像添加至所述视频帧图像中。
  7. 一种基于红外的AR成像系统,其特征在于,包括:
    移动终端、增强现实设备及服务器;其中,
    所述移动终端包括检测模块和第一发送模块,所述检测模块用于检测增强现实设备发出的红外信号,并根据所述红外信号确定对应的增强现实设备标识;所述第一发送模块用于响应于用户的操作指令,将当前播放视频的视频帧图像和所述增强现实设备标识发送至所述服务器;其中,所述视频为用户通过所述移动终端拍摄的视频;
    所述增强现实设备包括采集模块,所述采集模块用于采集人脸图像,并将所述人脸图像发送至所述服务器;
    所述服务器包括第一确定模块、处理模块和第二发送模块,所述第一确定模块用于根据所述增强现实设备标识从所述人脸图像中确定目标人脸图像;所述处理模块用于根据所述目标人脸图像对所述视频帧图像进行增强现实处理;所述第二发送模块用于将增强现实处理后的所述视频帧图像发送至所述增强现实设备标识对应的目标增强现实设备。
  8. 根据权利要求7所述的系统,其特征在于,所述移动终端还包括第二确定模块,所述第二确定模块用于根据所述红外信号确定所述视频中的第一目标位置信息,所述第一发送模块还用于将所述第一目标位置信息发送至所述服务器。
  9. 根据权利要求8所述的系统,其特征在于,所述处理模块用于根据所述第一目标位置信息,将所述人脸图像添加至所述视频帧图像中。
  10. 一种电子设备,其特征在于,包括:至少一个处理器;以及,
    与所述至少一个处理器通信连接的存储器;其中,
    所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行权利要求1或2或4所述的基于红外的AR成像方法。
PCT/CN2018/094074 2018-04-23 2018-07-02 基于红外的ar成像方法、系统及电子设备 WO2019205283A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810364811.8A CN108377398B (zh) 2018-04-23 2018-04-23 基于红外的ar成像方法、系统、及电子设备
CN201810364811.8 2018-04-23

Publications (1)

Publication Number Publication Date
WO2019205283A1 true WO2019205283A1 (zh) 2019-10-31

Family

ID=63032562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/094074 WO2019205283A1 (zh) 2018-04-23 2018-07-02 基于红外的ar成像方法、系统及电子设备

Country Status (2)

Country Link
CN (1) CN108377398B (zh)
WO (1) WO2019205283A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401461A (zh) * 2020-03-24 2020-07-10 郭俊 酒品信息管理方法、装置、计算机设备以及存储介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109862286B (zh) * 2019-03-28 2021-08-17 深圳创维-Rgb电子有限公司 图像显示方法、装置、设备和计算机存储介质
CN112788273B (zh) * 2019-11-08 2022-12-02 华为技术有限公司 一种增强现实ar通信系统及基于ar的通信方法
CN111640166B (zh) * 2020-06-08 2024-03-26 上海商汤智能科技有限公司 一种ar合影方法、装置、计算机设备及存储介质
CN112633303A (zh) * 2020-12-18 2021-04-09 上海影创信息科技有限公司 限高画面变形检测方法和系统及车辆
CN113842171B (zh) * 2021-09-29 2024-03-01 北京清智图灵科技有限公司 一种用于咽拭子机器采样的有效性判定装置

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130120365A1 (en) * 2011-11-14 2013-05-16 Electronics And Telecommunications Research Institute Content playback apparatus and method for providing interactive augmented space
CN204028887U (zh) * 2014-07-24 2014-12-17 央数文化(上海)股份有限公司 一种基于增强现实技术的手持式阅览设备
CN106780754A (zh) * 2016-11-30 2017-05-31 福建北极光虚拟视觉展示科技有限公司 一种混合现实方法及系统

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013009304A (ja) * 2011-05-20 2013-01-10 Ricoh Co Ltd 画像入力装置、会議装置、画像処理制御プログラム、記録媒体
CN205430495U (zh) * 2016-03-25 2016-08-03 京东方科技集团股份有限公司 增强现实设备及系统
CN107657662A (zh) * 2016-07-26 2018-02-02 金德奎 一种用户间可直接互动的增强现实设备及其系统和方法
CN109743621A (zh) * 2016-11-02 2019-05-10 大辅科技(北京)有限公司 多vr/ar设备协同系统及协同方法
CN107506714B (zh) * 2017-08-16 2021-04-02 成都品果科技有限公司 一种人脸图像重光照的方法



Also Published As

Publication number Publication date
CN108377398B (zh) 2020-04-03
CN108377398A (zh) 2018-08-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18915938

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.03.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18915938

Country of ref document: EP

Kind code of ref document: A1