CN109214351B - AR imaging method and device and electronic equipment

AR imaging method and device and electronic equipment

Info

Publication number
CN109214351B
Authority
CN
China
Prior art keywords
sub
light
angle
face image
image
Prior art date
Legal status
Active
Application number
CN201811110297.1A
Other languages
Chinese (zh)
Other versions
CN109214351A (en)
Inventor
奥利维尔·菲永 (Olivier Fillon)
李建亿 (Li Jianyi)
Current Assignee
Pacific Future Technology Shenzhen Co ltd
Original Assignee
Pacific Future Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Pacific Future Technology Shenzhen Co ltd filed Critical Pacific Future Technology Shenzhen Co ltd
Publication of CN109214351A
Application granted
Publication of CN109214351B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

The embodiments of the invention provide an AR imaging method, an AR imaging device and an electronic device. The method comprises the following steps: detecting a face image captured by a mobile terminal and extracting a sub-image of the nose region in the face image; determining a light intensity weighted center of the sub-image and comparing it with the weighted center of the face image to obtain a light estimation angle; obtaining the light angle of a real scene according to the light estimation angle and the rotation angle of the mobile terminal, the real scene being the real world currently captured by the mobile terminal; and generating a shadow image of a virtual object according to the light angle of the real scene and a preset position of the virtual object in the real scene. Light information about the real world can be acquired using only a conventional dual-camera mobile terminal, without additional hardware, which provides a more realistic AR effect, reduces cost, and places low performance requirements on the mobile terminal.

Description

AR imaging method and device and electronic equipment
Technical Field
The invention relates to the technical field of augmented reality, in particular to an AR imaging method, an AR imaging device and electronic equipment.
Background
A common problem in computer vision, particularly in augmented reality applications, is detecting the intensity and direction of the light that affects the imaging of virtual objects. Existing light detection algorithms typically rely on dedicated sensors and hardware, or make assumptions about the position and time of a light source (e.g., the sun or moon) based on environmental information in the scene. In augmented reality applications, the estimation of light information mainly focuses on light intensity and color tone and only computes the average brightness of the whole real-scene image rather than the actual direction of the ambient light, which bears no relation to the actual position of the real-scene light source. This does not provide enough information to render shadows for virtual objects.
In addition, AR technology is increasingly used on mobile phones, and phone cameras are increasingly used to capture real scenes. To improve the quality of video captured by a phone and to spare the user the fatigue of holding the phone for long periods while watching video, the anti-shake performance of the phone camera and its supporting mechanism need to be improved. In the prior art, shake is pre-processed by software anti-shake techniques, but the hardware is not substantially improved, so the influence of shake cannot be fundamentally eliminated, which adds difficulty to subsequent AR image processing.
Disclosure of Invention
The embodiments of the invention provide an AR imaging method, an AR imaging device and an electronic device, which are intended to solve at least the problems in the related art.
An embodiment of the present invention provides an AR imaging method, including:
detecting a face image captured by a mobile terminal, and extracting a sub-image of the nose region in the face image; determining a light intensity weighted center of the sub-image, and comparing the light intensity weighted center with the weighted center of the face image to obtain a light estimation angle; obtaining a light angle of a real scene according to the light estimation angle and the rotation angle of the mobile terminal, the real scene being the real world currently captured by the mobile terminal; and generating a shadow image of a virtual object according to the light angle of the real scene and a preset position of the virtual object in the real scene.
Further, determining the light intensity weighted center of the sub-image and comparing it with the weighted center of the face image to obtain the light estimation angle includes: dividing the sub-image into a plurality of sub-regions and determining the sub-light intensity weighted center of each sub-region; comparing each sub-light intensity weighted center with the weighted center of the face image to obtain the sub-light estimation angle of each sub-region; calculating the sub-illumination intensity of each sub-region and determining the weight of the sub-light estimation angle of each sub-region according to its sub-illumination intensity; and calculating the light estimation angle according to the sub-light estimation angles and their weights.
Further, the method further comprises: calculating a distance value between the light intensity weighted center and the weighted center of the face image, and determining a confidence factor based on the distance value, the confidence factor reflecting the contrast between the regions adjacent to the nose tip in the nose region.
Further, the method further comprises: calculating a sub-distance value between each sub-light intensity weighted center and the weighted center of the face image, and determining a sub-confidence factor based on each sub-distance value; and determining a confidence factor according to the sub-confidence factors and the weights of the sub-light estimation angles, the confidence factor reflecting the contrast between the regions adjacent to the nose tip in the nose region.
Further, the method further comprises: extracting the eye socket regions in the face image and comparing the illumination intensities of the two eye sockets; when the difference between the illumination intensities of the eye sockets is greater than or equal to a preset threshold, determining the head tilt angle of the face according to the illumination intensities of the eye sockets; and adjusting the light estimation angle according to the head tilt angle of the face.
Another aspect of the embodiments of the present invention provides an AR imaging apparatus, including: an extraction module, configured to detect a face image captured by the mobile terminal and extract a sub-image of the nose region in the face image; a comparison module, configured to determine a light intensity weighted center of the sub-image and compare it with the weighted center of the face image to obtain a light estimation angle; an obtaining module, configured to obtain the light angle of a real scene according to the light estimation angle and the rotation angle of the mobile terminal, the real scene being the real world currently captured by the mobile terminal; and a generating module, configured to generate a shadow image of a virtual object according to the light angle of the real scene and a preset position of the virtual object in the real scene.
Further, the comparison module comprises: a decomposition unit, configured to divide the sub-image into a plurality of sub-regions and determine the sub-light intensity weighted center of each sub-region; a comparison unit, configured to compare each sub-light intensity weighted center with the weighted center of the face image to obtain the sub-light estimation angle of each sub-region; a determination unit, configured to calculate the sub-illumination intensity of each sub-region and determine the weight of the sub-light estimation angle of each sub-region according to its sub-illumination intensity; and a calculation unit, configured to calculate the light estimation angle according to the sub-light estimation angles and their weights.
Further, the comparison module is further configured to calculate a distance value between the light intensity weighted center and the weighted center of the face image, and to determine a confidence factor based on the distance value, the confidence factor reflecting the contrast between the regions adjacent to the nose tip in the nose region.
Further, the comparison module is further configured to calculate a sub-distance value between each sub-light intensity weighted center and the weighted center of the face image and determine a sub-confidence factor based on each sub-distance value; and to determine a confidence factor according to the sub-confidence factors and the weights of the sub-light estimation angles, the confidence factor reflecting the contrast between the regions adjacent to the nose tip in the nose region.
Furthermore, the comparison module further comprises an adjustment unit, configured to extract the eye socket regions in the face image and compare the illumination intensities of the two eye sockets; when the difference between the illumination intensities of the eye sockets is greater than or equal to a preset threshold, determine the head tilt angle of the face according to the illumination intensities of the eye sockets; and adjust the light estimation angle according to the head tilt angle of the face.
Another aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the AR imaging method described above.
Furthermore, the electronic device further comprises an image acquisition module. The image acquisition module comprises a lens, an auto-focus voice coil motor, a mechanical anti-shake device and an image sensor. The lens is fixedly mounted on the auto-focus voice coil motor and is used to acquire images; the image sensor transmits the images acquired by the lens to the identification module; the auto-focus voice coil motor is mounted on the mechanical anti-shake device; and the processing module drives the mechanical anti-shake device to act according to the lens shake detected and fed back by a gyroscope in the lens, thereby realizing shake compensation of the lens.
Furthermore, the mechanical anti-shake device comprises a movable plate, a movable frame, an elastic restoring mechanism, a substrate and a compensation mechanism. A through hole for the lens to pass through is provided in the middle of the movable plate; the auto-focus voice coil motor is mounted on the movable plate; the movable plate is mounted in the movable frame, and two opposite sides of the movable plate are in sliding fit with the inner walls of the two opposite sides of the movable frame, so that the movable plate can slide back and forth along a first direction. The movable frame is smaller than the substrate; its two opposite sides are each connected to the substrate through an elastic restoring mechanism; and a through hole for the lens to pass through is also provided in the middle of the substrate. The compensation mechanism, driven by the processing module, drives the movable plate and the lens on it to move so as to realize shake compensation of the lens. The compensation mechanism comprises a drive shaft, a gear, a gear track and a limit track; the drive shaft is mounted on the substrate and is in transmission connection with the gear; the gear track is provided on the movable plate, the gear is mounted in the gear track, and when the gear rotates, the gear track causes the movable plate to be displaced in the first direction and in a second direction, the first direction being perpendicular to the second direction; the limit track is provided on the movable plate or the substrate and is used to prevent the gear from disengaging from the gear track.
Furthermore, a kidney-shaped hole is provided on one side of the movable plate, and a plurality of teeth meshing with the gear are arranged in the kidney-shaped hole along its circumferential direction; the kidney-shaped hole and the teeth together form the gear track, and the gear is located in the kidney-shaped hole and meshes with the teeth. The limit track is provided on the substrate, a limit member located in the limit track is provided at the bottom of the movable plate, and the limit track constrains the motion trajectory of the limit member in the limit track to a kidney shape.
Further, the limit member is a protrusion provided on the bottom surface of the movable plate.
Further, the gear track comprises a plurality of cylindrical protrusions provided on the movable plate, the cylindrical protrusions are evenly spaced along the second direction, and the gear meshes with the protrusions; the limit track consists of a first arc-shaped limit member and a second arc-shaped limit member provided on the movable plate, the first arc-shaped limit member and the second arc-shaped limit member are respectively arranged on the two opposite sides of the gear track along the first direction, and the first arc-shaped limit member, the second arc-shaped limit member and the protrusions cooperate so that the motion trajectory of the movable plate is kidney-shaped.
Further, the elastic restoring mechanism comprises a telescopic spring.
Further, the image acquisition module comprises a mobile phone and a bracket for mounting the mobile phone.
Further, the bracket comprises a mobile phone mount and a telescopic support rod. The mobile phone mount comprises a telescopic connecting plate and folding plate groups mounted at the two opposite ends of the connecting plate; one end of the support rod is connected to the middle of the connecting plate through a damping hinge. Each folding plate group comprises a first plate, a second plate and a third plate: one of the two opposite ends of the first plate is hinged to the connecting plate, and the other is hinged to one end of the second plate; the other end of the second plate is hinged to one end of the third plate; and the second plate is provided with an opening for inserting a corner of the mobile phone. When the mobile phone mount is used to hold a mobile phone, the first plate, the second plate and the third plate are folded into a right triangle, in which the second plate is the hypotenuse and the first plate and the third plate are the legs; one side face of the third plate lies against one side face of the connecting plate, and the other end of the third plate abuts one end of the first plate.
Furthermore, a first connecting portion is provided on one side face of the third plate, and a first mating portion that mates with the first connecting portion is provided on the side face of the connecting plate that lies against the third plate; the first connecting portion and the first mating portion snap together when the mount is used to hold a mobile phone.
Furthermore, a second connecting portion is provided at one of the two opposite ends of the first plate, and a second mating portion that mates with the second connecting portion is provided at the other of the two opposite ends of the third plate; the second connecting portion snaps into the second mating portion when the mount is used to hold a mobile phone.
Furthermore, the other end of the support rod is detachably connected to a base.
According to the above technical solutions, the AR imaging method, AR imaging device and electronic device provided by the embodiments of the present invention can acquire light information in the real world using only a conventional dual-camera mobile terminal, which provides a more realistic effect for AR and reduces cost; the anti-shake hardware of the phone camera and the phone selfie bracket further enhance the shooting effect and facilitate subsequent image or video processing.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings based on them.
FIG. 1 is a flow chart of an AR imaging method provided by one embodiment of the present invention;
FIG. 2 is a flow chart of an AR imaging method provided by one embodiment of the present invention;
FIG. 3 is a flowchart of an AR imaging method provided by one embodiment of the present invention;
FIG. 4 is a block diagram of an AR imaging device provided in accordance with one embodiment of the present invention;
FIG. 5 is a block diagram of an AR imaging device provided in accordance with one embodiment of the present invention;
FIG. 6 is a schematic diagram of a hardware structure of an electronic device for performing the AR imaging method according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image acquisition module according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a first mechanical anti-shake device according to an embodiment of the present invention;
FIG. 9 is a schematic view of the bottom structure of a first movable plate according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a second mechanical anti-shake device according to an embodiment of the present invention;
FIG. 11 is a schematic view of the bottom structure of a second movable plate according to an embodiment of the present invention;
FIG. 12 is a block diagram of a stand provided in accordance with one embodiment of the present invention;
FIG. 13 is a schematic view of a state of a stand according to an embodiment of the present invention;
FIG. 14 is a schematic view of another state of a stand according to an embodiment of the present invention;
FIG. 15 is a structural diagram of the mobile phone mount according to an embodiment of the present invention when connected to a mobile phone.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 is a flowchart of an AR imaging method according to an embodiment of the present invention. As shown in fig. 1, an AR imaging method provided in an embodiment of the present invention includes:
s101, detecting a face image shot by the mobile terminal, and extracting a sub-image of a nose area in the face image.
This embodiment provides a low-cost, convenient and accurate way to determine the light conditions of the current real-world scene using the front camera of the mobile terminal while the user wears AR glasses and watches an AR video captured by the rear camera. Since the user roughly faces the screen of the mobile terminal, the light angle of the ambient light detected in the face image only needs to be mirrored before it is applied to the virtual object to be added to the current real-world scene.
Specifically, when the ambient light of the current real scene needs to be detected, a snapshot of the current scene is taken with the front camera. When the snapshot is detected to contain the user's face, facial features are extracted from the image by any suitable face detection algorithm, for example a CNN-based method or a classical one (e.g., the automatic face segmentation, facial feature extraction and tracking method based on color, shape and symmetry published by Eli Saber et al.; the face segmentation, facial feature extraction and tracking method published by Karin Sobottka et al.; the classifier-based facial feature detection method published by Phillip Wilson et al.; or a deformable-template face detection algorithm by Alan L. et al.). The nose is then extracted from the facial features to obtain the sub-image of the nose region in the face image.
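As an illustrative sketch only (the patent does not prescribe a particular detector or library), the face can be located and a rough nose region cropped as follows; the Haar-cascade detector and the fixed crop proportions are assumptions made for this example:

```python
import cv2

def extract_nose_subimage(frame_bgr):
    """Detect the largest face in a front-camera frame and crop a rough nose region."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, None                    # no face: ambient light cannot be estimated
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detection
    face = gray[y:y + h, x:x + w]
    # Assumed crop proportions: the nose roughly occupies the central band of the face box.
    nose = face[int(0.45 * h):int(0.75 * h), int(0.35 * w):int(0.65 * w)]
    return face, nose
```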
S102, determining a light intensity weighted center of the sub-image, and comparing the light intensity weighted center with the weighted center of the face image to obtain a light estimation angle.
In this step, the corresponding light intensity weighted center is determined from the image moments of the sub-image. Image moments are a set of moments computed from a digital image; they describe global features of the image and provide a large amount of information about its different geometric characteristics, such as size, position, orientation and shape. For example, the first-order moments are related to position, the second-order moments describe how far a curve spreads around its mean value, and the third-order moments measure symmetry about the mean; a set of seven invariant moments can be derived from the second- and third-order moments. These invariant moments are image statistics on which image classification can be performed; all of this is common knowledge in the art and is not described further here.
Optionally, after the light intensity weighted center of the sub-image is determined, it is compared with the weighted center of the sub-image (the weighted center here is the geometric center of the image): the coordinate of the geometric center is compared with the coordinate of the light intensity weighted center, and the direction from the geometric center to the light intensity weighted center is taken as the direction of the ambient light. A coordinate system can be established by choosing an origin, and the angle between this vector and the X axis is taken as the light estimation angle of the ambient light in the current scene. The light estimation angle may also be calculated by other non-proprietary algorithms, which the invention does not limit here. It should be noted that in the embodiments of the present invention the ambient light is assumed to be unidirectional and uniform.
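For illustration, a minimal sketch of this computation using raw image moments is given below; OpenCV and NumPy are assumed here only for the example, and the patent does not mandate them:

```python
import cv2
import numpy as np

def light_estimation_angle(nose_gray: np.ndarray) -> float:
    """Light estimation angle (degrees) of a grayscale nose sub-image.

    The light intensity weighted center is taken from the raw image moments
    (m00, m10, m01); the weighted (geometric) center of the image is its
    middle point. The angle of the vector from the geometric center to the
    intensity-weighted center is returned.
    """
    m = cv2.moments(nose_gray)                            # raw moments of the intensity image
    if m["m00"] == 0:
        return 0.0                                        # completely dark crop: no estimate
    wx, wy = m["m10"] / m["m00"], m["m01"] / m["m00"]     # intensity-weighted center
    h, w = nose_gray.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0                 # geometric center
    # Image rows grow downwards, so dy is negated to obtain a conventional X/Y angle.
    return float(np.degrees(np.arctan2(-(wy - cy), wx - cx)))
```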
As some optional implementations of the embodiment of the present invention, as shown in FIG. 2, the light estimation angle may be obtained by the following steps.
S1021, dividing the sub-image into a plurality of sub-regions, and determining the sub-light intensity weighted center of each sub-region.
S1022, comparing each sub-light intensity weighted center with the weighted center of the face image to obtain the sub-light estimation angle of each sub-region.
S1023, calculating the sub-illumination intensity of each sub-region, and determining the weight of the sub-light estimation angle of each sub-region according to its sub-illumination intensity.
S1024, calculating the light estimation angle according to the sub-light estimation angles and their weights.
Specifically, the sub-image may first be divided equally into four sub-regions, and the sub-light intensity weighted center and the sub-light estimation angle of each sub-region are determined as described above. Next, for each sub-region, the illumination intensity corresponding to that sub-region is obtained from its brightness-contrast information and similar cues; after the sub-illumination intensity of each sub-region is obtained, it is used as the weight of that sub-region's sub-light estimation angle. Finally, the weighted average of the four sub-light estimation angles is computed using their respective weights to obtain the averaged light estimation angle.
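A hedged sketch of this sub-region weighting follows. It reuses the light_estimation_angle function from the previous sketch; the use of mean brightness as the sub-illumination intensity and the averaging of angles as unit vectors are assumptions made for the example:

```python
import numpy as np

def weighted_light_angle(nose_gray: np.ndarray) -> float:
    """Combine the light estimation angles of four sub-regions of the nose sub-image."""
    h, w = nose_gray.shape
    vec = np.zeros(2)
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            quad = nose_gray[rows, cols]
            angle = np.radians(light_estimation_angle(quad))  # sub-light estimation angle
            weight = float(quad.mean())                       # stand-in for sub-illumination intensity
            vec += weight * np.array([np.cos(angle), np.sin(angle)])
    return float(np.degrees(np.arctan2(vec[1], vec[0])))
```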
As optional implementations of the embodiment of the present invention, whether the calculated light estimation angle is accurate may also be determined in the following ways.
In a first embodiment, a distance value between the light intensity weighted center and the weighted center of the face image is calculated, and a confidence factor reflecting the contrast between the regions adjacent to the nose tip in the nose region is determined based on the distance value.
By measuring the distance between the weighted center of the face image and the light intensity weighted center, a distance value is obtained that increases as the contrast between the shadowed and lit portions of the nose region increases. In diffuse ambient light with no specific light direction, the distance value will be small or zero. The distance value therefore provides a confidence factor for the contrast between the nose region and the regions adjacent to the nose tip. Since a specific light source is almost always present when a face is photographed, a larger distance value means a higher confidence, i.e. a more accurate light estimation angle, and the angle can more reliably be taken to represent the direction of the ambient light.
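As a minimal sketch (the normalization by half the image diagonal is an assumption; the text above only states that the confidence grows with the distance value):

```python
import numpy as np

def confidence_factor(weighted_center, geometric_center, image_shape) -> float:
    """Confidence in the light estimation angle, mapped to [0, 1].

    Assumption for this sketch: the confidence is the distance between the
    light intensity weighted center and the geometric center, normalized by
    half the image diagonal. Values near 0 indicate diffuse light; values
    near 1 indicate a strong directional light source.
    """
    d = np.linalg.norm(np.subtract(weighted_center, geometric_center))
    h, w = image_shape[:2]
    return float(min(1.0, d / (0.5 * np.hypot(h, w))))
```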
In a second embodiment, a sub-distance value between each sub-light intensity weighted center and the weighted center of the face image is calculated, and a sub-confidence factor is determined based on each sub-distance value; the confidence factor is then determined according to the sub-confidence factors and the weights of the sub-light estimation angles, the confidence factor again reflecting the contrast between the regions adjacent to the nose tip in the nose region.
When the light estimation angle is calculated by decomposing the nose region into a plurality of sub-images in step S102, the confidence factor may be determined using the second embodiment, which improves the accuracy of the confidence factor.
As an optional implementation of the embodiment of the present invention, the confidence and accuracy of the result can be increased by additionally examining the user's eye sockets while the nose region of the face is examined, especially when a low confidence is detected.
Specifically, as shown in fig. 3, the method includes:
s1021', extracting an eye socket area in the face image, and comparing the illumination intensity of the eye socket.
S1022', when the difference value of the illumination intensities of the eye sockets is greater than or equal to a preset threshold value, determining the head inclination angle of the human face according to the illumination intensities of the eye sockets.
And S1023', adjusting the light ray estimation angle according to the face inclination angle.
Specifically, if the head is not tilted, the illumination intensities of the two eye sockets of the face should be the same or differ only slightly; when the difference between the two intensities is greater than or equal to the preset threshold, it indicates that the user's head was tilted when the face image of step S101 was captured. Optionally, a tilt-angle model may be obtained in advance by machine-learning training, and the illumination-intensity difference is fed into this model to obtain the head tilt angle of the face, where the head tilt angle is the angle by which the head deviates from the upright 90-degree position relative to the horizontal. The quotient of the tilt angle and a right angle can be used as a correction coefficient, and the light estimation angle is adjusted based on this correction coefficient, which removes the influence of head tilt on the result.
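The sketch below illustrates this correction; tilt_model stands in for the pre-trained tilt-angle model mentioned above, and the way the correction coefficient is applied is an assumption made for the example:

```python
def correct_for_head_tilt(light_angle_deg, left_orbit_intensity,
                          right_orbit_intensity, tilt_model, threshold=10.0):
    """Adjust the light estimation angle for head tilt.

    `tilt_model` is a placeholder for the pre-trained tilt-angle model named
    in the text: it maps the orbit illumination-intensity difference to a head
    tilt angle in degrees. The correction coefficient is the quotient of the
    tilt angle and a right angle (90 degrees); scaling the angle by it is an
    assumption made for this sketch.
    """
    diff = left_orbit_intensity - right_orbit_intensity
    if abs(diff) < threshold:
        return light_angle_deg                 # head is roughly upright: no correction
    tilt_deg = tilt_model(diff)                # hypothetical pre-trained model
    correction = tilt_deg / 90.0
    return light_angle_deg * (1.0 - correction)
```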
S103, obtaining a light angle of a real scene according to the light estimation angle and the rotation angle of the mobile terminal, the real scene being the real world currently captured by the mobile terminal.
In this step, since the front camera captures the user's face while the AR video is viewed through the rear camera, the light estimation angle needs to be mirrored to switch between the two image orientations. In addition, since the user may rotate the mobile terminal while watching the AR video, the light angle of the ambient light in the real scene needs to be constructed from the light estimation angle and the rotation angle of the mobile terminal. Optionally, the light estimation angle may be rotated by the rotation angle of the mobile terminal to obtain the light angle of the real scene.
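A minimal sketch, assuming a horizontal mirror for the front-to-rear camera switch and a device rotation angle expressed in degrees (both conventions are assumptions for the example):

```python
def real_scene_light_angle(light_estimation_deg: float,
                           device_rotation_deg: float) -> float:
    """Mirror the front-camera estimate and account for device rotation.

    A horizontal mirror (theta -> 180 - theta) is assumed for the switch from
    the front-camera view to the rear-camera view; the device rotation is then
    added and the result wrapped to [0, 360).
    """
    mirrored = 180.0 - light_estimation_deg
    return (mirrored + device_rotation_deg) % 360.0
```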
S104, generating a shadow image of the virtual object according to the light angle of the real scene and the preset position of the virtual object in the real scene.
In this step, the shadow position of the virtual object in the target picture may be determined according to the light angle of the real scene and the preset position of the virtual object. Next, the shadow shape at that position is determined from the shape of the virtual object, and the shadow image of the virtual object is generated from the shadow position and the shadow shape. When the user wears AR glasses to watch the AR video, the shadow of the virtual object can be seen, i.e. a realistic shading effect of the virtual object obtained from the light of the current real scene.
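For illustration only, a planar-shadow sketch is given below; the ground plane and the fixed light elevation are assumptions, since the method described above supplies only the azimuth (the light angle of the real scene):

```python
import numpy as np

def shadow_anchor(object_xy, object_height: float,
                  light_angle_deg: float, light_elevation_deg: float = 45.0) -> np.ndarray:
    """Place the shadow of a virtual object on an assumed ground plane.

    The shadow is offset from the object's base, opposite to the light
    direction, by height / tan(elevation). The fixed elevation angle is an
    assumption for the example.
    """
    azimuth = np.radians(light_angle_deg)
    length = object_height / np.tan(np.radians(light_elevation_deg))
    offset = -length * np.array([np.cos(azimuth), np.sin(azimuth)])
    return np.asarray(object_xy, dtype=float) + offset
```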
According to the AR imaging method provided by the embodiment of the present invention, light information in the real world can be acquired using only a conventional dual-camera mobile terminal, which provides a more realistic AR effect and reduces cost. In addition, the embodiment of the present invention can be used indoors and outdoors, in the daytime or at night, does not need to know the real-world time or obtain the local position via GPS, and places low performance requirements on the mobile terminal.
Fig. 4 is a structural diagram of an AR imaging apparatus according to an embodiment of the present invention. As shown in fig. 4, the apparatus specifically includes: an extraction module 100, a comparison module 200, an obtaining module 300 and a generating module 400. Wherein:
the extraction module 100 is configured to detect a face image captured by the mobile terminal and extract a sub-image of the nose region in the face image; the comparison module 200 is configured to determine a light intensity weighted center of the sub-image and compare it with the weighted center of the face image to obtain a light estimation angle; the obtaining module 300 is configured to obtain the light angle of the real scene according to the light estimation angle and the rotation angle of the mobile terminal, the real scene being the real world currently captured by the mobile terminal; and the generating module 400 is configured to generate a shadow image of the virtual object according to the light angle of the real scene and a preset position of the virtual object in the real scene.
The AR imaging apparatus provided in this embodiment of the present invention is specifically configured to perform the method provided in the embodiment shown in fig. 1; its implementation principle, method and functional purpose are similar to those of the embodiment shown in fig. 1 and are not described here again.
Fig. 5 is a structural diagram of an AR imaging apparatus according to an embodiment of the present invention. As shown in fig. 5, the apparatus specifically includes: an extraction module 100, a comparison module 200, an obtaining module 300 and a generating module 400. Wherein:
the extraction module 100 is configured to detect a face image captured by the mobile terminal and extract a sub-image of the nose region in the face image; the comparison module 200 is configured to determine a light intensity weighted center of the sub-image and compare it with the weighted center of the face image to obtain a light estimation angle; the obtaining module 300 is configured to obtain the light angle of the real scene according to the light estimation angle and the rotation angle of the mobile terminal, the real scene being the real world currently captured by the mobile terminal; and the generating module 400 is configured to generate a shadow image of the virtual object according to the light angle of the real scene and a preset position of the virtual object in the real scene.
Specifically, the comparison module 200 includes a decomposition unit 210, a comparison unit 220, a determination unit 230 and a calculation unit 240. Wherein:
the decomposition unit 210 is configured to divide the sub-image into a plurality of sub-regions and determine the sub-light intensity weighted center of each sub-region; the comparison unit 220 is configured to compare each sub-light intensity weighted center with the weighted center of the face image to obtain the sub-light estimation angle of each sub-region; the determination unit 230 is configured to calculate the sub-illumination intensity of each sub-region and determine the weight of the sub-light estimation angle of each sub-region according to its sub-illumination intensity; and the calculation unit 240 is configured to calculate the light estimation angle according to the sub-light estimation angles and their weights.
Specifically, the comparison module 200 further includes an adjustment unit 250. The adjustment unit 250 is configured to extract the eye socket regions in the face image and compare the illumination intensities of the two eye sockets; when the difference between the illumination intensities of the eye sockets is greater than or equal to a preset threshold, determine the head tilt angle of the face according to the illumination intensities of the eye sockets; and adjust the light estimation angle according to the head tilt angle of the face.
Optionally, the comparison module 200 is further configured to calculate a distance value between the light intensity weighted center and the weighted center of the face image, and determine a confidence factor based on the distance value, the confidence factor reflecting the contrast between the regions adjacent to the nose tip in the nose region.
Optionally, the comparison module 200 is further configured to calculate a sub-distance value between each sub-light intensity weighted center and the weighted center of the face image and determine a sub-confidence factor based on each sub-distance value; and to determine a confidence factor according to the sub-confidence factors and the weights of the sub-light estimation angles, the confidence factor reflecting the contrast between the regions adjacent to the nose tip in the nose region.
Optionally, the comparison module 200 further includes an adjustment unit 250 configured to extract the eye socket regions in the face image and compare the illumination intensities of the two eye sockets; when the difference between the illumination intensities of the eye sockets is greater than or equal to a preset threshold, determine the head tilt angle of the face according to the illumination intensities of the eye sockets; and adjust the light estimation angle according to the head tilt angle of the face.
The AR imaging apparatus provided in the embodiment of the present invention is specifically configured to perform the method provided in the embodiment shown in fig. 1 to 3, and the implementation principle, the method, and the functional use thereof are similar to those of the embodiment shown in fig. 1 to 3, and are not described herein again.
The AR imaging apparatus according to the embodiments of the present invention may be disposed independently in the electronic device as a software or hardware functional unit, or may be integrated in a processor as a functional module, to execute the AR imaging method of the embodiments of the present invention.
As shown in fig. 6, an electronic device provided in an embodiment of the present invention includes: one or more processors 610 and a memory 620, one processor 610 being taken as an example in fig. 6. The apparatus for performing the AR imaging method may further include: an input device 630 and an output device 640. Wherein:
the processor 610, the memory 620, the input device 630, and the output device 640 may be connected by a bus or other means, such as the bus connection in fig. 6.
The memory 620, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the AR imaging method in the embodiments of the present invention. The processor 610 performs various functional applications of the server and data processing, i.e., implements the AR imaging method, by executing nonvolatile software programs, instructions, and modules stored in the memory 620.
The memory 620 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created by the use of the AR imaging apparatus provided according to an embodiment of the present invention, and the like. Further, the memory 620 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 620 optionally includes memory located remotely from the processor 610, and such remote memory may be connected to the AR imaging apparatus via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 630 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the AR imaging device. The input device 630 may include a pressing module or the like.
The one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the AR imaging method.
The electronic device of the embodiments of the present invention exists in a variety of forms, including but not limited to:
(1) Mobile communication devices, which are characterized by mobile communication capability and are primarily aimed at providing voice and data communication. Such terminals include smart phones (e.g., iPhone), multimedia phones, feature phones and low-end phones.
(2) Ultra-mobile personal computer devices, which belong to the category of personal computers, have computing and processing functions and generally also provide mobile internet access. Such terminals include PDA, MID and UMPC devices, such as the iPad.
(3) Portable entertainment devices, which can display and play multimedia content. Such devices include audio and video players (e.g., iPod), handheld game consoles, electronic books, smart toys and portable car navigation devices.
(4) Servers.
(5) Other electronic devices with data interaction functions.
Specifically, the electronic device includes an image acquisition module. As shown in fig. 7, the image acquisition module of this embodiment includes a lens 1000, an auto-focus voice coil motor 2000, a mechanical anti-shake device 3000 and an image sensor 4000. The lens 1000 is fixedly mounted on the auto-focus voice coil motor 2000 and is used to acquire images; the image sensor 4000 transmits the images acquired by the lens 1000 to the identification module; the auto-focus voice coil motor 2000 is mounted on the mechanical anti-shake device 3000; and the processing module drives the mechanical anti-shake device 3000 to act according to the shake of the lens 1000 detected and fed back by a gyroscope in the lens 1000, thereby realizing shake compensation of the lens 1000.
Most existing anti-shake devices drive the lens 1000 with the Lorentz force generated by an energized coil in a magnetic field. To achieve optical anti-shake, the lens 1000 must be driven in at least two directions, which means that several coils are needed; this poses a challenge to miniaturization of the overall structure, and the coils are easily disturbed by external magnetic fields, which in turn degrades the anti-shake effect. Chinese patent publication No. CN106131435A provides a miniature optical anti-shake camera module in which the elongation and shortening of a memory alloy wire, produced by temperature changes, pull the auto-focus voice coil motor 2000 to move and thereby compensate the shake of the lens 1000. The control chip of the miniature memory-alloy optical anti-shake actuator controls the driving signal to change the temperature of the memory alloy wire, thereby controlling its elongation and contraction, and the position and travel of the actuator are calculated from the resistance of the memory alloy wire. When the actuator moves to a specified position, the resistance of the memory alloy wire at that moment is fed back, and the deviation between this resistance and the target value is used to correct the deviation of the actuator's movement.
However, the applicant has found that, because shake is random and uncertain, the above structure cannot accurately compensate the lens 1000 when shake occurs repeatedly: both heating and cooling of the shape memory alloy take a certain amount of time. When shake occurs in a first direction, the above solution can compensate the lens 1000 for that shake, but if a subsequent shake then occurs in a second direction, the memory alloy wire cannot deform instantly, so the compensation is not timely. The technique therefore cannot accurately compensate the lens 1000 for multiple shakes and for continuous shake in different directions, and a structural improvement around the lens 1000 is required.
With reference to fig. 8-11, the optical anti-shake device of this embodiment is therefore redesigned as a mechanical anti-shake device 3000, whose specific structure is as follows:
the mechanical anti-shake device 3000 of the present embodiment includes a movable plate 3100, a movable frame 3200, an elastic restoring mechanism 3300, a substrate 3400, and a compensating mechanism 3500; the movable plate 3100 and the substrate 3400 are provided at the middle portions thereof with a through hole 3700 through which the lens passes, the auto-focus voice coil motor is mounted on the movable plate 3100, and the movable plate 3100 is mounted in the movable frame 3200, and as can be seen from the drawing, the width of the movable plate 3100 in the left-right direction is substantially the same as the inner width of the movable frame 3200, so that opposite sides (left and right sides) of the movable plate 3100 are slidably engaged with inner walls of opposite sides (left and right sides) of the movable frame 3200, so that the movable plate 3100 is reciprocally slidable in a first direction, i.e., a vertical direction in the drawing, in the movable frame 3200.
Specifically, the movable frame 3200 of this embodiment is smaller than the substrate 3400, and its two opposite sides are each connected to the substrate 3400 through an elastic restoring mechanism 3300. The elastic restoring mechanism 3300 of this embodiment is a telescopic spring or another elastic member. It should be noted that the elastic restoring mechanism 3300 only allows the movable frame 3200 to stretch and rebound along the left-right direction in the drawings (i.e. the second direction described below) and does not allow it to move along the first direction. The elastic restoring mechanism 3300 is designed to help the movable frame 3200 bring the movable plate 3100 back after a compensating displacement; the specific operation is described in detail in the working process below.
The compensation mechanism 3500 of this embodiment, driven by the processing module (for example, by a motion command sent by the processing module), drives the movable plate 3100 and the lens on it to move so as to realize shake compensation of the lens.
Specifically, the compensation mechanism 3500 of this embodiment includes a drive shaft 3510, a gear 3520, a gear track 3530 and a limit track 3540. The drive shaft 3510 is mounted on the substrate 3400, specifically on its upper surface, and is in transmission connection with the gear 3520; the drive shaft 3510 can be driven by a micro motor (not shown) or another structure, and the micro motor is controlled by the processing module. The gear track 3530 is provided on the movable plate 3100; the gear 3520 is mounted in the gear track 3530 and moves along the preset direction of the gear track 3530, and when the gear 3520 rotates, the gear track 3530 causes the movable plate 3100 to be displaced in a first direction and in a second direction, the first direction being perpendicular to the second direction. The limit track 3540 is provided on the movable plate 3100 or the substrate 3400 and prevents the gear 3520 from disengaging from the gear track 3530.
Specifically, the gear track 3530 and the limit track 3540 of this embodiment can take either of the following two structural forms:
as shown in fig. 7-9, a waist-shaped hole 3550 is disposed at a lower side of the movable plate 3100, the waist-shaped hole 3550 is disposed along a circumferential direction (i.e., a surrounding direction of the waist-shaped hole 3550) thereof with a plurality of teeth 3560 engaged with the gear 3520, the waist-shaped hole 3550 and the plurality of teeth 3560 together form the gear rail 3530, and the gear 3520 is located in the waist-shaped hole 3550 and engaged with the teeth 3560, such that the gear 3520 can drive the gear rail 3530 to move when rotating, and further directly drive the movable plate 3100 to move; in order to ensure that the gear 3520 can constantly keep meshed with the gear rail 3530 during rotation, in the embodiment, the limiting rail 3540 is arranged on the base plate 3400, the bottom of the movable plate 3100 is provided with a limiting member installed in the limiting rail 3540, and the limiting rail 3540 makes the motion track of the limiting member in a kidney shape, that is, the motion track of the limiting member in the limiting rail is the same as the motion track of the movable plate 3100, specifically, the limiting member of the embodiment is a protrusion arranged on the bottom surface of the movable plate 3100.
As shown in fig. 10 and 11, the gear track 3530 of this embodiment may instead consist of a plurality of cylindrical protrusions 3580 provided on the movable plate 3100, evenly spaced along the second direction, with the gear 3520 meshing with these protrusions. The limit track 3540 consists of a first arc-shaped limit member 3590 and a second arc-shaped limit member 3600 provided on the movable plate 3100, arranged on the two opposite sides of the gear track 3530 along the first direction. When the movable plate 3100 moves to a certain position, the gear 3520 is located at one end of the gear track 3530 and could easily disengage from the track formed by the cylindrical protrusions 3580; the first arc-shaped limit member 3590 or the second arc-shaped limit member 3600 then acts as a guide so that the movable plate 3100 keeps moving along the preset direction of the gear track 3530. In other words, the first arc-shaped limit member 3590, the second arc-shaped limit member 3600 and the protrusions cooperate so that the trajectory of the movable plate 3100 is kidney-shaped.
The operation of the mechanical anti-shake device 3000 of this embodiment is described in detail below with reference to the above structure, taking as an example two shakes of the lens 1000 in opposite directions, which require the movable plate 3100 to be motion-compensated once in the first direction and then once in the second direction. When the movable plate 3100 needs to be compensated in the first direction, the gyroscope feeds the detected shake direction and distance of the lens 1000 back to the processing module in advance; the processing module calculates the required travel of the movable plate 3100 and wirelessly sends a driving signal, so that the drive shaft 3510 rotates the gear 3520 and the gear 3520, cooperating with the gear track 3530 and the limit track 3540, drives the movable plate 3100 to the compensation position in the first direction. After compensation, the movable plate 3100 is driven back by the drive shaft 3510; during the return, the elastic restoring mechanism 3300 also provides a restoring force that helps the movable plate 3100 return to its initial position. When the movable plate 3100 needs motion compensation in the second direction, the processing is the same as the compensation step in the first direction and is not repeated here.
Of course, the above describes only two simple shakes. When multiple shakes occur, or when the shake direction does not simply reciprocate, compensation can still be achieved by driving the compensation mechanism repeatedly; the basic working process is the same as described above and is not repeated here. The detection and feedback by the gyroscope and the sending of control commands to the drive shaft 3510 by the processing module are prior art and are likewise not elaborated here.
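A minimal sketch of one compensation cycle is given below for illustration; the gyroscope and motor interfaces are placeholders assumed for the example, not part of the patent:

```python
def compensate_shake(gyro, motor, gain=1.0):
    """One iteration of the shake-compensation loop (illustrative only).

    `gyro.read_offset()`, `motor.move_plate()` and `motor.reset_plate()` are
    placeholder interfaces assumed for this sketch; the text above only states
    that the gyroscope feeds the shake back to the processing module, which
    drives the drive shaft 3510 to move the movable plate 3100 and then lets
    it return to its initial position.
    """
    dx, dy = gyro.read_offset()                # detected shake in the first/second direction
    if abs(dx) < 1e-6 and abs(dy) < 1e-6:
        return                                 # no shake detected: nothing to compensate
    motor.move_plate(-gain * dx, -gain * dy)   # drive the plate opposite to the shake
    motor.reset_plate()                        # return; the elastic restoring mechanism assists
```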
As can be seen from the above description, the mechanical compensation mechanism provided by this embodiment is not disturbed by external magnetic fields, has a good anti-shake effect, and can accurately compensate the lens 1000 when multiple shakes occur, with timely and accurate compensation. In addition, the mechanical anti-shake device of this embodiment has a simple structure, its components require little installation space, the anti-shake device as a whole is easy to integrate, and the compensation precision is high.
Specifically, the electronic device of this embodiment includes a mobile phone and a bracket for mounting the mobile phone. The bracket is included because of the uncertainty of the image acquisition environment: it is used to support and fix the electronic device.
In addition, the applicant found that existing mobile phone supports only support a mobile phone and cannot serve as a selfie stick, so a first improvement was made to the support by combining the mobile phone mounting seat 5100 with the support rod 5200. Referring to fig. 12, the support 5000 of this embodiment includes a mobile phone mounting seat 5100 and a retractable support rod 5200, and the support rod 5200 is connected to the middle portion of the mobile phone mounting seat 5100 (specifically, the middle portion of the connecting plate 5110 described below) through a damping hinge. When the support rod 5200 is rotated to the state of fig. 13, the support 5000 forms a selfie-stick structure, and when the support rod 5200 is rotated to the state of fig. 14, the support 5000 forms a mobile phone stand structure.
In using the above support structure, the applicant found that after the mobile phone mounting seat 5100 is combined with the support rod 5200 the assembly occupies a large space: even though the support rod 5200 is retractable, the mobile phone mounting seat 5100 cannot change its structure, so its size cannot be further reduced and it cannot be placed in a pocket or a small bag, which makes the support 5000 inconvenient to carry. A second improvement is therefore made to the support 5000 in this embodiment, so that the overall storability of the support 5000 is further improved.
As shown in fig. 12-15, the mobile phone mounting seat 5100 of this embodiment includes a retractable connecting plate 5110 and folding plate groups 5120 mounted at two opposite ends of the connecting plate 5110, wherein the support rod 5200 is connected to the middle portion of the connecting plate 5110 by a damping hinge. Each folding plate group 5120 includes a first plate body 5121, a second plate body 5122 and a third plate body 5123: one end of the first plate body 5121 is hinged to the connecting plate 5110, and its other end is hinged to one end of the second plate body 5122; the other end of the second plate body 5122 is hinged to one end of the third plate body 5123; and the second plate body 5122 is provided with an opening 5130 for inserting a corner of the mobile phone.
Referring to fig. 15, when the mobile phone mounting seat 5100 is used for mounting a mobile phone, the first plate body 5121, the second plate body 5122 and the third plate body 5123 are folded to form a right triangle, with the second plate body 5122 as the hypotenuse and the first plate body 5121 and the third plate body 5123 as the right-angled sides. One side surface of the third plate body 5123 rests against one side surface of the connecting plate 5110, and the other end of the third plate body 5123 abuts against one end of the first plate body 5121, so that the three folding plates are in a self-locking state. When the two lower corners of the mobile phone are inserted into the two openings 5130 on the two sides, the two lower sides of the mobile phone 6000 are located in the two right triangles, and the mobile phone 6000 is fixed by the cooperation between the mobile phone, the connecting plate 5110 and the folding plate groups 5120. The triangular state cannot be opened by external force, and can be released only after the mobile phone is drawn out of the openings 5130.
When the mobile phone mounting seat 5100 is not in use, the connecting plate 5110 is retracted to its minimum length and the folding plate groups 5120 and the connecting plate 5110 are folded against each other, so that the user can fold the mobile phone mounting seat 5100 into its minimum volume. Because the support rod 5200 is retractable, the whole support 5000 can be stowed into a minimum-volume state, which improves the storability of the support 5000; the user can even put the support 5000 directly into a pocket or a small handbag, which is very convenient.
Preferably, in this embodiment, a first connecting portion is further provided on one side surface of the third plate body 5123, and a first matching portion that mates with the first connecting portion is provided on the side surface of the connecting plate 5110 that rests against the third plate body 5123; when the mobile phone mounting seat 5100 of the support 5000 is used for mounting a mobile phone, the first connecting portion is snap-fitted to the first matching portion. Specifically, the first connecting portion of this embodiment is a rib or protrusion (not shown), and the first matching portion is a slot (not shown) formed in the connecting plate 5110. This structure not only improves the stability of the folding plate group 5120 in the triangular state, but also facilitates connecting the folding plate group 5120 to the connecting plate 5110 when the mobile phone mounting seat 5100 needs to be folded to its minimum state.
Preferably, in this embodiment, a second connecting portion is further provided at one end of the first plate body 5121, and a second matching portion that mates with the second connecting portion is provided at the other end of the third plate body 5123; when the support 5000 is used for mounting a mobile phone, the second connecting portion engages with the second matching portion. The second connecting portion may be a protrusion (not shown), and the second matching portion may be the opening 5130 or a slot (not shown) that mates with the protrusion. This structure further improves the stability of the folding plate group 5120 in the triangular state.
In addition, in this embodiment, a base (not shown in the figures) may be detachably connected to the other end of the support rod 5200. When the mobile phone 6000 needs to be fixed at a certain height, the support rod 5200 can be extended to the required length, the support 5000 can be placed on a flat surface via the base, and the mobile phone can then be placed in the mobile phone mounting seat 5100 to complete the fixing. The detachable connection between the support rod 5200 and the base allows the two to be carried separately, which further improves the storability and carrying convenience of the support 5000.
The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Embodiments of the present invention provide a non-transitory computer-readable storage medium, which stores computer-executable instructions, where the computer-executable instructions, when executed by an electronic device, cause the electronic device to execute an AR imaging method in any of the above method embodiments.
Embodiments of the present invention provide a computer program product, wherein the computer program product comprises a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions, wherein the program instructions, when executed by an electronic device, cause the electronic device to perform the AR imaging method in any of the above method embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. Based on this understanding, the above technical solutions, or the portions thereof that contribute to the prior art, may be embodied in the form of a software product stored on a computer-readable storage medium, which includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory storage media, and electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). The computer software product includes instructions for causing a computing device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments or in portions thereof.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the embodiments of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An AR imaging method, comprising:
detecting a face image shot by a mobile terminal, and extracting a sub-image of a nose region in the face image;
determining a light intensity weighting center of the sub-image based on light rays, and comparing the light intensity weighting center with the weighting center of the face image to obtain a light ray estimation angle;
obtaining a light angle of a real scene according to the light estimation angle and the rotation angle of the mobile terminal, wherein the real scene is a real world currently shot by the mobile terminal;
and generating a shadow image of the virtual object according to the light angle of the real scene and the preset position of the virtual object in the real scene.
2. The method of claim 1, wherein determining a light intensity weighted center of the sub-image based on the light rays, and comparing the light intensity weighted center with a weighted center of the face image to obtain an estimated angle of the light rays comprises:
dividing the sub-image into a plurality of sub-regions, and determining the sub-light intensity weighting center of each sub-region;
comparing each sub-light intensity weighting center with the weighting center of the face image to obtain the sub-light ray estimation angle of each sub-area;
calculating the sub-illumination intensity of each sub-region, and determining the weight of the sub-ray estimation angle of each sub-region according to the sub-illumination intensity of each sub-region;
and calculating to obtain the light ray estimation angle according to each sub light ray estimation angle and the weight of the sub light ray estimation angle.
3. The method of claim 1, further comprising:
and calculating a distance value between the light intensity weighted center and the weighted center of the face image, and determining a confidence factor based on the distance value, wherein the confidence factor reflects the contrast between adjacent regions of the nose tip in the nose region.
4. The method of claim 2, further comprising:
calculating sub-distance values between the sub-light intensity weighting centers and the weighting center of the face image, and determining sub-confidence factors based on the sub-distance values;
and determining a confidence factor according to each sub-confidence factor and the weight of the sub-ray estimation angle, wherein the confidence factor reflects the contrast between adjacent regions of the nose tip in the nose region.
5. The method according to any one of claims 1-4, further comprising:
extracting the eye socket regions in the face image, and comparing the illumination intensities of the eye sockets;
when the difference in illumination intensity between the eye sockets is greater than or equal to a preset threshold value, determining the face inclination angle according to the illumination intensities of the eye sockets;
and adjusting the light ray estimation angle according to the face inclination angle.
6. An AR imaging apparatus, comprising:
the extraction module is used for detecting a face image shot by the mobile terminal and extracting a sub-image of a nose region in the face image;
the comparison module is used for determining a light intensity weighting center of the sub-image based on light rays, and comparing the light intensity weighting center with the weighting center of the face image to obtain a light ray estimation angle;
the obtaining module is used for obtaining a light angle of a real scene according to the light estimation angle and the rotation angle of the mobile terminal, wherein the real scene is a real world currently shot by the mobile terminal;
and the generating module is used for generating a shadow image of the virtual object according to the light angle of the real scene and the preset position of the virtual object in the real scene.
7. The apparatus of claim 6, wherein the comparison module comprises:
the decomposition unit is used for dividing the sub-image into a plurality of sub-regions and determining the sub-light intensity weighting center of each sub-region;
a comparison unit for comparing each sub-light intensity weighting center with the weighting center of the face image to obtain the sub-light ray estimation angle of each sub-region;
The determining unit is used for calculating the sub-illumination intensity of each sub-region and determining the weight of the sub-ray estimation angle of each sub-region according to the sub-illumination intensity of each sub-region;
and the calculating unit is used for calculating to obtain the light ray estimation angle according to each sub light ray estimation angle and the weight of the sub light ray estimation angle.
8. The apparatus of claim 6, wherein the comparing module is further configured to calculate a distance value between the weighted center of light intensity and the weighted center of the face image, and determine a confidence factor based on the distance value, wherein the confidence factor reflects a contrast between adjacent regions of the tip of the nose in the nose region.
9. The apparatus of claim 7, wherein the comparing module is further configured to calculate a sub-distance value between each of the sub-intensity weighted centers and the weighted center of the face image, and determine a sub-confidence factor based on the sub-distance values; and determining a confidence factor according to each sub-confidence factor and the weight of the sub-ray estimation angle, wherein the confidence factor reflects the contrast between adjacent regions of the nose tip in the nose region.
10. An electronic device, comprising: at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the AR imaging method of any one of claims 1 to 5.
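For readers who want a concrete picture of the method claims, the sketch below outlines one plausible way to compute the sub-region weighted light estimation angle of claims 1 and 2 with ordinary image tools. It is an illustrative reading only: the grid size, the intensity-centroid formula, the arctangent convention, and the NumPy-based helper names are assumptions introduced here and are not recited in the claims.

```python
import numpy as np

def intensity_weighted_center(gray: np.ndarray) -> np.ndarray:
    """Light intensity weighted center (row, col) of a grayscale region."""
    total = gray.sum()
    rows, cols = np.indices(gray.shape)
    return np.array([(rows * gray).sum(), (cols * gray).sum()]) / max(float(total), 1e-9)

def estimate_light_angle(face_gray: np.ndarray, nose_box: tuple,
                         grid: tuple = (2, 2)) -> float:
    """Sketch of claims 1-2: weighted light estimation angle from the nose sub-image.

    nose_box = (top, left, height, width) of the nose region; the face/nose
    detection itself is outside this sketch.
    """
    t, l, h, w = nose_box
    face_center = intensity_weighted_center(face_gray)

    angles, weights = [], []
    gr, gc = grid
    for i in range(gr):
        for j in range(gc):
            r0, r1 = t + i * h // gr, t + (i + 1) * h // gr
            c0, c1 = l + j * w // gc, l + (j + 1) * w // gc
            sub = face_gray[r0:r1, c0:c1]
            # Sub light intensity weighted center, in face-image coordinates.
            sub_center = intensity_weighted_center(sub) + np.array([r0, c0])
            # Compare with the weighted center of the face image to get the
            # sub light ray estimation angle of this sub-region.
            d_row, d_col = sub_center - face_center
            angles.append(np.arctan2(d_row, d_col))
            # Weight of each sub angle: the sub-region's mean illumination intensity.
            weights.append(sub.mean())

    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return float(np.dot(weights, angles))

# Per claim 1, the light angle of the real scene would then combine this estimate
# with the rotation angle reported by the mobile terminal, e.g.:
# scene_angle = estimate_light_angle(gray, nose_box) + device_rotation_angle
```

A shadow image of the virtual object would then be generated from that scene light angle and the preset position of the virtual object, as recited in the last step of claim 1; that rendering step is not shown here.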
CN201811110297.1A 2018-09-20 2018-09-21 AR imaging method and device and electronic equipment Active CN109214351B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2018/106784 2018-09-20
PCT/CN2018/106784 WO2020056689A1 (en) 2018-09-20 2018-09-20 Ar imaging method and apparatus and electronic device

Publications (2)

Publication Number Publication Date
CN109214351A CN109214351A (en) 2019-01-15
CN109214351B true CN109214351B (en) 2020-07-07

Family

ID=64985149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811110297.1A Active CN109214351B (en) 2018-09-20 2018-09-21 AR imaging method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN109214351B (en)
WO (1) WO2020056689A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033423B (en) * 2019-04-16 2020-08-28 北京字节跳动网络技术有限公司 Method and apparatus for processing image
WO2021151380A1 (en) * 2020-01-30 2021-08-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for rendering virtual object based on illumination estimation, method for training neural network, and related products
CN111340931A (en) * 2020-02-17 2020-06-26 广州虎牙科技有限公司 Scene processing method and device, user side and storage medium
CN116224600A (en) * 2020-11-17 2023-06-06 闪耀现实(无锡)科技有限公司 Augmented reality device, control method thereof and wearable augmented reality equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102426695A (en) * 2011-09-30 2012-04-25 北京航空航天大学 Virtual-real illumination fusion method of single image scene
CN103400119A (en) * 2013-07-31 2013-11-20 南京融图创斯信息科技有限公司 Face recognition technology-based mixed reality spectacle interactive display method
CN105182662A (en) * 2015-09-28 2015-12-23 神画科技(深圳)有限公司 Projection method and system with augmented reality effect
CN105741343A (en) * 2016-01-28 2016-07-06 联想(北京)有限公司 Information processing method and electronic equipment
CN107025683A (en) * 2017-03-30 2017-08-08 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107564089A (en) * 2017-08-10 2018-01-09 腾讯科技(深圳)有限公司 Three dimensional image processing method, device, storage medium and computer equipment
CN107749076A (en) * 2017-11-01 2018-03-02 太平洋未来科技(深圳)有限公司 The method and apparatus that real illumination is generated in augmented reality scene
CN108021241A (en) * 2017-12-01 2018-05-11 西安枭龙科技有限公司 A kind of method for realizing AR glasses virtual reality fusions
CN108305317A (en) * 2017-08-04 2018-07-20 腾讯科技(深圳)有限公司 A kind of image processing method, device and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240277B (en) * 2013-06-24 2019-07-19 腾讯科技(深圳)有限公司 Augmented reality exchange method and system based on Face datection
US9852550B2 (en) * 2015-08-05 2017-12-26 Civic Resource Group International Inc. System and method of markerless injection of ads in AR
CN105825544B (en) * 2015-11-25 2019-08-20 维沃移动通信有限公司 A kind of image processing method and mobile terminal
US10373385B2 (en) * 2016-12-14 2019-08-06 Microsoft Technology Licensing, Llc Subtractive rendering for augmented and virtual reality systems
CN108427498A (en) * 2017-02-14 2018-08-21 深圳梦境视觉智能科技有限公司 A kind of exchange method and device based on augmented reality
CN106940897A (en) * 2017-03-02 2017-07-11 苏州蜗牛数字科技股份有限公司 A kind of method that real shadow is intervened in AR scenes
CN107134005A (en) * 2017-05-04 2017-09-05 网易(杭州)网络有限公司 Illumination adaptation method, device, storage medium, processor and terminal
CN107656619A (en) * 2017-09-26 2018-02-02 广景视睿科技(深圳)有限公司 A kind of intelligent projecting method, system and intelligent terminal
CN107749075B (en) * 2017-10-26 2021-02-12 太平洋未来科技(深圳)有限公司 Method and device for generating shadow effect of virtual object in video

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102426695A (en) * 2011-09-30 2012-04-25 北京航空航天大学 Virtual-real illumination fusion method of single image scene
CN103400119A (en) * 2013-07-31 2013-11-20 南京融图创斯信息科技有限公司 Face recognition technology-based mixed reality spectacle interactive display method
CN105182662A (en) * 2015-09-28 2015-12-23 神画科技(深圳)有限公司 Projection method and system with augmented reality effect
CN105741343A (en) * 2016-01-28 2016-07-06 联想(北京)有限公司 Information processing method and electronic equipment
CN107025683A (en) * 2017-03-30 2017-08-08 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN108305317A (en) * 2017-08-04 2018-07-20 腾讯科技(深圳)有限公司 A kind of image processing method, device and storage medium
CN107564089A (en) * 2017-08-10 2018-01-09 腾讯科技(深圳)有限公司 Three dimensional image processing method, device, storage medium and computer equipment
CN107749076A (en) * 2017-11-01 2018-03-02 太平洋未来科技(深圳)有限公司 The method and apparatus that real illumination is generated in augmented reality scene
CN108021241A (en) * 2017-12-01 2018-05-11 西安枭龙科技有限公司 A kind of method for realizing AR glasses virtual reality fusions

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A Review of Shadow Techniques in Augmented Reality"; Z. Noh, et al.; 2009 Second International Conference on Machine Vision; 2010-01-15; full text *
"Illumination Direction Estimation Algorithm Based on Halo Analysis in High Dynamic Range Images" (in Chinese); Li Hua, et al.; Journal of Computer Applications; 2016-05-10; Vol. 36, No. 5; full text *

Also Published As

Publication number Publication date
CN109214351A (en) 2019-01-15
WO2020056689A1 (en) 2020-03-26

Similar Documents

Publication Publication Date Title
CN109214351B (en) AR imaging method and device and electronic equipment
CN109151340B (en) Video processing method and device and electronic equipment
US10051182B2 (en) Methods and apparatus for compensating for motion and/or changing light conditions during image capture
US9912865B2 (en) Methods and apparatus for supporting burst modes of camera operation
US11024082B2 (en) Pass-through display of captured imagery
CN109271911B (en) Three-dimensional face optimization method and device based on light rays and electronic equipment
CN108596827B (en) Three-dimensional face model generation method and device and electronic equipment
CN108377398B (en) Infrared-based AR imaging method and system and electronic equipment
US10104292B2 (en) Multishot tilt optical image stabilization for shallow depth of field
CN113016173A (en) Apparatus and method for operating a plurality of cameras for digital photographing
CN108966017B (en) Video generation method and device and electronic equipment
US9160931B2 (en) Modifying captured image based on user viewpoint
KR20160070780A (en) Refocusable images
CN105592262A (en) Imaging apparatus
CN108573480B (en) Ambient light compensation method and device based on image processing and electronic equipment
CN109285216B (en) Method and device for generating three-dimensional face image based on shielding image and electronic equipment
CN109521869B (en) Information interaction method and device and electronic equipment
CN108495016B (en) Camera device, electronic equipment and image acquisition method
CN110177200B (en) Camera module, electronic equipment and image shooting method
CN109474801B (en) Interactive object generation method and device and electronic equipment
JP2013207344A (en) Detection device, imaging device, and program
CN115209057A (en) Shooting focusing method and related electronic equipment
JP2010034652A (en) Multi-azimuth camera mounted mobile terminal apparatus
CN109447924B (en) Picture synthesis method and device and electronic equipment
CN113973171B (en) Multi-camera shooting module, camera shooting system, electronic equipment and imaging method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant