CN109521869B - Information interaction method and device and electronic equipment

Info

Publication number: CN109521869B
Authority: CN (China)
Legal status: Active (granted)
Application number: CN201811129528.3A
Language: Chinese (zh)
Other versions: CN109521869A
Inventors: Olivier Fillon, Li Jianyi
Assignee (original and current): Pacific Future Technology Shenzhen Co ltd
Application filed by Pacific Future Technology Shenzhen Co ltd

Classifications

    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T15/06: 3D image rendering by ray-tracing
    • G06T15/506: 3D image rendering with lighting effects; illumination models
    • G06T19/006: Manipulating 3D models or images for computer graphics; mixed reality
    • G06F2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The embodiment of the invention provides an information interaction method, an information interaction device and electronic equipment, wherein the method comprises the following steps: receiving interaction information corresponding to a first user and sent by a first terminal, wherein the interaction information comprises a face image and a first coordinate position of the first user; adjusting the picture of the virtual scene displayed by a second terminal according to the first coordinate position; analyzing the face image to obtain the light angle of the real scene where the first user is located; and performing illumination rendering on the picture of the virtual scene according to the light angle. With this method, device and equipment, the virtual scene can accurately and truly present the scene changes and the light and shadow changes of the real world, and a second user watching the virtual scene can communicate with the first user in the real scene, thereby optimizing the interactive experience of both users.

Description

Information interaction method and device and electronic equipment
Technical Field
The invention relates to the technical field of internet application, in particular to an information interaction method and device and electronic equipment.
Background
With the development of technology, real-world scenes can be presented virtually, and people can learn about the real world by roaming through the virtual world. Augmented reality (AR) is a technology that overlays virtual objects on the real world on a screen and supports interaction with them; a user can perceive the existence of the virtual objects through an AR device. However, the inventors found in the course of implementing the embodiments of the present invention that, in the related art, the virtual world scene is generated in advance from the real world scene, so changes of light and shadow in the real world cannot be displayed accurately in real time; meanwhile, a second user watching the virtual world scene cannot communicate with a first user in the real world scene, which harms the interactive experience. In addition, augmented reality scene videos are increasingly shot with mobile devices, but the quality of the videos or pictures acquired by a mobile device camera is affected not only by ambient light but also by shooting stability: large shakes degrade the shooting quality and hinder subsequent image or video processing, and the flexibility of existing mobile device mounts and selfie sticks likewise fails to meet the requirements.
Disclosure of Invention
The information interaction method, the information interaction device and the electronic equipment provided by the embodiments of the invention are intended to solve at least the above problems in the related art.
An embodiment of the present invention provides an information interaction method, including:
receiving interactive information corresponding to a first user and sent by a first terminal, wherein the interactive information comprises a face image and a first coordinate position of the first user; adjusting the picture of the virtual scene displayed by the second terminal according to the first coordinate position; analyzing and processing the face image to obtain a light angle of a real scene where the first user is located; and performing illumination rendering on the picture of the virtual scene according to the light angle.
Further, the adjusting the picture of the virtual scene displayed by the second terminal according to the first coordinate position includes: determining a second coordinate position corresponding to the first coordinate position in the virtual scene based on a pre-established coordinate corresponding relation table of the real scene and the virtual scene; acquiring size information of the second terminal, and determining a target picture of the virtual scene according to the size information and the second coordinate position; and displaying the target picture.
Further, analyzing the face image to obtain the light angle of the real scene where the first user is located includes: extracting a sub-image of the nose region in the face image; determining a light intensity weighted center of the sub-image, and comparing the light intensity weighted center with the weighted center of the face image to obtain the light angle of the real scene where the first user is located.
Further, determining the light intensity weighted center of the sub-image and comparing it with the weighted center of the face image to obtain the light angle of the real scene where the first user is located includes: dividing the sub-image into a plurality of sub-regions and determining the sub-light-intensity weighted center of each sub-region; comparing each sub-light-intensity weighted center with the weighted center of the face image to obtain the sub-light angle of each sub-region; calculating the sub-illumination intensity of each sub-region, and determining the weight of each sub-region's sub-light angle according to its sub-illumination intensity; and calculating the light angle from the sub-light angles and their weights.
Further, the interactive information further includes video information, voice information and/or text information, and the method further includes: determining a target object corresponding to the interactive information in the real scene; and displaying the interaction information at a position matched with the target object.
Another aspect of the embodiments of the present invention provides an information interaction apparatus, including:
the receiving module is used for receiving interaction information which is sent by a first terminal and corresponds to a first user, wherein the interaction information comprises a face image and a first coordinate position of the first user; the adjusting module is used for adjusting the picture of the virtual scene displayed by the second terminal according to the first coordinate position; the processing module is used for analyzing and processing the face image to obtain a light angle of a real scene where the first user is located; and the rendering module is used for performing illumination rendering on the picture of the virtual scene according to the light angle.
Further, the adjustment module includes: the determining unit is used for determining a second coordinate position corresponding to the first coordinate position in the virtual scene based on a coordinate corresponding relation table of the real scene and the virtual scene established in advance; the acquisition unit is used for acquiring the size information of the second terminal and determining a target picture of the virtual scene according to the size information and the second coordinate position; and the display unit is used for displaying the target picture.
Further, the processing module comprises: an extraction unit, used for extracting a sub-image of the nose region in the face image; and a comparison unit, used for determining a light intensity weighted center of the sub-image and comparing the light intensity weighted center with the weighted center of the face image to obtain the light angle of the real scene where the first user is located.
Furthermore, the comparison unit is configured to divide the sub-image into a plurality of sub-regions and determine the sub-light-intensity weighted center of each sub-region; compare each sub-light-intensity weighted center with the weighted center of the face image to obtain the sub-light angle of each sub-region; calculate the sub-illumination intensity of each sub-region and determine the weight of each sub-region's sub-light angle accordingly; and calculate the light angle from the sub-light angles and their weights.
Further, the interactive information further comprises video information, voice information and/or text information, and the device further comprises a matching module, wherein the matching module is used for determining a target object corresponding to the interactive information in the real scene; and displaying the interaction information at a position matched with the target object.
Another aspect of an embodiment of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute any one of the above information interaction methods.
Furthermore, the electronic device further comprises an image acquisition module. The image acquisition module comprises a lens, an auto-focus voice coil motor, a mechanical anti-shake device and an image sensor. The lens is fixedly mounted on the auto-focus voice coil motor and is used for acquiring images; the image sensor transmits the images acquired by the lens to the identification module; the auto-focus voice coil motor is mounted on the mechanical anti-shake device; and the processing module drives the mechanical anti-shake device to act according to feedback on lens shake detected by a gyroscope in the lens, thereby realizing shake compensation of the lens.
Furthermore, the mechanical anti-shake device comprises a movable plate, a movable frame, an elastic restoring mechanism, a substrate and a compensation mechanism. The middle of the movable plate is provided with a through hole for the lens to pass through; the auto-focus voice coil motor is mounted on the movable plate; and the movable plate is mounted in the movable frame, with its two opposite sides in sliding fit with the inner walls of the two opposite sides of the movable frame, so that the movable plate can slide back and forth along a first direction. The size of the movable frame is smaller than that of the substrate; two opposite sides of the movable frame are each connected to the substrate through two elastic restoring mechanisms; and the middle of the substrate is also provided with a through hole for the lens to pass through. The compensation mechanism, driven by the processing module, drives the movable plate and the lens on it to move so as to realize shake compensation of the lens. The compensation mechanism comprises a driving shaft, a gear, a gear track and a limiting track; the driving shaft is arranged on the substrate and is in transmission connection with the gear. The gear track is arranged on the movable plate; the gear is mounted in the gear track, and when the gear rotates, the gear track lets the movable plate generate displacement in the first direction and in a second direction, the first direction being perpendicular to the second direction. The limiting track is arranged on the movable plate or the substrate and is used to prevent the gear from disengaging from the gear track.
Furthermore, a kidney-shaped hole is formed in one side of the movable plate, and a plurality of teeth meshing with the gear are arranged in the kidney-shaped hole along its circumferential direction; the kidney-shaped hole and the teeth together form the gear track, and the gear is located in the kidney-shaped hole and meshes with the teeth. The limiting track is arranged on the substrate, the bottom of the movable plate is provided with a limiting part located in the limiting track, and the limiting track constrains the motion trail of the limiting part within it to be kidney-shaped.
Further, the limiting part is a protrusion arranged on the bottom surface of the movable plate.
Further, the gear track may instead comprise a plurality of cylindrical protrusions arranged on the movable plate, uniformly spaced along the second direction, with the gear meshing with the plurality of protrusions; the limiting track is then a first arc-shaped limiting part and a second arc-shaped limiting part arranged on the movable plate, respectively arranged on the two opposite sides of the gear track along the first direction, and the first arc-shaped limiting part, the second arc-shaped limiting part and the plurality of protrusions cooperate so that the motion track of the movable plate is kidney-shaped.
Further, the elastic restoring mechanism comprises a telescopic spring.
Further, the image acquisition module comprises a mobile phone and a bracket for mounting the mobile phone.
Further, the support comprises a mobile phone mounting seat and a telescopic supporting rod; the mobile phone mounting seat comprises a telescopic connecting plate and folding plate groups arranged at two opposite ends of the connecting plate, and one end of the supporting rod is connected with the middle part of the connecting plate through a damping hinge; the folding plate group comprises a first plate body, a second plate body and a third plate body, wherein one end of the two opposite ends of the first plate body is hinged with the connecting plate, and the other end of the two opposite ends of the first plate body is hinged with one end of the two opposite ends of the second plate body; the other end of the second plate body at the two opposite ends is hinged with one end of the third plate body at the two opposite ends; the second plate body is provided with an opening for inserting a mobile phone corner; when the mobile phone mounting seat is used for mounting a mobile phone, the first plate body, the second plate body and the third plate body are folded to form a right-angled triangle state, the second plate body is a hypotenuse of the right-angled triangle, the first plate body and the third plate body are right-angled sides of the right-angled triangle, wherein one side face of the third plate body is attached to one side face of the connecting plate side by side, and the other end of the third plate body in the two opposite ends is abutted to one end of the first plate body in the two opposite ends.
Furthermore, a first connecting portion is arranged on one side face of the third plate body, a first matching portion matched with the first connecting portion is arranged on the side face of the connecting plate that is attached to the third plate body, and the first connecting portion and the first matching portion are clamped together when the mobile phone mounting seat of the support is used for mounting a mobile phone.
Furthermore, one of the two opposite ends of the first plate body is provided with a second connecting portion, the other of the two opposite ends of the third plate body is provided with a second matching portion matched with the second connecting portion, and when the mobile phone mounting seat of the support is used for mounting a mobile phone, the second connecting portion is clamped with the second matching portion.
Furthermore, the other end of the supporting rod is detachably connected with a base.
According to the information interaction method, device and electronic equipment provided by the embodiments of the invention, when the second user wants to learn about the environment of the real scene by remote roaming, the roaming position and the face image of the first user in the real scene are transmitted to the second terminal in real time, so that the first user and the second user can conduct audio and video communication while walking. Meanwhile, the second terminal can determine the lighting conditions of the real scene from the face image and render the virtual scene accordingly, so that the virtual scene accurately and truly presents the light and shadow changes of the real world and comes closer to the real scene. The first user can let the second user follow him through the positioning information, and can explain the situation of the real scene to the second user on site through voice or video information, thereby optimizing the interactive experience of both users. The anti-shake hardware structure of the mobile phone camera and the mobile phone selfie support further enhance the shooting effect and facilitate subsequent image or video processing.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in the embodiments of the present invention, and a person skilled in the art can obtain other drawings based on these drawings.
FIG. 1 is a flowchart of an information interaction method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an information interaction method according to an embodiment of the present invention;
FIG. 3 is a flowchart of an information interaction method according to an embodiment of the present invention;
FIG. 4 is a block diagram of an information interaction device according to an embodiment of the present invention;
FIG. 5 is a block diagram of an information interaction device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the hardware structure of an electronic device for executing the information interaction method provided by the method embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image acquisition module according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a first mechanical anti-shake device according to an embodiment of the present invention;
FIG. 9 is a schematic view of the bottom structure of a first movable plate according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a second mechanical anti-shake device according to an embodiment of the present invention;
FIG. 11 is a schematic view of the bottom structure of a second movable plate according to an embodiment of the present invention;
FIG. 12 is a block diagram of a stand provided in accordance with an embodiment of the present invention;
FIG. 13 is a schematic view of one state of a stand according to an embodiment of the present invention;
FIG. 14 is a schematic view of another state of a stand according to an embodiment of the present invention;
FIG. 15 is a structural state diagram of a mounting base according to an embodiment of the present invention when connected to a mobile phone.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict. Fig. 1 is a flowchart of an information interaction method according to an embodiment of the present invention.
First, an application scenario of the embodiments of the present invention is introduced. The virtual scene in this embodiment is established based on a real-world scene; for example, if the real-world scene is a park, the virtual scene is generated by simulating that park. The first user interacts, from within the real scene through the first terminal, with a second user watching the virtual scene, and the virtual scene displayed at the second terminal is kept completely consistent with the real scene where the first user is located: for example, if the virtual scene the second user sees at the second terminal is position A of the park, the corresponding real scene where the first user is located is also position A of the park. Specifically, the first user carries the first terminal on the site of the real scene and can transmit interaction information through it to the second terminal displaying the virtual scene, and the second terminal can adjust the currently displayed virtual scene picture according to the interaction information, so that the virtual scene displayed by the second terminal stays consistent with the real scene described by the first user.
As shown in fig. 1, the information interaction method provided in the embodiment of the present invention includes:
step S101, receiving interaction information corresponding to a first user and sent by a first terminal, wherein the interaction information comprises a face image and a first coordinate position of the first user.
In this step, the first user and the second user conduct audio and video communication through the first terminal and the second terminal respectively, wherein the first user is in the real-world scene corresponding to the virtual scene, and the second user watches the virtual scene through the second terminal. In this process, the first terminal monitors the position change of the first user in real time. When the position change exceeds a preset distance, it indicates that the first user's surroundings in the real-world scene have changed substantially, for example, the first user moves from the scene at position A of a park to the scene at position B, or from building 1 to building 2. The first coordinate position of the first user after moving and the face image of the first user at that coordinate position are then obtained and sent to the second terminal as the interaction information corresponding to the first user.
Optionally, the real-scene coordinate system may be established by taking the horizontal plane as the X axis, the direction at 90 degrees to the horizontal plane as the Y axis, and a certain marker of the real scene as the coordinate origin, which is not limited here. It should be noted that, because real-world scenes differ, the distance the first user must move before the scene changes substantially also differs, so the preset distance may be determined according to the real-world scene.
Optionally, the face image may be a snapshot of the user's face captured by a front-facing camera; when the snapshot is detected to include the user's face, the face image is extracted using a face detection algorithm known to those skilled in the art, for example any CNN-based face detection algorithm.
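For illustration, the following is a minimal Python sketch of the monitoring behavior this step describes. The threshold value, the coordinate format, and the capture and send callables are assumptions chosen for the example, not details fixed by this embodiment.

```python
import math

# Hypothetical sketch of the first terminal's position monitoring.
# PRESET_DISTANCE, capture_face_image and send_to_second_terminal are
# illustrative stand-ins, not names from the patent.

PRESET_DISTANCE = 10.0  # e.g. metres; chosen according to the real-world scene


def distance(p, q):
    """Euclidean distance between two (x, y) positions in the real-scene frame."""
    return math.hypot(p[0] - q[0], p[1] - q[1])


def on_position_update(last_sent, current, capture_face_image, send_to_second_terminal):
    """Send interaction information once the first user has moved far enough."""
    if distance(last_sent, current) > PRESET_DISTANCE:
        interaction_info = {
            "first_coordinate_position": current,
            "face_image": capture_face_image(),  # front-camera snapshot
        }
        send_to_second_terminal(interaction_info)
        return current   # current position becomes the new reference
    return last_sent     # change too small; keep the old reference
```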
Step S102, adjusting the picture of the virtual scene displayed by the second terminal according to the first coordinate position.
After the second terminal receives the first coordinate position, it needs to adjust the currently displayed picture of the virtual scene accordingly, so that the first user can make the second user's view follow the first user's footsteps in the real scene.
Specifically, as shown in fig. 2, this step may be performed by the following substeps.
Step S1021, determining, based on the pre-established coordinate correspondence table between the real scene and the virtual scene, a second coordinate position corresponding to the first coordinate position in the virtual scene.
Since the sizes of the real scene and the virtual scene differ (generally, the virtual scene is generated by scaling down the real scene), a coordinate correspondence table between the real scene and the virtual scene needs to be established. The virtual-scene coordinate system may be established with reference to the method for establishing the real-scene coordinate system, which is not repeated here.
Based on the coordinate correspondence table, a second coordinate position corresponding to the first coordinate position in the virtual scene may be determined, that is, a coordinate position to which the virtual scene should be adjusted is obtained.
Step S1022, obtaining the size information of the second terminal, and determining the target frame of the virtual scene according to the size information and the second coordinate position.
Since the virtual scene is displayed through the second terminal, the target picture to which the virtual scene should be adjusted can be determined from the size of the second terminal and the second coordinate position. Optionally, the second coordinate position may be placed at the center of the second terminal's screen, with the boundary of the target picture to be displayed then determined from the screen's size information; alternatively, the second coordinate position may be placed at a preset position of the screen according to the user's viewing habits, with the boundary of the target picture again determined from the screen's size information.
In step S1023, the target screen is displayed.
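As an illustration of sub-steps S1021 to S1023, here is a short Python sketch. It assumes the virtual scene is a uniformly scaled copy of the real scene, so the coordinate correspondence "table" collapses to a scale factor and an origin; the function names, the scale value and the screen-anchor parameter are all assumptions of the example.

```python
# Sketch only: a uniform scale-down stands in for the pre-established
# coordinate correspondence table between the real and virtual scenes.

def to_virtual(first_pos, scale=0.01, origin=(0.0, 0.0)):
    """S1021: map a first (real-scene) coordinate position to the virtual scene."""
    return ((first_pos[0] - origin[0]) * scale,
            (first_pos[1] - origin[1]) * scale)


def target_picture(second_pos, screen_w, screen_h, anchor=(0.5, 0.5)):
    """S1022: place the second coordinate position at a preset screen anchor
    (the centre by default) and derive the boundary of the picture to show."""
    left = second_pos[0] - screen_w * anchor[0]
    bottom = second_pos[1] - screen_h * anchor[1]
    return (left, bottom, left + screen_w, bottom + screen_h)


# S1023: the second terminal then displays the virtual scene clipped to this frame.
frame = target_picture(to_virtual((1200.0, 800.0)), screen_w=19.2, screen_h=10.8)
```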
When the real-world scene changes substantially, the lighting conditions of the real world also change, so after the second terminal adjusts the picture, the adjusted picture of the virtual scene needs to be rendered according to the lighting conditions of the real world.
Step S103, analyzing the face image to obtain the light angle of the real scene where the first user is located.
When interacting with the second user, the first user generally faces the screen of the first terminal, so the angle of the ambient light detected from the face image only needs to be mirrored into the virtual scene.
Specifically, as shown in fig. 3, this step may be performed by the following substeps.
Step S1031, extracting a sub-image of the nose region in the face image.
Because the nose protrudes from the face, the brightness of the nose region is more easily affected by ambient light. This embodiment therefore extracts the nose from the facial features and obtains the sub-image of the nose region in the face image.
Step S1032, determining the light intensity weighted center of the sub-image, and comparing the light intensity weighted center with the weighted center of the face image to obtain the light angle of the real scene where the first user is located.
Specifically, the corresponding light intensity weighted center is determined from the image moments of the sub-image. An image moment is a moment set calculated from a digital image; it generally describes global features of the image and provides a large amount of information about different types of geometric features, such as size, position, orientation and shape. The first-order moments relate to shape, the second-order moments show the degree of spread of a curve around its mean, and the third-order moments measure symmetry about the mean. A set of seven invariant moments can be derived from the second- and third-order moments; these invariant moments are statistical features of an image and allow image classification, which belongs to common knowledge in the art and is not described further here.
Optionally, after the light intensity weighted center of the sub-image is determined, it is compared with the weighted center of the face image, taken here as the geometric center of the face image: comparing the coordinate position of the face image's weighted center with that of the sub-image's light intensity weighted center, the direction from the geometric center to the light intensity weighted center is the light direction of the ambient light in the real scene. A coordinate system can then be established by selecting an origin, and the angle between this vector and the X axis is taken as the light angle of the ambient light in the current scene. The light angle may also be calculated by other publicly known algorithms, which the invention does not limit here. It should be noted that in the embodiments of the present invention the ambient light is considered unidirectional and uniform.
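The comparison just described can be sketched in a few lines of Python. This is only one plausible reading: the light intensity weighted centre is taken as the brightness centroid of the nose sub-image (first-order image moments divided by the zeroth moment), the weighted centre of the face image is taken as its geometric centre, and the angle is measured against the X axis, under the stated assumption of unidirectional, uniform ambient light. The nose_box input is assumed to come from a separate face or landmark detector.

```python
import numpy as np

# Sketch: brightness centroid of the nose sub-image vs. the geometric centre
# of the face image.

def intensity_weighted_center(gray):
    """Centroid (m10/m00, m01/m00) of a grayscale image, weighted by intensity."""
    ys, xs = np.indices(gray.shape)
    m00 = gray.sum()
    return (xs * gray).sum() / m00, (ys * gray).sum() / m00


def light_angle(face_gray, nose_box):
    x0, y0, x1, y1 = nose_box                        # nose region, face coordinates
    sub = face_gray[y0:y1, x0:x1].astype(float)
    cx, cy = intensity_weighted_center(sub)
    cx, cy = cx + x0, cy + y0                        # back into face coordinates
    gx = face_gray.shape[1] / 2.0                    # geometric centre of the face image
    gy = face_gray.shape[0] / 2.0
    return np.degrees(np.arctan2(cy - gy, cx - gx))  # angle w.r.t. the X axis
```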
Alternatively, the light angle of the real scene can be calculated as follows:
dividing the sub-image into a plurality of sub-regions and determining the sub-light-intensity weighted center of each sub-region; comparing each sub-light-intensity weighted center with the weighted center of the face image to obtain the sub-light angle of each sub-region; calculating the sub-illumination intensity of each sub-region, and determining the weight of each sub-region's sub-light angle according to its sub-illumination intensity; and calculating the light angle from the sub-light angles and their weights.
Specifically, first the sub-image may be divided equally into four sub-regions, and the sub-light-intensity weighted center and the sub-light angle of each sub-region are determined according to the method above. Next, for each sub-region, the illumination intensity corresponding to it is obtained from the light-dark contrast information within it, and after the sub-illumination intensity of each sub-region is obtained, it is taken as the weight of that sub-region's sub-light angle. Finally, the sub-light angles of the four sub-regions are summed and averaged according to their respective weights to obtain the average light angle.
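A compact sketch of this weighted variant follows. It assumes an equal four-way split of the nose sub-image and uses each quadrant's mean brightness as its sub-illumination intensity; the sub-light angles themselves would come from a centroid comparison such as the one sketched earlier. A plain weighted mean is used to match the description above; for angles near the ±180° wrap-around, a circular mean would be safer.

```python
# Sketch of the sub-region weighting. sub is a 2-D grayscale image given as a
# list of rows; sub_angles are the per-quadrant sub-light angles in degrees.

def split_quadrants(sub):
    """Divide the nose sub-image into four equal sub-regions."""
    h, w = len(sub) // 2, len(sub[0]) // 2
    return [[row[:w] for row in sub[:h]], [row[w:] for row in sub[:h]],
            [row[:w] for row in sub[h:]], [row[w:] for row in sub[h:]]]


def mean_brightness(region):
    """Stand-in for the sub-illumination intensity of a sub-region."""
    return sum(map(sum, region)) / (len(region) * len(region[0]))


def weighted_light_angle(sub_angles, sub_regions):
    """Average the sub-light angles, weighted by sub-illumination intensity."""
    intensities = [mean_brightness(r) for r in sub_regions]
    total = sum(intensities)
    return sum(i / total * a for i, a in zip(intensities, sub_angles))


# Usage: sub_regions = split_quadrants(sub); sub_angles from applying the
# earlier centroid comparison to each quadrant.
```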
Step S104, performing illumination rendering on the picture of the virtual scene according to the light angle.
In this step, the shadow position of each object may be determined according to the light angle of the real scene and the position of each object in the virtual scene. Next, the shape of the shadow at the shadow position is determined based on the shape of each object, and shadow images of the respective objects are generated from the shadow positions and shadow shapes. Specifically, the objects in the virtual scene include, but are not limited to, characters, animals, scenery and buildings.
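To make this step concrete, here is a hedged sketch: given the recovered light angle, each object's shadow is displaced opposite the incoming light while keeping the object's silhouette. The object record layout and the fixed shadow length are assumptions of the example, not details given by this embodiment.

```python
import math

# Sketch of step S104's shadow placement on a 2-D ground plane.

def shadow_offset(light_angle_deg, shadow_len=1.0):
    """Shadows fall opposite the direction the light arrives from."""
    rad = math.radians(light_angle_deg + 180.0)
    return shadow_len * math.cos(rad), shadow_len * math.sin(rad)


def place_shadows(objects, light_angle_deg):
    dx, dy = shadow_offset(light_angle_deg)
    for obj in objects:                    # characters, animals, scenery, buildings, ...
        x, y = obj["position"]
        obj["shadow"] = {
            "position": (x + dx, y + dy),  # shadow position from the light angle
            "shape": obj["shape"],         # shadow shape follows the object's shape
        }
    return objects
```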
Through the above steps, the virtual scene displayed at the second terminal keeps real-time synchronization with the real scene where the first user is located, and with the method of this embodiment the light and shadow effect of the virtual scene is the same as that of the real scene, which increases the realism of the virtual scene.
The first user and the second user interact with each other through audio or video, and therefore the interaction information further comprises video information, voice information and/or text information. As an optional implementation manner of the embodiment of the present invention, the method further includes: determining a target object corresponding to the interactive information in the real scene; and displaying the interaction information at a position matched with the target object.
Specifically, the first user and the second user may talk about an object in the real scene during the interaction; for example, when the real scene is a park, they may talk about the flowers, trees, rivers and people in the park. When the second terminal analyzes the received interaction information (audio, video or text information) and finds that it relates to an object in the real scene, that object can be taken as the target object and the text of the interaction information displayed on it, which makes the interaction process more engaging.
Optionally, when the interaction information is audio or video information, it may first be converted into text using speech recognition technology, and the text is then searched for a keyword matching an object in the real scene; if one exists, the object in the real scene corresponding to the keyword is taken as the target object. When the interaction information is text information, whether it contains a keyword matching an object in the real scene can be determined directly; if so, the object in the real scene corresponding to the keyword is taken as the target object.
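As a simple illustration of this keyword matching, the sketch below checks interaction text against a small vocabulary of scene objects. The vocabulary and the tokenization are assumptions; for audio or video information, any speech recognition engine would first supply the text.

```python
# Sketch: find the target object mentioned in the interaction information.
# SCENE_OBJECTS is an assumed vocabulary for a park scene.

SCENE_OBJECTS = {"flower", "tree", "river", "bridge"}


def find_target_object(interaction_text):
    """Return the first scene object whose keyword appears in the text, if any."""
    for raw in interaction_text.lower().split():
        word = raw.strip(".,!?;:")
        if word in SCENE_OBJECTS:
            return word
    return None  # no keyword matched; nothing to overlay


# The second terminal would then display the interaction text at a position
# matched with the returned object.
assert find_target_object("Look at that tree by the river!") == "tree"
```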
According to the information interaction method provided by the embodiment of the invention, when the second user wants to learn about the environment of the real scene by remote roaming, the roaming position and the face image of the first user in the real scene are transmitted to the second terminal in real time, so that the first user and the second user can conduct audio and video communication while walking. Meanwhile, the second terminal can determine the lighting conditions of the real scene from the face image and render the virtual scene accordingly, so that the virtual scene accurately and truly presents the light and shadow changes of the real world and comes closer to the real scene. The first user can let the second user follow him through the positioning information, and can explain the situation of the real scene to the second user on site through voice or video information, thereby optimizing the interactive experience of both users.
Fig. 4 is a structural diagram of an information interaction device according to an embodiment of the present invention. As shown in fig. 4, the apparatus specifically includes: a receiving module 100, an adjusting module 200, a processing module 300 and a rendering module 400. Wherein,
a receiving module 100, configured to receive interaction information corresponding to a first user and sent by a first terminal, where the interaction information includes a face image and a first coordinate position of the first user; the adjusting module 200 is configured to adjust a picture of a virtual scene displayed by a second terminal according to the first coordinate position; the processing module 300 is configured to analyze the face image to obtain a light angle of a real scene where the first user is located; and a rendering module 400, configured to perform illumination rendering on the picture of the virtual scene according to the light angle.
The information interaction device provided in the embodiment of the present invention is specifically configured to execute the method provided in the embodiment shown in fig. 1, and the implementation principle, the method, the functional purpose, and the like of the information interaction device are similar to those of the embodiment shown in fig. 1, and are not described herein again.
Fig. 5 is a structural diagram of an information interaction device according to an embodiment of the present invention. As shown in fig. 5, the apparatus specifically includes: a receiving module 100, an adjusting module 200, a processing module 300 and a rendering module 400. Wherein,
a receiving module 100, configured to receive interaction information corresponding to a first user and sent by a first terminal, where the interaction information includes a face image and a first coordinate position of the first user; the adjusting module 200 is configured to adjust a picture of a virtual scene displayed by a second terminal according to the first coordinate position; the processing module 300 is configured to analyze the face image to obtain a light angle of a real scene where the first user is located; and a rendering module 400, configured to perform illumination rendering on the picture of the virtual scene according to the light angle.
Further, the adjusting module 200 further includes: a determination unit 210, an acquisition unit 220, and a presentation unit 230, wherein,
a determining unit 210, configured to determine, based on a coordinate correspondence table of the real scene and the virtual scene established in advance, a second coordinate position corresponding to the first coordinate position in the virtual scene; an obtaining unit 220, configured to obtain size information of the second terminal, and determine a target picture of the virtual scene according to the size information and the second coordinate position; the display unit 230 is configured to display the target picture.
Further, the processing module 300 further includes: an extraction unit 310 and a comparison unit 320, wherein,
an extracting unit 310, configured to extract a sub-image of a nose region in the face image; the comparing unit 320 is configured to determine a light intensity weighted center of the sub-image based on the light ray, and compare the light intensity weighted center with the weighted center of the face image to obtain a light ray angle of the real scene where the first user is located.
Optionally, the comparing unit 320 is configured to divide the sub-image into a plurality of sub-regions and determine the sub-light-intensity weighted center of each sub-region; compare each sub-light-intensity weighted center with the weighted center of the face image to obtain the sub-light angle of each sub-region; calculate the sub-illumination intensity of each sub-region and determine the weight of each sub-region's sub-light angle accordingly; and calculate the light angle from the sub-light angles and their weights.
Further, the interaction information further includes video information, voice information and/or text information, the apparatus further includes a matching module 500, and the matching module 500 is configured to determine a target object corresponding to the interaction information in the real scene; and displaying the interaction information at a position matched with the target object.
The information interaction device provided in the embodiment of the present invention is specifically configured to execute the method provided in the embodiment shown in fig. 1 to 3, and the implementation principle, the method, and the functional use of the information interaction device are similar to those in the embodiment shown in fig. 1 to 3, and are not described herein again.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device for executing the information interaction method provided by the embodiment of the method of the present invention. As shown in fig. 6, the electronic device includes:
one or more processors 610 and a memory 620, with one processor 610 taken as an example in fig. 6. The device for executing the information interaction method may further include: an input device 630 and an output device 640.
The processor 610, the memory 620, the input device 630, and the output device 640 may be connected by a bus or other means, such as the bus connection in fig. 6.
The memory 620, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the information interaction method in the embodiments of the present invention. The processor 610 performs various functional applications of the server and data processing by executing nonvolatile software programs, instructions, and modules stored in the memory 620, that is, implements the information interaction method.
The memory 620 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created through use of the information interaction device provided according to the embodiment of the present invention, and the like. Further, the memory 620 may include high-speed random access memory and may also include nonvolatile memory, such as at least one magnetic disk storage device, flash memory device, or other nonvolatile solid-state storage device. In some embodiments, the memory 620 optionally includes memory located remotely from the processor 610, and such remote memory may be connected to the information interaction device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 630 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the information interaction device. The input device 630 may include a pressing module or the like.
The one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the information interaction method.
The electronic device of embodiments of the present invention exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: these devices are characterized by mobile communication capabilities and are primarily aimed at providing voice and data communications. Such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also have mobile internet access. Such terminals include PDA, MID, and UMPC devices, such as iPads.
(3) Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (e.g., iPods), handheld game consoles, electronic books, smart toys, and portable car navigation devices.
(4) Servers.
(5) Other electronic devices with data interaction functions.
Specifically, the electronic device includes an image acquisition module. As shown in fig. 7, the image acquisition module of this embodiment includes a lens 1000, an auto-focus voice coil motor 2000, a mechanical anti-shake device 3000 and an image sensor 4000. The lens 1000 is fixedly mounted on the auto-focus voice coil motor 2000 and is used to acquire images; the image sensor 4000 transmits the images acquired by the lens 1000 to the identification module; the auto-focus voice coil motor 2000 is mounted on the mechanical anti-shake device 3000; and the processing module drives the mechanical anti-shake device 3000 according to feedback on shake of the lens 1000 detected by a gyroscope in the lens 1000, so as to implement shake compensation of the lens 1000.
Most existing anti-shake devices drive the lens 1000 to move by means of the Lorentz force generated by an energized coil in a magnetic field. To achieve optical anti-shake, the lens 1000 needs to be driven in at least two directions, which means multiple coils must be arranged; this poses certain challenges to the miniaturization of the overall structure, and the coils are easily interfered with by external magnetic fields, which further affects the anti-shake effect. Chinese patent publication No. CN106131435A provides a miniature optical anti-shake camera module that stretches and shortens a memory alloy wire through temperature changes so as to pull the auto-focus voice coil motor 2000 and realize shake compensation of the lens 1000: the control chip of the miniature memory alloy optical anti-shake actuator controls the driving signal to change the temperature of the memory alloy wire, thereby controlling its elongation and contraction, and the position and moving distance of the actuator are calculated from the resistance of the memory alloy wire. When the miniature memory alloy optical anti-shake actuator moves to a specified position, the resistance of the memory alloy wire at that moment is fed back, and the movement deviation of the actuator can be corrected by comparing this resistance value with the target value.
However, the applicant found that, due to the randomness and uncertainty of shake, the structure of the above technical solution cannot accurately compensate the lens 1000 when multiple shakes occur, because both heating and cooling of the shape memory alloy take a certain time. When a shake occurs in a first direction, the above solution can compensate the lens 1000 for it, but if a subsequent shake occurs in a second direction, the memory alloy wire cannot deform instantly, so the compensation is not timely. Shake compensation of the lens 1000 for multiple shakes and for continuous shakes in different directions therefore cannot be achieved accurately, and a structural improvement is required.
With reference to figs. 8-11, this embodiment therefore replaces the optical anti-shake device with a mechanical anti-shake device 3000, whose specific structure is as follows:
The mechanical anti-shake device 3000 of this embodiment includes a movable plate 3100, a movable frame 3200, an elastic restoring mechanism 3300, a substrate 3400 and a compensation mechanism 3500. The movable plate 3100 and the substrate 3400 are each provided at the middle with a through hole 3700 through which the lens passes, and the auto-focus voice coil motor is mounted on the movable plate 3100. The movable plate 3100 is mounted in the movable frame 3200; as can be seen from the drawings, the width of the movable plate 3100 in the left-right direction is substantially the same as the inner width of the movable frame 3200, so that the two opposite (left and right) sides of the movable plate 3100 are in sliding fit with the inner walls of the two opposite (left and right) sides of the movable frame 3200, allowing the movable plate 3100 to slide back and forth in the movable frame 3200 along a first direction, i.e., the vertical direction in the drawings.
Specifically, the size of the movable frame 3200 of this embodiment is smaller than that of the substrate 3400, and the two opposite sides of the movable frame 3200 are each connected to the substrate 3400 through two elastic restoring mechanisms 3300; the elastic restoring mechanism 3300 of this embodiment is a telescopic spring or another elastic member. It should be noted that the elastic restoring mechanisms 3300 of this embodiment only allow the movable frame 3200 to stretch and rebound along the left-right direction in the drawings (i.e., the second direction described below) and do not allow it to move along the first direction. The elastic restoring mechanisms 3300 are designed to help the movable frame 3200 drive the movable plate 3100 back after a compensating displacement; the specific operation is described in detail in the working process below.
The compensation mechanism 3500 of this embodiment, driven by the processing module (for example, by a motion command sent from the processing module), drives the movable plate 3100 and the lens on it to move, so as to realize shake compensation of the lens.
Specifically, the compensation mechanism 3500 of this embodiment includes a driving shaft 3510, a gear 3520, a gear track 3530 and a limiting track 3540. The driving shaft 3510 is mounted on the substrate 3400 (specifically on its upper surface) and is in transmission connection with the gear 3520; the driving shaft 3510 can be driven by a micro motor (not shown in the figures) or another structure, the micro motor being controlled by the processing module. The gear track 3530 is provided on the movable plate 3100; the gear 3520 is mounted in the gear track 3530 and moves along the track's preset direction, and when the gear 3520 rotates, the gear track 3530 lets the movable plate 3100 generate displacement in the first direction and in the second direction, the first direction being perpendicular to the second direction. The limiting track 3540 is provided on the movable plate 3100 or the substrate 3400 and serves to prevent the gear 3520 from disengaging from the gear track 3530.
Specifically, the gear track 3530 and the limiting track 3540 of the present embodiment have the following two structural forms:
as shown in fig. 7-9, a waist-shaped hole 3550 is disposed at a lower side of the movable plate 3100, the waist-shaped hole 3550 is disposed along a circumferential direction (i.e., a surrounding direction of the waist-shaped hole 3550) thereof with a plurality of teeth 3560 engaged with the gear 3520, the waist-shaped hole 3550 and the plurality of teeth 3560 together form the gear rail 3530, and the gear 3520 is located in the waist-shaped hole 3550 and engaged with the teeth 3560, such that the gear 3520 can drive the gear rail 3530 to move when rotating, and further directly drive the movable plate 3100 to move; in order to ensure that the gear 3520 can be constantly kept meshed with the gear rail 3530 during rotation, the limiting rail 3540 is disposed on the base plate 3400, the bottom of the movable plate 3100 is provided with a limiting member 3570 installed in the limiting rail 3540, and the limiting rail 3540 makes the motion track of the limiting member 3570 in a kidney-shaped manner, that is, the motion track of the limiting member 3570 in the current track is the same as the motion track of the movable plate 3100, specifically, the limiting member 3570 of the present embodiment is a protrusion disposed on the bottom of the movable plate 3100.
As shown in figs. 10 and 11, the gear track 3530 of this embodiment may instead include a plurality of cylindrical protrusions 3580 provided on the movable plate 3100, uniformly spaced along the second direction, with the gear 3520 meshing with the plurality of protrusions. The limiting track is then a first arc-shaped limiting member 3590 and a second arc-shaped limiting member 3600 provided on the movable plate 3100, arranged on the two opposite sides of the gear track 3530 along the first direction. When the movable plate 3100 moves to a preset position where the gear 3520 reaches one side of the gear track 3530, the gear 3520 could easily disengage from a track formed only of the cylindrical protrusions 3580; the first arc-shaped limiting member 3590 or the second arc-shaped limiting member 3600 then serves as a guide so that the movable plate 3100 keeps moving along the preset direction of the gear track 3530. In other words, the first arc-shaped limiting member 3590, the second arc-shaped limiting member 3600 and the plurality of protrusions cooperate so that the motion trajectory of the movable plate 3100 is kidney-shaped.
The operation of the mechanical anti-shake device 3000 of this embodiment is described in detail below, taking as an example two shakes of the lens 1000 in opposite directions, which require the movable plate 3100 to be motion-compensated once in the first direction and then once in the second direction. When the movable plate 3100 needs to be compensated in the first direction, the gyroscope feeds the detected shake direction and distance of the lens 1000 back to the processing module in advance; the processing module calculates the required movement distance of the movable plate 3100 and sends a driving signal, so that the driving shaft 3510 drives the gear 3520 to rotate, the gear 3520 cooperates with the gear track 3530 and the limiting track 3540, and the movable plate 3100 is driven to the compensation position in the first direction. After compensation, the movable plate 3100 is driven back by the driving shaft 3510; during this reset, the elastic restoring mechanisms 3300 also provide a restoring force that helps the movable plate 3100 return to its initial position. When the movable plate 3100 needs motion compensation in the second direction, the processing is the same as the compensation steps for the first direction and is not repeated here.
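Purely as an illustration of this control flow, the sketch below turns a gyroscope reading into a gear rotation and a reset. The gyro sample format, the motor interface and the travel-per-revolution ratio are all assumptions, since the embodiment does not specify them.

```python
# Sketch of one shake-compensation cycle of the mechanical anti-shake device.

MM_PER_REVOLUTION = 0.5  # assumed plate travel per gear revolution


def compensate_once(gyro_sample, drive_gear, reset_plate):
    """gyro_sample: (direction, distance_mm) fed back by the gyroscope."""
    direction, distance_mm = gyro_sample
    revolutions = distance_mm / MM_PER_REVOLUTION  # processing module's result
    drive_gear(direction, revolutions)             # drive shaft turns the gear;
                                                   # the gear track moves the plate
    reset_plate()                                  # return aided by the elastic
                                                   # restoring mechanisms
```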
Of course, the above describes only two simple shakes. When many shakes occur, or when the shake directions do not simply alternate, compensation is achieved by driving the compensation assembly repeatedly; the basic working process is the same as described above and is not repeated here. The detection and feedback by the gyroscope and the sending of control commands to the driving shaft 3510 by the processing module are existing techniques and are likewise not detailed here.
As can be seen from the above description, the mechanical anti-shake device provided in this embodiment is immune to interference from external magnetic fields and has a good anti-shake effect, achieving timely and accurate compensation of the lens 1000 even under repeated shakes. In addition, the mechanical anti-shake device of this embodiment has a simple structure, each component requires little installation space, the anti-shake device as a whole is easy to integrate, and the compensation precision is high.
Specifically, the electronic device of this embodiment includes a mobile phone and a bracket for mounting the mobile phone. The bracket is included because the image acquisition environment is unpredictable, and the bracket can be used to support and fix the mobile phone.
In addition, the applicant found that existing mobile phone brackets only support the phone and cannot serve as a selfie stick, so a first improvement combines the two functions. Referring to fig. 12, the bracket 5000 of this embodiment includes a mobile phone mounting seat 5100 and a retractable supporting rod 5200 connected through a damping hinge to the middle portion of the mounting seat 5100 (specifically, the middle portion of the connecting plate 5110 described below). When the supporting rod 5200 is rotated to the state of fig. 13, the bracket 5000 forms a selfie-stick structure; when rotated to the state of fig. 14, it forms a phone-support structure.
In using the above structure, the applicant further found that after the mobile phone mounting seat 5100 is combined with the supporting rod 5200, the occupied space is still large: even though the supporting rod 5200 is telescopic, the mounting seat 5100 has a fixed structure whose size cannot be reduced further, so the bracket cannot be placed in a pocket or small bag and is inconvenient to carry. A second improvement is therefore made in this embodiment to further improve the storability of the bracket 5000 as a whole.
As shown in figs. 12-15, the mobile phone mounting seat 5100 of this embodiment includes a retractable connecting plate 5110 and folding plate groups 5120 installed at the two opposite ends of the connecting plate 5110, the supporting rod 5200 being connected to the middle portion of the connecting plate 5110 by a damping hinge. Each folding plate group 5120 includes a first plate body 5121, a second plate body 5122 and a third plate body 5123. One end of the first plate body 5121 is hinged to the connecting plate 5110, and its other end is hinged to one end of the second plate body 5122; the other end of the second plate body 5122 is hinged to one end of the third plate body 5123. The second plate body 5122 is provided with an opening 5130 for inserting a corner of the mobile phone.
Referring to fig. 15, when the mobile phone mounting seat 5100 is used to mount a mobile phone, the first plate body 5121, the second plate body 5122 and the third plate body 5123 are folded to form a right triangle, in which the second plate body 5122 is the hypotenuse and the first plate body 5121 and the third plate body 5123 are the two legs. One side surface of the third plate body 5123 lies flat against one side surface of the connecting plate 5110, and its other end abuts against one end of the first plate body 5121, so that the three plates are in a self-locking state. When the two lower corners of the mobile phone are inserted into the two openings 5130 on the two sides, the two lower sides of the mobile phone 6000 sit inside the two right triangles, and the phone is fixed by the cooperation of the phone, the connecting plate 5110 and the folding plate groups 5120. Once the phone is inserted, the triangles cannot be opened by external force, and the triangular state of the folding plate groups 5120 can be released only after the phone is drawn out of the openings 5130.
When the mobile phone mounting seat 5100 is not in use, the connecting plate 5110 is retracted to its minimum length and the folding plate groups 5120 are folded flat against the connecting plate 5110, so the user can fold the mounting seat 5100 to its minimum volume. Together with the retractability of the supporting rod 5200, the entire bracket 5000 can thus be stowed at minimum volume, which improves its storability; the user can even put the bracket 5000 directly into a pocket or a small handbag, which is very convenient.
Preferably, in this embodiment a first connecting portion is further disposed on one side surface of the third plate body 5123, and a first matching portion engaging the first connecting portion is disposed on the side surface of the connecting plate 5110 that lies against the third plate body 5123; when the mounting seat 5100 is used to mount a phone, the first connecting portion snaps into the first matching portion. Specifically, the first connecting portion of this embodiment is a rib or protrusion (not shown), and the first matching portion is a slot (not shown) formed in the connecting plate 5110. This structure not only improves the stability of the folding plate group 5120 in the triangular state, but also facilitates fastening the folding plate group 5120 to the connecting plate 5110 when the mounting seat 5100 is folded to its minimum state.
Preferably, in this embodiment a second connecting portion is further disposed at one end of the first plate body 5121, and a second matching portion engaging it is disposed at the other end of the third plate body 5123; when the bracket 5000 is used to mount a phone, the second connecting portion engages the second matching portion. The second connecting portion may be a protrusion (not shown), and the second matching portion may be the opening 5130 or a slot (not shown) matched with the protrusion. This structure further improves the stability of the folding plate group in the triangular state.
In addition, in this embodiment a base (not shown) may be detachably connected to the other end of the supporting rod 5200. When the mobile phone 6000 needs to be fixed at a certain height, the supporting rod 5200 can be extended to the desired length, the bracket 5000 placed on a flat surface via the base, and the phone then mounted in the mounting seat 5100. The detachable connection between the supporting rod 5200 and the base allows the two to be carried separately, further improving the storability and carrying convenience of the bracket 5000.
The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the invention provides a non-transitory computer-readable storage medium storing computer-executable instructions, wherein when the computer-executable instructions are executed by an electronic device, the electronic device is caused to execute the information interaction method in any of the above method embodiments.
The present invention provides a computer program product, wherein the computer program product comprises a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions, wherein the program instructions, when executed by an electronic device, cause the electronic device to perform the information interaction method in any of the above method embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, or by hardware. Based on this understanding, the above technical solutions, or the portions thereof contributing to the prior art, may be embodied in the form of a software product stored on a computer-readable storage medium, which includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory media, and electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals). The computer software product includes instructions for causing a computing device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments or in portions thereof.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the embodiments of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (4)

1. An information interaction method, comprising:
receiving interactive information corresponding to a first user and sent by a first terminal, wherein the interactive information comprises a face image and a first coordinate position of the first user;
adjusting the picture of a virtual scene displayed by a second terminal according to the first coordinate position, wherein the virtual scene is a simulated scene established on the basis of the real scene where the first user is located;
analyzing and processing the face image to obtain a light angle of a real scene where the first user is located;
performing illumination rendering on the picture of the virtual scene according to the light angle;
wherein the step of adjusting the picture of the virtual scene displayed by the second terminal according to the first coordinate position comprises: determining a second coordinate position corresponding to the first coordinate position in the virtual scene based on a pre-established coordinate corresponding relation table of the real scene and the virtual scene; acquiring size information of the second terminal screen, and determining a target picture of the virtual scene according to the size information and the second coordinate position; displaying the target picture;
the step of analyzing and processing the face image to obtain the light angle of the real scene where the first user is located includes: extracting a sub-image of a nose region in the face image; dividing the sub-image into a plurality of sub-regions, and determining the sub-light-intensity weighting center of each sub-region; comparing each sub-light-intensity weighting center with the weighting center of the face image to obtain the sub-light angle of each sub-region; calculating the sub-illumination intensity of each sub-region, and determining the weight of the sub-light angle of each sub-region according to the sub-illumination intensity of that sub-region; and calculating the light angle from each sub-light angle and its weight (see the illustrative sketch following the claims).
2. The method of claim 1, wherein the interaction information further comprises video information, voice information, and/or text information, the method further comprising:
determining a target object corresponding to the interactive information in the real scene;
and displaying the interaction information at a position matched with the target object.
3. An information interaction apparatus, comprising:
the receiving module is used for receiving interaction information which is sent by a first terminal and corresponds to a first user, wherein the interaction information comprises a face image and a first coordinate position of the first user;
the adjusting module is used for adjusting the picture of a virtual scene displayed by the second terminal according to the first coordinate position, wherein the virtual scene is a simulated scene established on the basis of a real scene where the first user is located;
the processing module is used for analyzing and processing the face image to obtain a light angle of a real scene where the first user is located;
the rendering module is used for performing illumination rendering on the picture of the virtual scene according to the light angle;
the adjusting module is further configured to determine a second coordinate position corresponding to the first coordinate position in the virtual scene based on a pre-established coordinate correspondence table between the real scene and the virtual scene; acquiring size information of the second terminal screen, and determining a target picture of the virtual scene according to the size information and the second coordinate position; displaying the target picture;
the processing module is further used for extracting a sub-image of a nose region in the face image; dividing the sub-image into a plurality of sub-regions, and determining the sub-light-intensity weighting center of each sub-region; comparing each sub-light-intensity weighting center with the weighting center of the face image to obtain the sub-light angle of each sub-region; calculating the sub-illumination intensity of each sub-region, and determining the weight of the sub-light angle of each sub-region according to the sub-illumination intensity of that sub-region; and calculating the light angle from each sub-light angle and its weight.
4. An electronic device, comprising: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the information interaction method of claim 1 or 2.
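For illustration only, and not as part of the claim language, the light-angle computation recited in claim 1 might look like the following sketch. The 2x2 division of the nose region, the use of the intensity-weighted centroid as the "weighting center", the mean gray level as the sub-illumination intensity, and a nose bounding box supplied by an external landmark detector are all assumptions; the claim fixes none of these details.

```python
# Minimal numpy sketch of the weighted light-angle estimate of claim 1.
import numpy as np

def weighted_center(gray):
    """Intensity-weighted centroid (x, y) of a grayscale patch."""
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    total = float(gray.sum()) or 1.0
    return np.array([(xs * gray).sum() / total, (ys * gray).sum() / total])

def light_angle(face_gray, nose_box, grid=2):
    """Estimate the scene light angle (radians) from a grayscale face image.

    face_gray: 2-D array of gray levels; nose_box: (x0, y0, x1, y1) of the
    nose region, assumed to come from a separate landmark detector.
    """
    x0, y0, x1, y1 = nose_box
    nose = face_gray[y0:y1, x0:x1]                 # sub-image of the nose region
    face_center = weighted_center(face_gray)       # weighting center of the face image
    angles, weights = [], []
    h, w = nose.shape[0] // grid, nose.shape[1] // grid
    for i in range(grid):
        for j in range(grid):
            sub = nose[i*h:(i+1)*h, j*w:(j+1)*w]   # one sub-region
            # Sub-light-intensity weighting center, in face-image coordinates.
            c = weighted_center(sub) + np.array([x0 + j*w, y0 + i*h])
            d = c - face_center
            angles.append(np.arctan2(d[1], d[0]))  # sub-light angle
            weights.append(float(sub.mean()))      # sub-illumination intensity
    weights = np.asarray(weights)
    weights = weights / (weights.sum() or 1.0)     # intensity-derived weights
    return float(np.sum(weights * np.asarray(angles)))  # weighted light angle
```

The weighted linear average mirrors the claim's wording; a practical implementation would more likely average unit direction vectors so that angles near plus or minus pi do not cancel.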
CN201811129528.3A 2018-09-20 2018-09-27 Information interaction method and device and electronic equipment Active CN109521869B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2018/106787 2018-09-20
PCT/CN2018/106787 WO2020056692A1 (en) 2018-09-20 2018-09-20 Information interaction method and apparatus, and electronic device

Publications (2)

Publication Number Publication Date
CN109521869A CN109521869A (en) 2019-03-26
CN109521869B true CN109521869B (en) 2022-01-18

Family

ID=65769924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811129528.3A Active CN109521869B (en) 2018-09-20 2018-09-27 Information interaction method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN109521869B (en)
WO (1) WO2020056692A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110473293B (en) 2019-07-30 2023-03-24 Oppo广东移动通信有限公司 Virtual object processing method and device, storage medium and electronic equipment
CN110674422A (en) * 2019-09-17 2020-01-10 西安时代科技有限公司 Method and system for realizing virtual scene display according to real scene information
CN111667590B (en) * 2020-06-12 2024-03-22 上海商汤智能科技有限公司 Interactive group photo method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102281348A (en) * 2010-06-08 2011-12-14 Lg电子株式会社 Method for guiding route using augmented reality and mobile terminal using the same
CN106600638A (en) * 2016-11-09 2017-04-26 深圳奥比中光科技有限公司 Realization method of augmented reality
CN107134005A (en) * 2017-05-04 2017-09-05 网易(杭州)网络有限公司 Illumination adaptation method, device, storage medium, processor and terminal
CN107479701A (en) * 2017-07-28 2017-12-15 深圳市瑞立视多媒体科技有限公司 Virtual reality exchange method, apparatus and system
CN107845132A (en) * 2017-11-03 2018-03-27 太平洋未来科技(深圳)有限公司 The rendering intent and device of virtual objects color effect
CN107944420A (en) * 2017-12-07 2018-04-20 北京旷视科技有限公司 The photo-irradiation treatment method and apparatus of facial image
CN108537870A (en) * 2018-04-16 2018-09-14 太平洋未来科技(深圳)有限公司 Image processing method, device and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2447915A1 (en) * 2010-10-27 2012-05-02 Sony Ericsson Mobile Communications AB Real time three-dimensional menu/icon shading
CN105653035B (en) * 2015-12-31 2019-01-11 上海摩软通讯技术有限公司 virtual reality control method and system
CN107330978B (en) * 2017-06-26 2020-05-22 山东大学 Augmented reality modeling experience system and method based on position mapping

Also Published As

Publication number Publication date
CN109521869A (en) 2019-03-26
WO2020056692A1 (en) 2020-03-26

Similar Documents

Publication Publication Date Title
CN109151340B (en) Video processing method and device and electronic equipment
CN108614638B (en) AR imaging method and apparatus
CN109214351B (en) AR imaging method and device and electronic equipment
CN108596827B (en) Three-dimensional face model generation method and device and electronic equipment
CN106550182B (en) Shared unmanned aerial vehicle viewing system
CN109521869B (en) Information interaction method and device and electronic equipment
KR102365721B1 (en) Apparatus and Method for Generating 3D Face Model using Mobile Device
CN108377398B (en) Infrared-based AR imaging method and system and electronic equipment
US10104292B2 (en) Multishot tilt optical image stabilization for shallow depth of field
CN105554367B (en) A kind of moving camera shooting method and mobile terminal
CN108966017B (en) Video generation method and device and electronic equipment
CN109271911B (en) Three-dimensional face optimization method and device based on light rays and electronic equipment
EP2590396A1 (en) Information processing system, information processing device, and information processing method
KR20160070780A (en) Refocusable images
CN109474801B (en) Interactive object generation method and device and electronic equipment
CN108573480B (en) Ambient light compensation method and device based on image processing and electronic equipment
CN109285216B (en) Method and device for generating three-dimensional face image based on shielding image and electronic equipment
KR20110122333A (en) Mobile device and method for implementing augmented reality using the same
CN109218697B (en) Rendering method, device and the electronic equipment at a kind of video content association interface
CN103731599A (en) Photographing method and camera
WO2019055388A1 (en) 4d camera tracking and optical stabilization
CN106657792B (en) Shared viewing device
CN113645410B (en) Image acquisition method, device and machine-readable storage medium
CN110740246A (en) image correction method, mobile device and terminal device
CN108702442B (en) System and method for time of day capture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant