WO2020056692A1 - Information interaction method and apparatus, and electronic device

Information interaction method and apparatus, and electronic device

Info

Publication number
WO2020056692A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub
user
light
information
scene
Prior art date
Application number
PCT/CN2018/106787
Other languages
English (en)
Chinese (zh)
Inventor
菲永奥利维尔
李建亿
Original Assignee
太平洋未来科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 太平洋未来科技(深圳)有限公司
Priority to PCT/CN2018/106787 priority Critical patent/WO2020056692A1/fr
Priority to CN201811129528.3A priority patent/CN109521869B/zh
Publication of WO2020056692A1 publication Critical patent/WO2020056692A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/06Ray-tracing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/506Illumination models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the present invention relates to the field of Internet application technology, and in particular, to an information interaction method, device, and electronic device.
  • Augmented reality (AR) is a technology that superimposes virtual objects on a screen so that they can be interacted with in the real world. Users can use AR devices to sense the presence of virtual objects.
  • In the related art, the virtual scene is generated in advance from the real-world scene, so it cannot accurately display real-world light and shadow changes in real time; moreover, the user viewing the virtual scene cannot communicate with the first user in the real scene, which degrades the interactive experience.
  • the information interaction method, device, and electronic device provided by the embodiments of the present invention are used to solve at least the foregoing problems in related technologies.
  • An embodiment of the present invention provides an information interaction method, including:
  • receiving interaction information corresponding to a first user sent by a first terminal, the interaction information including a face image of the first user and a first coordinate position; adjusting a picture of the virtual scene displayed by a second terminal according to the first coordinate position; analyzing the face image to obtain the light angle of the real scene where the first user is located; and performing lighting rendering on the picture of the virtual scene according to the light angle.
  • adjusting the picture of the virtual scene displayed by the second terminal according to the first coordinate position includes: determining, based on a pre-established coordinate correspondence table between the real scene and the virtual scene, a second coordinate position corresponding to the first coordinate position in the virtual scene; obtaining size information of the second terminal and determining a target picture of the virtual scene according to the size information and the second coordinate position; and displaying the target picture.
  • analyzing the face image to obtain the light angle of the real scene in which the first user is located includes: extracting a sub-image of the nose area in the face image; determining the light-intensity weighted center of the sub-image based on the light; and comparing the light-intensity weighted center with the weighted center of the face image to obtain the light angle of the real scene in which the first user is located.
  • determining the light-intensity weighted center of the sub-image based on the light and comparing it with the weighted center of the face image to obtain the light angle includes: dividing the sub-image into several sub-regions and determining a sub-light-intensity weighted center of each sub-region; comparing each sub-light-intensity weighted center with the weighted center of the face image to obtain the sub-ray angle of each sub-region; calculating the sub-light intensity of each sub-region; determining the weight of each sub-region's sub-ray angle according to its sub-light intensity; and calculating the light angle from the sub-ray angles and their weights.
  • the interaction information further includes video information, voice information, and/or text information.
  • the method further includes: determining a target object corresponding to the interaction information in the real scene; and displaying the interaction information at a position that matches the target object.
  • an information interaction device including:
  • a receiving module configured to receive interaction information corresponding to a first user sent by a first terminal, where the interaction information includes a face image and a first coordinate position of the first user; an adjustment module configured to adjust the picture of the virtual scene displayed by the second terminal according to the first coordinate position; a processing module configured to analyze the face image to obtain the light angle of the real scene where the first user is located; and a rendering module configured to perform lighting rendering on the picture of the virtual scene according to the light angle.
  • the adjustment module includes: a determining unit configured to determine, based on a pre-established coordinate correspondence table between the real scene and the virtual scene, a second coordinate position corresponding to the first coordinate position in the virtual scene; an acquiring unit configured to acquire size information of the second terminal and determine a target picture of the virtual scene according to the size information and the second coordinate position; and a display unit configured to display the target picture.
  • the processing module includes: an extraction unit for extracting a sub-image of the nose area in the face image; and a comparison unit for determining the light-intensity weighted center of the sub-image based on the light and comparing the light-intensity weighted center with the weighted center of the face image to obtain the light angle of the real scene in which the first user is located.
  • the comparison unit is configured to: divide the sub-image into a plurality of sub-regions and determine a sub-light-intensity weighted center of each sub-region; compare each sub-light-intensity weighted center with the weighted center of the face image to obtain the sub-ray angle of each sub-region; calculate the sub-light intensity of each sub-region; determine the weight of each sub-region's sub-ray angle according to its sub-light intensity; and calculate the light angle from the sub-ray angles and their weights.
  • the interaction information further includes video information, voice information, and/or text information.
  • the device further includes a matching module configured to determine a target object corresponding to the interaction information in the real scene and to display the interaction information at a position that matches the target object.
  • Another aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute any one of the foregoing information interaction methods.
  • the electronic device further includes an image acquisition module including a lens, an auto-focusing voice coil motor, a mechanical image stabilizer, and an image sensor, and the lens is fixed on the auto-focusing voice coil motor.
  • the lens is used to acquire an image
  • the image sensor transmits the image acquired by the lens to the recognition module
  • the autofocus voice coil motor is mounted on the mechanical image stabilizer
  • the processing module drives the mechanical image stabilizer, based on feedback of lens shake detected by a gyroscope inside the lens, to achieve lens shake compensation.
  • the mechanical image stabilizer includes a movable plate, a movable frame, an elastic restoring mechanism, a base plate, and a compensation mechanism; the central portion of the movable plate is provided with a through hole through which the lens passes; the auto-focusing voice coil motor is installed on the movable plate, and the movable plate is installed in the movable frame, with its two opposite sides in sliding fit with the inner walls of the opposite sides of the movable frame so that the movable plate can slide back and forth along a first direction; the movable frame is smaller than the base plate, and two opposite sides of the movable frame are connected to the base plate through two elastic restoring mechanisms, respectively.
  • the compensation mechanism includes a drive shaft, a gear, a gear track, and a limit track; the drive shaft is mounted on the base plate and is drivingly connected to the gear;
  • the gear track is provided on the movable plate, and the gear is mounted in the gear track; when the gear rotates, the gear track enables the movable plate to generate a displacement in a first direction and a displacement in a second direction, wherein the first direction is perpendicular to the second direction;
  • the limit track is disposed on the movable plate or the base plate, and the limit track is used to prevent the gear from detaching from the gear track.
  • a side of the movable plate is provided with a waist-shaped hole, and a plurality of teeth that mesh with the gear are provided along the circumferential direction of the waist-shaped hole; the waist-shaped hole and the plurality of teeth together constitute the gear track, and the gear is located in the waist-shaped hole and meshes with the teeth; the limit track is disposed on the base plate, and the bottom of the movable plate is provided with a limiting member installed in the limit track; the limit track constrains the movement track of the limiting member to a waist shape.
  • the limiting member is a protrusion provided on the bottom surface of the movable plate.
  • the gear track includes a plurality of cylindrical protrusions provided on the movable plate, the plurality of cylindrical protrusions being evenly spaced along the second direction, and the gear meshing with the plurality of protrusions.
  • the limit track consists of a first arc-shaped limiting member and a second arc-shaped limiting member provided on the movable plate, respectively arranged on the two opposite sides of the gear track in the first direction; the first arc-shaped limiting member, the second arc-shaped limiting member, and the plurality of protrusions cooperate to make the movement track of the movable plate waist-shaped.
  • the elastic recovery mechanism includes a telescopic spring.
  • the image acquisition module includes a mobile phone and a bracket for mounting the mobile phone.
  • the bracket includes a mobile phone mounting base and a retractable support rod;
  • the mobile phone mounting base includes a retractable connecting plate and folding plate groups installed at opposite ends of the connecting plate, and one end of the support rod is connected to the middle portion of the connecting plate by a damping hinge;
  • the folding plate group includes a first plate body, a second plate body, and a third plate body, wherein one of the two opposite ends of the first plate body is hinged to the connecting plate, and the other end is hinged to one of the opposite ends of the second plate body; the other end of the second plate body is hinged to one of the opposite ends of the third plate body; the second plate body is provided with an opening for a corner of the mobile phone to be inserted; when the mobile phone mounting base is used to install the mobile phone, the first plate body, the second plate body, and the third plate body are folded into a right-triangle state, in which the second plate body is the hypotenuse of the right triangle and the first plate body and the third plate body are the right-angle sides; one side of the third plate body is attached to one side of the connecting plate, and the other end of the opposite ends of the third plate body abuts one end of the opposite ends of the first plate body.
  • a first connection portion is provided on one side surface of the third plate body, and a first mating portion that mates with the first connection portion is provided on the side surface where the connecting plate contacts the third plate body.
  • a second connection portion is provided at one end of the opposite ends of the first plate body, and a second mating portion that cooperates with the second connection portion is provided at the other end of the opposite ends of the third plate body.
  • the other end of the support rod is detachably connected to a base.
  • the second user wants to learn about the environment of the real scene through remote roaming.
  • the first user's roaming position and face image in the real scene are transmitted to the second terminal in real time; the second terminal can determine the lighting conditions of the real scene from the face image and render the virtual scene accordingly, so that the virtual scene shows real-world light and shadow changes accurately and in real time, more closely resembling the real scene.
  • the first user can let the second user follow himself through the positioning information, and can also explain the real scene situation to the second user on the spot through voice information or video information, which optimizes the interaction experience between the second user and the first user.
  • the anti-shake hardware of the mobile phone camera and the mobile phone bracket further enhances the shooting quality, which benefits subsequent image and video processing.
  • FIG. 1 is a flowchart of an information interaction method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of an information interaction method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of an information interaction method according to an embodiment of the present invention.
  • FIG. 4 is a structural diagram of an information interaction device according to an embodiment of the present invention.
  • FIG. 5 is a structural diagram of an information interaction device according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a hardware structure of an electronic device that executes an information interaction method provided by a method embodiment of the present invention
  • FIG. 7 is a schematic structural diagram of an image acquisition module according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a first mechanical image stabilizer provided by an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of the bottom surface of a first movable plate according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a second mechanical image stabilizer provided by an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of the bottom surface of a second movable plate according to an embodiment of the present invention.
  • FIG. 12 is a structural diagram of a bracket provided by an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of one state of the bracket provided by an embodiment of the present invention.
  • FIG. 14 is a schematic view of another state of the bracket provided by an embodiment of the present invention.
  • FIG. 15 is a structural state diagram when the mounting base and the mobile phone are connected according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of an information interaction method according to an embodiment of the present invention.
  • the virtual scene in this embodiment is created based on a real-world scene; for example, if the real-world scene is a park, the virtual scene is generated by simulating that park.
  • the first user, in the real scene, interacts through the first terminal with the second user who watches the virtual scene through the second terminal; that is, the virtual scene displayed on the second terminal is completely consistent with the real scene in which the first user is located. For example, if the virtual scene seen by the second user on the second terminal is at park A, then the real scene where the corresponding first user is located is also park A.
  • the first user carries the first terminal in the real scene and can transmit interaction information through it to the second terminal displaying the virtual scene; the second terminal can adjust the current virtual scene picture according to the interaction information, so that the virtual scene displayed by the second terminal stays consistent with the real scene described by the first user.
  • an information interaction method provided by an embodiment of the present invention includes:
  • Step S101 Receive interaction information corresponding to a first user sent by a first terminal, where the interaction information includes a face image and a first coordinate position of the first user.
  • the first user and the second user perform audio and video communication through the first terminal and the second terminal.
  • the second user watches the virtual scene through the second terminal, while the first user is in the real-world scene corresponding to the virtual scene.
  • the first terminal monitors the position change of the first user in real time. When the position change exceeds a preset distance, it indicates that the first user's surroundings in the real-world scene have changed substantially, for example, moving from scene A in the park to scene B, or from the first floor to the second floor of a building. The first terminal then obtains the first coordinate position after the first user moves and the face image of the first user at that position, and sends the coordinate position and the face image to the second terminal as the interaction information corresponding to the first user.
  • the real-scene coordinate system can be established with the horizontal plane as the X axis, the direction perpendicular to the horizontal plane as the Y axis, and a certain landmark of the real scene as the coordinate origin, which is not limited in the present invention. It should be noted that, because real-world scenes differ, the distance the first user must move to cause a large scene change also differs, so the preset distance can be set according to the real-world scene.
  • the face image may be a snapshot of the user's face captured by a front camera; a face detection algorithm well known to those skilled in the art, for example any CNN-based face detection algorithm, may be used to obtain the face image.
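  • As an illustration, a minimal sketch of this monitoring logic follows; PRESET_DISTANCE, capture_face, and send_to_second_terminal are hypothetical names, and the patent does not prescribe this implementation:

```python
import math

PRESET_DISTANCE = 5.0  # meters; assumed value, to be chosen per real-world scene

def monitor_position(last_sent, current, capture_face, send_to_second_terminal):
    """Sketch of step S101 on the first terminal: when the first user moves
    more than the preset distance, capture a face image and send it together
    with the new coordinate position as the interaction information."""
    if math.hypot(current[0] - last_sent[0], current[1] - last_sent[1]) > PRESET_DISTANCE:
        send_to_second_terminal({
            "face_image": capture_face(),   # snapshot from the front camera
            "first_coordinate": current,    # first coordinate position after moving
        })
        return current                      # new reference position
    return last_sent
```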
  • Step S102 Adjust the picture of the virtual scene displayed by the second terminal according to the first coordinate position.
  • After receiving the first coordinate position, the second terminal needs to adjust the picture of the virtual scene it displays according to the corresponding second coordinate position, so that the second user's field of vision follows the first user's footsteps in the real scene.
  • this step may be performed through the following sub-steps.
  • Step S1021 Determine a second coordinate position corresponding to the first coordinate position in the virtual scene based on a pre-established coordinate correspondence table between the real scene and the virtual scene.
  • the virtual-scene coordinate system may be established by the same method as the real-scene coordinate system, which is not repeated here.
  • a second coordinate position corresponding to the first coordinate position in the virtual scene can be determined, that is, a coordinate position to which the virtual scene should be adjusted is obtained.
  • Step S1022 Acquire size information of the second terminal, and determine a target picture of the virtual scene according to the size information and the second coordinate position.
  • the target picture to which the virtual scene should be adjusted can be determined according to the size of the second terminal and the second coordinate position.
  • the second coordinate position can be placed at the center of the second terminal's screen, with the boundary of the target picture then determined from the screen size information; alternatively, according to the user's viewing habits, the second coordinate position can be placed at a preset position on the screen, and the boundary of the target picture determined from the screen size information.
  • Step S1023 displaying the target picture.
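  • A minimal sketch of sub-steps S1021-S1022 follows, assuming the correspondence table is a simple dictionary keyed by real-scene coordinates and that the target picture is an axis-aligned viewport (all names are illustrative):

```python
def target_picture(first_pos, coord_table, screen_w, screen_h, anchor=(0.5, 0.5)):
    """Map the first coordinate position into the virtual scene (S1021) and
    compute the boundary of the target picture from the screen size (S1022).

    anchor is the preset fraction of the screen at which the second
    coordinate position is placed; (0.5, 0.5) puts it at the screen center."""
    x, y = coord_table[first_pos]            # S1021: table lookup -> second coordinate
    left = x - anchor[0] * screen_w          # S1022: viewport boundary from screen size
    top = y - anchor[1] * screen_h
    return (left, top, left + screen_w, top + screen_h)

# usage sketch: a tiny correspondence table and a 1080 x 1920 portrait screen
table = {(10.0, 2.0): (1000.0, 200.0)}
print(target_picture((10.0, 2.0), table, 1080, 1920))
```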
  • as the first user moves, the real-world lighting conditions change accordingly; therefore, after the picture on the second terminal is adjusted, the picture of the virtual scene also needs to be rendered according to the real-world lighting conditions.
  • Step S103 Analyze and process the face image to obtain a light angle of a real scene in which the first user is located.
  • since the first user generally faces the screen of the first terminal when interacting with the second user, it is sufficient to mirror the light angle of the ambient light detected from the face image into the virtual scene.
  • this step may be performed by the following sub-steps.
  • Step S1031 extracting a sub-image of a nasal region in the face image.
  • the nose in the facial features is extracted to obtain a sub-image of the nose area in the face image.
  • In step S1032, the light-intensity weighted center of the sub-image is determined based on the light, and the light-intensity weighted center is compared with the weighted center of the face image to obtain the light angle of the real scene where the first user is located.
  • Specifically, the light-intensity weighted center is determined from the image moments of the sub-image.
  • An image moment is a set of moments computed from a digital image. It usually describes global features of the image and provides information about several types of geometric characteristics, such as size, position, orientation, and shape. For example, the first-order moments relate to position, the second-order moments describe how the intensity spreads around the mean, and the third-order moments measure the asymmetry about the mean. From the second- and third-order moments, a group of seven invariant moments can be derived. Moments and invariant moments are statistical characteristics of an image and can be used to classify it; this is common knowledge in the art, and the present invention does not repeat it here.
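  • For reference, the raw image moments behind such an intensity-weighted center are the textbook definitions (not specific to this patent):

$$M_{pq} = \sum_{x}\sum_{y} x^{p}\, y^{q}\, I(x,y), \qquad (\bar{x}, \bar{y}) = \left(\frac{M_{10}}{M_{00}},\ \frac{M_{01}}{M_{00}}\right)$$

where $I(x,y)$ is the pixel intensity; computed over the sub-image this gives its light-intensity weighted center, while setting $I(x,y) \equiv 1$ yields the plain geometric center used for the face image.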
  • the weighted center here is the geometric center of the face image; the vector from the geometric center of the face image to the light-intensity weighted center indicates the direction of the ambient light in the real scene. By selecting a coordinate origin to establish the coordinate system, the angle between this vector and the X axis can be obtained as the light angle of the ambient light of the current scene.
  • the angle of the light can also be calculated by other non-proprietary algorithms, which is not limited in the present invention. It should be noted that, in the embodiment of the present invention, the ambient light will be considered to be unidirectional and uniform.
  • the light angle of a real scene can be calculated as follows:
  • First, the sub-image can be divided into four equal parts to obtain four sub-regions, and the sub-light-intensity weighted center and the sub-ray angle of each sub-region are determined according to the method above.
  • Second, for each sub-region, the sub-light intensity is obtained from the light-dark contrast information within it; after the sub-light intensity of each sub-region is obtained, it is used as the weight of that sub-region's sub-ray angle. Finally, the sub-ray angles of the four sub-regions are combined in a weighted average according to their respective weights to obtain the light angle.
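  • The following Python sketch puts steps S1031-S1032 together under these assumptions: the face image is a grayscale numpy array, the nose region comes from an external face detector, and light-dark contrast is reduced to a simple max-minus-min per sub-region (the patent does not fix these details):

```python
import numpy as np

def intensity_weighted_center(img):
    """Light-intensity weighted centroid, i.e. (M10/M00, M01/M00)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    return np.array([(xs * img).sum() / m00, (ys * img).sum() / m00])

def light_angle(face_img, nose_box):
    """Estimate the ambient light angle from the nose-area sub-image."""
    x0, y0, x1, y1 = nose_box                         # nose region from a face detector
    sub = face_img[y0:y1, x0:x1].astype(np.float64)
    face_center = np.array([face_img.shape[1] / 2.0,  # geometric center of the face image
                            face_img.shape[0] / 2.0])
    h, w = sub.shape
    angles, weights = [], []
    for ry in (0, 1):                                 # four equal sub-regions (2 x 2 grid)
        for rx in (0, 1):
            region = sub[ry*h//2:(ry+1)*h//2, rx*w//2:(rx+1)*w//2]
            c = intensity_weighted_center(region)
            c += np.array([x0 + rx*w//2, y0 + ry*h//2])   # back to face-image coordinates
            v = c - face_center                            # sub-weighted center vs. face center
            angles.append(np.arctan2(v[1], v[0]))          # sub-ray angle against the X axis
            weights.append(float(region.max() - region.min()))  # contrast as sub-light intensity
    # weighted mean of the four sub-ray angles (naive averaging; ignores angle
    # wrap-around, and assumes the sub-image is not perfectly flat)
    return float(np.average(angles, weights=weights))
```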
  • Step S104 perform lighting rendering on the picture of the virtual scene according to the light angle.
  • the shadow position of each object can be determined according to the light angle of the real scene and the position of each object in the virtual scene.
  • the shape of the shadow at the shadow position is determined based on the shape of each object.
  • a shadow image of each object is generated.
  • each object in the virtual scene includes, but is not limited to, people, animals and plants, scenery, buildings, and the like.
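  • As one possible realization of this rendering step (a flat-ground simplification, not the patent's exact shadow model), each object's shadow can be offset opposite the light direction and scaled with the object's height:

```python
import math

def shadow_offset(light_angle_rad, object_height, scale=1.0):
    """Offset of a shadow sprite relative to its object: cast away from the
    light source, longer for taller objects (flat-ground assumption)."""
    return (-math.cos(light_angle_rad) * object_height * scale,
            -math.sin(light_angle_rad) * object_height * scale)
```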
  • In this way, the virtual scene displayed on the second terminal can be synchronized in real time with the real scene where the first user is located, which adds to the authenticity of the virtual scene.
  • the foregoing interaction information further includes video information, voice information, and/or text information.
  • the foregoing method further includes: determining a target object corresponding to the interaction information in the real scene; and displaying the interaction information at a position matching the target object.
  • the first user and the second user may talk about objects in the real scene. For example, if the real scene is a park and the second terminal parses that the received interaction information (audio information, video information, or text information) relates to an object in the real scene, that object can be taken as the target object, and the text of the interaction information is displayed next to it.
  • When the interaction information is audio or video, it may be converted into text using speech recognition technology, and the text is then searched for keywords matching objects in the real scene; the object corresponding to a matched keyword in the real scene is the target object. When the interaction information is text, it can be determined directly whether the text contains a keyword matching an object in the real scene, and if so, the object corresponding to that keyword in the real scene is taken as the target object.
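  • A minimal sketch of this target-object matching follows; speech_to_text stands in for any speech recognition service, and the interaction-information fields are illustrative, not defined by the patent:

```python
def find_target_object(interaction, scene_objects, speech_to_text):
    """Return the name of the first real-scene object mentioned in the
    interaction information, or None if no keyword matches."""
    if interaction["type"] in ("audio", "video"):
        text = speech_to_text(interaction["payload"])   # speech recognition -> text
    else:                                               # text information: use directly
        text = interaction["payload"]
    for name in scene_objects:                          # keyword match against scene objects
        if name in text:
            return name
    return None

# usage sketch
info = {"type": "text", "payload": "look at the fountain by the gate"}
print(find_target_object(info, ["bench", "fountain", "gate"], speech_to_text=None))
```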
  • the second user wants to know the environment of the real scene through remote roaming.
  • the first user's roaming location and face image in the real scene are transmitted to the second terminal in real time.
  • the first user and the second user can perform audio and video communication while walking.
  • the second terminal can determine the lighting conditions of the real scene from the face image and render the virtual scene accordingly, so that the virtual scene shows the real world's light and shadow changes accurately and in real time, more closely resembling the real scene.
  • the first user can let the second user follow himself through the positioning information, and can also explain the real scene situation to the second user on the spot through voice information or video information, which optimizes the interaction experience between the second user and the first user.
  • FIG. 4 is a structural diagram of an information interaction device according to an embodiment of the present invention. As shown in FIG. 4, the device specifically includes: a receiving module 100, an adjustment module 200, a processing module 300, and a rendering module 400, wherein:
  • the receiving module 100 is configured to receive interaction information corresponding to a first user sent by a first terminal, where the interaction information includes a face image and a first coordinate position of the first user; the adjustment module 200 is configured to adjust the picture of the virtual scene displayed by the second terminal according to the first coordinate position; the processing module 300 is configured to analyze the face image to obtain the light angle of the real scene where the first user is located; and the rendering module 400 is configured to perform lighting rendering on the picture of the virtual scene according to the light angle.
  • the information interaction device provided by the embodiment of the present invention is specifically configured to execute the method provided by the embodiment shown in FIG. 1, and the implementation principles, methods, and functional uses thereof are similar to the embodiment shown in FIG. 1, and details are not described herein again.
  • FIG. 5 is a structural diagram of an information interaction device according to an embodiment of the present invention. As shown in FIG. 5, the device specifically includes a receiving module 100, an adjustment module 200, a processing module 300, and a rendering module 400, wherein:
  • the receiving module 100 is configured to receive interaction information corresponding to a first user sent by a first terminal, where the interaction information includes a face image and a first coordinate position of the first user; the adjustment module 200 is configured to adjust the picture of the virtual scene displayed by the second terminal according to the first coordinate position; the processing module 300 is configured to analyze the face image to obtain the light angle of the real scene where the first user is located; and the rendering module 400 is configured to perform lighting rendering on the picture of the virtual scene according to the light angle.
  • the adjustment module 200 further includes: a determining unit 210, an obtaining unit 220, and a display unit 230, where:
  • the determining unit 210 is configured to determine a second coordinate position corresponding to the first coordinate position in the virtual scene based on a pre-established coordinate correspondence table between the real scene and the virtual scene; the obtaining unit 220 is configured to obtain size information of the second terminal and determine a target picture of the virtual scene according to the size information and the second coordinate position; and the display unit 230 is configured to display the target picture.
  • processing module 300 further includes: an extraction unit 310 and a comparison unit 320, where:
  • the extraction unit 310 is configured to extract a sub-image of the nose area in the face image; the comparison unit 320 is configured to determine the light-intensity weighted center of the sub-image based on the light and compare it with the weighted center of the face image to obtain the light angle of the real scene in which the first user is located.
  • the comparison unit 320 is configured to: divide the sub-image into a plurality of sub-regions and determine a sub-light-intensity weighted center of each sub-region; compare each sub-light-intensity weighted center with the weighted center of the face image to obtain the sub-ray angle of each sub-region; calculate the sub-light intensity of each sub-region; determine the weight of each sub-region's sub-ray angle according to its sub-light intensity; and calculate the light angle from the sub-ray angles and their weights.
  • the interaction information further includes video information, voice information, and/or text information.
  • the device further includes a matching module 500 configured to determine a target object corresponding to the interaction information in the real scene and to display the interaction information at a position that matches the target object.
  • the information interaction device provided by the embodiment of the present invention is specifically configured to execute the methods provided by the embodiments shown in FIGS. 1-3; its implementation principles, methods, and functional uses are similar to those embodiments and are not repeated here.
  • FIG. 6 is a schematic diagram of a hardware structure of an electronic device that executes an information interaction method provided by a method embodiment of the present invention.
  • the electronic device includes:
  • One or more processors 610 and a memory 620 are taken as an example in FIG. 6.
  • the device for performing the information interaction method may further include: an input device 630 and an output device 640.
  • the processor 610, the memory 620, the input device 630, and the output device 640 may be connected through a bus or other methods. In FIG. 6, the connection through the bus is taken as an example.
  • the memory 620 is a non-volatile computer-readable storage medium and may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as those corresponding to the information interaction method in the embodiments of the present invention.
  • the processor 610 executes various functional applications and data processing of the server by running non-volatile software programs, instructions, and modules stored in the memory 620, that is, implementing the information interaction method.
  • the memory 620 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created by the use of the information interaction device according to the embodiment of the present invention, and the like.
  • the memory 620 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the memory 620 may optionally include memory remotely located with respect to the processor 610, and such remote memory may be connected to the information interaction device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the input device 630 may receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the information interaction device.
  • the input device 630 may include a device such as a pressing module.
  • the one or more modules are stored in the memory 620, and when executed by the one or more processors 610, execute the information interaction method.
  • the electronic devices in the embodiments of the present invention exist in various forms, including but not limited to:
  • Mobile communication equipment: this type of equipment is characterized by mobile communication functions, with the main goal of providing voice and data communication.
  • Such terminals include: smart phones (such as iPhone), multimedia phones, feature phones, and low-end phones.
  • Ultra-mobile personal computer equipment: this type of equipment belongs to the category of personal computers, has computing and processing functions, and generally also has mobile Internet access.
  • Such terminals include: PDA, MID and UMPC devices, such as iPad.
  • Portable entertainment equipment: this type of equipment can display and play multimedia content.
  • Such devices include: audio and video players (such as iPod), handheld game consoles, e-books, as well as smart toys and portable car navigation devices.
  • the electronic device includes an image acquisition module.
  • the image acquisition module of this embodiment includes a lens 1000, an autofocus voice coil motor 2000, a mechanical image stabilizer 3000, and an image sensor 4000.
  • the lens 1000 is fixed on the auto-focusing voice coil motor 2000.
  • the lens 1000 is used to acquire an image, and the image sensor 4000 transmits the image acquired by the lens 1000 to the recognition module; the auto-focusing voice coil motor 2000 is mounted on the mechanical image stabilizer 3000, and the processing module drives the mechanical image stabilizer 3000, based on the feedback of lens 1000 shake detected by the gyroscope in the lens 1000, to realize shake compensation of the lens 1000.
  • In the prior art, the lens 1000 needs to be driven in at least two directions, which means that multiple coils need to be arranged; this poses certain challenges for miniaturizing the overall structure, and the coils are easily affected by external magnetic fields, which degrades the anti-shake effect. The Chinese patent published as CN106131435A therefore provides a miniature optical anti-shake camera module, which uses temperature changes to stretch and shorten a memory alloy wire that pulls the auto-focusing voice coil motor 2000 to achieve shake compensation of the lens 1000.
  • Specifically, the control chip of the micro memory alloy optical anti-shake actuator controls the driving signal to change the temperature of the memory alloy wire, thereby controlling its elongation and shortening, while the position and moving distance of the actuator are calculated from the resistance of the memory alloy wire. When the actuator moves to the specified position, the resistance of the memory alloy wire at that moment is fed back, and the movement deviation of the actuator can be corrected by comparing this resistance value with the target value.
  • However, this structure alone cannot accurately compensate the lens 1000 in the case of multiple shakes, because heating and cooling of the shape memory alloy both take a certain time. The solution can compensate for a shake in a first direction, but when a subsequent shake in a second direction occurs, the memory alloy cannot deform in time, so the compensation lags; accurate shake compensation of the lens 1000 cannot be achieved for multiple shakes or for continuous shakes in different directions, and the structure therefore needs improvement.
  • Accordingly, this embodiment improves on the optical image stabilizer and designs it as a mechanical image stabilizer 3000.
  • the specific structure is as follows:
  • the mechanical image stabilizer 3000 of this embodiment includes a movable plate 3100, a movable frame 3200, an elastic restoring mechanism 3300, a base plate 3400, and a compensation mechanism 3500.
  • the middle portions of the movable plate 3100 and the base plate 3400 are each provided with a through hole for the lens to pass through.
  • the auto-focusing voice coil motor is installed on the movable plate 3100, and the movable plate 3100 is installed in the movable frame 3200.
  • the width of the movable plate 3100 in the left-right direction is substantially the same as the internal width of the movable frame 3200, so that the opposite (left and right) sides of the movable plate 3100 are in sliding fit with the inner walls of the opposite (left and right) sides of the movable frame 3200; the movable plate 3100 can thus slide back and forth within the movable frame 3200 along the first direction, which in this embodiment is the vertical direction in the figure.
  • the movable frame 3200 in this embodiment is smaller than the base plate 3400, and two opposite sides of the movable frame 3200 are connected to the base plate 3400 through two elastic restoring mechanisms 3300, respectively.
  • the elastic restoring mechanism 3300 is a telescopic spring or another elastic member; note that the elastic restoring mechanism 3300 of this embodiment must be designed so that it can only expand and contract in the left-right direction in the figure (that is, the second direction described below) and cannot move along the first direction. The purpose of the elastic restoring mechanism 3300 is to help reset the movable frame 3200, and with it the movable plate 3100, after a compensation displacement; the specific action process is described in detail below.
  • the compensation mechanism 3500 of this embodiment, driven by the processing module (for example, by an action instruction sent by the processing module), drives the movable plate 3100 and the lens on it to implement lens shake compensation.
  • the compensation mechanism 3500 in this embodiment includes a driving shaft 3510, a gear 3520, a gear track 3530, and a limit track 3540.
  • the driving shaft 3510 is mounted on the surface of the base plate 3400 and is drivingly connected to the gear 3520; the driving shaft 3510 can be driven by a structure such as a micro motor (not shown), which is controlled by the processing module described above. The gear track 3530 is provided on the movable plate 3100; the gear 3520 is installed in the gear track 3530 and moves along the preset direction of the gear track 3530, and when the gear 3520 rotates, the gear track 3530 enables the movable plate 3100 to generate a displacement in the first direction and a displacement in the second direction, the first direction being perpendicular to the second direction. The limit track 3540 is provided on the movable plate 3100 or the base plate 3400 and is used to prevent the gear 3520 from leaving the gear track 3530.
  • The gear track 3530 and the limit track 3540 of this embodiment have the following two structural forms:
  • In the first form, a waist-shaped hole 3550 is provided on the lower side of the movable plate 3100, and a plurality of teeth 3560 meshing with the gear 3520 are provided along the circumferential direction of the waist-shaped hole 3550 (that is, around the hole); the waist-shaped hole 3550 and the plurality of teeth 3560 together form the gear track 3530. The gear 3520 is located in the waist-shaped hole 3550 and meshes with the teeth 3560, so that when the gear 3520 rotates it moves along the gear track 3530 and directly drives the movable plate 3100.
  • In this form, the limit track 3540 is provided on the base plate 3400, and the bottom of the movable plate 3100 is provided with a limiting member 3570 installed in the limit track 3540; the limit track 3540 constrains the movement trajectory of the limiting member 3570 to a waist shape, which is the same as the movement trajectory of the movable plate 3100. Specifically, the limiting member 3570 of this embodiment is a protrusion provided on the bottom surface of the movable plate 3100.
  • In the second form, the gear track 3530 may be composed of a plurality of cylindrical protrusions 3580 provided on the movable plate 3100 and evenly spaced along the second direction, with the gear 3520 meshing with the plurality of protrusions. The limit track 3540 consists of a first arc-shaped limiting member 3590 and a second arc-shaped limiting member 3600 provided on the movable plate 3100, disposed respectively on the two opposite sides of the gear track 3530 in the first direction. Because the gear 3520 sits to one side of the gear track 3530, it could easily separate from the track formed by the cylindrical protrusions 3580; the first arc-shaped limiting member 3590 and the second arc-shaped limiting member 3600 therefore play a guiding role, so that the movable plate 3100 moves along the preset direction of the gear track 3530. In other words, the first arc-shaped limiting member 3590, the second arc-shaped limiting member 3600, and the plurality of protrusions cooperate to make the movement trajectory of the movable plate 3100 waist-shaped.
  • the following describes the working process of the mechanical image stabilizer 3000 of this embodiment in detail with reference to the above structure.
  • Suppose the lens 1000 shakes twice in succession in different directions, so that the movable plate 3100 needs motion compensation once in the first direction and then once in the second direction.
  • First, the gyroscope feeds back the detected shake direction and distance of the lens 1000 to the processing module, which calculates the required moving distance of the movable plate 3100 and sends a driving signal so that the driving shaft 3510 drives the gear 3520 to rotate; the gear 3520 cooperates with the gear track 3530 and the limit track 3540, thereby driving the movable plate 3100 to move to the compensation position in the first direction.
  • After the compensation, the movable plate 3100 is reset, driven by the driving shaft 3510; the elastic restoring mechanism 3300 also provides a restoring force, making it easier for the movable plate 3100 to return to the initial position. When the subsequent shake in the second direction occurs, it is handled in the same way as the compensation steps in the first direction described above.
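  • Schematically, this compensation sequence can be summarized as below; the gear-and-track kinematics are reduced to an abstract drive_gear callback, and steps_per_mm is an assumed calibration constant, neither of which comes from the patent:

```python
def compensate_shake(shake_mm, drive_gear, steps_per_mm=50):
    """Compensate a gyroscope-reported shake vector (dx, dy) in millimetres,
    one perpendicular direction at a time, by moving opposite to the shake."""
    dx, dy = shake_mm
    drive_gear(axis="first", steps=int(-dy * steps_per_mm))   # first-direction compensation
    drive_gear(axis="second", steps=int(-dx * steps_per_mm))  # then the second direction

# usage sketch with a stub motor driver
compensate_shake((0.2, -0.1), lambda axis, steps: print(axis, steps))
```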
  • the mechanical compensator provided in this embodiment is not only immune to interference from external magnetic fields and achieves a good anti-shake effect, but can also accurately compensate the lens 1000 in the event of multiple shakes, with timely and accurate compensation.
  • In addition, the mechanical anti-shake device of this embodiment is simple in structure, and each component requires little installation space, which facilitates integration of the entire anti-shake device and yields higher compensation accuracy.
  • the electronic device in this embodiment includes a mobile phone and a bracket for mounting the mobile phone.
  • the bracket is included because the image acquisition environment is unpredictable, and the electronic device needs to be supported and fixed.
  • the bracket 5000 in this embodiment includes a mobile phone mounting base 5100 and a retractable supporting rod 5200.
  • the supporting rod 5200 is connected to the middle portion of the mobile phone mounting base 5100 through a damping hinge.
  • The applicant found that the mobile phone mounting base 5100 combined with the support rod 5200 occupies a large space; even though the support rod 5200 is retractable, the mobile phone mounting base 5100 cannot change structurally and its volume cannot be reduced further, so it cannot be put in a pocket or small bag, making the bracket 5000 inconvenient to carry. Therefore, this embodiment makes a second improvement to the bracket 5000 so that its overall storability is further increased.
  • the mobile phone mounting base 5100 of this embodiment includes a retractable connecting plate 5110 and folding plate groups 5120 installed at the two opposite ends of the connecting plate 5110; one end of the support rod 5200 is connected to the middle portion of the connecting plate 5110 by a damping hinge.
  • the folding plate group 5120 includes a first plate body 5121, a second plate body 5122, and a third plate body 5123; one of the two opposite ends of the first plate body 5121 is hinged to the connecting plate 5110, and the other end is hinged to one end of the second plate body 5122; the other end of the second plate body 5122 is hinged to one end of the third plate body 5123; and the second plate body 5122 is provided with an opening 5130 into which a corner of the mobile phone is inserted.
  • When the mobile phone mounting base is used to install the mobile phone, the first plate body 5121, the second plate body 5122, and the third plate body 5123 are folded into a right-triangle state: the second plate body 5122 forms the hypotenuse of the right triangle, and the first plate body 5121 and the third plate body 5123 form the right-angle sides; one side surface of the third plate body 5123 is attached against one side of the connecting plate 5110, and the other end of the third plate body 5123 abuts one end of the first plate body 5121.
  • This structure puts the three folding plates in a self-locking state, and when the two lower corners of the mobile phone are inserted into the two openings 5130 on both sides, the lower part of the mobile phone 6000 is held within the two right triangles; fixing of the mobile phone 6000 is accomplished by the joint work of the mobile phone, the connecting plate 5110, and the folding plate groups 5120. The triangle state cannot be opened by external force and is released only after the mobile phone is pulled out of the openings 5130.
  • When the mobile phone mounting base 5100 is not in use, the connecting plate 5110 is retracted to its minimum length and the folding plate groups 5120 are folded flat against the connecting plate 5110, so the mobile phone mounting base 5100 can be folded to a minimum volume. Combined with the retractability of the support rod 5200, the entire bracket 5000 can be stowed in its smallest state, which improves its storability; the user can even put the bracket 5000 directly into a pocket or small handbag, which is very convenient.
  • Further, a first connection portion is provided on one side of the third plate body 5123, and a first mating portion that mates with the first connection portion is provided on the side surface where the connecting plate 5110 contacts the third plate body 5123. In this embodiment, the first connection portion is a rib or protrusion (not shown in the figure), and the first mating portion is a slot (not shown in the figure) opened on the connecting plate 5110.
  • This structure not only improves stability when the folding plate group 5120 is in the triangle state, but also makes it easy to join the folding plate group 5120 and the connecting plate 5110 when the mobile phone mounting base 5100 needs to be folded to its minimum state.
  • Similarly, a second connection portion is provided at one end of the opposite ends of the first plate body 5121, and a second mating portion that cooperates with the second connection portion is provided at the other end of the opposite ends of the third plate body 5123; the second connection portion may be a protrusion (not shown in the figure), and the second mating portion an opening or slot (not shown in the figure) that receives the protrusion.
  • a base (not shown in the figure) can be detachably connected to the other end of the support rod 5200.
  • In use, the support rod 5200 can be extended to a certain length, the bracket 5000 placed on a flat surface via the base, and the mobile phone then installed in the mobile phone mounting base 5100 to complete the fixing; the detachable connection between the support rod 5200 and the base also allows the two to be carried separately, further improving the storability and portability of the bracket 5000.
  • The device embodiments described above are only schematic; the modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules, i.e., they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative labor.
  • An embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by an electronic device, cause the electronic device to perform the information interaction method in any of the foregoing method embodiments.
  • An embodiment of the present invention provides a computer program product, wherein the computer program product includes a computer program stored on a non-transitory computer-readable storage medium; the computer program includes program instructions which, when executed by an electronic device, cause the electronic device to execute the information interaction method in any of the foregoing method embodiments.
  • Those skilled in the art will understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware.
  • In essence, or in the part that contributes over the prior art, the above technical solution may be embodied as a software product stored in a computer-readable storage medium. A computer-readable medium includes any mechanism that stores or transmits information in a form readable by a machine (e.g., a computer), such as read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, or electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals). The computer software product includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods of the various embodiments or certain parts thereof.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to an information interaction method and apparatus, and an electronic device. The method comprises: receiving interaction information corresponding to a first user and sent by a first terminal, the interaction information comprising a facial image and a first coordinate position of the first user (S101); adjusting, according to the first coordinate position, the image of a virtual scene displayed by a second terminal (S102); analyzing the facial image to obtain the light angle of the real scene in which the first user is located (S103); and performing lighting rendering on the image of the virtual scene according to the light angle (S104). By means of the method, the apparatus and the device, a virtual scene can present real-world scene changes and light-and-shadow changes accurately and in real time; moreover, a second user viewing the virtual-world scene can communicate with the first user in the real scene, thereby improving the interaction experience between the first user and the second user.
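To make the four steps concrete, the following minimal Python sketch walks through the pipeline under assumptions of our own: the light angle is estimated from the brightness-weighted centroid of the facial image, which is only one plausible heuristic (the publication does not commit to a specific estimation algorithm), and every name below (InteractionInfo, estimate_light_angle, and so on) is illustrative rather than the patented implementation.

    # Hypothetical sketch of steps S101-S104; the names and the light-angle
    # heuristic are illustrative assumptions, not the patent's own API.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class InteractionInfo:         # S101: interaction info sent by the first terminal
        face_image: np.ndarray     # grayscale facial image of the first user
        first_coord: tuple         # first coordinate position (x, y, z)

    def adjust_virtual_scene(scene: dict, coord: tuple) -> dict:
        # S102: move the virtual camera so the second terminal's view
        # follows the first user's real-world position.
        scene["camera_position"] = coord
        return scene

    def estimate_light_angle(face: np.ndarray) -> float:
        # S103: take the brightness-weighted centroid of the face image; its
        # offset from the image centre gives a rough azimuth (in degrees)
        # for the dominant light source in the first user's real scene.
        h, w = face.shape
        ys, xs = np.mgrid[0:h, 0:w]
        weights = face.astype(np.float64) + 1e-9   # avoid division by zero
        cx = (xs * weights).sum() / weights.sum()
        cy = (ys * weights).sum() / weights.sum()
        return float(np.degrees(np.arctan2(h / 2 - cy, cx - w / 2)))

    def render_with_light(scene: dict, angle_deg: float) -> dict:
        # S104: hand the estimated angle to the renderer as a directional light.
        scene["light_direction_deg"] = angle_deg
        return scene

    # Usage: one pass of the pipeline on synthetic data.
    info = InteractionInfo(np.random.rand(64, 64), (1.0, 0.0, 2.0))
    scene = adjust_virtual_scene({}, info.first_coord)
    scene = render_with_light(scene, estimate_light_angle(info.face_image))
    print(scene)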
PCT/CN2018/106787 2018-09-20 2018-09-20 Information interaction method and apparatus, and electronic device WO2020056692A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/106787 WO2020056692A1 (fr) 2018-09-20 2018-09-20 Information interaction method and apparatus, and electronic device
CN201811129528.3A CN109521869B (zh) 2018-09-20 2018-09-27 Information interaction method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/106787 WO2020056692A1 (fr) 2018-09-20 2018-09-20 Information interaction method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2020056692A1 true WO2020056692A1 (fr) 2020-03-26

Family

ID=65769924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/106787 WO2020056692A1 (fr) 2018-09-20 2018-09-20 Information interaction method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN109521869B (fr)
WO (1) WO2020056692A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667590A (zh) * 2020-06-12 2020-09-15 上海商汤智能科技有限公司 Interactive group photo method and apparatus, electronic device and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110473293B (zh) 2019-07-30 2023-03-24 Oppo广东移动通信有限公司 Virtual object processing method and apparatus, storage medium and electronic device
CN110674422A (zh) * 2019-09-17 2020-01-10 西安时代科技有限公司 Method and system for displaying a virtual scene according to real scene information
CN117369633A (zh) * 2023-10-07 2024-01-09 上海铱奇科技有限公司 AR-based information interaction method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102281348A (zh) * 2010-06-08 2011-12-14 Lg电子株式会社 Method for guiding a route using augmented reality and mobile terminal using the same
CN105653035A (zh) * 2015-12-31 2016-06-08 上海摩软通讯技术有限公司 Virtual reality control method and system
CN106600638A (zh) * 2016-11-09 2017-04-26 深圳奥比中光科技有限公司 Method for implementing augmented reality
CN107134005A (zh) * 2017-05-04 2017-09-05 网易(杭州)网络有限公司 Illumination adaptation method, apparatus, storage medium, processor and terminal
CN107330978A (zh) * 2017-06-26 2017-11-07 山东大学 Augmented reality modeling experience system and method based on position mapping
CN107479701A (zh) * 2017-07-28 2017-12-15 深圳市瑞立视多媒体科技有限公司 Virtual reality interaction method, apparatus and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2447915A1 (fr) * 2010-10-27 2012-05-02 Sony Ericsson Mobile Communications AB Real-time shading of a three-dimensional menu/icon
CN107845132B (zh) * 2017-11-03 2021-03-02 太平洋未来科技(深圳)有限公司 Method and apparatus for rendering color effects of a virtual object
CN107944420B (zh) * 2017-12-07 2020-10-27 北京旷视科技有限公司 Illumination processing method and apparatus for face images
CN108537870B (zh) * 2018-04-16 2019-09-03 太平洋未来科技(深圳)有限公司 Image processing method and apparatus, and electronic device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667590A (zh) * 2020-06-12 2020-09-15 上海商汤智能科技有限公司 Interactive group photo method and apparatus, electronic device and storage medium
CN111667590B (zh) * 2020-06-12 2024-03-22 上海商汤智能科技有限公司 Interactive group photo method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN109521869B (zh) 2022-01-18
CN109521869A (zh) 2019-03-26

Similar Documents

Publication Publication Date Title
WO2020056692A1 (fr) Information interaction method and apparatus, and electronic device
CN109151340B (zh) Video processing method and apparatus, and electronic device
CN108596827B (zh) Three-dimensional face model generation method and apparatus, and electronic device
CN108614638B (zh) AR imaging method and apparatus
WO2020056690A1 (fr) Method and apparatus for presenting an interface associated with video content, and electronic device
KR102365721B1 (ko) Apparatus and method for generating a three-dimensional face model using a mobile device
CN108377398B (zh) Infrared-based AR imaging method and system, and electronic device
WO2020056689A1 (fr) AR imaging method and apparatus, and electronic device
CN109285216B (zh) Method and apparatus for generating a three-dimensional face image based on an occluded image, and electronic device
WO2020037676A1 (fr) Three-dimensional face image generation method and apparatus, and electronic device
CN108966017B (zh) Video generation method and apparatus, and electronic device
KR20180073327A (ko) Image display method, storage medium and electronic device
US10104292B2 (en) Multishot tilt optical image stabilization for shallow depth of field
WO2020037680A1 (fr) Light-based three-dimensional face optimization method and apparatus, and electronic device
GB2525232A (en) A device orientation correction method for panorama images
CN108573480B (zh) Image-processing-based ambient light compensation method and apparatus, and electronic device
WO2019200718A1 (fr) Image processing method and apparatus, and electronic device
WO2020056691A1 (fr) Interactive object generation method and apparatus, and electronic device
WO2020056693A1 (fr) Image synthesis method and apparatus, and electronic device
US20170195543A1 (en) Remote control between mobile communication devices for capturing images
KR20190061165A (ko) System and method for generating 360° video including advertisements
WO2021026782A1 (fr) Control method and control apparatus for a handheld gimbal, handheld gimbal, and storage medium
CN112738404A (zh) Control method of an electronic device, and electronic device
KR101741149B1 (ko) Method and apparatus for controlling the viewpoint of a virtual camera
KR20150097267A (ko) Image capturing system and method transcending spatial scale

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18933782

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/08/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18933782

Country of ref document: EP

Kind code of ref document: A1