WO2020056691A1 - Method, apparatus, and electronic device for generating an interactive object - Google Patents

Method, apparatus, and electronic device for generating an interactive object

Info

Publication number
WO2020056691A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub
light intensity
light
target user
interactive object
Prior art date
Application number
PCT/CN2018/106786
Other languages
English (en)
Chinese (zh)
Inventor
菲永奥利维尔
李建亿
Original Assignee
太平洋未来科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 太平洋未来科技(深圳)有限公司
Priority to PCT/CN2018/106786
Priority to CN201811123907.1A (published as CN109474801B)
Publication of WO2020056691A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/06 Decision making techniques; Pattern matching strategies
    • G10L17/14 Use of phonemic categorisation or speech recognition prior to speaker recognition or verification
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/22 Interactive procedures; Man-machine interfaces

Definitions

  • the present invention relates to the field of Internet application technology, and in particular, to a method, an apparatus, and an electronic device for generating an interactive object.
  • the method, apparatus, and electronic device for generating an interactive object according to the embodiments of the present invention are intended to solve at least the foregoing problems in the related art.
  • An embodiment of the present invention provides a method for generating an interactive object, including:
  • the method further includes: establishing an interaction object information database in advance, where the interaction object information database stores a plurality of interaction objects, a plurality of keywords, and a correspondence relationship between the interaction objects and the keywords.
  • the analyzing and processing of the face image to obtain light information of the scene in which the target user is located includes: extracting a sub-image of the nose area in the face image; determining a light-intensity weighted center of the sub-image based on the light, and comparing the light-intensity weighted center with the weight center of the face image to obtain the light angle of the scene in which the target user is located; and obtaining the light intensity of the sub-image, and obtaining the average light intensity of the scene in which the target user is located according to the light intensity of the sub-image.
  • the determining of the light-intensity weighted center of the sub-image based on the light, and the comparing of the light-intensity weighted center with the weight center of the face image to obtain the light angle of the scene in which the target user is located, include: dividing the sub-image into several sub-regions and determining a sub-light-intensity weighted center of each sub-region; comparing each sub-light-intensity weighted center with the weight center of the face image to obtain the sub-ray angle of each sub-region; calculating the sub-light intensity of each sub-region, and determining the weight of each sub-region's sub-ray angle according to its sub-light intensity; and calculating the light angle according to the sub-ray angles and their weights.
  • the displaying of the interactive object rendered according to the light information on the video interface includes: finding a target position corresponding to the interactive object on the video interface; determining a shadow position of the interactive object according to the target position and the light angle; adjusting the contrast of the interactive object according to the light intensity; and generating a shadow of the interactive object at the shadow position.
  • Another aspect of the embodiments of the present invention provides an apparatus for generating an interaction object, including:
  • a detection module is configured to acquire audio information in real time and detect whether a preset keyword exists in the audio information; an acquisition module is configured to acquire, if the preset keyword exists, an interaction object corresponding to the preset keyword and determine a target user corresponding to the audio information; a processing module is configured to capture a face image of the target user in a video interface displaying the target user, and analyze and process the face image to obtain light information of the scene in which the target user is located; and a display module is configured to display the interactive object rendered according to the light information on the video interface.
  • the apparatus further includes an establishing module configured to pre-establish an interactive object information database, where the interactive object information database stores multiple interactive objects, multiple keywords, and the correspondence between the interactive objects and the keywords.
  • the processing module includes: an extraction unit for extracting a sub-image of the nose region in the face image; a comparison unit for determining a light-intensity weighted center of the sub-image based on light and comparing the light-intensity weighted center with the weight center of the face image to obtain the light angle of the scene in which the target user is located; and an obtaining unit configured to obtain the light intensity of the sub-image and obtain, according to it, the average light intensity of the scene in which the target user is located.
  • the comparison unit is configured to divide the sub-image into a plurality of sub-regions and determine a sub-light-intensity weighted center of each sub-region; compare each sub-light-intensity weighted center with the weight center of the face image to obtain the sub-ray angle of each sub-region; calculate the sub-light intensity of each sub-region and determine the weight of each sub-region's sub-ray angle according to its sub-light intensity; and calculate the light angle according to the sub-ray angles and their weights.
  • the display module is configured to find a target position corresponding to the interactive object on the video interface; determine a shadow position of the interactive object according to the target position and the light angle; adjust the contrast of the interactive object according to the light intensity; and generate a shadow of the interactive object at the shadow position.
  • Another aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute any one of the foregoing interactive object generation methods.
  • the electronic device further includes an image acquisition module including a lens, an auto-focusing voice coil motor, a mechanical image stabilizer, and an image sensor, and the lens is fixed on the auto-focusing voice coil motor.
  • the lens is used to acquire an image
  • the image sensor transmits the image acquired by the lens to the recognition module
  • the autofocus voice coil motor is mounted on the mechanical image stabilizer
  • the processing module drives the mechanical image stabilizer, based on feedback on the lens shake detected by a gyroscope inside the lens, to achieve lens shake compensation.
  • the mechanical anti-shake device includes a movable plate, a movable frame, an elastic restoring mechanism, a base plate, and a compensation mechanism; a central portion of the movable plate is provided with a through hole through which the lens passes; the auto-focusing voice coil motor is installed on the movable plate, and the movable plate is installed in the movable frame; the opposite sides of the movable plate slide against the inner walls of the opposite sides of the movable frame, so that the movable plate can slide back and forth along a first direction; the size of the movable frame is smaller than that of the base plate, and two opposite sides of the movable frame are each connected to the base plate through an elastic restoring mechanism.
  • the compensation mechanism includes a drive shaft, a gear, a gear track, and a limit track; the drive shaft is mounted on the base plate and is drivingly connected to the gear;
  • the gear track is provided on the movable plate, and the gear is mounted in the gear track; when the gear rotates, the gear track enables the movable plate to generate a displacement in a first direction and a displacement in a second direction, wherein the first direction is perpendicular to the second direction;
  • the limit track is disposed on the movable plate or the base plate, and the limit track is used to prevent the gear from detaching from the gear track.
  • a side of the movable plate is provided with a waist-shaped hole, and a plurality of teeth that mesh with the gear are provided along its circumferential direction; the waist-shaped hole and the plurality of teeth together constitute the gear track, and the gear is located in the waist-shaped hole and meshes with the teeth; the limit track is disposed on the base plate, and the bottom of the movable plate is provided with a limiting member located within the limit track; the limit track constrains the movement trajectory of the limiting member to a waist shape.
  • the limiting member is a protrusion provided on the bottom surface of the movable plate.
  • the gear track includes a plurality of cylindrical protrusions provided on the movable plate, the plurality of cylindrical protrusions are evenly spaced along the second direction, and the gear meshes with the plurality of protrusions.
  • the limit track consists of a first arc-shaped stopper and a second arc-shaped stopper provided on the movable plate, respectively arranged on two opposite sides of the gear track in the first direction; the first arc-shaped stopper, the second arc-shaped stopper, and the plurality of protrusions cooperate to make the movement trajectory of the movable plate waist-shaped.
  • the elastic recovery mechanism includes a telescopic spring.
  • the image acquisition module includes a mobile phone and a bracket for mounting the mobile phone.
  • the bracket includes a mobile phone mounting base and a retractable support rod;
  • the mobile phone mounting base includes a retractable connecting plate and folding plate groups installed at opposite ends of the connecting plate, and one end of the support rod is connected to the middle portion of the connecting plate by a damping hinge;
  • the folding plate group includes a first plate body, a second plate body, and a third plate body; one of the two opposite ends of the first plate body is hinged to the connecting plate, and the other end is hinged to one of the opposite ends of the second plate body; the other end of the second plate body is hinged to one of the opposite ends of the third plate body; the second plate body is provided with an opening for a corner of the mobile phone to be inserted; when the mobile phone mounting base is used to mount the mobile phone, the first, second, and third plate bodies are folded into a right-triangle state, the second plate body being the hypotenuse and the first and third plate bodies the two legs, with one side surface of the third plate body attached to one side of the connecting plate and the other end of the third plate body abutting one end of the first plate body.
  • a first connection portion is provided on one side surface of the third plate body, and a first mating portion that mates with the first connection portion is provided on the side surface of the connecting plate that contacts the third plate body.
  • a second connection portion is provided at one end of the opposite ends of the first plate body, and a second mating portion that cooperates with the second connection portion is provided at the other end of the opposite ends of the third plate body.
  • the other end of the support rod is detachably connected to a base.
  • the method, apparatus, and electronic device for generating an interactive object can generate an interactive object from the communication content during a user's video session and display it in real time on the video image interface of the other user, while keeping the light and shadow effect of the interactive object consistent with the other user's scene, so that the interactive object blends into the other user's video image interface; this enriches the interactive experience of video chat and increases the correlation between chat content and the video scene.
  • the anti-shake hardware structure of the mobile phone camera and the mobile phone selfie bracket further enhances the shooting effect, which benefits subsequent image and video processing.
  • FIG. 1 is a flowchart of a method for generating an interaction object according to an embodiment of the present invention
  • FIG. 2 is a specific flowchart of step S103 provided by an embodiment of the present invention.
  • FIG. 3 is a specific flowchart of step S1032 provided by an embodiment of the present invention.
  • FIG. 4 is a structural diagram of an apparatus for generating an interactive object according to an embodiment of the present invention.
  • FIG. 5 is a structural diagram of an apparatus for generating an interactive object according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a hardware structure of an electronic device that executes a method for generating an interactive object according to an embodiment of the method of the present invention
  • FIG. 7 is a schematic structural diagram of an image acquisition module according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a first mechanical image stabilizer provided by an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a bottom surface of a first movable board according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a second mechanical image stabilizer provided by an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a bottom surface of a second movable board according to an embodiment of the present invention.
  • FIG. 12 is a structural diagram of a bracket provided by an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of one state of the bracket provided by an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of another state of the bracket provided by an embodiment of the present invention.
  • FIG. 15 is a structural state diagram when the mounting base and the mobile phone are connected according to an embodiment of the present invention.
  • When two or more users have a video chat, the video interface of each user is displayed on the terminal screen, and each user's video interface displays that user and the user's current scene.
  • the audio information output by each user is obtained in real time.
  • When the audio information of a user (for example, user A) contains a preset keyword, the keyword is presented, through the corresponding interactive object, on the video interfaces of the other users, which enriches the interactivity of the video chat process.
  • the interactive object will be adjusted according to the light and shadow conditions of the scene where each other user is located, so as to realize the fusion of the interactive object and the light and shadow effect of the scene where each user is located.
  • FIG. 1 is a flowchart of a method for generating an interaction object according to an embodiment of the present invention.
  • a method for generating an interaction object according to an embodiment of the present invention includes:
  • Step S101 Acquire audio information in real time, and detect whether a preset keyword exists in the audio information.
  • Optionally, the audio information of each user is obtained in real time, the audio information is recognized to obtain corresponding text information, and whether a preset keyword exists is determined from the text information. There are many existing speech recognition methods, including those based on artificial neural network models and Markov models, which are not described in detail in the present invention.
  • Optionally, an interaction object information database is established in advance, which stores multiple interaction objects, multiple keywords, and the correspondence between the interaction objects and the keywords.
  • the keywords may be nouns (such as buildings, scenery, animals, star names, video names, etc.), adjectives expressing emotions, etc.
  • the interaction objects may be emoticons, virtual characters, virtual animals, pictures, etc.
  • The keywords in the interaction object information database and the interactive objects corresponding to them can be replaced and supplemented in real time according to current hotspot information, which is not limited in the present invention.
  • If a keyword is included in the audio information output by a user, an interactive object corresponding to the keyword is displayed in the video interface images of the other users chatting with that user, which increases the interactivity of the video session.
  • After the audio information exchanged between the users is converted into text information, the text is searched against the above interaction object information database to find whether a keyword exists; if a keyword exists, step S102 is performed, as sketched below.
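  • For illustration only, the keyword check of step S101 might look like the following minimal Python sketch; the table contents, function names, and the simple substring matching are assumptions made for illustration, not part of the patent.

```python
# Minimal sketch of step S101 (assumed names and data; the patent does not
# prescribe an API). `text` would come from a speech recognizer in practice.

INTERACTION_OBJECTS = {
    # keyword -> interactive object identifier (placeholder entries)
    "cute": "emoticon_cute.png",
    "Rugao Biography": "rugao_promo.png",
}

def detect_keywords(text: str) -> list[str]:
    """Return every preset keyword that occurs in the recognized text."""
    return [kw for kw in INTERACTION_OBJECTS if kw in text]
```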
  • Step S102 Obtain an interaction object corresponding to the preset keyword, and determine a target user corresponding to the audio information.
  • the interaction objects corresponding to the keywords in step S101 are obtained.
  • For example, user A is having a video chat with user B, and user A says, "The recently released 'Rugao Biography' is very good looking," where "Rugao Biography" is a preset keyword. Through the correspondence relationship in the interaction object information database, the interactive object corresponding to "Rugao Biography" can be found; it may be a promotional picture of "Rugao Biography", a virtual image of one of its characters, and so on.
  • Since the audio information is sent by one user in the video chat to the other users, the interactive object needs to be displayed in the video interfaces displaying those other users. Therefore, the first user who output the audio information is determined first, and the users other than the first user are determined as the target users corresponding to the audio information.
  • There may be one or more target users, as illustrated in the sketch below.
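  • A minimal sketch of step S102, reusing the INTERACTION_OBJECTS table from the previous sketch; the function names are assumptions.

```python
def get_interaction_object(keyword: str) -> str:
    """Look up the interactive object bound to a detected keyword (step S102)."""
    return INTERACTION_OBJECTS[keyword]

def target_users(all_users: list[str], speaker: str) -> list[str]:
    """Every chat participant except the user who produced the audio."""
    return [u for u in all_users if u != speaker]
```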
  • Step S103: Capture a face image of the target user in the video interface displaying the target user, and analyze and process the face image to obtain light information of the scene in which the target user is located.
  • After the interactive object is determined, it needs to be displayed on the video interface of each target user; therefore, the interactive object must be rendered separately according to the lighting information of each target user's video interface.
  • Specifically, the video interface image displaying the target user is determined, a face image of the target user is intercepted from that video interface image, and the light information of the scene in which the target user is located is then determined from the face image.
  • the light information includes, but is not limited to, light angle and light intensity.
  • the process of determining light information includes the following sub-steps:
  • Step S1031: Extract a sub-image of the nose region in the face image.
  • Optionally, facial feature extraction is performed on the face image to locate the nose, thereby obtaining the sub-image of the nose region.
  • Step S1032: Determine the light-intensity weighted center of the sub-image based on the light, and compare the light-intensity weighted center with the weight center of the face image to obtain the light angle of the scene where the target user is located.
  • the corresponding light intensity weighting center is determined according to the image moment of the sub-image.
  • An image moment is a set of statistics calculated from the pixels of a digital image. Moments usually describe global features of the image and provide information about different types of geometric characteristics, such as size, position, orientation, and shape. For example, the first-order moments are related to shape, the second-order moments show the degree of spread of the distribution around its mean, and the third-order moments measure symmetry about the mean. From the second- and third-order moments, a group of seven invariant moments can be derived; moments and invariant moments are statistical characteristics of an image and can be used to classify images. These are common knowledge in the art, and the present invention does not repeat them here.
  • The weight center of the face image is its geometric center. The vector pointing from the geometric center of the face image to the light-intensity weighted center indicates the direction of the ambient light in the real scene; by selecting a coordinate origin and establishing a coordinate system, the angle between this vector and the X axis can be obtained as the light angle of the ambient light of the current scene.
  • Optionally, the light angle can also be calculated by other algorithms, which is not limited in the present invention. It should be noted that, in the embodiments of the present invention, the ambient light is assumed to be unidirectional and uniform. A worked sketch follows.
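  • As a worked illustration of step S1032 (a sketch under our own assumptions, not the patent's required implementation), the light-intensity weighted center can be computed from first-order image moments and compared with the geometric center; the array names and the grayscale-input assumption are ours.

```python
import numpy as np

def light_angle(face_gray: np.ndarray, nose_sub: np.ndarray,
                nose_offset: tuple[int, int]) -> float:
    """Estimate the ambient light angle (radians) as in step S1032.

    face_gray: grayscale face image; nose_sub: grayscale nose sub-image;
    nose_offset: (row, col) of the sub-image's top-left corner in the face image.
    """
    # Intensity-weighted centroid of the sub-image: first-order moments / zeroth moment.
    rows, cols = np.indices(nose_sub.shape)
    m00 = float(nose_sub.sum())
    cy = float((rows * nose_sub).sum()) / m00 + nose_offset[0]
    cx = float((cols * nose_sub).sum()) / m00 + nose_offset[1]

    # Weight center of the face image = its geometric center.
    gy = (face_gray.shape[0] - 1) / 2.0
    gx = (face_gray.shape[1] - 1) / 2.0

    # Angle between the center-to-centroid vector and the X axis.
    return float(np.arctan2(cy - gy, cx - gx))
```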
  • the process includes the following steps:
  • Step S1032a: Divide the sub-image into several sub-regions, and determine the sub-light-intensity weighted center of each sub-region.
  • Step S1032b: Compare each sub-light-intensity weighted center with the weight center of the face image to obtain the sub-ray angle of each sub-region.
  • Step S1032c: Calculate the sub-light intensity of each sub-region, and determine the weight of each sub-region's sub-ray angle according to its sub-light intensity.
  • Step S1032d: Calculate the light angle according to the sub-ray angles and their weights.
  • the number of sub-regions may be determined according to the size of the picture.
  • the sub-image can be divided into four equal parts to obtain four sub-regions.
  • Using the method described in step S1032, the sub-light-intensity weighted center of each sub-region and the sub-ray angle of each sub-region are determined.
  • The sub-light intensity of each sub-region is obtained from its light-dark contrast information and is used as the weight of that sub-region's sub-ray angle. Finally, the sub-ray angles of the four sub-regions are averaged according to their respective weights to obtain an average light angle, which is determined as the light angle of the scene in which the target user is located.
  • Step S1033 Obtain the light intensity of the sub-picture, and obtain the average light intensity of the scene where the target user is located according to the light intensity of the sub-picture.
  • Optionally, the light intensity corresponding to each sub-picture is obtained from its contrast information. Since light intensity is a scalar, no vector addition is needed: the light intensities of the sub-pictures are simply averaged, and the average value is the average light intensity of the scene in which the target user is located. Steps S1032a-S1033 are sketched below.
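  • The sub-region weighting of steps S1032a-S1032d and the scalar averaging of step S1033 could be sketched as follows, reusing light_angle (and the numpy import) from the sketch above; the 2x2 split and the use of mean brightness as the "sub-light intensity" are our assumptions, since the patent only says the intensity comes from light-dark contrast information.

```python
def scene_light(face_gray: np.ndarray, nose_sub: np.ndarray,
                nose_offset: tuple[int, int]) -> tuple[float, float]:
    """Weighted-average light angle (S1032a-d) and average light intensity (S1033)."""
    h, w = nose_sub.shape
    angles, weights = [], []
    for i in (0, 1):            # split the nose sub-image into four equal parts
        for j in (0, 1):
            sub = nose_sub[i*h//2:(i+1)*h//2, j*w//2:(j+1)*w//2]
            off = (nose_offset[0] + i*h//2, nose_offset[1] + j*w//2)
            angles.append(light_angle(face_gray, sub, off))  # sub-ray angle
            weights.append(float(sub.mean()))                # sub-light intensity (proxy)
    avg_angle = sum(a*wt for a, wt in zip(angles, weights)) / sum(weights)
    avg_intensity = sum(weights) / len(weights)              # scalar average
    return avg_angle, avg_intensity
```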
  • Step S104 Display the interactive object rendered according to the light information on the video interface.
  • After the light information (light angle and light intensity) of the scene where each target user is located has been determined, in this step the interactive object is rendered according to that light information.
  • Optionally, this step includes: finding a target position corresponding to the interactive object on the video interface; determining a shadow position of the interactive object according to the target position and the light angle; adjusting the contrast of the interactive object according to the light intensity; and generating a shadow of the interactive object at the shadow position.
  • the target position is a position that does not block the face of the target user, or a position associated with an object in the video interface.
  • For example, if a sofa is displayed in the video interface and the audio information output by a user is "The sofa is cute", the interactive object is an emoticon corresponding to the keyword "cute", and the target position of the emoticon is the position of the sofa. Given the target position of the interactive object and the light direction, the position of the interactive object's shadow can be obtained by establishing a simple geometric relationship.
  • A shadow can then be generated at that position according to the shape of the interactive object, and at the same time the contrast of the interactive object is adjusted to be consistent with the light intensity, so that the interactive object is fused with the scene, as in the following sketch.
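  • A minimal sketch of the rendering in step S104, assuming a fixed shadow offset length and a simple contrast-gain rule; both constants and the gain formula are illustrative assumptions, not disclosed by the patent.

```python
import math
import numpy as np

def shadow_position(target_xy: tuple[float, float], angle: float,
                    length: float = 40.0) -> tuple[float, float]:
    """Place the shadow anchor on the side opposite the light direction."""
    return (target_xy[0] - length * math.cos(angle),
            target_xy[1] - length * math.sin(angle))

def adjust_contrast(obj_rgb: np.ndarray, intensity: float,
                    ref: float = 128.0) -> np.ndarray:
    """Scale the object's contrast toward the scene's average light intensity."""
    obj = obj_rgb.astype(float)
    return ((obj - obj.mean()) * (intensity / ref) + obj.mean()).clip(0, 255)
```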
  • Through the above rendering, the interactive object can be displayed on the video interface displaying the target user; that is, on the video interface the target user sees, in addition to his or her own image, the interactive object related to the audio information, which enriches the interactivity of the video chat.
  • The method for generating an interactive object can thus generate an interactive object from the communication content during a user's video session and display it in real time in the other user's video image interface, with light and shadow effects consistent with the scene in which the other user is located, so that the interactive object blends into that interface; this enriches the interactive experience of video chat and increases the correlation between the chat content and the video scene.
  • FIG. 4 is a structural diagram of an apparatus for generating an interactive object according to an embodiment of the present invention.
  • The apparatus specifically includes: a detection module 100, an acquisition module 200, a processing module 300, and a display module 400.
  • The detection module 100 is configured to acquire audio information in real time and detect whether a preset keyword exists in the audio information; the acquisition module 200 is configured to acquire, if the preset keyword exists, an interaction object corresponding to the preset keyword and determine a target user corresponding to the audio information; the processing module 300 is configured to capture a face image of the target user in a video interface displaying the target user and analyze and process the face image to obtain light information of the scene in which the target user is located; and the display module 400 is configured to display the interactive object rendered according to the light information on the video interface.
  • The apparatus for generating an interactive object provided by this embodiment of the present invention is specifically configured to execute the method provided by the embodiment shown in FIG. 1; its implementation principles, methods, and functional uses are similar to those of that embodiment and are not described here again.
  • FIG. 5 is a structural diagram of an apparatus for generating an interactive object according to an embodiment of the present invention.
  • The device specifically includes: an establishing module 500, a detection module 100, an acquisition module 200, a processing module 300, and a display module 400.
  • The establishing module 500 is configured to establish an interaction object information database in advance, where the database stores a plurality of interaction objects, a plurality of keywords, and the correspondence between the interaction objects and the keywords. The detection module 100 is configured to acquire audio information in real time and detect whether the audio information contains a preset keyword stored in the interaction object information database. The obtaining module 200 is configured to acquire, if the preset keyword exists, the interaction object corresponding to the preset keyword according to the correspondence between the interaction objects and the keywords, and to determine the target user corresponding to the audio information. The processing module 300 is configured to intercept a face image of the target user in a video interface displaying the target user and to analyze and process the face image to obtain light information of the scene in which the target user is located. The display module 400 is configured to display, on the video interface, the interactive object rendered according to the light information.
  • The processing module 300 includes: an extraction unit 310, a comparison unit 320, and an acquisition unit 330.
  • The extraction unit 310 is configured to extract a sub-image of the nose region in the face image; the comparison unit 320 is configured to determine a light-intensity weighted center of the sub-image based on light and compare the light-intensity weighted center with the weight center of the face image to obtain the light angle of the scene where the target user is located; and the obtaining unit 330 is configured to obtain the light intensity of the sub-picture and obtain, based on it, the average light intensity of the scene where the target user is located.
  • The comparison unit 320 is specifically configured to divide the sub-image into a plurality of sub-regions, determine a sub-light-intensity weighted center of each sub-region, and compare each sub-light-intensity weighted center with the weight center of the face image to obtain the sub-ray angle of each sub-region; calculate the sub-light intensity of each sub-region and determine the weight of each sub-region's sub-ray angle according to its sub-light intensity; and calculate the light angle according to the sub-ray angles and their weights.
  • the display module 400 is configured to find a target position corresponding to the interactive object on the video interface; determine a shadow position of the interactive object according to the target position and the light angle; and according to the interaction The object and the light intensity adjust the contrast of the interactive object and generate a shadow of the interactive object at the shadow position.
  • The apparatus for generating an interaction object provided by this embodiment of the present invention is specifically configured to execute the method provided by the embodiments shown in FIGS. 1 to 3; its implementation principles, methods, and functional uses are similar to those embodiments and are not repeated here.
  • The above interactive object generating apparatus of the embodiments of the present invention may be provided as a software or hardware functional unit independently set in the above electronic device, or integrated in the processor as one of its functional modules, to execute the interactive object generation method of the embodiments of the present invention.
  • FIG. 6 is a schematic diagram of a hardware structure of an electronic device that executes a method for generating an interactive object provided by a method embodiment of the present invention.
  • the electronic device includes:
  • One or more processors 610 and a memory 620 are taken as an example in FIG. 6.
  • The device that executes the method for generating an interactive object may further include: an input device 630 and an output device 640.
  • the processor 610, the memory 620, the input device 630, and the output device 640 may be connected through a bus or other methods. In FIG. 6, the connection through the bus is taken as an example.
  • The memory 620 is a non-volatile computer-readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for generating an interaction object in the embodiments of the present invention.
  • The processor 610 executes various functional applications and data processing of the server by running the non-volatile software programs, instructions, and modules stored in the memory 620, thereby implementing the above method for generating an interactive object.
  • The memory 620 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created by the use of the interactive object generating apparatus provided according to the embodiments of the present invention, and the like.
  • The memory 620 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • The memory 620 may optionally include memory remotely disposed with respect to the processor 610, and such remote memory may be connected to the interactive object generating apparatus through a network. Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the input device 630 may receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of a generating device of an interactive object.
  • the input device 630 may include a device such as a pressing module.
  • the one or more modules are stored in the memory 620, and when executed by the one or more processors 610, execute a method of generating the interaction object.
  • the electronic devices in the embodiments of the present invention exist in various forms, including but not limited to:
  • Mobile communication equipment: this type of equipment is characterized by mobile communication functions, and its main goal is to provide voice and data communication.
  • Such terminals include: smart phones (such as iPhone), multimedia phones, feature phones, and low-end phones.
  • Ultra-mobile personal computer equipment: this type of equipment belongs to the category of personal computers, has computing and processing functions, and generally also has mobile Internet access.
  • Such terminals include: PDA, MID and UMPC devices, such as iPad.
  • Portable entertainment equipment: this type of equipment can display and play multimedia content.
  • Such devices include: audio and video players (such as iPod), handheld game consoles, e-books, as well as smart toys and portable car navigation devices.
  • the electronic device includes an image acquisition module.
  • The image acquisition module of this embodiment includes a lens 1000, an auto-focusing voice coil motor 2000, a mechanical image stabilizer 3000, and an image sensor 4000. The lens 1000 is fixed on the auto-focusing voice coil motor 2000 and is used to acquire an image; the image sensor 4000 transmits the image acquired by the lens 1000 to the recognition module. The auto-focusing voice coil motor 2000 is mounted on the mechanical image stabilizer 3000, and the processing module drives the mechanical image stabilizer 3000, based on feedback on the shake of the lens 1000 detected by a gyroscope inside the lens 1000, to achieve shake compensation for the lens 1000.
  • In the prior art, the lens 1000 needs to be driven in at least two directions, which means that multiple coils need to be arranged; this poses challenges for the miniaturization of the overall structure and is easily affected by external magnetic fields, degrading the anti-shake effect. The Chinese patent published as CN106131435A therefore provides a miniature optical anti-shake camera module in which a memory alloy wire is stretched and shortened through temperature changes to pull the auto-focusing voice coil motor 2000 and compensate for the shake of the lens 1000. The control chip of the micro memory alloy optical anti-shake actuator controls the driving signal to change the temperature of the memory alloy wire, thereby controlling its elongation and shortening, and calculates the position and moving distance of the actuator from the resistance of the wire. When the micro memory alloy optical image stabilization actuator moves to the specified position, the resistance of the memory alloy wire at that moment is fed back, and by comparing this resistance value with the target value, the movement deviation of the actuator can be corrected. However, the applicant found that this structure alone cannot accurately compensate the lens 1000 in the case of multiple shakes, because the heating and cooling of the shape memory alloy take a certain time. The above solution can compensate for a shake in a first direction, but when a subsequent shake in a second direction occurs, the memory alloy cannot deform in an instant, so compensation easily lags; accurate shake compensation is impossible for a lens 1000 subject to multiple, continuous shakes in different directions. Its structure therefore needs to be improved.
  • In view of this, this embodiment improves on the optical image stabilizer and designs it as a mechanical image stabilizer 3000, whose specific structure is as follows:
  • the mechanical image stabilizer 3000 of this embodiment includes a movable plate 3100, a movable frame 3200, an elastic restoring mechanism 3300, a base plate 3400, and a compensation mechanism 3500.
  • The central portions of the movable plate 3100 and the base plate 3400 are each provided with a through hole through which the lens passes. The auto-focusing voice coil motor is installed on the movable plate 3100, and the movable plate 3100 is installed in the movable frame 3200. The width of the movable plate 3100 in the left-right direction is substantially the same as the internal width of the movable frame 3200, so that the opposite (left and right) sides of the movable plate 3100 slide against the inner walls of the opposite (left and right) sides of the movable frame 3200, and the movable plate 3100 can slide back and forth within the movable frame 3200 along a first direction; in this embodiment, the first direction is the vertical direction in the figure.
  • the size of the movable frame 3200 in this embodiment is smaller than the size of the substrate 3400, and two opposite sides of the movable frame 3200 are connected to the substrate 3400 through two elastic recovery mechanisms 3300, respectively.
  • In this embodiment, the elastic restoring mechanism 3300 is a telescopic spring or another elastic member. It should be noted that the elastic restoring mechanism 3300 must be designed so that it can only expand and contract in the left-right direction in the figure (that is, the second direction described below) and cannot move along the first direction. The purpose of the elastic restoring mechanism 3300 is to allow the movable frame 3200 to reset the movable plate 3100 after a compensating displacement; the specific action process is described in detail below.
  • The compensation mechanism 3500 of this embodiment, driven by the processing module (for example, through an action instruction sent by the processing module), drives the movable plate 3100 and the lens on it to implement lens shake compensation.
  • the compensation mechanism 3500 in this embodiment includes a driving shaft 3510, a gear 3520, a gear track 3530, and a limit track 3540.
  • The driving shaft 3510 is mounted on the substrate 3400, specifically on its surface, and is drivingly connected to the gear 3520. The driving shaft 3510 may be driven by a structure such as a micro motor (not shown) controlled by the processing module described above. The gear track 3530 is provided on the movable plate 3100; the gear 3520 is installed in the gear track 3530 and moves along the preset direction of the gear track 3530. When the gear 3520 rotates, the gear track 3530 enables the movable plate 3100 to generate a displacement in the first direction and a displacement in a second direction, where the first direction is perpendicular to the second direction. The limit track 3540 is provided on the movable plate 3100 or the base plate 3400 and is used to prevent the gear 3520 from leaving the gear track 3530.
  • The gear track 3530 and the limit track 3540 of this embodiment can take the following two structural forms:
  • In the first form, a waist-shaped hole 3550 is provided on the lower side of the movable plate 3100, and a plurality of teeth 3560 meshing with the gear 3520 are provided along its circumferential direction (that is, the surrounding direction of the waist-shaped hole 3550). The waist-shaped hole 3550 and the plurality of teeth 3560 together form the gear track 3530: the gear 3520 is located in the waist-shaped hole 3550 and meshes with the teeth 3560, so that when the gear 3520 rotates it moves along the gear track 3530 and directly drives the movement of the movable plate 3100. The limit track 3540 is provided on the base plate 3400, and the bottom of the movable plate 3100 is provided with a limiting member 3570 installed in the limit track 3540. The limit track 3540 constrains the movement trajectory of the limiting member 3570 to a waist shape; that is, the trajectory of the limiting member 3570 in the track is the same as the movement trajectory of the movable plate 3100. In this embodiment, the limiting member 3570 is a protrusion provided on the bottom surface of the movable plate 3100.
  • In the second form, the gear track 3530 is composed of a plurality of cylindrical protrusions 3580 provided on the movable plate 3100 and evenly spaced along the second direction, with the gear 3520 meshing with these protrusions. The limit track 3540 consists of a first arc-shaped stopper 3590 and a second arc-shaped stopper 3600 provided on the movable plate 3100 and arranged on opposite sides of the gear track 3530 in the first direction. Because the gear 3520 could easily separate from a gear track 3530 formed only of cylindrical protrusions 3580, the first arc-shaped stopper 3590 and the second arc-shaped stopper 3600 play a guiding role so that the movable plate 3100 moves along the preset direction of the gear track 3530; the first arc-shaped stopper 3590, the second arc-shaped stopper 3600, and the plurality of protrusions cooperate to make the movement trajectory of the movable plate 3100 waist-shaped.
  • the following describes the working process of the mechanical image stabilizer 3000 of this embodiment in detail with reference to the above structure.
  • Take two successive shakes in opposite directions as an example, in which the movable plate 3100 needs motion compensation once in the first direction and then once in the second direction. The gyroscope feeds back the detected shake direction and distance of the lens 1000 to the processing module, which calculates the required moving distance of the movable plate 3100 and sends a driving signal, so that the driving shaft 3510 rotates the gear 3520; through the cooperation of the gear 3520 with the gear track 3530 and the limit track 3540, the movable plate 3100 is driven to the compensation position in the first direction. Afterwards, the driving shaft 3510 drives the movable plate 3100 to reset; during the reset, the elastic restoring mechanism 3300 also provides a restoring force, making it easier for the movable plate 3100 to return to its initial position. When compensation in the second direction is needed, the processing is the same as the compensation steps in the first direction described above; a toy control sketch follows.
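  • Purely to illustrate the sequence above, a toy control loop might look like this; drive_gear, the step scaling, and the data format are invented for the sketch and do not correspond to any interface disclosed in the patent.

```python
def drive_gear(direction: str, steps: int) -> None:
    """Hypothetical motor command; a real module would address the micro motor driver."""
    print(f"gear toward {direction}: {steps:+d} steps")

def compensate(shakes: list[tuple[str, float]], steps_per_mm: int = 10) -> None:
    """For each detected shake: drive to the compensation position, then reset."""
    for direction, distance_mm in shakes:
        steps = int(distance_mm * steps_per_mm)   # travel computed by the processing module
        drive_gear(direction, steps)              # move movable plate to compensate
        drive_gear(direction, -steps)             # reset, aided by the return springs

# Example: a shake in the first direction, then one in the second direction.
compensate([("first direction", 0.5), ("second direction", 0.3)])
```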
  • The mechanical compensator provided in this embodiment is not subject to interference from external magnetic fields and has a good anti-shake effect; moreover, it can accurately compensate the lens 1000 in the event of multiple shakes, and the compensation is timely and accurate.
  • The mechanical anti-shake device of this embodiment is also simple in structure, and each component requires little installation space, which facilitates the integration of the entire anti-shake device and yields higher compensation accuracy.
  • the electronic device in this embodiment includes a mobile phone and a bracket for mounting the mobile phone.
  • The purpose of including a bracket is to support and fix the electronic device, given the uncertainty of the image acquisition environment.
  • the bracket 5000 in this embodiment includes a mobile phone mounting base 5100 and a retractable supporting rod 5200.
  • One end of the support rod 5200 is connected to the mobile phone mounting base 5100 by a damping hinge.
  • In the prior art, the applicant found that a mobile phone mount 5100 combined with a support rod 5200 occupies a large space: even if the support rod 5200 is retractable, the mobile phone mount 5100 cannot undergo structural change and its volume cannot be further reduced, so the bracket 5000 cannot be put into a pocket or small bag, which makes it inconvenient to carry. Therefore, this embodiment makes a second improvement to the bracket 5000 so that its overall storability is further enhanced.
  • the mobile phone mounting base 5100 of this embodiment includes a retractable connecting plate 5110 and a folding plate group 5120 installed at opposite ends of the connecting plate 5110.
  • One end of the support rod 5200 is connected to the middle portion of the connecting plate 5110 by a damping hinge;
  • The folding plate group 5120 includes a first plate body 5121, a second plate body 5122, and a third plate body 5123. One of the two opposite ends of the first plate body 5121 is hinged to the connecting plate 5110, and the other end is hinged to one of the opposite ends of the second plate body 5122; the other end of the second plate body 5122 is hinged to one of the opposite ends of the third plate body 5123. The second plate body 5122 is provided with an opening 5130 into which a corner of the mobile phone is inserted. When the mobile phone is mounted, the first plate body 5121, the second plate body 5122, and the third plate body 5123 are folded into a right-triangle state: the second plate body 5122 forms the hypotenuse, and the first plate body 5121 and the third plate body 5123 form the two legs, with one side surface of the third plate body 5123 attached to one side of the connecting plate 5110 and the other end of the third plate body 5123 abutting one end of the first plate body 5121. This structure puts the three folding plates in a self-locking state, and when the two lower corners of the mobile phone are inserted into the two openings 5130 on both sides, the lower portion of the mobile phone 6000 is held within the two right triangles; the mounting of the mobile phone 6000 is completed through the joint work of the mobile phone, the connecting plate 5110, and the folding plate group 5120. The triangle state cannot be released by external force while the phone is installed, and is released only after the mobile phone is pulled out of the openings 5130.
  • When the mobile phone mounting base 5100 is not in the working state, the connecting plate 5110 is retracted to its minimum length, and the folding plate group 5120 and the connecting plate 5110 are folded against each other, so that the user can fold the mobile phone mounting base 5100 to its minimum volume. Together with the retractability of the support rod 5200, the entire bracket 5000 can be stowed in a minimal state, which improves its storability; the user can even put the bracket 5000 directly into a pocket or small handbag, which is very convenient.
  • Further, a first connection portion is provided on one side surface of the third plate body 5123, and a first mating portion that mates with the first connection portion is provided on the side surface of the connecting plate 5110 that contacts the third plate body 5123. In this embodiment, the first connection portion is a rib or protrusion (not shown in the figure), and the first mating portion is a card slot (not shown in the figure) opened on the connecting plate 5110.
  • This structure not only improves the stability of the folding plate group 5120 in the triangular state, but also facilitates connecting the folding plate group 5120 with the connecting plate 5110 when the mobile phone mounting base 5100 needs to be folded to its minimum state.
  • Similarly, a second connection portion is provided at one end of the opposite ends of the first plate body 5121, and a second mating portion that cooperates with the second connection portion is provided at the other end of the opposite ends of the third plate body 5123. The second connection portion may be a protrusion (not shown in the figure), and the second mating portion may be an opening 5130 or a card slot (not shown in the figure) that cooperates with the protrusion.
  • a base (not shown in the figure) can be detachably connected to the other end of the support rod 5200.
  • In use, the support rod 5200 can be extended to a certain length, the bracket 5000 is placed on a flat surface via the base, and the mobile phone is then placed in the mobile phone mount 5100 to complete the fixing of the phone. The detachable connection between the support rod 5200 and the base allows the two to be carried separately, further improving the storability and portability of the bracket 5000.
  • The device embodiments described above are only schematic; the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
  • An embodiment of the present invention provides a non-transitory computer-readable storage medium, where the storage medium stores computer-executable instructions that, when executed by an electronic device, cause the electronic device to perform the method for generating an interactive object in any of the foregoing method embodiments.
  • An embodiment of the present invention provides a computer program product, where the computer program product includes a computer program stored on a non-transitory computer-readable storage medium, the computer program includes program instructions, and when the program instructions are executed by an electronic device, the electronic device is caused to execute the method for generating an interactive object in any of the foregoing method embodiments.
  • each embodiment can be implemented by means of software plus a necessary general-purpose hardware platform, or, of course, by hardware alone.
  • the above technical solution, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, where the computer-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
  • machine-readable media include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). The computer software product includes a number of instructions that cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods of the various embodiments or certain parts of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present invention relate to a method, an apparatus, and an electronic device for generating an interactive object. The method comprises the steps of: acquiring audio information in real time, and detecting whether a predetermined keyword is present in the audio information; if so, acquiring an interactive object corresponding to the predetermined keyword, and determining a target user corresponding to the audio information; capturing a face image of the target user from a video interface displaying the target user, analyzing the face image, and obtaining lighting information of the scene around the target user; and displaying, on the video interface, the interactive object rendered according to the lighting information. The invention achieves the generation of an interactive object based on the content of a user's video communication and its real-time display in the video image interface of the peer user, with the lighting effect of the interactive object kept consistent with the scene around the peer user, thereby enriching users' interactive experience of online video conversation and improving the interactive correlation between online conversation content and video scenes.
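To make the pipeline in the abstract concrete, the following is a minimal sketch, in Python, of how the keyword-triggered flow could be organized: transcribed audio is scanned for a predetermined keyword, the matching interactive object is looked up, a rough lighting estimate is taken from the target user's face image, and the result is handed to a renderer. Everything here is an assumption for illustration only: the keyword table OBJECT_DB, the function names, and the left/right brightness heuristic for light direction are not taken from the patent, which publishes no code.

```python
from typing import Optional

import numpy as np

# Hypothetical keyword -> interactive object mapping. The patent describes a
# pre-built interactive object information database; these entries are made up.
OBJECT_DB = {
    "birthday": "cake_model",
    "rain": "umbrella_model",
}


def detect_keyword(transcript: str) -> Optional[str]:
    """Return the first predetermined keyword found in the transcript, if any."""
    for keyword in OBJECT_DB:
        if keyword in transcript.lower():
            return keyword
    return None


def estimate_lighting(face_gray: np.ndarray) -> dict:
    """Crude scene-light estimate from a grayscale face crop (values 0..255).

    Compares the mean brightness of the left and right halves of the face to
    guess the horizontal light direction, and uses overall mean brightness as
    a proxy for intensity. A real system would fit a proper illumination model.
    """
    _, w = face_gray.shape
    left = face_gray[:, : w // 2].mean()
    right = face_gray[:, w // 2 :].mean()
    direction = "left" if left > right else "right"
    intensity = float(face_gray.mean()) / 255.0
    return {"direction": direction, "intensity": intensity}


def generate_interactive_object(transcript: str, face_gray: np.ndarray):
    """End-to-end sketch: keyword -> object -> lighting -> rendered overlay."""
    keyword = detect_keyword(transcript)
    if keyword is None:
        return None  # no predetermined keyword, nothing to display
    obj = OBJECT_DB[keyword]
    lighting = estimate_lighting(face_gray)
    # Rendering is stubbed out: a real implementation would shade the object
    # so that its light direction/intensity matches the estimated scene light.
    return {"object": obj, "lighting": lighting}


if __name__ == "__main__":
    # Synthetic 64x64 face crop, brighter on the left (lit from the left).
    fake_face = np.tile(np.linspace(180, 60, 64), (64, 1))
    print(generate_interactive_object("happy birthday!", fake_face))
```

In this sketch the renderer is stubbed out; the point carried over from the abstract is only that the object's lighting is made consistent with the scene around the peer user before the object is displayed on the video interface.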
PCT/CN2018/106786 2018-09-20 2018-09-20 Procédé de génération d'objet interactif, dispositif, et appareil électronique WO2020056691A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/106786 WO2020056691A1 (fr) 2018-09-20 2018-09-20 Procédé de génération d'objet interactif, dispositif, et appareil électronique
CN201811123907.1A CN109474801B (zh) 2018-09-20 2018-09-26 一种交互对象的生成方法、装置及电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/106786 WO2020056691A1 (fr) 2018-09-20 2018-09-20 Procédé de génération d'objet interactif, dispositif, et appareil électronique

Publications (1)

Publication Number Publication Date
WO2020056691A1 true WO2020056691A1 (fr) 2020-03-26

Family

ID=65663158

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/106786 WO2020056691A1 (fr) 2018-09-20 2018-09-20 Procédé de génération d'objet interactif, dispositif, et appareil électronique

Country Status (2)

Country Link
CN (1) CN109474801B (fr)
WO (1) WO2020056691A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016505A (zh) * 2020-09-03 2020-12-01 平安科技(深圳)有限公司 基于人脸图像的活体检测方法、设备、存储介质及装置

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112492400B (zh) * 2019-09-12 2023-03-31 阿里巴巴集团控股有限公司 互动方法、装置、设备以及通信方法、拍摄方法
CN112188115B (zh) * 2020-09-29 2023-10-17 咪咕文化科技有限公司 一种图像处理方法、电子设备及存储介质
CN113407850B (zh) * 2021-07-15 2022-08-26 北京百度网讯科技有限公司 一种虚拟形象的确定和获取方法、装置以及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102377975A (zh) * 2010-08-10 2012-03-14 华为终端有限公司 用于视频通信的视频处理方法、装置及系统
US20160037148A1 (en) * 2014-07-29 2016-02-04 LiveLocation, Inc. 3d-mapped video projection based on on-set camera positioning
CN105554429A (zh) * 2015-11-19 2016-05-04 掌赢信息科技(上海)有限公司 一种视频通话显示方法及视频通话设备
US20170124753A1 (en) * 2015-11-03 2017-05-04 Electronic Arts Inc. Producing cut-out meshes for generating texture maps for three-dimensional surfaces
CN107911643A (zh) * 2017-11-30 2018-04-13 维沃移动通信有限公司 一种视频通信中展现场景特效的方法和装置
CN108525298A (zh) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 图像处理方法、装置、存储介质及电子设备

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8359303B2 (en) * 2007-12-06 2013-01-22 Xiaosong Du Method and apparatus to provide multimedia service using time-based markup language
JP5531093B2 (ja) * 2009-04-17 2014-06-25 トラップコード・アーベー コンピュータグラフィックスでオブジェクトにシャドウを付ける方法
KR20110052118A (ko) * 2009-11-12 2011-05-18 연세대학교 산학협력단 프레넬 렌즈의 그루브각 최적화 방법 및 이를 이용한 프레넬 제조방법 및 프레넬 렌즈
CN105681684A (zh) * 2016-03-09 2016-06-15 北京奇虎科技有限公司 基于移动终端的图像实时处理方法及装置
CN106303658B (zh) * 2016-08-19 2018-11-30 百度在线网络技术(北京)有限公司 应用于视频直播的交互方法和装置
CN107845132B (zh) * 2017-11-03 2021-03-02 太平洋未来科技(深圳)有限公司 虚拟对象色彩效果的渲染方法和装置
CN107909057A (zh) * 2017-11-30 2018-04-13 广东欧珀移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN108537155B (zh) * 2018-03-29 2021-01-26 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102377975A (zh) * 2010-08-10 2012-03-14 华为终端有限公司 用于视频通信的视频处理方法、装置及系统
US20160037148A1 (en) * 2014-07-29 2016-02-04 LiveLocation, Inc. 3d-mapped video projection based on on-set camera positioning
US20170124753A1 (en) * 2015-11-03 2017-05-04 Electronic Arts Inc. Producing cut-out meshes for generating texture maps for three-dimensional surfaces
CN105554429A (zh) * 2015-11-19 2016-05-04 掌赢信息科技(上海)有限公司 一种视频通话显示方法及视频通话设备
CN107911643A (zh) * 2017-11-30 2018-04-13 维沃移动通信有限公司 一种视频通信中展现场景特效的方法和装置
CN108525298A (zh) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 图像处理方法、装置、存储介质及电子设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016505A (zh) * 2020-09-03 2020-12-01 平安科技(深圳)有限公司 基于人脸图像的活体检测方法、设备、存储介质及装置
CN112016505B (zh) * 2020-09-03 2024-05-28 平安科技(深圳)有限公司 基于人脸图像的活体检测方法、设备、存储介质及装置

Also Published As

Publication number Publication date
CN109474801A (zh) 2019-03-15
CN109474801B (zh) 2020-07-07

Similar Documents

Publication Publication Date Title
WO2020056691A1 (fr) Procédé de génération d'objet interactif, dispositif, et appareil électronique
WO2020037679A1 (fr) Procédé et appareil de traitement vidéo, et dispositif électronique
US11503377B2 (en) Method and electronic device for processing data
CN108596827B (zh) 三维人脸模型生成方法、装置及电子设备
CN108614638B (zh) Ar成像方法和装置
WO2020056690A1 (fr) Procédé et appareil de présentation d'une interface associée à un contenu vidéo et dispositif électronique
WO2020056692A1 (fr) Procédé et appareil d'interaction d'informations et dispositif électronique
WO2020056689A1 (fr) Procédé et appareil d'imagerie ra et dispositif électronique
CN108377398B (zh) 基于红外的ar成像方法、系统、及电子设备
WO2020037676A1 (fr) Procédé et appareil de génération d'images tridimensionnelles de visage, et dispositif électronique
WO2020037681A1 (fr) Procédé et appareil de génération de vidéo, et dispositif électronique
CN109285216B (zh) 基于遮挡图像生成三维人脸图像方法、装置及电子设备
WO2020037680A1 (fr) Procédé et appareil d'optimisation de visage en trois dimensions à base de lumière et dispositif électronique
WO2016044778A1 (fr) Procédé et système de détection, d'analyse, de composition et de direction automatiques d'un espace, d'une scène, d'un objet, et d'un équipement 3d
WO2019200718A1 (fr) Procédé, appareil et dispositif électronique de traitement d'image
TW202123178A (zh) 一種分鏡效果的實現方法、裝置及相關產品
CN113655887A (zh) 一种虚拟现实设备及静态录屏方法
CN102542300B (zh) 体感游戏中人体位置自动识别的方法及显示终端
WO2021232875A1 (fr) Procédé et appareil de commande de personne numérique, et appareil électronique
CN115442658B (zh) 直播方法、装置、存储介质、电子设备及产品
WO2020056693A1 (fr) Procédé et appareil de synthétisation d'image et dispositif électronique
CN116170624A (zh) 一种对象展示方法、装置、电子设备及存储介质
JP7293362B2 (ja) 撮影方法、装置、電子機器及び記憶媒体
CN220820449U (zh) 一种磁吸结构的指读投影仪
CN113726465B (zh) 时间戳同步方法和设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18934005

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18934005

Country of ref document: EP

Kind code of ref document: A1