WO2020037681A1 - Method and apparatus for video generation, and electronic device - Google Patents
- Publication number: WO2020037681A1
- Authority: WIPO (PCT)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/55—Optical parts specially adapted for electronic image sensors; Mounting thereof
Definitions
- The present invention relates to the technical field of video generation and, in particular, to a video generation method, apparatus, and electronic device.
- the video generating method, device, and electronic device provided by the embodiments of the present invention are used to solve at least the foregoing problems in related technologies.
- An embodiment of the present invention provides a video generating method, including:
- Receive a target picture, and extract first feature information and first pose information of a first face in the target picture; find, in a video, second faces that match the first pose information, and record the video frame identifiers in which each second face appears; determine, among the second faces and based on the first feature information, a third face similar to the first face; obtain size information of the third face based on the video frame identifiers, process the first face according to the size information, and replace the third face with the processed first face.
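The steps above can be sketched as a small pipeline. This is an illustrative sketch, not the patent's implementation: the `FaceRecord` layout, the single-angle pose, and the plain feature vectors are simplifying assumptions, and the pixel-level replacement of the final step is elided.

```python
from dataclasses import dataclass

@dataclass
class FaceRecord:
    face_id: int
    pose: float     # simplified: a single deflection angle in degrees
    features: list  # simplified feature vector
    frames: list    # video frame identifiers where this face appears

def find_pose_matches(first_pose, records, threshold=5.0):
    """Step 2: faces in the video whose pose matches the first face's pose."""
    return [r for r in records if abs(r.pose - first_pose) <= threshold]

def most_similar(first_features, candidates):
    """Step 3: among pose-matched faces, pick the one with the closest features."""
    def distance(r):
        return sum((a - b) ** 2 for a, b in zip(first_features, r.features))
    return min(candidates, key=distance)

def run_pipeline(first_features, first_pose, records):
    """Steps 2-3 of the claimed method; step 4's pixel replacement is elided."""
    candidates = find_pose_matches(first_pose, records)
    third_face = most_similar(first_features, candidates)
    return third_face.face_id, third_face.frames  # the frames to replace in

records = [
    FaceRecord(1, pose=2.0, features=[0.9, 0.1], frames=[3, 8]),
    FaceRecord(2, pose=30.0, features=[0.5, 0.5], frames=[12]),
    FaceRecord(3, pose=-1.0, features=[0.2, 0.8], frames=[20, 21]),
]
face_id, frames = run_pipeline([0.85, 0.15], first_pose=0.0, records=records)
print(face_id, frames)  # record 2 is excluded by pose; record 1 is closest
```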
- Extracting the first feature information and the first pose information of the first face in the target picture includes: obtaining a preset number of key points using a key point recognition algorithm, and determining the coordinate positions of the key points on the target picture; determining the first pose information of the first face based on the coordinate positions; and extracting local organ images of the first face based on the key points, and determining the first feature information of the first face based on the local organ images.
- The method further includes: determining the faces appearing in the video and the video frame identifiers corresponding to each face; extracting the feature information and pose information of each face; and recording the correspondence between the face, the feature information, the pose information, and the video frame identifiers.
- The step of searching for a second face in the video that matches the first pose information and recording the video frame identifier where the second face appears includes: searching the recorded pose information for second pose information that matches the first pose information, and determining, based on the recorded correspondence, the second face corresponding to the second pose information, its second feature information, and the video frame identifier where it appears.
- the feature information includes face shape information and local organ feature information
- Determining a third face similar to the first face based on the first feature information includes: calculating, based on the first feature information and the second feature information, the face shape similarity between the first face and each second face and the feature similarity of each local organ; and determining, based on the face shape similarity and the feature similarities of the local organs, a third face similar to the first face.
- the target picture is obtained through an image acquisition device, which includes a lens, an autofocus voice coil motor, a mechanical image stabilizer, and an image sensor, and the lens is fixed on the autofocus voice coil motor,
- the lens is used to acquire an image (picture)
- the image sensor transmits the image acquired by the lens to the recognition module
- the autofocus voice coil motor is mounted on the mechanical image stabilizer
- According to the lens-shake feedback detected by the gyroscope in the lens, the processing module drives the mechanical image stabilizer to compensate for the lens shake.
- the mechanical image stabilizer includes a movable plate, a base plate, and a compensation mechanism.
- Each of the movable plate and the base plate is provided with a through hole through which the lens passes, and the auto-focusing voice coil motor is installed at
- the movable plate is mounted on the substrate, and the size of the substrate is larger than the movable plate.
- Driven by the processing module, the compensation mechanism moves the movable plate and the lens to achieve lens-shake compensation;
- the compensation mechanism includes a first compensation component, a second compensation component, a third compensation component, and a fourth compensation component installed around the substrate, wherein the first compensation component is disposed opposite the third compensation component, the second compensation component is disposed opposite the fourth compensation component, and the line between the first and third compensation components is perpendicular to the line between the second and fourth compensation components; the first, second, third, and fourth compensation components each include a driving member, a rotating shaft, a one-way bearing, and a rotating ring gear.
- the driving member is controlled by the processing module, and the driving member is drivingly connected to the rotating shaft to drive the rotating shaft to rotate;
- the rotating shaft is connected to the inner ring of the one-way bearing to Driving the inner ring of the one-way bearing to rotate;
- the rotating ring gear is sleeved on the one-way bearing and connected to the outer ring of the one-way bearing, and an outer surface of the rotating ring gear is provided with a ring in its circumferential direction External teeth
- the bottom surface of the movable plate is provided with a plurality of rows of strip grooves arranged at even intervals, the strip grooves are engaged with the external teeth, and the external teeth can slide along the length direction of the strip grooves ;
- the rotatable direction of the one-way bearing of the first compensation component is opposite to that of the one-way bearing of the third compensation component, and the rotatable direction of the one-way bearing of the second compensation component is opposite to that of the one-way bearing of the fourth compensation component;
- the driving member is a micro motor, the micro motor is electrically connected to the processing module, and a rotary output end of the micro motor is connected to the rotating shaft; or the driving member includes a memory alloy wire and a crank A connecting rod, one end of the memory alloy wire is fixed on the fixing plate and connected with the processing module through a circuit, and the other end of the memory alloy wire is connected with the rotating shaft through the crank connecting rod to drive The rotation shaft rotates.
- The image acquisition device is disposed on a mobile phone, and the mobile phone is provided with a bracket.
- the bracket includes a mobile phone mount and a retractable support rod;
- the mobile phone mount includes a retractable connecting plate and folding plate groups installed at opposite ends of the connecting plate; one end of the support rod is connected to the middle of the connecting plate through a damping hinge;
- the folding plate group includes a first plate body, a second plate body, and a third plate body; one of the two opposite ends of the first plate body is hinged to the connecting plate, and the other end is hinged to one end of the second plate body; the other end of the second plate body is hinged to one end of the third plate body; the second plate body is provided with an opening for inserting a corner of the mobile phone; when the mount holds a mobile phone, the first, second, and third plate bodies fold into a right triangle, in which the second plate body is the hypotenuse and the first and third plate bodies are the two legs; one side of the third plate body rests side by side against one side of the connecting plate, and the other end of the third plate body abuts one end of the first plate body.
- one side of the third plate body is provided with a first connection portion, and the side surface of the connecting plate that contacts the third plate body is provided with a first mating portion that matches the first connection portion; a second connection portion is provided on one end of the first plate body, and a second mating portion that cooperates with the second connection portion is provided on the other end of the third plate body.
- the other end of the support rod is detachably connected with a base.
- An extraction module, configured to receive a target picture and extract first feature information and first pose information of a first face in the target picture; a search module, configured to find, in the video, second faces that match the first pose information and record the video frame identifiers where the second faces appear; a first determining module, configured to determine, among the second faces and based on the first feature information, a third face similar to the first face; and a replacement module, configured to obtain size information of the third face based on the video frame identifiers, process the first face according to the size information, and replace the third face with the processed first face.
- The extraction module includes: a recognition unit, configured to obtain a preset number of key points using a key point recognition algorithm and determine the coordinate positions of the key points on the target picture; a determination unit, configured to determine the first pose information of the first face based on the coordinate positions; and an extraction unit, configured to extract local organ images of the first face according to the key points and determine the first feature information of the first face according to the local organ images.
- The device further includes: a second determining module, configured to determine the faces appearing in the video and the video frame identifiers corresponding to each face; and a recording module, configured to extract the feature information and pose information of each face and record the correspondence between the face, the feature information, the pose information, and the video frame identifiers corresponding to the face.
- The search module is specifically configured to search the recorded pose information for second pose information that matches the first pose information, and to determine, based on the correspondence, the second face corresponding to the second pose information, its second feature information, and the video frame identifier where it appears.
- the feature information includes face shape information and local organ feature information
- The first determining module is specifically configured to calculate, based on the first feature information and the second feature information, the face shape similarity between the first face and each second face and the feature similarity of each local organ, and to determine, based on the face shape similarity and the feature similarities of the local organs, a third face similar to the first face.
- Another aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein,
- the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute any one of the video generation methods in the foregoing embodiments of the present invention.
- The video generation method, apparatus, and electronic device provided by the embodiments of the present invention allow users to interact with the characters appearing in a video, help users identify public figures similar to themselves, and enhance users' interest and sense of immersion when watching videos.
- FIG. 1 is a flowchart of a video generation method according to an embodiment of the present invention
- FIG. 2 is a specific flowchart of step S101 provided by an embodiment of the present invention.
- FIG. 3 is a specific flowchart of a video generating method according to an embodiment of the present invention.
- FIG. 4 is a structural diagram of a video generating device according to an embodiment of the present invention.
- FIG. 5 is a structural diagram of a video generating device according to an embodiment of the present invention.
- FIG. 6 is a schematic diagram of a hardware structure of an electronic device that executes a video generating method provided by an embodiment of the method of the present invention
- FIG. 7 is a structural diagram of an image acquisition device according to an embodiment of the present invention.
- FIG. 8 is a structural diagram of an optical image stabilizer provided by an embodiment of the present invention.
- FIG. 9 is an enlarged view of part A of FIG. 8.
- FIG. 10 is a schematic bottom view of a movable plate of a micro memory alloy optical image stabilizer provided by an embodiment of the present invention.
- FIG. 11 is a structural diagram of a bracket provided by an embodiment of the present invention.
- FIG. 12 is a schematic diagram of one state of the bracket according to an embodiment of the present invention.
- FIG. 13 is a schematic diagram of another state of the bracket according to an embodiment of the present invention.
- FIG. 14 is a structural state diagram when the mounting base and the mobile phone are connected according to an embodiment of the present invention.
- FIG. 1 is a flowchart of a video generation method according to an embodiment of the present invention.
- a video generation method provided by an embodiment of the present invention includes:
- S101 Receive a target picture, and extract first feature information and first pose information of a first human face in the target picture.
- an application scenario of an embodiment of the present invention is described.
- A user can trigger a video interaction control and use the face image in a target picture he or she uploads to replace a similar face in the video, thereby interacting with the characters appearing in the video.
- the target picture may be taken by the user in real time, or may be selected by the user from among pictures saved locally.
- the target picture contains images of non-face parts, such as background environment images, so the face images in the pictures need to be identified.
- If the resolution of the identified first face image is too low, or the face is occluded (for example, by sunglasses or a scarf), the first feature information cannot be extracted; in this case, a prompt message may be issued to ask the user to replace the target picture.
- the range of the face image can be identified according to the edge information and / or color information of the image.
- the first feature information and the first posture information are extracted through the following sub-steps.
- S1011 Use a keypoint recognition algorithm to obtain a preset number of keypoints, and determine a coordinate position of the keypoints on the target picture.
- the eyebrows, eyes, nose, face, and mouth of a human face are respectively composed of several of the key points, so a preset number of key points can be determined in advance to represent the features of the human face.
- a key point recognition algorithm can be used to obtain the key points.
- The training of the face keypoint recognition algorithm may include the following steps: first, a certain number of training samples are obtained, each being a picture carrying face keypoint identifiers and their corresponding coordinate positions; second, the training set is used to form the initial regression function r0 and the initial training set; then the initial training set and the initial regression function r0 are iterated to form the next training set and regression function rn. Each iteration of the regression function is learned using a gradient boosting algorithm, so that when the face keypoint information in the n-th training set satisfies the convergence conditions, the corresponding regression function rn is the trained face keypoint recognition algorithm.
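The iterative scheme above (an initial regressor r0 refined round by round with gradient boosting on residuals) can be illustrated with a toy one-dimensional version. The stump weak learner, the learning rate, and the data are illustrative choices, not the patent's:

```python
def fit_stump(xs, residuals):
    """Weak learner: split at the median x, predict the mean residual on each side."""
    thr = sorted(xs)[len(xs) // 2]
    left = [r for x, r in zip(xs, residuals) if x < thr] or [0.0]
    right = [r for x, r in zip(xs, residuals) if x >= thr] or [0.0]
    lmean, rmean = sum(left) / len(left), sum(right) / len(right)
    return lambda x: lmean if x < thr else rmean

def boost(xs, ys, rounds=50, lr=0.5):
    """r0 is the mean of the targets; each round adds a stump fitted to the residuals."""
    r0 = sum(ys) / len(ys)
    stumps = []
    def predict(x):
        return r0 + lr * sum(s(x) for s in stumps)
    for _ in range(rounds):
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        stumps.append(fit_stump(xs, residuals))
    return predict

# toy data: a keypoint coordinate as a function of a single image feature
xs = [0.0, 1.0, 2.0, 3.0]
ys = [10.0, 12.0, 30.0, 32.0]
model = boost(xs, ys)
print(round(model(0.0)), round(model(3.0)))  # converges toward the group means 11 and 31
```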
- A preset number of key points and their coordinates (xi, yi) on the first face are obtained by applying the regression function of the trained keypoint recognition algorithm to the input target picture, where i denotes the i-th recognized key point.
- the preset number may be 68, including key points such as eyebrows, eyes, nose, mouth, face, face contour, and the like.
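Sixty-eight-point layouts are commonly organised by organ. The index ranges below follow the widely used iBUG annotation convention; the patent does not specify a layout, so these ranges are an assumption:

```python
# iBUG-style 68-point grouping (indices follow the common convention, not the patent)
ORGAN_SLICES = {
    "face_contour": slice(0, 17),
    "eyebrows": slice(17, 27),
    "nose": slice(27, 36),
    "eyes": slice(36, 48),
    "mouth": slice(48, 68),
}

def group_keypoints(points):
    """Split the 68 (x_i, y_i) keypoints into per-organ point lists."""
    assert len(points) == 68
    return {organ: points[s] for organ, s in ORGAN_SLICES.items()}

pts = [(float(i), float(i)) for i in range(68)]  # dummy coordinates
groups = group_keypoints(pts)
print(len(groups["mouth"]))  # 20
```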
- the coordinate system is established by using the target picture.
- the bottom left corner of the target picture can be used as the origin
- the left side of the target picture is the y-axis
- the bottom side of the target picture is the x-axis.
- the first posture information includes, but is not limited to, a rotation angle, a deflection angle, and the like of a human face.
- The coordinates of the facial features of the first face can be determined from the coordinate positions of the key points; the relative positions of the facial features then determine whether the current face is deflected or rotated, and the corresponding deflection and rotation angles.
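A minimal sketch of deriving pose cues from keypoint coordinates, assuming the eye centers and nose tip have already been located; the patent gives no formulas, so both formulas below are illustrative:

```python
import math

def roll_angle(left_eye, right_eye):
    """In-plane rotation: the angle of the line through the two eye centers."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def yaw_estimate(left_eye, right_eye, nose_tip):
    """Crude deflection cue: how far the nose tip sits from the eye midpoint,
    normalised by the inter-eye distance (0 for a frontal face)."""
    mid_x = (left_eye[0] + right_eye[0]) / 2
    inter = abs(right_eye[0] - left_eye[0]) or 1.0
    return (nose_tip[0] - mid_x) / inter

print(roll_angle((100, 100), (200, 100)))  # 0.0 for a level face
```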
- S1013 Extract a local organ image of the first human face according to the key points, and determine first feature information of the first human face according to the local organ image.
- the feature information includes, but is not limited to, face shape information and local organ feature information.
- an image of a local organ of the first human face is extracted according to the key points obtained in step S1011.
- the local organ includes eyebrows, eyes, nose, mouth and chin. Since the key points corresponding to each local organ have already constituted the outline of the local organ, the local organ image can be extracted according to the contour.
- several geometric proportions on the first human face are determined, and the facial shape information of the first human face is finally obtained according to the geometric proportions.
- These geometric ratios include the ratios between the width of the nose, the width of the upper lip, the width of the lower lip, the width of the eyes, the vertical distance from pupil to nostril, the vertical distance from eyebrow to chin, the vertical distance from nostril to chin, the vertical distance from pupil to mouth corner, the vertical distance from mouth corner to chin, the vertical distance from eyebrow to pupil, and so on.
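A sketch of turning these measurements into a face-shape vector; the dict keys are hypothetical names for the measurements listed above, and the choice of denominator is an assumption:

```python
def face_shape_vector(m):
    """Build a scale-invariant face-shape vector from measurements in pixel units
    (keys are hypothetical names for the distances named in the text)."""
    base = m["eyebrow_to_chin"]  # one vertical span as the common denominator
    keys = ["nose_width", "upper_lip_width", "lower_lip_width", "eye_width",
            "pupil_to_nostril", "nostril_to_chin", "pupil_to_mouth_corner",
            "mouth_corner_to_chin", "eyebrow_to_pupil"]
    return [m[k] / base for k in keys]

m = {"nose_width": 40, "upper_lip_width": 50, "lower_lip_width": 52,
     "eye_width": 30, "pupil_to_nostril": 45, "eyebrow_to_chin": 160,
     "nostril_to_chin": 70, "pupil_to_mouth_corner": 60,
     "mouth_corner_to_chin": 40, "eyebrow_to_pupil": 25}
v = face_shape_vector(m)
doubled = face_shape_vector({k: 2 * val for k, val in m.items()})
print(v == doubled)  # ratios are unchanged when the photo is scaled
```

Using ratios rather than raw distances is what makes the face-shape description comparable across pictures of different resolutions.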
- Gabor feature extraction can be performed on the local organ images to obtain the texture features corresponding to each local organ, that is, the local organ feature information of the first face.
- the local organ feature information and the face shape information as a whole constitute the first feature information of the first human face.
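Gabor texture extraction, as mentioned above, can be sketched in pure Python. A real implementation would use an image library's filters; the kernel parameters here are illustrative, and the mean absolute response is one simple choice of texture feature:

```python
import math

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a Gabor kernel (standard formulation; parameters illustrative)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xt = x * math.cos(theta) + y * math.sin(theta)
            yt = -x * math.sin(theta) + y * math.cos(theta)
            row.append(math.exp(-(xt * xt + yt * yt) / (2 * sigma * sigma))
                       * math.cos(2 * math.pi * xt / lam))
        kernel.append(row)
    return kernel

def filter_response(img, kern):
    """Valid-mode 2-D convolution; returns the mean absolute response."""
    kh, kw = len(kern), len(kern[0])
    h, w = len(img), len(img[0])
    vals = []
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            s = sum(kern[u][v] * img[i + u][j + v]
                    for u in range(kh) for v in range(kw))
            vals.append(abs(s))
    return sum(vals) / len(vals)

stripes = [[float(j % 2) for j in range(8)] for _ in range(8)]  # vertical stripes
vertical = filter_response(stripes, gabor_kernel(5, 2.0, 0.0, 2.0))
horizontal = filter_response(stripes, gabor_kernel(5, 2.0, math.pi / 2, 2.0))
print(vertical > horizontal)  # the theta=0 kernel matches this stripe orientation
```

Responses at several orientations and scales would be concatenated to form the local organ feature information.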
- The gender of the first face can also be determined; therefore, when looking for second faces in the video that match the first pose information, the search is generally restricted to faces of the same gender.
- Before this step is performed, the video needs to be pre-processed. Generally, the pre-processing is completed before the video is released, or before the interactive function is enabled for the video.
- the video pre-processing process includes the following steps:
- S301 Determine the faces appearing in the video and the video frame identifiers corresponding to the faces.
- S302 Extract the feature information and pose information of each face, and record the correspondence between the face, the feature information, the pose information, and the video frame identifiers corresponding to the face.
- The feature information and pose information of the faces found in the video are extracted according to the method in step S101, and the feature information and pose information corresponding to each face, together with the video frame identifiers in which the face appears, are recorded.
- records can be made in the format of the table below.
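The table itself is not reproduced on this page. As an illustrative sketch, each row of the correspondence record might be represented as follows; the field names are assumptions, not taken from the patent:

```python
def make_record(face_id, pose, features, frame_ids):
    """One row of the correspondence table: face, pose, features, frame identifiers."""
    return {"face_id": face_id, "pose": pose, "features": features,
            "frames": sorted(set(frame_ids))}

def frames_for(records, face_id):
    """Look up the video frame identifiers recorded for a given face."""
    for r in records:
        if r["face_id"] == face_id:
            return r["frames"]
    return []

table = [
    make_record("F1", {"deflection": 3.0}, {"face_shape": [0.31, 0.18]},
                [101, 102, 101]),
    make_record("F2", {"deflection": 28.0}, {"face_shape": [0.27, 0.22]},
                [250]),
]
print(frames_for(table, "F1"))  # [101, 102]
```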
- the video processed by the method shown in FIG. 3 can be used to find the second pose information that matches the first pose information through the recorded pose information.
- Since the pose information generally describes the angles of a face, a deviation threshold can be set, such as 5°: if the deviation between the first pose information and the second pose information is within 5°, the two can be considered to match.
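The 5° tolerance check can be sketched directly; the angle names are illustrative:

```python
def poses_match(p1, p2, threshold=5.0):
    """Two poses match when every angle differs by at most the threshold (5° here)."""
    return all(abs(p1[k] - p2[k]) <= threshold for k in p1)

print(poses_match({"deflection": 3.0, "rotation": 1.0},
                  {"deflection": 6.5, "rotation": -2.0}))  # True
```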
- the second pose information is determined, based on the corresponding relationship shown in the above table, the second face corresponding to the second pose information, the second feature information, and the video frame identifier where the second face appears are determined.
- a third human face similar to the first human face is determined based on the first feature information.
- The face shape similarity between the first face and each second face, and the feature similarity of each local organ, may be calculated based on the first feature information and the second feature information; based on the face shape similarity and the feature similarities of the local organs, a third face similar to the first face is determined.
- The face shape similarity ω1 between the first face and a second face, and the similarity of each local organ feature (eyebrow similarity ω2, eye similarity ω3, nose similarity ω4, mouth similarity ω5, chin similarity ω6), are calculated respectively; weights can be set for each similarity, and the total similarity between the first face and the second face is then ω = A1ω1 + A2ω2 + A3ω3 + A4ω4 + A5ω5 + A6ω6.
- a third face that is most similar to the first face is determined.
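The weighted total similarity and the selection of the most similar face can be sketched as follows; the weight values are illustrative, not prescribed by the text:

```python
def total_similarity(sims, weights):
    """omega = A1*w1 + ... + A6*w6; one similarity per feature, weights sum to 1."""
    assert len(sims) == len(weights) == 6
    return sum(w * s for w, s in zip(weights, sims))

# similarity order: face shape, eyebrows, eyes, nose, mouth, chin
weights = [0.3, 0.1, 0.15, 0.15, 0.15, 0.15]  # illustrative weights
candidates = {
    "F1": [0.9, 0.8, 0.7, 0.95, 0.6, 0.85],
    "F2": [0.6, 0.9, 0.8, 0.70, 0.7, 0.60],
}
totals = {fid: total_similarity(s, weights) for fid, s in candidates.items()}
third_face = max(totals, key=totals.get)
print(third_face, round(totals[third_face], 3))  # F1 0.815
```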
- multiple third human faces similar to the first human face may also be determined according to the total similarity and displayed to the user for selection.
- S104 Obtain size information of the third face based on the video frame identifier, process the first face according to the size information, and replace the third face with the processed first face.
- The third face in the video frames identified by the recorded video frame identifiers must be replaced with the first face. Because the size of the third face in each video frame may vary with the distance to the lens, the size information of the third face in each video frame is obtained, and the first face is enlarged or reduced according to this size information so that it matches the shape of the third face in the video.
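The per-frame enlarge/reduce step of S104 can be sketched with nearest-neighbour scaling; a real implementation would warp the face to the target contour rather than rescale a rectangle:

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour scaling; stands in for the enlarge/reduce step in S104."""
    in_h, in_w = len(img), len(img[0])
    return [[img[i * in_h // out_h][j * in_w // out_w] for j in range(out_w)]
            for i in range(out_h)]

def scale_to_frames(frame_sizes, first_face):
    """Scale the first face to the third face's size in each video frame."""
    return {fid: resize_nearest(first_face, h, w)
            for fid, (h, w) in frame_sizes.items()}

face = [[1, 2], [3, 4]]  # dummy 2x2 face image
scaled = scale_to_frames({101: (4, 4), 102: (2, 2)}, face)
print(len(scaled[101]), len(scaled[101][0]))  # 4 4
```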
- During video playback, the processed first face is displayed in the corresponding video frames; alternatively, the similarities between the local organs of the first face and those of the third face may be displayed at a preset position in the video so that the user can see them.
- A user can thus interact with characters appearing in a video; this helps the user find a public figure similar to himself or herself, and enhances the user's interest and sense of immersion when watching the video.
- FIG. 4 is a structural diagram of a video generating apparatus according to an embodiment of the present invention.
- The device specifically includes: an extraction module 100, a search module 200, a first determination module 300, and a replacement module 400, wherein:
- The extraction module 100 is configured to receive a target picture and extract first feature information and first pose information of a first face in the target picture; the search module 200 is configured to find, in the video, second faces that match the first pose information and record the video frame identifiers in which the second faces appear; the first determination module 300 is configured to determine, among the second faces and based on the first feature information, a third face similar to the first face; and the replacement module 400 is configured to obtain size information of the third face based on the video frame identifiers, process the first face according to the size information, and replace the third face with the processed first face.
- the video generating device provided by the embodiment of the present invention is specifically configured to execute the method provided by the embodiment shown in FIG. 1, and its implementation principles, methods, and functional uses are similar to the embodiment shown in FIG. 1, and details are not described herein again.
- FIG. 5 is a structural diagram of a video generating apparatus according to an embodiment of the present invention.
- The device specifically includes: a second determination module 500, a recording module 600, an extraction module 100, a search module 200, a first determination module 300, and a replacement module 400, wherein:
- a second determining module 500 is configured to determine a face appearing in a video and a video frame identifier corresponding to the face;
- a recording module 600 is configured to extract the feature information and pose information of each face, and record the correspondence between the face, the feature information, the pose information, and the video frame identifiers corresponding to the face;
- the extraction module 100 is configured to receive a target picture and extract first feature information of a first face in the target picture And first pose information;
- a finding module 200 for finding a second face in the video that matches the first pose information, and recording a video frame identifier where the second face appears;
- a first determining module 300 using In the second face, a third face similar to the first face is determined based on the first feature information;
- a replacement module 400 is configured to obtain the third face based on the video frame identifier. Size information, processing the first human face according to the size information, and replacing the third human face with the processed first human face.
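The interaction of the extraction, search, determining, and replacement modules above can be pictured as a simple pipeline. The sketch below is purely illustrative: the record layout, helper-function names, and exact-match pose comparison are assumptions, not the patented implementation.

```python
# Hypothetical sketch of the extract -> search -> determine -> replace flow.
# All data structures and helpers are illustrative assumptions.

def generate_video(target_picture, video_index, extract, find_similar, resize_and_swap):
    """Run the four-module flow for one target picture.

    video_index: list of records {"features", "pose", "frame_id"} built in advance
    extract(picture) -> (features, pose) of the first face
    find_similar(features, candidates) -> the most similar candidate record
    resize_and_swap(frame_id, picture) -> frames with the face replaced
    """
    # Extraction module: first feature/pose information of the first face.
    first_features, first_pose = extract(target_picture)

    # Search module: second faces whose recorded pose matches the first pose.
    candidates = [rec for rec in video_index if rec["pose"] == first_pose]
    if not candidates:
        return None

    # First determining module: the third face, most similar to the first face.
    third = find_similar(first_features, candidates)

    # Replacement module: size the first face to the third face and swap it in.
    return resize_and_swap(third["frame_id"], target_picture)
```

A real system would match poses within a tolerance and operate on every frame identifier where the third face appears; the sketch collapses those details for readability.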
- the extraction module 100 includes: a recognition unit 110, a determination unit 120, and an extraction unit 130, wherein:
- the recognition unit 110 is configured to obtain a preset number of key points using a key point recognition algorithm and to determine the coordinate positions of the key points on the target picture; the determination unit 120 is configured to determine the first pose information of the first face based on the coordinate positions; and the extraction unit 130 is configured to extract local organ images of the first face according to the key points and to determine the first feature information of the first face from the local organ images.
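To give a flavor of how pose can be read from key-point coordinates, the sketch below uses a crude three-landmark heuristic for head yaw. This heuristic is an assumption for illustration only; practical systems typically fit a 3D head model to dozens of detected landmarks.

```python
def estimate_yaw_sign(left_eye, right_eye, nose_tip):
    """Crude head-yaw indicator from three landmark (x, y) coordinates.

    Compares the horizontal distance from the nose tip to each eye:
    roughly equal distances suggest a frontal face, otherwise the face
    is turned toward the nearer eye. Illustrative heuristic only.
    """
    d_left = abs(nose_tip[0] - left_eye[0])
    d_right = abs(right_eye[0] - nose_tip[0])
    # Within 10% of the total spread, treat the face as frontal.
    if abs(d_left - d_right) < 0.1 * (d_left + d_right):
        return "frontal"
    return "left" if d_left < d_right else "right"
```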
- the search module 200 is specifically configured to search the recorded pose information for second pose information that matches the first pose information, and to determine, based on the correspondence, the second face corresponding to the second pose information, its second feature information, and the video frame identifier in which the second face appears.
- the feature information includes face shape information and local organ feature information;
- the first determining module 300 is specifically configured to calculate, based on the first feature information and the second feature information, the face-shape similarity and the per-organ feature similarity between the first face and each second face, and to determine a third face similar to the first face based on the face-shape similarity and the per-organ feature similarities.
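Combining a face-shape similarity with per-organ similarities can be sketched as a weighted blend. The cosine measure and the 50/50 weighting below are assumptions for illustration; the embodiment does not fix a particular formula.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def overall_similarity(shape_a, shape_b, organs_a, organs_b, shape_weight=0.5):
    """Blend face-shape similarity with the mean of per-organ similarities.

    organs_a / organs_b map an organ name (e.g. 'eyes', 'nose', 'mouth')
    to its feature vector. Weighting is an illustrative choice.
    """
    shape_sim = cosine_similarity(shape_a, shape_b)
    organ_sims = [cosine_similarity(organs_a[k], organs_b[k]) for k in organs_a]
    organ_sim = sum(organ_sims) / len(organ_sims)
    return shape_weight * shape_sim + (1 - shape_weight) * organ_sim
```

The third face would then be the second face maximizing this score over all pose-matched candidates.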
- the video generating device provided by this embodiment of the present invention is specifically configured to execute the methods provided by the embodiments shown in FIG. 1 to FIG. 3; its implementation principles and functional uses are similar to those of the embodiments shown in FIG. 1 to FIG. 3 and are not described herein again.
- the above-mentioned video generating device may be provided as a standalone software or hardware functional unit in the above-mentioned electronic device, or may be integrated in the processor as one of its functional modules, to execute the video generating method of the embodiments of the present invention.
- FIG. 6 is a schematic diagram of a hardware structure of an electronic device that executes a video generating method provided by a method embodiment of the present invention.
- the electronic device includes:
- One or more processors 610 and a memory 620 are taken as an example in FIG. 6.
- the device for performing the video generating method may further include an input device 630 and an output device 640.
- the processor 610, the memory 620, the input device 630, and the output device 640 may be connected through a bus or other methods. In FIG. 6, the connection through the bus is taken as an example.
- the memory 620 is a non-volatile computer-readable storage medium and may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions and modules corresponding to the video generating method in the embodiment of the present invention.
- the processor 610 executes various functional applications and data processing of the server by running non-volatile software programs, instructions, and modules stored in the memory 620, that is, implementing the video generating method.
- the memory 620 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created through use of the video generating device according to the embodiment of the present invention, and the like.
- the memory 620 may include high-speed random access memory, and may further include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
- the memory 620 may optionally include memories disposed remotely with respect to the processor 610, and these remote memories may be connected to the video generating device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
- the input device 630 may receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the video generating device.
- the input device 630 may include a device such as a pressing module.
- the one or more modules are stored in the memory 620, and when executed by the one or more processors 610, execute the video generation method.
- the electronic devices in the embodiments of the present invention exist in various forms, including but not limited to:
- Mobile communication equipment: this type of equipment is characterized by mobile communication functions, with the main goal of providing voice and data communication.
- Such terminals include: smart phones (such as iPhone), multimedia phones, feature phones, and low-end phones.
- Ultra-mobile personal computer equipment: this type of equipment belongs to the category of personal computers, has computing and processing functions, and generally also has mobile Internet access.
- Such terminals include: PDA, MID and UMPC devices, such as iPad.
- Portable entertainment equipment: this type of equipment can display and play multimedia content.
- Such devices include: audio and video players (such as iPod), handheld game consoles, e-books, as well as smart toys and portable car navigation devices.
- an image acquisition device for acquiring an image is provided on the electronic device, and a software or hardware image stabilizer is often provided on the image acquisition device to ensure the quality of the acquired image.
- Most existing image stabilizers are driven by coils that generate Lorentz force in a magnetic field to move the lens. Because the lens needs to be driven in at least two directions, multiple coils must be arranged, which poses certain challenges to miniaturizing the overall structure; such stabilizers are also easily affected by external magnetic fields, which degrades the anti-shake effect. Therefore, Chinese patent publication CN106131435A provides a miniature optical anti-shake camera module that actuates memory alloy wires through temperature changes.
- the control chip of the micro memory alloy optical anti-shake actuator controls the driving signal to change the temperature of the memory alloy wire, thereby controlling the elongation and contraction of the wire, and calculates the position and travel of the actuator from the resistance of the memory alloy wire. When the actuator has moved to the specified position, the resistance of the memory alloy wire at that moment is fed back, and by comparing the deviation between this resistance value and the target value, the movement deviation of the actuator can be corrected.
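The resistance-feedback correction just described amounts to a simple closed loop: measure the wire's resistance, compare it with the target value for the desired actuator position, and adjust the drive signal by the deviation. A minimal proportional-control sketch follows; the gain, the linear resistance model, and the sign convention (resistance falling as drive rises) are assumptions for illustration, not parameters from CN106131435A.

```python
def correct_drive_signal(measured_resistance, target_resistance, drive, gain=0.5):
    """One proportional correction step for an SMA-wire actuator.

    Assumes wire resistance falls as the drive signal rises, so a
    resistance above target calls for more drive. Illustrative only.
    """
    deviation = measured_resistance - target_resistance
    return drive + gain * deviation

def settle(read_resistance, target_resistance, drive=0.0, steps=50):
    """Iterate the correction until the modelled resistance reaches target.

    read_resistance(drive) stands in for the chip's resistance measurement.
    """
    for _ in range(steps):
        drive = correct_drive_signal(read_resistance(drive), target_resistance, drive)
    return drive
```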
- the above technical solution can compensate the lens for a shake in a first direction, but when a subsequent shake in a second direction occurs, the memory alloy wire cannot deform in an instant, so compensation is easily delayed, and lens shake compensation cannot be accurately achieved for multiple shakes or continuous shakes in different directions. It is therefore necessary to improve the structure in order to obtain better image quality and facilitate subsequent 3D image generation.
- this embodiment therefore improves the anti-shake device, designing it as a mechanical image stabilizer 3000.
- the specific structure is as follows:
- the mechanical image stabilizer 3000 of this embodiment includes a movable plate 3100, a base plate 3200, and a compensation mechanism 3300.
- Each of the movable plate 3100 and the base plate 3200 is provided with a through hole through which the lens 1000 passes.
- An autofocus voice coil motor 2000 is mounted on the movable plate 3100, and the movable plate 3100 is mounted on the base plate 3200.
- the size of the base plate 3200 is larger than that of the movable plate 3100, the movable plate 3100 is restrained from above by the autofocus voice coil motor 2000, which limits its up-and-down movement, and the compensation mechanism 3300, driven by the processing module, drives the movable plate 3100 and the lens 1000 mounted on it to achieve shake compensation of the lens 1000.
- the compensation mechanism 3300 in this embodiment includes a first compensation component 3310, a second compensation component 3320, a third compensation component 3330, and a fourth compensation component 3340 installed around the substrate 3200.
- the first compensation component 3310 and the third compensation component 3330 are disposed opposite to each other, and the second compensation component 3320 and the fourth compensation component 3340 are disposed opposite to each other; the line connecting the first compensation component 3310 and the third compensation component 3330 is perpendicular to the line connecting the second compensation component 3320 and the fourth compensation component 3340, that is, the four compensation components are arranged respectively at the front, rear, left, and right of the movable plate 3100.
- the first compensation component 3310 can make the movable plate 3100 move forward;
- the third compensation component 3330 can make the movable plate 3100 move backward;
- the second compensation component 3320 can make the movable plate 3100 move left;
- the fourth compensation component 3340 can make the movable plate 3100 move right;
- the first compensation component 3310 can cooperate with the second compensation component 3320 or the fourth compensation component 3340 to move the movable plate 3100 in an inclined direction;
- likewise, the third compensation component 3330 can cooperate with the second compensation component 3320 or the fourth compensation component 3340 to move the movable plate 3100 in an inclined direction, so that shake compensation of the lens 1000 is achieved in every direction.
- the first compensation component 3310, the second compensation component 3320, the third compensation component 3330, and the fourth compensation component 3340 in this embodiment each include a driving member 3301, a rotating shaft 3302, a one-way bearing 3303, and a rotating ring gear 3304.
- the driving member 3301 is controlled by the processing module, and the driving member 3301 is drivingly connected to the rotating shaft 3302 to drive the rotating shaft 3302 to rotate.
- the rotating shaft 3302 is connected to the inner ring of the one-way bearing 3303 to drive the inner ring of the one-way bearing 3303 to rotate.
- the rotating ring gear 3304 is sleeved on the one-way bearing 3303 and is fixedly connected to the outer ring of the one-way bearing 3303; the outer surface of the rotating ring gear 3304 is provided with a ring of external teeth along its circumferential direction, and the bottom surface of the movable plate 3100 is provided with a plurality of rows of strip grooves 3110 arranged at even intervals. The strip grooves 3110 mesh with the external teeth, and the external teeth can slide along the length direction of the strip grooves 3110. The rotation direction of the one-way bearing 3303 of the first compensation component 3310 is opposite to that of the one-way bearing 3303 of the third compensation component 3330, and the rotation direction of the one-way bearing 3303 of the second compensation component 3320 is opposite to that of the one-way bearing 3303 of the fourth compensation component 3340.
- One-way bearing 3303 is a bearing that can rotate freely in one direction and lock in the other direction.
- when compensation is required, the driving member 3301 of the first compensation component 3310 causes the rotating shaft 3302 to drive the inner ring of the one-way bearing 3303 to rotate. In this rotation direction the one-way bearing 3303 is locked, so the inner ring drives the outer ring, which in turn drives the rotating ring gear 3304 to rotate; through its engagement with the strip grooves 3110, the rotating ring gear 3304 drives the movable plate 3100 to move in a direction that compensates for the shake.
- the third compensation component 3330 can then be used to drive the movable plate 3100 back to its original position. During this reset, the one-way bearing 3303 of the first compensation component 3310 is in its freely rotatable state, so the ring gear of the first compensation component 3310 simply follows the movable plate 3100 and does not hinder the reset of the movable plate 3100.
- concealing parts of the one-way bearing 3303 and the rotating ring gear 3304 in mounting holes, that is, placing part of each compensation component directly in a mounting hole, reduces the overall thickness of the entire mechanical image stabilizer 3000.
- the driving member 3301 in this embodiment may be a micro motor that is electrically connected to and controlled by the processing module, with the rotation output end of the micro motor connected to the rotating shaft 3302.
- alternatively, the driving member 3301 is composed of a memory alloy wire and a crank connecting rod. One end of the memory alloy wire is fixed on the fixing plate and connected to the processing module through a circuit; the other end of the memory alloy wire is connected to the rotating shaft 3302 through the crank connecting rod to drive the rotating shaft 3302 to rotate. Specifically, the processing module calculates the required elongation of the memory alloy wire according to the feedback from the gyroscope and drives the corresponding circuit to raise the temperature of the shape memory alloy wire; the wire stretches and drives the crank connecting rod mechanism, the crank of which rotates the shaft 3302 and thus the inner ring of the one-way bearing 3303; the inner ring drives the outer ring to rotate, and the rotating ring gear 3304 drives the movable plate 3100 through the strip grooves 3110.
- the following describes the working process of the mechanical image stabilizer 3000 of this embodiment in detail in combination with the above structure.
- suppose the movable plate 3100 needs one forward motion compensation followed by one leftward motion compensation.
- the gyroscope feeds the detected lens 1000 shake direction and distance in advance to the processing module.
- the processing module calculates the required movement distance of the movable plate 3100 and then drives the first compensation component 3310: its driving member 3301 causes the rotating shaft 3302 to drive the inner ring of the one-way bearing 3303, which is locked in this direction, so the inner ring drives the outer ring, which in turn drives the rotating ring gear 3304 to rotate; through the strip grooves 3110 the rotating ring gear 3304 drives the movable plate 3100 forward, after which the third compensation component 3330 drives the movable plate 3100 back to its original position.
- for the leftward compensation, the gyroscope again feeds the detected shake direction and distance of the lens 1000 to the processing module in advance, and the processing module calculates the movement distance required of the movable plate 3100 and drives the second compensation component 3320: its driving member 3301 causes the rotating shaft 3302 to drive the inner ring of the one-way bearing 3303, which is locked in this direction, so the inner ring drives the outer ring, which in turn drives the rotating ring gear 3304 to rotate; through the strip grooves 3110 the rotating ring gear 3304 drives the movable plate 3100 to the left. Because the external teeth of the ring gears 3304 can slide along the length direction of the strip grooves 3110, the sliding fit between the movable plate 3100 and the first compensation component 3310 and the third compensation component 3330 does not impede the leftward movement of the movable plate 3100. Afterwards, the fourth compensation component 3340 is used to drive the movable plate 3100 back to its original position.
- the above describes only two simple shake compensations; when multiple shakes or continuous shakes in different directions occur, the basic working process is the same as the principle described above.
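The forward-then-left sequence above generalizes to a simple dispatch: for each shake reported by the gyroscope, drive the component that moves the plate against the shake, then reset with its opposing partner. The sketch below is hypothetical; the shake-to-component mapping and event format are assumptions layered on the embodiment's component numbering (3310 forward, 3330 backward, 3320 left, 3340 right).

```python
# Map a detected shake direction to (compensating component, reset component).
# Component numbers follow the embodiment; the dispatch logic is illustrative.
COMPENSATE = {
    "backward": ("3310", "3330"),  # shake backward -> move plate forward, reset with 3330
    "forward": ("3330", "3310"),
    "right": ("3320", "3340"),     # shake right -> move plate left, reset with 3340
    "left": ("3340", "3320"),
}

def plan_compensation(shake_events):
    """Turn gyroscope shake events into an ordered (drive, reset) action plan.

    shake_events: iterable of (direction, distance) pairs from the gyroscope.
    """
    plan = []
    for direction, distance in shake_events:
        drive, reset = COMPENSATE[direction]
        plan.append(("drive", drive, distance))
        plan.append(("reset", reset, distance))
    return plan
```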
- the detection and feedback of the shape memory alloy resistance and the detection and feedback of the gyroscope are existing technologies and are not described here again.
- the mechanical image stabilizer provided by this embodiment is not affected by external magnetic fields and has a good anti-shake effect; moreover, it can accurately compensate the lens 1000 in the case of multiple shakes, with timely and accurate compensation, greatly improving the quality of the acquired images and simplifying subsequent 3D image processing.
- the electronic device includes a mobile phone with the image acquisition device.
- the mobile phone includes a stand.
- the purpose of the mobile phone stand is that, because of the uncertainty of the image acquisition environment, a stand is needed to support and fix the mobile phone in order to obtain more stable image quality.
- the bracket 6000 in this embodiment includes a mobile phone mounting base 6100 and a retractable supporting rod 6200.
- the supporting rod 6200 is connected to the middle portion of the mobile phone mounting base 6100 through a damping hinge;
- in one configuration the bracket 6000 may form a selfie-stick structure;
- in another configuration the bracket 6000 may form a mobile phone stand structure.
- the applicant found that the combination of the mobile phone mounting base 6100 and the support rod 6200 takes up considerable space: even though the support rod 6200 is retractable, the mobile phone mounting base 6100 cannot change its structure and its volume cannot be further reduced, so the bracket 6000 cannot be put in a pocket or a small bag, making it inconvenient to carry. Therefore, in this embodiment, a second improvement is made to the bracket 6000 so that its overall storability is further increased.
- the mobile phone mounting base 6100 of this embodiment includes a retractable connection plate 6110 and a folding plate group 6120 installed at opposite ends of the connection plate 6110.
- the support rod 6200 is connected to the middle part of the connection plate 6110 by a damping hinge;
- the folding plate group 6120 includes a first plate body 6121, a second plate body 6122, and a third plate body 6123, wherein one of the two opposite ends of the first plate body 6121 is hinged to the connection plate 6110, and the other end of the first plate body 6121 is hinged to one end of the second plate body 6122; the other end of the second plate body 6122 is hinged to one end of the third plate body 6123; and the second plate body 6122 is provided with an opening 6130 for inserting a corner of the mobile phone.
- in the working state, the first plate body 6121, the second plate body 6122, and the third plate body 6123 are folded into a right-triangle state, with the second plate body 6122 forming the hypotenuse and the first plate body 6121 and the third plate body 6123 forming the two right-angle sides; one side of the third plate body 6123 lies flat against one side of the connection plate 6110, and the other end of the third plate body 6123 abuts against one end of the first plate body 6121.
- This structure puts the three folding plates in a self-locking state, and when the two lower corners of the mobile phone are inserted into the two openings 6130 on both sides, the lower part of the mobile phone 5000 is held within the two right triangles. Fixing of the mobile phone 5000 is thus completed through the joint action of the mobile phone, the connection plate 6110, and the folding plate group 6120; the triangle state cannot be opened by external force, and can only be released after the mobile phone is pulled out of the openings 6130.
- when the mobile phone mounting base 6100 is not in the working state, the connection plate 6110 is retracted to its minimum length and the folding plate group 6120 and the connection plate 6110 are folded against each other. The user can thus fold the mobile phone mounting base 6100 to a minimum volume, and thanks to the retractability of the support rod 6200, the entire bracket 6000 can be stowed in the smallest volume, improving its storability; users can even put the bracket 6000 directly into a pocket or small handbag, which is very convenient.
- a first connecting portion is also provided on one side of the third plate body 6123, and the side surface of the connection plate 6110 that contacts the third plate body 6123 is provided with a first mating portion that mates with the first connecting portion.
- the first connecting portion of this embodiment is a convex strip or protrusion (not shown in the figure), and the first mating portion is a card slot (not shown in the figure) opened on the connection plate 6110.
- This structure not only improves the stability of the folding plate group 6120 in its triangle state, but also makes it easier to join the folding plate group 6120 and the connection plate 6110 when the mobile phone mounting base 6100 needs to be folded to its minimum state.
- a second connecting portion is also provided at one end of the first plate body 6121, and the other end of the third plate body 6123 is provided with a second mating portion that mates with the second connecting portion; the second connecting portion and the second mating portion are engaged with each other.
- the second connecting portion may be a protrusion (not shown in the figure), and the second mating portion may be an opening 6130 or a card slot (not shown in the figure) that cooperates with the protrusion.
- a base (not shown in the figure) can be detachably connected to the other end of the support rod 6200.
- in use, the support rod 6200 can be extended to a certain length and the bracket 6000 placed on a flat surface through the base, after which the mobile phone is placed in the mobile phone mounting base 6100 to complete its fixing; the detachable connection between the support rod 6200 and the base allows the two to be carried separately, further improving the storability and portability of the bracket 6000.
- the device embodiments described above are only schematic; the modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules, i.e. they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment, and those of ordinary skill in the art can understand and implement them without creative labor.
- An embodiment of the present invention provides a non-transitory computer-readable storage medium, where the storage medium stores computer-executable instructions which, when executed by an electronic device, cause the electronic device to perform the video generation method in any of the foregoing method embodiments.
- An embodiment of the present invention provides a computer program product, wherein the computer program product includes a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions which, when executed by an electronic device, cause the electronic device to execute the video generating method in any of the foregoing method embodiments.
- each embodiment can be implemented by means of software plus a necessary universal hardware platform, or of course by hardware.
- the above technical solution, in essence or in the part contributing to the existing technology, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, where a computer-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
- machine-readable media include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). The computer software product includes a number of instructions causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods of the various embodiments or of certain parts of the embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- Studio Devices (AREA)
Abstract
Embodiments of the present invention relate to a video generation method and apparatus, and to an electronic device. The method comprises the steps of: receiving a target picture, and extracting first feature information and first pose information of a first face in the target picture; searching a video for second faces matching the first pose information, and recording the video frame identifiers in which the second faces appear; determining, among the second faces and on the basis of the first feature information, a third face similar to the first face; and acquiring size information of the third face on the basis of the video frame identifiers, processing the first face according to the size information, and replacing the third face with the processed first face. By means of the method, apparatus, and device, a user can interact with characters appearing in a video, which helps the user identify a public figure similar to himself or herself, and increases the user's interest and sense of immersion when watching the video.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/102334 WO2020037681A1 (fr) | 2018-08-24 | 2018-08-24 | Procédé et appareil de génération de vidéo, et dispositif électronique |
CN201811035719.3A CN108966017B (zh) | 2018-08-24 | 2018-09-06 | 视频生成方法、装置及电子设备 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/102334 WO2020037681A1 (fr) | 2018-08-24 | 2018-08-24 | Procédé et appareil de génération de vidéo, et dispositif électronique |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020037681A1 true WO2020037681A1 (fr) | 2020-02-27 |
Family
ID=64476031
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/102334 WO2020037681A1 (fr) | 2018-08-24 | 2018-08-24 | Procédé et appareil de génération de vidéo, et dispositif électronique |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108966017B (fr) |
WO (1) | WO2020037681A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112001296A (zh) * | 2020-08-20 | 2020-11-27 | 广东电网有限责任公司清远供电局 | 变电站立体化安全监控方法、装置,服务器及存储介质 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110266973B (zh) * | 2019-07-19 | 2020-08-25 | 腾讯科技(深圳)有限公司 | 视频处理方法、装置、计算机可读存储介质和计算机设备 |
CN110675433A (zh) * | 2019-10-31 | 2020-01-10 | 北京达佳互联信息技术有限公司 | 视频处理方法、装置、电子设备及存储介质 |
CN111047930B (zh) * | 2019-11-29 | 2021-07-16 | 联想(北京)有限公司 | 一种处理方法、装置及电子设备 |
CN113066497A (zh) * | 2021-03-18 | 2021-07-02 | Oppo广东移动通信有限公司 | 数据处理方法、装置、系统、电子设备和可读存储介质 |
CN113873175B (zh) * | 2021-09-15 | 2024-03-15 | 广州繁星互娱信息科技有限公司 | 视频播放方法、装置和存储介质及电子设备 |
CN113965802A (zh) * | 2021-10-22 | 2022-01-21 | 深圳市兆驰股份有限公司 | 沉浸式视频交互方法、装置、设备和存储介质 |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6141431A (en) * | 1995-02-02 | 2000-10-31 | Matsushita Electric Industrial Co., Ltd. | Image processing apparatus |
US6633289B1 (en) * | 1997-10-30 | 2003-10-14 | Wouter Adrie Lotens | Method and a device for displaying at least part of the human body with a modified appearance thereof |
CN1522425A (zh) * | 2001-07-03 | 2004-08-18 | 皇家飞利浦电子股份有限公司 | 在原始图像上叠加用户图像的方法和设备
US20070127844A1 (en) * | 2005-12-07 | 2007-06-07 | Sony Corporation | Image processing apparatus, image processing method, program, and data configuration |
CN101142586A (zh) * | 2005-03-18 | 2008-03-12 | 皇家飞利浦电子股份有限公司 | 执行人脸识别的方法 |
CN102196245A (zh) * | 2011-04-07 | 2011-09-21 | 北京中星微电子有限公司 | 一种角色互动的视频播放方法和视频播放装置 |
CN103258316A (zh) * | 2013-03-29 | 2013-08-21 | 东莞宇龙通信科技有限公司 | 一种图片处理方法和装置 |
CN105096354A (zh) * | 2014-05-05 | 2015-11-25 | 腾讯科技(深圳)有限公司 | 一种图像处理的方法和装置 |
CN105469379A (zh) * | 2014-09-04 | 2016-04-06 | 广东中星电子有限公司 | 视频目标区域遮挡方法和装置 |
CN106101771A (zh) * | 2016-06-27 | 2016-11-09 | 乐视控股(北京)有限公司 | 视频处理方法、装置及终端 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100929561B1 (ko) * | 2007-08-31 | 2009-12-03 | (주)에프엑스기어 | 사용자지정 안면이미지/음성데이터가 반영된 특화영상컨텐츠 제공 시스템 |
JP2015034912A (ja) * | 2013-08-09 | 2015-02-19 | ミツミ電機株式会社 | レンズホルダ駆動装置、カメラモジュール、およびカメラ付き携帯端末 |
JP6316559B2 (ja) * | 2013-09-11 | 2018-04-25 | クラリオン株式会社 | 情報処理装置、ジェスチャー検出方法、およびジェスチャー検出プログラム |
US9898836B2 (en) * | 2015-02-06 | 2018-02-20 | Ming Chuan University | Method for automatic video face replacement by using a 2D face image to estimate a 3D vector angle of the face image |
CN105118082B (zh) * | 2015-07-30 | 2019-05-28 | 科大讯飞股份有限公司 | 个性化视频生成方法及系统 |
CN106131435B (zh) * | 2016-08-25 | 2021-12-07 | 东莞市亚登电子有限公司 | 微型光学防抖摄像头模组 |
CN205987121U (zh) * | 2016-08-25 | 2017-02-22 | 东莞市亚登电子有限公司 | 微型光学防抖摄像头模组结构 |
CN107563336A (zh) * | 2017-09-07 | 2018-01-09 | 廖海斌 | 用于名人匹配游戏的人脸相似度分析方法、装置和系统 |
- 2018-08-24 WO PCT/CN2018/102334 patent/WO2020037681A1/fr active Application Filing
- 2018-09-06 CN CN201811035719.3A patent/CN108966017B/zh active Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112001296A (zh) * | 2020-08-20 | 2020-11-27 | Qingyuan Power Supply Bureau of Guangdong Power Grid Co., Ltd. | Three-dimensional substation safety monitoring method, apparatus, server, and storage medium |
CN112001296B (zh) * | 2020-08-20 | 2024-03-29 | Qingyuan Power Supply Bureau of Guangdong Power Grid Co., Ltd. | Three-dimensional substation safety monitoring method, apparatus, server, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108966017B (zh) | 2021-02-12 |
CN108966017A (zh) | 2018-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020037681A1 (fr) | Video generation method and apparatus, and electronic device | |
WO2020037679A1 (fr) | Video processing method and apparatus, and electronic device | |
WO2020037676A1 (fr) | Method and apparatus for generating a three-dimensional face image, and electronic device | |
WO2020037678A1 (fr) | Method, device, and electronic apparatus for generating a three-dimensional human face image from an occluded image | |
WO2020037680A1 (fr) | Light-based three-dimensional face optimization method and apparatus, and electronic device | |
US11509817B2 (en) | Autonomous media capturing | |
CN111726536B (zh) | Video generation method, apparatus, storage medium, and computer device | |
US10453248B2 (en) | Method of providing virtual space and system for executing the same | |
WO2019205284A1 (fr) | AR imaging method and apparatus | |
US9479736B1 (en) | Rendered audiovisual communication | |
US20180373413A1 (en) | Information processing method and apparatus, and program for executing the information processing method on computer | |
WO2019200719A1 (fr) | Three-dimensional human face model generation method and apparatus, and electronic device | |
US10545339B2 (en) | Information processing method and information processing system | |
CN106161939B (zh) | Photo shooting method and terminal | |
US20140016871A1 (en) | Method for correcting user's gaze direction in image, machine-readable storage medium and communication terminal | |
TWI255141B (en) | Method and system for real-time interactive video | |
WO2020056692A1 (fr) | Information interaction method and apparatus, and electronic device | |
WO2020056689A1 (fr) | AR imaging method and apparatus, and electronic device | |
BR112015014629A2 (pt) | Method for operating a system having a monitor, a camera, and a processor | |
WO2020056691A1 (fr) | Interactive object generation method, device, and electronic apparatus | |
CN113487709B (zh) | Special effect display method and apparatus, computer device, and storage medium | |
CN114697539B (zh) | Photographing recommendation method and apparatus, electronic device, and storage medium | |
CN113453034A (zh) | Data display method and apparatus, electronic device, and computer-readable storage medium | |
WO2021232875A1 (fr) | Digital human control method and apparatus, and electronic apparatus | |
JP7431903B2 (ja) | Imaging system, imaging method, imaging program, and stuffed toy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 18930851 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: PCT application non-entry in European phase |
Ref document number: 18930851 Country of ref document: EP Kind code of ref document: A1 |