WO2017092127A1 - Video categorization method and apparatus - Google Patents
Video categorization method and apparatus
- Publication number: WO2017092127A1 (application PCT/CN2015/099610)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- face
- category
- determining
- picture
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
- G06F16/784—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/7867—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Definitions
- the present disclosure relates to the field of multimedia clustering technologies, and in particular, to a video categorization method and apparatus.
- Embodiments of the present disclosure provide a video categorization method and apparatus.
- the technical solution is as follows:
- a video categorization method including:
- acquiring a key frame including a face in a video; acquiring a face feature in the key frame; acquiring face features corresponding to picture categories; determining, according to the face feature in the key frame and the face features corresponding to the picture categories, the picture category to which the video belongs; and assigning the video to the picture category to which the video belongs.
- the acquiring a key frame including a face in the video includes:
- acquiring at least one video frame including a face from the video; determining the face parameters in each of the at least one video frame, the face parameters including either or both of the number of faces and the positions of the faces; and determining a key frame in the video based on the face parameters in each video frame.
- the determining, according to the face parameters in each video frame, the key frames in the video includes:
- determining, according to the face parameters in each video frame, non-repeating video frames whose face parameters do not appear repeatedly in any other video frame; and determining at least one of the non-repeating video frames as the key frame.
- the determining, according to the face parameters in each video frame, the key frames in the video includes:
- determining, according to the face parameters in each video frame, at least one group of repeated video frames having the same face parameters, where each group of repeated video frames includes at least two video frames, the difference between the capture time of the latest-captured video frame and that of the earliest-captured video frame in each group is less than or equal to a preset duration, and the face parameters of all the video frames in each group are the same;
- determining any one video frame in each group of repeated video frames as the key frame.
- the determining, according to the face feature in the key frame and the face features corresponding to the picture categories, the picture category to which the video belongs includes: when the number of videos is at least two, determining a face feature in the key frame of each video; performing face clustering on the at least two videos according to the face feature in the key frame of each video to obtain at least one video category; and determining, according to the face feature corresponding to each of the at least one video category and the face features corresponding to the picture categories, the video category and the picture category that correspond to the same face feature;
- the assigning the video to the picture category to which the video belongs includes: assigning the videos in each video category to the picture category corresponding to the same face feature.
- the determining, according to the face feature in the key frame and the face features corresponding to the picture categories, the picture category to which the video belongs includes:
- determining, among the face features corresponding to the picture categories, a picture category that matches the face feature in the key frame; and determining the matched picture category as the picture category to which the video belongs.
- the method further includes:
- acquiring the shooting time and shooting location of the video; determining a target picture having the same shooting time and shooting location as the video; and assigning the video to the picture category to which the target picture belongs.
- a video categorization apparatus including:
- a first acquiring module configured to acquire a key frame including a face in the video
- a second acquiring module configured to acquire a facial feature in the key frame acquired by the first acquiring module
- a third acquiring module configured to acquire a face feature corresponding to the picture category
- a first determining module configured to determine the picture category to which the video belongs according to the face feature in the key frame acquired by the second acquiring module and the face features corresponding to the picture categories acquired by the third acquiring module;
- a first allocating module configured to allocate the video to the picture category determined by the first determining module.
- the first acquiring module includes:
- an acquiring submodule configured to acquire at least one video frame including a face from the video;
- a first determining submodule configured to determine, in the at least one video frame acquired by the acquiring submodule, the face parameters in each video frame, the face parameters including either or both of the number of faces and the positions of the faces;
- a second determining submodule configured to determine a key frame in the video according to the face parameter in each video frame.
- the second determining submodule is further configured to determine, according to the face parameters in each video frame, non-repeating video frames whose face parameters do not appear repeatedly in any other video frame, and to determine at least one of the non-repeating video frames as the key frame.
- the second determining submodule is further configured to determine, according to the face parameters in each video frame, at least one group of repeated video frames having the same face parameters, where each group of repeated video frames includes at least two video frames, the difference between the capture time of the latest-captured video frame and that of the earliest-captured video frame in each group is less than or equal to a preset duration, and the face parameters of all the video frames in each group are the same; and to determine any one video frame in each group of repeated video frames as the key frame.
- the first determining module includes:
- a third determining submodule configured to determine a face feature in the key frame of each video when the number of videos is at least two; to perform face clustering on the at least two videos according to the face feature in the key frame of each video to obtain at least one video category; and to determine, according to the face feature corresponding to each of the at least one video category and the face features corresponding to the picture categories, the video category and the picture category that correspond to the same face feature;
- the first distribution module includes:
- a first allocating submodule configured to allocate the videos in each video category determined by the third determining submodule to the picture category corresponding to the same face feature.
- the first determining module includes:
- a fourth determining submodule configured to determine, among the face features corresponding to the picture categories, a picture category that matches the face feature in the key frame;
- a second allocating submodule configured to determine the matched picture category determined by the fourth determining submodule as the picture category to which the video belongs.
- the apparatus further includes:
- a fourth acquiring module configured to acquire a shooting time and a shooting location of the video
- a second determining module configured to determine a target picture having the same shooting time and shooting location as the video acquired by the fourth acquiring module;
- a second allocation module configured to allocate the video to a picture category to which the target picture determined by the second determining module belongs.
- a video categorization apparatus including:
- a memory for storing processor-executable instructions;
- the processor is configured to: acquire a key frame including a face in a video; acquire a face feature in the key frame; acquire face features corresponding to picture categories; determine, according to the face feature in the key frame and the face features corresponding to the picture categories, the picture category to which the video belongs; and assign the video to the picture category to which the video belongs.
- the video can be intelligently and automatically classified into the picture category corresponding to the person appearing in the video, which requires no manual classification by the user and achieves high classification accuracy.
- FIG. 1 is a flow chart showing a video categorization method according to an exemplary embodiment.
- FIG. 2 is a flow chart of another video categorization method, according to an exemplary embodiment.
- FIG. 3 is a flowchart of still another video categorization method according to an exemplary embodiment.
- FIG. 4 is a block diagram of a video categorizing device, according to an exemplary embodiment.
- FIG. 5 is a block diagram of another video categorization device, according to an exemplary embodiment.
- FIG. 6 is a block diagram of still another video categorization apparatus according to an exemplary embodiment.
- FIG. 7 is a block diagram of still another video categorization apparatus, according to an exemplary embodiment.
- FIG. 8 is a block diagram of still another video categorization apparatus, according to an exemplary embodiment.
- FIG. 9 is a block diagram of an apparatus for video categorization, according to an exemplary embodiment.
- the embodiment of the present disclosure provides a video categorization technique that can intelligently and automatically assign a video to the picture category corresponding to the person appearing in the video, which requires no manual classification by the user and achieves high classification accuracy.
- each picture category corresponds to one face;
- the pictures in each picture category contain the same face; in other words, one picture category corresponds to one person, so each picture category includes a group of images sharing the same facial feature.
- the embodiment of the present disclosure may adopt the following face clustering method to generate a picture category, but is not limited to the following method.
- the first clustering is initialized by a full-scale clustering method, and the subsequent clustering is generally an incremental clustering method.
- the face clustering method may include the following steps A1-A5:
- Step A1: obtain the face feature contained in each of N pictures, yielding N face features, where N is greater than or equal to 2. At the start of clustering, each face is treated as its own class, so there are N classes initially.
- Step A2: among the N classes, calculate the distance between every pair of classes, where the distance between two classes is the distance between the faces of the two classes.
- Step A3: a distance threshold θ is preset; when the distance between two classes is less than θ, the two classes are considered to correspond to the same person, and this iteration merges them into a new class.
- Step A4: step A3 is repeated until an iteration produces no new class, at which point the iteration terminates.
- Step A5: the result is M classes, each containing at least one face, with one class representing one person.
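Steps A1-A5 above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: face features are assumed to be numeric vectors, the class-to-class distance is taken as the smallest pairwise distance between member faces (the text does not fix a particular choice), and the name `face_cluster` is hypothetical.

```python
import numpy as np

def face_cluster(features, threshold):
    # Step A1: each face starts as its own class (a list of indices).
    classes = [[i] for i in range(len(features))]

    def class_distance(a, b):
        # Step A2 (assumed form): smallest pairwise distance between
        # the member faces of the two classes.
        return min(np.linalg.norm(features[i] - features[j])
                   for i in a for j in b)

    merged = True
    while merged:  # Step A4: iterate until a pass produces no new class.
        merged = False
        for x in range(len(classes)):
            for y in range(x + 1, len(classes)):
                # Step A3: merge two classes closer than the threshold.
                if class_distance(classes[x], classes[y]) < threshold:
                    classes[x] = classes[x] + classes[y]
                    del classes[y]
                    merged = True
                    break
            if merged:
                break
    return classes  # Step A5: M classes, one class per person.
```

With a threshold tuned to the feature space, nearby features of the same person collapse into one class while distant faces remain separate.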
- FIG. 1 is a flowchart of a video categorization method according to an embodiment of the present disclosure.
- the executing body of the method may be an application for managing multimedia files, in which case the videos, the picture categories, and the pictures under the picture categories involved in the method are those stored on the device where the application runs;
- the executing body may also be an electronic device that stores multimedia files, in which case the videos, the picture categories, and the pictures under the picture categories involved in the method are those stored in the electronic device.
- the foregoing application or electronic device may trigger the method automatically on a periodic basis, upon receiving an instruction from the user, or upon detecting that at least one new video has been generated; the triggering timing may vary and is not limited to the examples above.
- the ultimate purpose is to use the method to classify videos intelligently and save manual effort. As shown in FIG. 1, the method includes steps S101-S105:
- step S101 a key frame including a face in the video is acquired.
- any one or more video frames including a face may be selected from the video as key frames, or key frames may be acquired as shown in FIG. 2.
- step S101 may be implemented as steps S201-S203:
- step S201 at least one video frame including a face is acquired from the video.
- step S202: in the at least one video frame, the face parameters in each video frame are determined; the face parameters include either or both of the number of faces and the positions of the faces.
- step S203 a key frame in the video is determined based on the face parameters in each video frame.
- step S203 can be implemented in either or both of the following manners 1 and 2.
- Manner 1: according to the face parameters in each video frame, determine the non-repeating video frames whose face parameters do not appear repeatedly in any other video frame, and determine at least one of the non-repeating video frames as the key frame.
- A non-repeating video frame is a video frame whose face parameters differ from those of every other video frame, that is, whose face picture does not recur in other video frames; therefore, one or more non-repeating video frames can be selected arbitrarily as key frames.
- Manner 2: according to the face parameters in each video frame, determine at least one group of repeated video frames having the same face parameters, where each group includes at least two video frames, the difference between the capture time of the latest-captured video frame and that of the earliest-captured video frame in each group is less than or equal to a preset duration, and the face parameters of all the video frames in each group are the same; then determine any one video frame in each group of repeated video frames as a key frame.
- the preset duration can be set in advance. Since the same picture in a video does not last very long, the preset duration should not be too long; given that video plays at 24 frames per second, the preset duration can be kept to N/24 seconds, where N is greater than or equal to 1 and less than or equal to 24 (or 36, or another value chosen as needed). The shorter the preset duration, the more accurate the selected key frames. Since the face picture of every video frame in a group of repeated video frames is the same, that is, the same face picture appears in multiple video frames, any one video frame in each group can be selected as a key frame, which achieves deduplication and improves the efficiency of key-frame selection.
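Manner 2's grouping can be sketched as follows, assuming the frames arrive sorted by capture time and that face parameters can be compared directly; the function name and the `(capture_time, face_params)` tuple layout are illustrative, not from the patent.

```python
def select_key_frames(frames, preset_duration):
    # `frames`: list of (capture_time, face_params) sorted by capture time;
    # `face_params` is any comparable description, e.g. a (count, positions)
    # tuple. Frames with identical face parameters inside the preset
    # duration form one group of repeated video frames.
    key_frames = []
    group = []  # current group of repeated video frames
    for frame in frames:
        time, params = frame
        if group and params == group[0][1] \
                and time - group[0][0] <= preset_duration:
            group.append(frame)  # same face picture, within the window
        else:
            if group:
                # Manner 2: keep any one frame of the finished group.
                key_frames.append(group[0])
            group = [frame]
    if group:
        key_frames.append(group[0])
    return key_frames
```

Each run of identical face parameters within the preset duration collapses to a single key frame, which is the deduplication effect described above; a frame whose parameters repeat nowhere forms a group of one and is kept, matching manner 1.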
- the first method and the second method may be implemented separately or in combination.
- step S102 a face feature in a key frame is acquired.
- step S103 a face feature corresponding to the picture category is acquired.
- step S104 the picture category to which the video belongs is determined according to the face feature in the key frame and the face feature corresponding to the picture category.
- step S105 the video is assigned to the picture category to which the video belongs.
- the above method provided by the embodiment of the present disclosure can intelligently and automatically classify videos with pictures; it requires no manual classification by the user and, because classification is based on facial features, achieves high accuracy.
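The flow of steps S101-S105 can be outlined as below. `extract_face_feature` and `match_category` stand in for the face detection and face clustering steps and are assumptions for illustration, not names from the patent.

```python
def categorize_video(video_frames, picture_categories, extract_face_feature,
                     match_category):
    # S101: take frames containing a face as candidate key frames.
    key_frames = [f for f in video_frames
                  if extract_face_feature(f) is not None]
    if not key_frames:
        return None
    # S102: the face feature in the key frame.
    feature = extract_face_feature(key_frames[0])
    # S103-S104: compare against the face features of existing
    # picture categories to find the one the video belongs to.
    category = match_category(feature, picture_categories)
    # S105: assign the video to the matched picture category.
    if category is not None:
        picture_categories[category].append(video_frames)
    return category
```

The two callbacks are the pluggable parts: in practice S102 would be a face-feature extractor and S104 the clustering of steps A1-A5.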
- step S104 may be implemented as steps B1-B2: step B1, among the face features corresponding to the picture categories, determine a picture category that matches the face feature in the key frame; for example, by performing the foregoing steps A1-A5, the picture category to which the key frame belongs is determined through face clustering according to the face feature in the key frame, and that category is the picture category matching the face feature in the key frame;
- step B2: the matched picture category determined in step B1 is determined as the picture category to which the video belongs.
- step S104 can be implemented as steps C1-C3:
- Step C1: when the number of videos is at least two, determine the face feature in the key frame of each video. Step C2: perform face clustering on the at least two videos according to the face feature in the key frame of each video to obtain at least one video category, where one video category corresponds to one face; specifically, the face clustering method of steps A1-A5 may be applied to the key frames to obtain at least one class, one class being one video category, so that each video category corresponds to one face feature, and the video category to which a video's key frame belongs is the video category to which the video belongs. Step C3: according to the face feature corresponding to each of the at least one video category and the face features corresponding to the picture categories, determine the video category and the picture category that correspond to the same face feature.
- the above step S105 can be implemented as: assigning videos in each video category to picture categories corresponding to the same facial features.
- the videos are first subjected to face clustering to obtain video categories; the video categories and the picture categories are then jointly clustered to determine the video category and picture category that correspond to the same face, and the videos in each video category are assigned to the picture category corresponding to the same facial feature, thereby categorizing the videos.
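Steps C1-C3 together with the allocation of step S105 can be sketched as follows. The dictionary layout and the `same_person` predicate (for instance a feature-distance threshold) are assumptions for illustration, not the patent's interfaces.

```python
def assign_videos_to_picture_categories(video_categories, picture_categories,
                                        same_person):
    # `video_categories` / `picture_categories`: dicts mapping a
    # representative face feature to the list of videos / pictures in
    # that category (assumed layout). `same_person(f1, f2)` decides
    # whether two face features belong to the same person.
    for v_feature, videos in video_categories.items():
        for p_feature, pictures in picture_categories.items():
            # Step C3: video category and picture category with the
            # same face feature correspond to the same person.
            if same_person(v_feature, p_feature):
                pictures.extend(videos)  # S105: allocate the videos
                break
    return picture_categories
```

A video category with no matching picture category is simply left unassigned here; the patent does not prescribe a fallback for that case.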
- the above method may also categorize videos in the following way, which requires no face clustering but instead roughly assumes that a video and a picture with the same shooting time and shooting location involve the same person and can therefore be placed in one category; this approach has reasonable accuracy and is fast.
- the foregoing method may further include steps S301-S303: step S301, acquire the shooting time and shooting location of the video; step S302, determine a target picture having the same shooting time and shooting location as the video; step S303, assign the video to the picture category to which the target picture belongs.
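Steps S301-S303 can be sketched without any face processing; the `'time'`/`'location'` keys and the `category_of` mapping are illustrative names, not from the patent.

```python
def assign_by_time_and_location(video, pictures, category_of):
    # `video` and each picture are dicts with assumed 'time' and
    # 'location' keys; `category_of` maps a picture to its category
    # list (both names are assumptions for illustration).
    for picture in pictures:
        # S302: a target picture with the same shooting time and place.
        if (picture['time'] == video['time']
                and picture['location'] == video['location']):
            category = category_of(picture)
            category.append(video)  # S303: assign the video
            return category
    return None  # no matching target picture was found
```

In practice the equality test would allow some tolerance (e.g. the same hour and the same GPS cell), since exact timestamp matches are rare.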
- a second aspect of the embodiments of the present disclosure provides a video categorization apparatus, which may be used by an application for managing multimedia files; in that case, the videos, the picture categories, and the pictures under the picture categories handled by the apparatus are those stored on the device where the application runs.
- the apparatus may also be used by an electronic device that stores multimedia files; in that case, the videos, the picture categories, and the pictures under the picture categories handled by the apparatus are those stored in the electronic device.
- the above application or electronic device may trigger the apparatus automatically on a periodic basis, upon receiving an instruction from the user, or upon detecting that at least one new video has been generated; the triggering timing may vary and is not limited to the examples above. The ultimate purpose is to use the apparatus to classify videos intelligently and save manual effort.
- the device comprises:
- the first obtaining module 41 is configured to acquire a key frame including a face in the video
- the second obtaining module 42 is configured to acquire a facial feature in the key frame acquired by the first obtaining module 41;
- the third obtaining module 43 is configured to acquire a face feature corresponding to the picture category
- the first determining module 44 is configured to determine a picture category to which the video belongs according to the face feature in the key frame acquired by the second obtaining module 42 and the face feature corresponding to the picture category acquired by the third obtaining module 43;
- the first allocating module 45 is configured to allocate the video to the picture category determined by the first determining module 44.
- the foregoing apparatus provided by the embodiment of the present disclosure can intelligently and automatically classify videos with pictures; it requires no manual classification by the user and, because classification is based on facial features, achieves high accuracy.
- the first obtaining module 41 includes:
- the obtaining submodule 51 is configured to acquire at least one video frame including a human face from the video;
- the first determining submodule 52 is configured to determine, in the at least one video frame acquired by the obtaining submodule 51, the face parameters in each video frame, the face parameters including either or both of the number of faces and the positions of the faces;
- the second determining sub-module 53 is configured to determine key frames in the video based on the face parameters in each video frame.
- the second determining submodule 53 is further configured to determine, according to the face parameters in each video frame, non-repeating video frames whose face parameters do not appear repeatedly in other video frames, and to determine at least one of the non-repeating video frames as a key frame. A non-repeating video frame is a video frame whose face parameters differ from those of every other video frame, that is, whose face picture does not recur in other video frames; therefore, one or more non-repeating video frames can be selected arbitrarily as key frames.
- the second determining submodule 53 is further configured to determine, according to the face parameters in each video frame, at least one group of repeated video frames having the same face parameters, where each group of repeated video frames includes at least two video frames, the difference between the capture time of the latest-captured video frame and that of the earliest-captured video frame in each group is less than or equal to a preset duration, and the face parameters of all the video frames in each group are the same; and to determine any one video frame in each group of repeated video frames as a key frame.
- the preset duration can be set in advance. Since the same picture in a video does not last very long, the preset duration should not be too long; given that video plays at 24 frames per second, the preset duration can be kept to N/24 seconds, where N is greater than or equal to 1 and less than or equal to 24 (or 36, or another value chosen as needed). The shorter the preset duration, the more accurate the selected key frames. Since the face picture of every video frame in a group of repeated video frames is the same, that is, the same face picture appears in multiple video frames, any one video frame in each group can be selected as a key frame, which achieves deduplication and improves the efficiency of key-frame selection.
- the first determining module 44 includes:
- a third determining submodule 61 configured to determine a face feature in the key frame of each video when the number of videos is at least two; to perform face clustering on the at least two videos according to the face feature in the key frame of each video to obtain at least one video category, where one video category corresponds to one face (specifically, the face clustering method of steps A1-A5 may be applied to the key frames, one class being one video category, so that each video category corresponds to one face feature, and the video category to which a video's key frame belongs is the video category to which the video belongs); and to determine, according to the face feature corresponding to each of the at least one video category and the face features corresponding to the picture categories, the video category and the picture category that correspond to the same face feature.
- the first distribution module 45 includes:
- the first allocating submodule 62 is configured to allocate the videos in each video category determined by the third determining submodule 61 to the picture category corresponding to the same facial feature.
- In this embodiment, the videos are first subjected to face clustering to obtain video categories; the video categories and the picture categories are then compared to determine the video category and picture category corresponding to the same face, and the videos in each video
- category are assigned to the picture category corresponding to the same facial feature, thereby categorizing the videos.
- the first determining module 44 includes:
- the fourth determining sub-module 71 is configured to determine, among the face features corresponding to picture categories, a picture category that matches the face feature in the key frame;
- the second allocation sub-module 72 is configured to determine the matched picture category determined by the fourth determining sub-module 71 as the picture category to which the video belongs.
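The direct-matching variant handled by the fourth determining sub-module can be illustrated as a nearest-match lookup; the distance metric and the threshold value are assumptions for illustration only:

```python
# Sketch: compare one video's key-frame face feature against each picture
# category's face feature; return the closest match within a threshold,
# or None when no picture category is close enough.
import math

def match_picture_category(key_frame_feature, picture_categories, threshold=0.6):
    best_name, best_dist = None, threshold
    for name, feat in picture_categories.items():
        d = math.dist(key_frame_feature, feat)
        if d <= best_dist:
            best_name, best_dist = name, d
    return best_name
```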
- the foregoing apparatus further includes:
- the fourth obtaining module 81 is configured to acquire a shooting time and a shooting location of the video;
- the second determining module 82 is configured to determine a target picture having the same shooting time and shooting location as the video acquired by the fourth obtaining module 81;
- the second allocating module 83 is configured to allocate the video to the picture category to which the target picture determined by the second determining module 82 belongs.
- The above apparatus does not need to perform face clustering. Instead, it makes the rough assumption that a video and a picture with the same shooting time and shooting location depict the same person, so they can be grouped into one category. This provides a certain accuracy while classifying quickly.
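A hedged sketch of this time-and-place heuristic; the field names and the tolerance values below are invented for illustration, since the patent only requires "the same" shooting time and location:

```python
# Sketch: a video joins the category of any picture shot at (approximately)
# the same time and location. Tolerances are illustrative assumptions.

def categorize_by_time_and_place(video, pictures, time_tol_s=3600, loc_tol=0.01):
    """video/pictures: dicts with 'time' (epoch seconds), 'lat', 'lon';
    each picture additionally carries its 'category'."""
    for pic in pictures:
        if (abs(video["time"] - pic["time"]) <= time_tol_s
                and abs(video["lat"] - pic["lat"]) <= loc_tol
                and abs(video["lon"] - pic["lon"]) <= loc_tol):
            return pic["category"]
    return None  # no matching target picture found
```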
- In an exemplary embodiment, a video categorization apparatus includes: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the video categorization method described above.
- FIG. 9 is a block diagram of an apparatus 800 for video categorization, according to an exemplary embodiment.
- device 800 can be a mobile device such as a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, and the like.
- device 800 can include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, And a communication component 816.
- Processing component 802 typically controls the overall operation of device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- Processing component 802 can include one or more processors 820 to execute instructions to perform all or part of the steps of the above described methods.
- Processing component 802 can include one or more modules to facilitate interaction between the processing component 802 and other components.
- processing component 802 can include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
- Memory 804 is configured to store various types of data to support operation at device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phone book data, messages, pictures, videos, and the like.
- The memory 804 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- Power component 806 provides power to various components of device 800.
- Power component 806 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 800.
- The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user.
- the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
- The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors can sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation.
- The multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera can be a fixed optical lens system or have focusing and optical zoom capability.
- the audio component 810 is configured to output and/or input an audio signal.
- the audio component 810 includes a microphone (MIC) that is configured to receive an external audio signal when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
- the received audio signal may be further stored in memory 804 or transmitted via communication component 816.
- the audio component 810 also includes a speaker for outputting an audio signal.
- the I/O interface 812 provides an interface between the processing component 802 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
- Sensor assembly 814 includes one or more sensors for providing device 800 with a status assessment of various aspects.
- Sensor assembly 814 can detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800. Sensor assembly 814 can also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800.
- Sensor assembly 814 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
- Sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor assembly 814 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- Communication component 816 is configured to facilitate wired or wireless communication between device 800 and other devices.
- the device 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
- the communication component 816 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
- the communication component 816 also includes a near field communication (NFC) module to facilitate short range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
- Device 800 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
- In an exemplary embodiment, there is also provided a non-transitory computer readable storage medium comprising instructions, such as the memory 804 comprising instructions executable by the processor 820 of the apparatus 800 to perform the above method. For example, the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
- A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform a video categorization method, the method comprising: acquiring a key frame including a face in a video; acquiring a face feature in the key frame; acquiring a face feature corresponding to a picture category; determining, according to the face feature in the key frame and the face feature corresponding to the picture category, a picture category to which the video belongs; and
- assigning the video to the picture category to which the video belongs.
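As an illustrative aside, the claimed flow (key frame → face feature → category features → match → assignment) can be strung together in a toy end-to-end function; every name and data shape here is hypothetical:

```python
# Toy end-to-end sketch of the claimed steps, with stand-in representations:
# frames carry a precomputed 'face_feature' vector (or None for faceless
# frames), and each picture category is named and carries a face feature.
import math

def categorize_video(video_frames, picture_categories, threshold=0.6):
    # steps 1-2: take the first frame containing a face as the key frame,
    # and read its face feature
    key = next((f for f in video_frames if f.get("face_feature") is not None), None)
    if key is None:
        return None  # no face in the video, nothing to categorize
    feature = key["face_feature"]
    # steps 3-4: find the picture category whose face feature matches
    for name, cat_feat in picture_categories.items():
        if math.dist(feature, cat_feat) <= threshold:
            return name  # step 5: the video is assigned to this category
    return None
```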
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Library & Information Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
- Collating Specific Patterns (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims (15)
- A video categorization method, characterized by comprising: acquiring a key frame including a human face in a video; acquiring a face feature in the key frame; acquiring a face feature corresponding to a picture category; determining, according to the face feature in the key frame and the face feature corresponding to the picture category, a picture category to which the video belongs; and assigning the video to the picture category to which the video belongs.
- The method according to claim 1, wherein acquiring the key frame including a face in the video comprises: acquiring, from the video, at least one video frame including a face; determining a face parameter in each of the at least one video frame, the face parameter comprising either or both of the number of faces and the face position; and determining the key frame in the video according to the face parameter in each video frame.
- The method according to claim 2, wherein determining the key frame in the video according to the face parameter in each video frame comprises: determining, according to the face parameter in each video frame, non-repetitive video frames whose face parameters do not recur in other video frames; and determining at least one of the non-repetitive video frames as the key frame.
- The method according to claim 2, wherein determining the key frame in the video according to the face parameter in each video frame comprises: determining, according to the face parameter in each video frame, at least one group of repeated video frames having the same face parameter, wherein each group of repeated video frames includes at least two video frames, the difference in capture time between the latest-captured video frame and the earliest-captured video frame in each group is less than or equal to a preset duration, and the face parameters of all video frames in each group are the same; and determining any one video frame in each group of repeated video frames as the key frame.
- The method according to claim 1, wherein determining, according to the face feature in the key frame and the face feature corresponding to the picture category, the picture category to which the video belongs comprises: when the number of videos is at least two, determining the face feature in the key frame of each video; performing face clustering on the at least two videos according to the face feature in the key frame of each video to obtain at least one video category; and determining, according to the face feature corresponding to each of the at least one video category and the face feature corresponding to the picture category, a video category and a picture category corresponding to the same face feature; and wherein assigning the video to the picture category to which the video belongs comprises: assigning the videos in each video category to the picture category corresponding to the same face feature.
- The method according to claim 1, wherein determining, according to the face feature in the key frame and the face feature corresponding to the picture category, the picture category to which the video belongs comprises: determining, among the face features corresponding to picture categories, a picture category matching the face feature in the key frame; and determining the matched picture category as the picture category to which the video belongs.
- The method according to claim 1, further comprising: acquiring a shooting time and a shooting location of the video; determining a target picture having the same shooting time and shooting location as the video; and assigning the video to the picture category to which the target picture belongs.
- A video categorization apparatus, characterized by comprising: a first acquiring module configured to acquire a key frame including a face in a video; a second acquiring module configured to acquire a face feature in the key frame acquired by the first acquiring module; a third acquiring module configured to acquire a face feature corresponding to a picture category; a first determining module configured to determine, according to the face feature in the key frame acquired by the second acquiring module and the face feature corresponding to the picture category acquired by the third acquiring module, a picture category to which the video belongs; and a first assigning module configured to assign the video to the picture category, determined by the first determining module, to which the video belongs.
- The apparatus according to claim 8, wherein the first acquiring module comprises: an acquiring sub-module configured to acquire, from the video, at least one video frame including a face; a first determining sub-module configured to determine a face parameter in each of the at least one video frame acquired by the acquiring sub-module, the face parameter comprising either or both of the number of faces and the face position; and a second determining sub-module configured to determine the key frame in the video according to the face parameter in each video frame.
- The apparatus according to claim 9, wherein the second determining sub-module is further configured to determine, according to the face parameter in each video frame, non-repetitive video frames whose face parameters do not recur in other video frames, and to determine at least one of the non-repetitive video frames as the key frame.
- The apparatus according to claim 9, wherein the second determining sub-module is further configured to determine, according to the face parameter in each video frame, at least one group of repeated video frames having the same face parameter, wherein each group of repeated video frames includes at least two video frames, the difference in capture time between the latest-captured video frame and the earliest-captured video frame in each group is less than or equal to a preset duration, and the face parameters of all video frames in each group are the same; and to determine any one video frame in each group of repeated video frames as the key frame.
- The apparatus according to claim 8, wherein the first determining module comprises: a third determining sub-module configured to: when the number of videos is at least two, determine the face feature in the key frame of each video; perform face clustering on the at least two videos according to the face feature in the key frame of each video to obtain at least one video category; and determine, according to the face feature corresponding to each of the at least one video category and the face feature corresponding to the picture category, a video category and a picture category corresponding to the same face feature; and wherein the first assigning module comprises: a first assigning sub-module configured to assign the videos in each video category determined by the third determining sub-module to the picture category corresponding to the same face feature.
- The apparatus according to claim 8, wherein the first determining module comprises: a fourth determining sub-module configured to determine, among the face features corresponding to picture categories, a picture category matching the face feature in the key frame; and a second assigning sub-module configured to determine the matched picture category determined by the fourth determining sub-module as the picture category to which the video belongs.
- The apparatus according to claim 8, further comprising: a fourth acquiring module configured to acquire a shooting time and a shooting location of the video; a second determining module configured to determine a target picture having the same shooting time and shooting location as the video acquired by the fourth acquiring module; and a second assigning module configured to assign the video to the picture category to which the target picture determined by the second determining module belongs.
- A video categorization apparatus, characterized by comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: acquire a key frame including a face in a video; acquire a face feature in the key frame; acquire a face feature corresponding to a picture category; determine, according to the face feature in the key frame and the face feature corresponding to the picture category, a picture category to which the video belongs; and assign the video to the picture category to which the video belongs.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
RU2016136707A RU2667027C2 (ru) | 2015-12-01 | 2015-12-29 | Method and device for video categorization |
JP2016523976A JP6423872B2 (ja) | 2015-12-01 | 2015-12-29 | Video classification method and apparatus |
KR1020167010359A KR101952486B1 (ko) | 2015-12-01 | 2015-12-29 | Video classification method and apparatus |
MX2016005882A MX2016005882A (es) | 2015-12-01 | 2015-12-29 | Video categorization method and apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510867436.5 | 2015-12-01 | ||
CN201510867436.5A CN105426515B (zh) | 2015-12-01 | 2015-12-01 | Video categorization method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017092127A1 true WO2017092127A1 (zh) | 2017-06-08 |
Family
ID=55504727
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/099610 WO2017092127A1 (zh) | 2015-12-01 | 2015-12-29 | Video categorization method and apparatus |
Country Status (8)
Country | Link |
---|---|
US (1) | US10115019B2 (zh) |
EP (1) | EP3176709A1 (zh) |
JP (1) | JP6423872B2 (zh) |
KR (1) | KR101952486B1 (zh) |
CN (1) | CN105426515B (zh) |
MX (1) | MX2016005882A (zh) |
RU (1) | RU2667027C2 (zh) |
WO (1) | WO2017092127A1 (zh) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN106227868A (zh) * | 2016-07-29 | 2016-12-14 | Nubia Technology Co., Ltd. | Video file categorization method and apparatus |
- CN106453916B (zh) * | 2016-10-31 | 2019-05-31 | Nubia Technology Co., Ltd. | Object classification apparatus and method |
- KR20190007816A (ko) | 2017-07-13 | 2019-01-23 | Samsung Electronics Co., Ltd. | Electronic device for video classification and operating method thereof |
- CN108830151A (zh) * | 2018-05-07 | 2018-11-16 | State Grid Zhejiang Electric Power Co., Ltd. | Mask detection method based on Gaussian mixture model |
- CN108986184B (zh) * | 2018-07-23 | 2023-04-18 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Video creation method and related device |
- CN110334753B (zh) * | 2019-06-26 | 2023-04-07 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Video classification method and apparatus, electronic device and storage medium |
- CN110516624A (zh) * | 2019-08-29 | 2019-11-29 | Beijing Megvii Technology Co., Ltd. | Image processing method and apparatus, electronic device and storage medium |
- CN110580508A (zh) * | 2019-09-06 | 2019-12-17 | JRD Communication (Shenzhen) Ltd. | Video classification method and apparatus, storage medium and mobile terminal |
- CN111177086A (zh) * | 2019-12-27 | 2020-05-19 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | File clustering method and apparatus, storage medium and electronic device |
- CN111553191A (zh) * | 2020-03-30 | 2020-08-18 | OneConnect Smart Technology Co., Ltd. (Shenzhen) | Face recognition-based video classification method, apparatus and storage medium |
- CN112069875B (zh) * | 2020-07-17 | 2024-05-28 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Face image classification method and apparatus, electronic device and storage medium |
- CN112035685B (zh) * | 2020-08-17 | 2024-06-18 | China Mobile (Hangzhou) Information Technology Co., Ltd. | Album video generation method, electronic device and storage medium |
- CN112835807B (zh) * | 2021-03-02 | 2022-05-31 | NetEase (Hangzhou) Network Co., Ltd. | Interface recognition method and apparatus, electronic device and storage medium |
- CN115115822B (zh) * | 2022-06-30 | 2023-10-31 | Xiaomi Automobile Technology Co., Ltd. | Vehicle-side image processing method and apparatus, vehicle, storage medium and chip |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040228504A1 (en) * | 2003-05-13 | 2004-11-18 | Viswis, Inc. | Method and apparatus for processing image |
- CN103207870A (zh) * | 2012-01-17 | 2013-07-17 | Huawei Technologies Co., Ltd. | Photo classification management method, server, apparatus and system |
- CN103530652A (zh) * | 2013-10-23 | 2014-01-22 | Beijing Zhongshi Guangxin Technology Co., Ltd. | Face clustering-based video cataloguing method, retrieval method and system |
- CN103827856A (zh) * | 2011-09-27 | 2014-05-28 | Hewlett-Packard Development Company, L.P. | Retrieving visual media |
- CN104284240A (zh) * | 2014-09-17 | 2015-01-14 | Xiaomi Inc. | Video browsing method and apparatus |
- CN104317932A (zh) * | 2014-10-31 | 2015-01-28 | Xiaomi Inc. | Photo sharing method and apparatus |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2005227957A (ja) * | 2004-02-12 | 2005-08-25 | Mitsubishi Electric Corp | Optimal face image recording apparatus and optimal face image recording method |
- EP1867173A2 (en) * | 2005-03-10 | 2007-12-19 | QUALCOMM Incorporated | Content adaptive multimedia processing |
- JP4616091B2 (ja) * | 2005-06-30 | 2011-01-19 | Seibu Giken Co., Ltd. | Rotary gas adsorption concentrator |
- US8150155B2 (en) * | 2006-02-07 | 2012-04-03 | Qualcomm Incorporated | Multi-mode region-of-interest video object segmentation |
- KR100771244B1 (ko) * | 2006-06-12 | 2007-10-29 | Samsung Electronics Co., Ltd. | Video data processing method and apparatus |
- JP4697106B2 (ja) * | 2006-09-25 | 2011-06-08 | Sony Corporation | Image processing apparatus and method, and program |
- JP2008117271A (ja) * | 2006-11-07 | 2008-05-22 | Olympus Corp | Subject recognition apparatus for digital images, program, and recording medium |
- US8488901B2 (en) * | 2007-09-28 | 2013-07-16 | Sony Corporation | Content based adjustment of an image |
- JP5278425B2 (ja) * | 2008-03-14 | 2013-09-04 | NEC Corporation | Video segmentation apparatus, method and program |
- JP5134591B2 (ja) * | 2009-06-26 | 2013-01-30 | Kyocera Document Solutions Inc. | Wire locking structure |
- JP2011100240A (ja) * | 2009-11-05 | 2011-05-19 | Nippon Telegr & Teleph Corp <Ntt> | Representative image extraction method, representative image extraction apparatus and representative image extraction program |
- US8452778B1 (en) * | 2009-11-19 | 2013-05-28 | Google Inc. | Training of adapted classifiers for video categorization |
- JP2011234180A (ja) * | 2010-04-28 | 2011-11-17 | Panasonic Corp | Imaging apparatus, playback apparatus, and playback program |
- US9405771B2 (en) * | 2013-03-14 | 2016-08-02 | Microsoft Technology Licensing, Llc | Associating metadata with images in a personal image collection |
- WO2014205090A1 (en) * | 2013-06-19 | 2014-12-24 | Set Media, Inc. | Automatic face discovery and recognition for video content analysis |
- WO2015082572A2 (en) * | 2013-12-03 | 2015-06-11 | Dacuda Ag | User feedback for real-time checking and improving quality of scanned image |
- CN104133875B (zh) * | 2014-07-24 | 2017-03-22 | Beijing Zhongshi Guangxin Technology Co., Ltd. | Face-based video annotation method and video retrieval method |
- CN104361128A (zh) * | 2014-12-05 | 2015-02-18 | Hohai University | Data synchronization method between PC and mobile terminals based on hydraulic engineering inspection services |
-
2015
- 2015-12-01 CN CN201510867436.5A patent/CN105426515B/zh active Active
- 2015-12-29 WO PCT/CN2015/099610 patent/WO2017092127A1/zh active Application Filing
- 2015-12-29 RU RU2016136707A patent/RU2667027C2/ru active
- 2015-12-29 JP JP2016523976A patent/JP6423872B2/ja active Active
- 2015-12-29 MX MX2016005882A patent/MX2016005882A/es unknown
- 2015-12-29 KR KR1020167010359A patent/KR101952486B1/ko active IP Right Grant
-
2016
- 2016-06-24 EP EP16176268.7A patent/EP3176709A1/en not_active Ceased
- 2016-08-19 US US15/241,804 patent/US10115019B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
RU2016136707A3 (zh) | 2018-03-16 |
KR101952486B1 (ko) | 2019-02-26 |
CN105426515B (zh) | 2018-12-18 |
JP2018502340A (ja) | 2018-01-25 |
RU2016136707A (ru) | 2018-03-16 |
KR20180081637A (ko) | 2018-07-17 |
RU2667027C2 (ru) | 2018-09-13 |
CN105426515A (zh) | 2016-03-23 |
MX2016005882A (es) | 2017-08-02 |
US20170154221A1 (en) | 2017-06-01 |
US10115019B2 (en) | 2018-10-30 |
EP3176709A1 (en) | 2017-06-07 |
JP6423872B2 (ja) | 2018-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2017092127A1 (zh) | Video categorization method and apparatus | |
- WO2021031609A1 (zh) | Liveness detection method and apparatus, electronic device and storage medium | |
- WO2016090829A1 (zh) | Image capturing method and apparatus | |
- WO2017096782A1 (zh) | Method and apparatus for preventing photographing occlusion | |
- WO2016029641A1 (zh) | Photo acquisition method and apparatus | |
- US20170154206A1 (en) | Image processing method and apparatus | |
- WO2016090822A1 (zh) | Method and apparatus for upgrading firmware | |
- WO2015169061A1 (zh) | Image segmentation method and apparatus | |
- WO2017084183A1 (zh) | Information display method and apparatus | |
- WO2018120906A1 (zh) | Buffer status report (BSR) reporting trigger method, apparatus and user terminal | |
- US10230891B2 (en) | Method, device and medium of photography prompts | |
- WO2021036382A9 (zh) | Image processing method and apparatus, electronic device and storage medium | |
- WO2018228422A1 (zh) | Method, apparatus and system for issuing early-warning information | |
- JP6333990B2 (ja) | Panoramic photo generation method and apparatus | |
- WO2017000491A1 (zh) | Method and apparatus for acquiring an iris image, and iris recognition device | |
- WO2016078394A1 (zh) | Method and apparatus for voice call reminding | |
- CN106534951B (zh) | Video segmentation method and apparatus | |
- WO2017080084A1 (zh) | Font adding method and apparatus | |
- WO2016110146A1 (zh) | Mobile terminal and virtual key processing method | |
- WO2017219497A1 (zh) | Message generation method and apparatus | |
- US20170090684A1 (en) | Method and apparatus for processing information | |
- WO2016173246A1 (zh) | Cloud business card-based telephone call method and apparatus | |
- WO2017140108A1 (zh) | Pressure detection method and apparatus | |
- KR20220043004A (ko) | Occluded image detection method, apparatus and medium | |
- WO2017101397A1 (zh) | Information display method and apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2016523976 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20167010359 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2016/005882 Country of ref document: MX |
|
ENP | Entry into the national phase |
Ref document number: 2016136707 Country of ref document: RU Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15909641 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15909641 Country of ref document: EP Kind code of ref document: A1 |