CN116955662A - Media file management method, device, equipment and storage medium - Google Patents
- Publication number
- CN116955662A (application CN202210390982.4A)
- Authority
- CN
- China
- Prior art keywords
- media file
- media
- sign
- sign information
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/44—Browsing; Visualisation therefor
- G06F16/447—Temporal browsing, e.g. timeline
- G06F16/45—Clustering; Classification
- G06F16/48—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/483—Retrieval using metadata automatically derived from the content
- G06F16/489—Retrieval using time information
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present application relates to the field of terminal technologies, and in particular, to a method, an apparatus, a device, and a storage medium for managing media files. In the application, a sensing detection device acquires the user's physical sign information at the moment a media recording device shoots a media file, and that sign information is associated with the corresponding media file, so that the media file together with the sign information can provide an intelligent and convenient user experience. The application extends the practicability of media files, improves their use value, and improves the user experience.
Description
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method, an apparatus, a device, and a storage medium for managing media files.
Background
The advent of media recording devices such as cameras and video cameras gave people new ways to record their lives: through media files such as photos, audio, and video, everyone can keep more memories of the past. In the early days, however, media recording devices were expensive and demanded shooting skill, so they were often the preserve of professionals. With the popularization of smartphones and the steadily improving performance of integrated lens modules, people have become used to shooting and recording life anytime and anywhere, storing the resulting media files as content to recall and display, and the number of media files has grown rapidly. Currently, terminals such as mobile phones simply store the media files corresponding to the shot content for later viewing, so the content and value of the media files are limited.
Disclosure of Invention
The application provides a media file management method, apparatus, device, and storage medium, which can associate a user's physical sign information with a media file at the time of shooting, extending the associated content of the media file, increasing its practicability, and improving its extended value.
In order to solve the technical problems, the application provides the following technical scheme.
In a first aspect, a media file management method is provided, including the steps of:
determining a first media file, wherein the first media file is shot at a first time through a media shooting device;
acquiring first sign information of a first sensing detection device at the first time, wherein the media shooting device and the first sensing detection device have an association relationship, and the first sign information is a detection value corresponding to a first sign;
and establishing a first association relationship between the first media file and the first sign information.
It can be understood that in this embodiment, the media file shot by the media recording device is associated with the sign information acquired by the sensing detection device, so that the physical state information of the user of the feedback media file in different scenes is increased, and the use value of the media file is improved.
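The three steps of the first aspect can be sketched in code. This is a hypothetical illustration, not the patent's implementation: the names MediaFile, read_sign, and associate_sign, and the five-second matching tolerance, are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MediaFile:
    path: str
    shot_at: float                              # the "first time" (epoch seconds)
    signs: dict = field(default_factory=dict)   # sign name -> detected value

def read_sign(sensor_samples, shot_at, tolerance=5.0):
    """Return the sensor sample closest to the shooting time, or None
    if no sample falls within `tolerance` seconds of it."""
    if not sensor_samples:
        return None
    ts, value = min(sensor_samples, key=lambda s: abs(s[0] - shot_at))
    return value if abs(ts - shot_at) <= tolerance else None

def associate_sign(media, sign_name, sensor_samples):
    """Establish the first association relationship between the media
    file and the sign information detected at the first time."""
    value = read_sign(sensor_samples, media.shot_at)
    if value is not None:
        media.signs[sign_name] = value
    return media

# A watch bound to the camera reports (timestamp, heart-rate) samples.
watch = [(98.0, 72), (101.0, 95), (140.0, 80)]
photo = associate_sign(MediaFile("IMG_0001.jpg", 100.0), "heart_rate", watch)
```

The tolerance guards against associating a stale reading when the sensing device was not actually sampling around the shooting moment.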
In an alternative implementation of the first aspect, the media file is one of a photo, audio, video.
It will be appreciated that the present embodiment may process media files such as photos, audio, video, etc.
In an optional implementation manner of the first aspect, the media recording device is one of a camera, a video camera, a sound recorder, and a mobile phone.
It will be appreciated that the present embodiment may capture media files by a camera, video camera, audio recorder, cell phone, or the like.
In an optional implementation manner of the first aspect, the acquiring the first sign information of the first sensing detection device at the first time includes:
and when the media shooting device shoots the first media file at the first time, triggering the first sensing detection device to detect and obtain the first sign information.
It can be understood that, in this embodiment, by triggering the first sensing detection device to detect the first sign information immediately, the first association relationship may be established in real time.
In an optional implementation manner of the first aspect, the media file management method further includes:
and displaying the first sign information in an associated mode when the first media file is displayed according to the first association relation.
It can be understood that the dimension of the display content of the media file can be improved, and the physical condition of the corresponding user during shooting can be intuitively reflected.
In an optional implementation manner of the first aspect, the media file management method further includes:
and displaying the media files in a classified manner according to shooting time, with the first media file displayed in a first classification, wherein the first classification corresponds to the shooting time interval in which the first time is located.
It can be understood that the embodiment can more intuitively feed back the condition of the user that the physical condition changes along with time in a media file display mode, and is particularly suitable for health monitoring of the elderly.
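A minimal sketch of this "first classification" by shooting time: files are bucketed into time intervals. The day-level granularity and the function name are assumptions for illustration; the patent does not fix an interval size.

```python
from collections import defaultdict
from datetime import datetime

def classify_by_time(files):
    """files: list of (name, shot_at datetime); returns day -> [names]."""
    buckets = defaultdict(list)
    for name, shot_at in files:
        # Each day is one shooting-time interval (assumed granularity).
        buckets[shot_at.date().isoformat()].append(name)
    return dict(buckets)

albums = classify_by_time([
    ("a.jpg", datetime(2022, 4, 1, 9, 30)),
    ("b.jpg", datetime(2022, 4, 1, 18, 5)),
    ("c.mp4", datetime(2022, 4, 2, 12, 0)),
])
```

Displaying each bucket with the associated sign values then shows how the user's physical condition changes over time.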
In an optional implementation manner of the first aspect, the media file management method further includes:
and displaying the media files in a classified manner according to the first sign, with the first media file displayed in a second classification corresponding to the first sign interval in which the first sign information is located.
It can be appreciated that the present embodiment may classify and reclassify the corresponding media files according to the signs, so as to analyze the behavioral activities under different physical conditions more deeply.
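The "second classification" by sign interval can be sketched for heart rate as the first sign. The bin edges (60/100/140 bpm) and labels are assumptions chosen for illustration, not values from the patent.

```python
import bisect

HR_EDGES = [60, 100, 140]   # assumed bpm interval boundaries
HR_LABELS = ["resting", "normal", "elevated", "intense"]

def sign_interval(heart_rate):
    """Map a heart-rate detection value to its sign interval label."""
    return HR_LABELS[bisect.bisect_right(HR_EDGES, heart_rate)]

def classify_by_sign(files):
    """files: list of (name, heart_rate); returns interval label -> [names]."""
    buckets = {}
    for name, hr in files:
        buckets.setdefault(sign_interval(hr), []).append(name)
    return buckets
```

Browsing the "elevated" bucket would then surface, for example, media shot during exercise or excitement.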
In an optional implementation manner of the first aspect, the first sign is heart rate, and a heartbeat animation corresponding to the heart rate speed of the second classification is also displayed in association with the second classification.
It can be understood that the psychological states of shooting in different application scenes can be intuitively displayed through the heartbeat animations with different speeds.
In an optional implementation manner of the first aspect, the media file management method further includes:
acquiring second sign information of a second sensing detection device at the first time, wherein the media recording device and the second sensing detection device have an association relationship, and the second sign information is a detection value corresponding to the second sign;
establishing a second association relationship between the first media file and the second sign information;
and displaying the first media file in a third classification according to at least the first sign and the second sign, wherein the third classification corresponds to the first sign interval where the first sign information is located and the second sign interval where the second sign information is located at the same time.
It can be appreciated that the present embodiment may collectively implement classification according to multiple dimensions of signs for joint analysis and display using relevant signs of a particular symptom.
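The "third classification" keys each file by two sign intervals at once. In this sketch the two signs are heart rate and systolic blood pressure, with assumed thresholds (100 bpm, 140 mmHg); the patent does not prescribe these values.

```python
def hr_interval(hr):
    return "high_hr" if hr >= 100 else "normal_hr"       # assumed threshold

def bp_interval(systolic):
    return "high_bp" if systolic >= 140 else "normal_bp"  # assumed threshold

def classify_joint(files):
    """files: list of (name, heart_rate, systolic_bp).
    A file lands in the bucket matching BOTH of its sign intervals."""
    buckets = {}
    for name, hr, bp in files:
        key = (hr_interval(hr), bp_interval(bp))
        buckets.setdefault(key, []).append(name)
    return buckets

groups = classify_joint([("run.mp4", 130, 150), ("nap.jpg", 62, 110)])
```

Joint buckets such as ("high_hr", "high_bp") support the symptom-oriented analysis the paragraph above describes, e.g. reviewing what the user was doing whenever both signs were elevated.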
In an optional implementation manner of the first aspect, the establishing a first association between the first media file and the first feature information includes:
and identifying a user of the first sensing detection device in the first media file through face recognition, so that the first association relationship includes the association between the first media file and the person to whom the first sign information belongs.
It can be appreciated that in this embodiment, the user may be distinguished for a case where there are multiple users in one sensing device, so as to improve accuracy of display.
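This attribution step can be sketched as follows. The face-recognition pipeline itself is stubbed: recognize_users is assumed to return the user IDs detected in the media file, and all names here are hypothetical.

```python
def recognize_users(media_path):
    # Placeholder for a real face-recognition pipeline (assumption:
    # returns the list of user IDs visible in the media file).
    return {"IMG_0002.jpg": ["alice", "bob"]}.get(media_path, [])

def attribute_signs(media_path, device_owners, readings):
    """device_owners: sensor_id -> user; readings: sensor_id -> value.
    Only signs whose owner actually appears in the media file are
    included in the association, so each value is attributed correctly."""
    present = set(recognize_users(media_path))
    return {device_owners[s]: v for s, v in readings.items()
            if device_owners[s] in present}

assoc = attribute_signs("IMG_0002.jpg",
                        {"watch1": "alice", "watch2": "carol"},
                        {"watch1": 88, "watch2": 70})
```

Here carol's watch reading is discarded because she is not in the photo, which is exactly the multi-user disambiguation the embodiment describes.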
In an optional implementation manner of the first aspect, the association between the media capturing device and the sensing detection device includes:
the media recording device and the sensing detection device are in the same device group in binding.
It can be understood that the devices with association relationship can be clustered in the same device group by means of account binding.
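A minimal sketch of the account-bound device group: two devices have an association relationship exactly when they are registered under the same account. The class and identifiers are illustrative assumptions.

```python
class DeviceGroup:
    """Devices bound to one account form one device group."""

    def __init__(self, account):
        self.account = account
        self.devices = set()

    def bind(self, device_id):
        self.devices.add(device_id)

    def associated(self, a, b):
        # Two devices are associated iff both belong to this group.
        return a in self.devices and b in self.devices

home = DeviceGroup("user@example.com")
home.bind("phone-camera")
home.bind("smart-watch")
```

A device outside the group (e.g. a stranger's wearable nearby) never contributes sign information, since the association check fails.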
In a second aspect, there is provided a media file management apparatus comprising:
a determining unit configured to determine a first media file, where the first media file is captured at a first time by a media capturing device;
an acquiring unit, configured to acquire first sign information of a first sensing detection device at the first time, where the media shooting device and the first sensing detection device have an association relationship, and the first sign information is a detection value corresponding to a first sign;
and the association unit is used for establishing a first association relation between the first media file and the first sign information.
In an alternative embodiment of the second aspect, the media file is one of a photo, audio, video.
In an alternative embodiment of the second aspect, the media recording device is one of a camera, a video camera, a sound recorder, a mobile phone.
In an alternative embodiment of the second aspect, the obtaining unit includes:
and when the media shooting device shoots the first media file at the first time, triggering the first sensing detection device to detect and obtain the first sign information.
In an alternative implementation manner of the second aspect, the media file management apparatus further includes:
and the display unit is used for displaying the first sign information in an associated mode when the first media file is displayed according to the first association relation.
In an alternative implementation manner of the second aspect, the media file management apparatus further includes:
and the clustering unit is used for displaying the media files in a classified manner according to the shooting time of the media files, and displaying the first media files in a first classification, wherein the first classification corresponds to the shooting time interval in which the first time is located.
In an alternative implementation manner of the second aspect, the media file management apparatus further includes:
and the clustering unit is used for displaying the media files in a classified manner according to the first sign, and displaying the first media files in a second classification corresponding to a first sign section where the first sign information is located.
In an optional implementation manner of the second aspect, the first sign is heart rate, and the display unit further displays, in association with the second classification, a heartbeat animation corresponding to the heart rate speed of the second classification.
In an optional implementation manner of the second aspect, the acquiring unit acquires second sign information of a second sensing detection device at the first time, where the media capturing device has an association relationship with the second sensing detection device, and the second sign information is a detection value corresponding to the second sign;
The association unit establishes a second association relationship between the first media file and the second sign information;
the media file management apparatus further includes:
and the clustering unit is used for displaying the media files in a classified manner at least according to the first and second signs, and displaying the first media files in a third classification, wherein the third classification simultaneously corresponds to the first sign interval where the first sign information is located and the second sign interval where the second sign information is located.
In an optional implementation manner of the second aspect, the association unit identifies a user of the first sensing detection device in the first media file through face recognition, so that the first association relationship includes the association between the first media file and the person to whom the first sign information belongs.
In an optional implementation manner of the second aspect, the association between the media capturing device and the sensing detection device includes:
the media recording device and the sensing detection device are in the same device group in binding.
In a third aspect, there is provided a data processing apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a method as provided in any one of the alternative embodiments of the first aspect and the first aspect when executing the computer program.
In a fourth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method as provided in any of the alternative embodiments of the first aspect and the first aspect.
In a fifth aspect, a computer readable storage medium is provided, in which a computer program is stored which, when executed, implements a method as provided in any one of the alternative embodiments of the first aspect and the first aspect.
Compared with the prior art, the present application uses a sensing detection device to acquire the user's physical sign information when the media recording device shoots a media file, and establishes an association between that sign information and the corresponding media file, so that the media file together with the sign information can provide an intelligent and convenient user experience. The application extends the practicability of media files, improves their use value, and improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the prior art and in the embodiments of the present application, the drawings used in their description are briefly introduced below. The following drawings reflect only some embodiments of the present application; other embodiments may be obtained by those of ordinary skill in the art without inventive effort, and all such embodiments fall within the scope of the present application.
Fig. 1 is a schematic diagram of a media file and sign information processing architecture according to an embodiment of the present application.
FIG. 2 is a schematic view of a user usage scenario in an embodiment of the present application.
FIG. 3 is a flowchart of a method for managing media files according to an embodiment of the application.
FIG. 4 is a flowchart of media file and sign information processing according to an embodiment of the present application.
FIG. 5 is a schematic diagram of heart rate display according to an embodiment of the present application.
- Fig. 6 is a schematic diagram of time display according to an embodiment of the present application.
FIG. 7 is a flow chart of a heart rate classification implementation in an embodiment of the application.
Fig. 8 is a schematic diagram showing classification of heart rate according to an embodiment of the application.
Fig. 9 is a schematic diagram showing heart rate and blood pressure according to an embodiment of the application.
Fig. 10 is a schematic diagram of a device group binding relationship according to an embodiment of the present application.
FIG. 11 is a schematic diagram of a media file management apparatus according to an embodiment of the application.
FIG. 12 is a schematic diagram of a data processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and technical effects of the present application clearer, the present application will be described in detail with reference to the embodiments shown in the drawings. However, the described embodiments are only some, not all, embodiments of the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
With simpler camera operation, the popularization of smartphones, and continuous technology upgrades, photographing and video shooting have become everyday functions: users can easily use media recording devices such as cameras, video cameras, and smartphones to capture the bits and pieces of daily life. Meanwhile, social software such as microblogs and short-video platforms further encourages users to record life fragments with these functions and share them on the Internet. Accordingly, users need to store and classify the shot media files. The traditional management approach stores them in corresponding album folders, so users can browse them there, but the browsable information is limited to the captured data content itself. Take EXIF (Exchangeable Image File Format) media files, a format designed for digital-camera photos, as an example: a file records the shot image data, i.e., the scenery and people photographed, plus attribute information that is generally only hardware information of the camera, such as manufacturer, camera model, image orientation, image resolution X/Y, exposure time, aperture value, shooting mode, image shooting time, color space, and image size X/Y. When the digital camera shoots, its hardware information and attributes such as the photo format are stored in the photo file, and the corresponding attribute information can be read when viewing the photo.
However, when a user shoots a media file with a media shooting device, only hardware information such as shooting time and aperture settings is recorded in the stored file; no information related to the photographer or the photographed subject is involved. The user and the shot media file are thus disconnected, and much valuable information about the moment of shooting is lost.
On the other hand, many sensing detection devices can acquire physical sign information about human physiological health, such as body fat scales, smart watches, smart bracelets, and sphygmomanometers; the sign information may accordingly be measured values of the user's body weight, heart rate, blood pressure, blood sugar, blood lipid level, blood oxygen saturation, diet, exercise, sleep, and so on. As sensing detection devices, particularly smart wearables, enter thousands of households, people increasingly use them to acquire physical sign information, and even derive inferred values, such as stress, from the acquired data. This sign information can represent the user's current physical health state to some extent, and when the detected data are sufficiently dense and accurate, they may reflect the user's physiological and even psychological state. However, if this sign information is used only for health data analysis, its application is too limited to serve the user well.
Therefore, in the present application, when a media file is shot by the media shooting device, the sensing detection device collects the sign information of the relevant user so that it can be combined with the media file, as described in detail below. When a user takes a photograph with a media recording device, certain states often accompany the moment, such as being moved to tears, feeling calm, a racing heartbeat, or feeling dejected, and these states show up in certain physical signs, such as heart rate. By recording these data at the time of shooting, the user can relive the state and mood of that moment when browsing the shot media file. By classifying and analyzing the data, the user can intuitively and reliably learn the physical states corresponding to various activities.
As shown in fig. 1, in one embodiment, combining the media file with the sign information requires two types of devices: a media capturing device 111 and a sensing detection device 112. The media capturing device 111 may include a smartphone, camera, video camera, or the like integrated with a lens module, and can capture the scenery and people within its shooting range and generate the media file. The sensing detection device 112 may include a smart watch or smart bracelet for monitoring information such as heart rate and blood oxygen saturation, a sphygmomanometer for monitoring blood pressure, and so on. Specifically, a coordination module between the media capturing device 111 and the sensing detection device 112 may obtain the captured media file and the detected sign information respectively; when the media capturing device 111 captures a media file, it may send a request for the detected sign information to the corresponding sensing detection device 112. Accordingly, after receiving the request, the sensing detection device 112 pushes data to the media capturing device 111 according to the request. In one example, where there are multiple sensing detection devices 112, they may be abstracted into virtual devices on a soft bus and registered to the corresponding media capturing device 111; accordingly, the user may request the corresponding detected sign information when shooting, for example by calling the corresponding sign information acquisition interface.
The media recording device 111 embeds the obtained sign information into the media file, for example saving it into a media file in EXIF format, and stores the file in the external memory 121 or the database 122 through a data saving module. Specifically, the external memory 121 may be memory integrated in the media recording device 111 or a portable SD (Secure Digital) memory card. Taking a photo in the external memory 121 as an example, the device information of the media recording device, the embedded sign information, and the picture information of the photo itself can all be read from the photo file. In one example, the stored information may be provided to the album creating module 131 for data cluster analysis to create an album, forming media file presentations that reflect physiological health states; the user can then view the recorded sign information while browsing the media file, for example visually checking the heart rate recorded by the smart watch when viewing a photo of the user playing ball.
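The embedding step can be sketched as follows. To stay library-agnostic, the photo's EXIF attributes are modelled as a plain dict and the sign information is serialized as JSON into a user-comment style field; real EXIF writing would go through an imaging library, and the field and function names here are assumptions.

```python
import json

def embed_signs(exif, signs):
    """Return a copy of the EXIF attribute dict with the sign
    information stored as a JSON string in a UserComment-style field."""
    out = dict(exif)                    # do not mutate the original
    out["UserComment"] = json.dumps(signs, sort_keys=True)
    return out

def read_signs(exif):
    """Recover the embedded sign information when viewing the file."""
    return json.loads(exif.get("UserComment", "{}"))

photo_exif = {"Make": "ExampleCam", "DateTimeOriginal": "2022:04:01 09:30:00"}
tagged = embed_signs(photo_exif, {"heart_rate": 88, "spo2": 98})
```

Because the sign data rides inside the same file as the existing attributes, the album module can read device information, sign information, and picture data from one place, as the paragraph above describes.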
It should be noted that, as shown in fig. 2, when the media capturing device is used (the camera 210 in this embodiment), the users mainly fall into two roles: the photographer 202 and the subject 201. The photographer 202 is the person operating the camera 210. Optionally, the photographer role may also include a person participating in the photographing activity who, although not directly operating the media recording device, takes part from the photographer's perspective. The subject 201 is whoever stands in front of the lens of the camera 210, and is not limited to one person; it may be a group photo. Both roles are strongly related to the shot media file: the photographer's mood during shooting is often related to the shooting environment and content, and the subject's physiological and psychological state is often consistent with his or her appearance in the photograph, with different moods appearing in different situations. When either role wears or uses a sensing detection device during shooting, the sign information acquired by that device can be utilized. Therefore, associating the sign information of the relevant user at shooting time with the corresponding media file can improve the value of the media file and the user experience. Note that the user of the media capturing device and the user of the sensing detection device may be the same person or different persons. When they are the same, the user wears the sensing detection device while operating the media recording device to shoot, and the detected sign information comes from the sensing detection device worn by the photographer.
In the different-person case, the user may be a person who wears the sensing device and is photographed by the media recording device, or a person using the sensing device within the shooting range of the media recording device; the sign information is then derived from the sensing device worn or used by that person.
FIG. 3 is a flowchart of a media file management method according to an embodiment of the application. As shown in fig. 3, the media file management method may be applied to media recording devices such as cameras, video cameras, recorders and smartphones, and may also be applied to computing devices such as computers and servers for back-end processing. Of course, the method shown in fig. 3 may also be applied to a sensing detection device with certain processing and display capabilities, such as a sphygmomanometer or a smart watch. Specifically, the media file management method includes the following steps:
step S301, determining a first media file, where the first media file is shot by a media recording device at a first time. The first media file may be a photo, a video, or even audio, and is obtained by the user operating the media recording device, for example through the camera function of a mobile phone. For a first media file captured at a first time, the first time is not limited to a single moment. For example, for a photograph, the first time may be the instant at which the photograph is generated; for video or audio, the first time may refer to the time period over which the video or audio was captured, such as the period from the beginning to the end of the video. Once the first media file is determined, the processing of associating sign information with it can be performed, specifically through step S302.
Step S302, obtaining first sign information of a first sensing detection device at the first time, where the media recording device and the first sensing detection device have an association relationship, and the first sign information is a detection value corresponding to a first sign. The first media file determined in step S301 may be a newly captured media file determined immediately at shooting time, in which case the sign information detected by the sensing device at that moment is obtained; for example, when the media recording device shoots the first media file at the first time, the first sensing detection device is triggered to detect and obtain the first sign information. Alternatively, the first sign information may be acquired when a media file is browsed and the determined first media file has no corresponding sign information yet, which triggers acquisition of the sign information detected by the corresponding sensing device at shooting time. The acquisition may be performed by requesting the sensing detection device directly and receiving its response, or through a server, storage medium or the like that is connected to both the media recording device and the sensing detection device. It should be noted that the first sign information and the first media file are cross-sectional data at the first time; as described above, the first time may be a moment, whose precision may be seconds, minutes or hours, or a period of time, such as the capturing duration of a video file.
For the sensing detection device, the detection time of the first sign information is generally an instant. Moreover, some sensing detection devices do not detect sign information continuously but periodically, at intervals, so the detection time of the first sign information does not necessarily coincide exactly with the first time; it only needs to fall within a certain range determined by the first time. For example, for captured video or audio, the first sign information at the first time may refer to sign information whose detection time lies within the time period to which the first time belongs. For another example, if the sensing detection device performs no detection at the instant corresponding to the first time, the first sign information at the first time may refer to sign information whose detection time deviates from the first time by less than a set first threshold.
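The tolerance between detection time and the first time can be sketched as a nearest-reading lookup under the first threshold. A minimal Python sketch; the 30-second default threshold and the reading layout are illustrative assumptions.

```python
from datetime import datetime, timedelta

def match_sign_reading(readings, first_time, first_threshold=timedelta(seconds=30)):
    """Return the periodic reading whose detection time is closest to the
    capture time, provided the error is within the first threshold; else None."""
    best = min(readings, key=lambda r: abs(r["time"] - first_time), default=None)
    if best is not None and abs(best["time"] - first_time) <= first_threshold:
        return best
    return None

capture = datetime(2022, 2, 4, 10, 0, 0)
readings = [
    {"time": datetime(2022, 2, 4, 9, 59, 50), "heart_rate": 128},  # 10 s before capture
    {"time": datetime(2022, 2, 4, 9, 55, 0), "heart_rate": 95},    # earlier periodic sample
]
print(match_sign_reading(readings, capture))
```

For a video file, the same idea applies with the interval test replaced by membership in the capture time period.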
When the first sign information is acquired, the sign information associated with the first media file can be accurately obtained only if the association relationship between the media recording device and the sensing detection device is determined. The first sensing detection device providing the first sign information has an association relationship with the media recording device that shoots the first media file; for example, the two are paired wirelessly, such as over Bluetooth, which establishes the association relationship. As another example, the associated media recording device and first sensing detection device belong to the same device group. Devices in the same group can be set through the user's binding management, and when the sensing detection device corresponding to a media recording device needs to be called, the sensing detection devices in the same device group are queried to acquire the sign information. In one example, the sensing devices used by persons in the same family as the user of the media recording device are all placed in the same device group. In another example, a trip activity of a specific time period may be added: the sensing devices carried by persons on the same trip and the media recording devices used during the trip are set as one device group, and accordingly the group's valid state can be managed according to the time period of the trip.
Further, the association relationship between the media recording device and the sensing detection device may also be determined through distance detection, that is, by checking whether the sensing detection device is in the vicinity of the corresponding media recording device at the first time; in other words, whether the distance between them at the first time is smaller than a set second threshold. This at least establishes whether the user of the sensing detection device is together with the user of the corresponding media recording device, and optionally the association relationship is only considered to exist if the sensing detection device is in the vicinity at the first time. The distance detection can be realized through a ranging module integrated in both the media recording device and the sensing detection device, such as a Bluetooth module, or calculated from positioning information determined by each device separately. One or more sensing detection devices may be bound with the corresponding media recording device in the same device group; if there are several, each can be called separately to acquire its sign information. In one example, for a sensing device worn by a specific person, the attribution person of the device can be determined from the user information set on it, and thus the attribution person of the detected sign information; face data of that person can then be invoked to identify faces in the media file and determine whether the attribution person of the sign information is the photographer or the photographed person.
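The two association checks above, membership in the same device group and distance below the second threshold, can be combined as in the following sketch. The 50-metre threshold and the planar coordinates are illustrative assumptions standing in for Bluetooth ranging or positioning data.

```python
import math

def associated_sensors(device_group, camera_pos, sensors, second_threshold=50.0):
    """Keep only sensing devices that are bound in the same device group
    as the media recording device AND within the second threshold of it
    at the first time."""
    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return [s for s in sensors
            if s["id"] in device_group
            and distance(s["pos"], camera_pos) < second_threshold]

group = {"watch-1021", "bracelet-1022", "bpm-1023"}
sensors = [
    {"id": "watch-1021", "pos": (3.0, 4.0)},    # 5 m away: associated
    {"id": "bpm-1023",  "pos": (400.0, 0.0)},   # in the group but too far away
    {"id": "watch-9999", "pos": (1.0, 1.0)},    # nearby but not in the group
]
print([s["id"] for s in associated_sensors(group, (0.0, 0.0), sensors)])
```

Only the device that satisfies both conditions survives, matching the optional rule that proximity at the first time is required for the association to hold.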
Step S303, establishing a first association relationship between the first media file and the first sign information. The first media file determined in step S301 is associated with at least the first sign information acquired in step S302, establishing a mapping between them. Specifically, the first sign information may be embedded in the first media file, for example in an attribute information segment of the first media file; alternatively, the first association relationship between the first media file and the first sign information may be stored, for example in a data table, so that the first sign information corresponding to the first media file can be found by querying the mapping in the table. Optionally, step S303 further includes identifying, through face recognition, the user of the first sensing detection device in the first media file, so that the first association relationship also includes the association between the first media file and the attribution person of the first sign information.
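The data-table variant of step S303 can be sketched as a small mapping store. The schema (media id, sign info, attribution person) is an assumption for illustration, not the patent's required format.

```python
class AssociationTable:
    """Data-table style store of first association relationships."""
    def __init__(self):
        self._rows = {}

    def associate(self, media_id, sign_info, attribution=None):
        # One row per media file: the sign info plus the optional attribution person
        self._rows[media_id] = {"sign_info": sign_info, "attribution": attribution}

    def lookup(self, media_id):
        """Query the mapping to find the sign info for a media file."""
        return self._rows.get(media_id)

table = AssociationTable()
table.associate("IMG_0001.jpg", {"heart_rate": 130}, attribution="user-1031")
print(table.lookup("IMG_0001.jpg"))
```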
The related steps are described below with reference to fig. 4. After the start in step S410, there are two situations that trigger acquisition of sign information from the corresponding sensing device: one at shooting time in step S421, the other at browsing time in step S422. Starting with step S421: when a media recording device such as a camera captures a media file, for example generates a photo, real-time sign information is actively obtained from the sensing detection device through step S431, the sensing detection device having an association relationship with the media recording device as described above. Step S441 determines whether the corresponding sign information was successfully acquired; if acquisition failed, for example because synchronization with the corresponding sensing device failed, the flow may proceed to step S460 and end, completing basic media file storage. If acquisition succeeded, the flow enters step S451, where the acquired sign information and the captured media file are stored together, and then proceeds to step S460 to end, the stored data being available for later calling. As for step S422, the browsing action may be the user actively viewing files, or a media file management program or the like periodically managing the media files. Step S432 determines whether the media file, or the information associated with it, already contains corresponding sign information; if so, no further processing is required and the flow jumps directly to step S460 to end, leaving the stored media file untouched.
If no corresponding sign information is included, for example because the acquisition in step S441 failed, acquisition of the sign information detected by the sensing device at shooting time may be retried; such sign information may be stored in the sensing device indexed by detection time, or in a database that can be queried. Step S452 then determines whether the corresponding sign information was successfully acquired; if not, the flow proceeds to step S460 and the stored media file is not processed further. If the corresponding sign information is obtained, the flow proceeds to step S451, where the sign information and the media file are stored together, and then to step S460.
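The two trigger paths of fig. 4 can be sketched as a pair of handlers. The callbacks `acquire_live` and `query_history` are hypothetical stand-ins for the real-time device request (step S431) and the detection-time-indexed database query (the retry path).

```python
def on_capture(media, acquire_live):
    """Steps S421/S431/S441/S451: try to fetch real-time sign info at
    shooting time; store the media file whether or not it succeeds."""
    try:
        media["sign_info"] = acquire_live()
    except ConnectionError:
        media["sign_info"] = None       # synchronization failed: basic storage only
    return media

def on_browse(media, query_history):
    """Steps S422/S432/S452/S451: if the stored file carries no sign
    info, retry from a history store keyed by the capture time."""
    if media.get("sign_info") is None:
        media["sign_info"] = query_history(media["capture_time"])
    return media

def flaky_watch():
    raise ConnectionError("watch out of range")

photo = on_capture({"path": "a.jpg", "capture_time": "10:00"}, flaky_watch)
photo = on_browse(photo, lambda t: {"heart_rate": 112} if t == "10:00" else None)
print(photo["sign_info"])
```

Note that a failed live acquisition never blocks storage of the media file itself; the browse-time retry is what eventually fills in the missing sign information.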
Based on the first association relationship established in step S303 between the first media file and the first sign information, the first sign information can be displayed in association when the first media file is displayed, so that the user sees the first sign information while browsing the first media file, which helps the user recall the circumstances at the time. Optionally, when multiple types of sign information pointing to the same person are determined according to the association relationship, for example the heart rate and blood oxygen saturation of one person obtained simultaneously through a smart watch, or different types of sign information obtained from several sensing detection devices carried by the same person, such as heart rate from a smart watch and body weight from a body fat scale, all the sign information can be displayed in association with the corresponding media file, or one or more default types can be selected for display. Optionally, when sign information pointing to several different people is determined according to the association relationship, for example when several sensing detection devices in the same device group as the media recording device are carried by different people, the identities of those people can be determined through the user settings of the sensing detection devices, and the sign information of the appropriate person can then be chosen for display according to the identity of the browser. As shown in fig. 5, a photo is taken of a person during exercise; the photographed user wears a smart watch, which detects a heart rate of 130 beats per minute at shooting time and stores it together with the corresponding photo. When the photo is browsed, the heart rate of the photographed person is displayed in its upper right corner, so the user can recall the feeling of that exercise through the heart rate value, making browsing more vivid. Further, in an embodiment where several people appear in the photo or video, the attribution person of the sign information can be determined from the user information set on the sensing detection device, and the face data of that person invoked to identify faces in the media file; the sign information is then displayed in association with the successfully identified person, for example next to the associated person among the several people when the photo is viewed.
In one embodiment, a camera may record videos of an elderly person's daily life in real time, and accordingly the elderly person wears a smart bracelet or smart watch that records heart rate and other sign information. A life video of the elderly person is determined, and the corresponding sign information of the elderly person within the camera's range is obtained and stored with it; the acquisition may be executed directly by the camera or by a server connected to the camera. The first media file is then displayed in a first classification according to the shooting time of the media files, the first classification corresponding to the shooting time interval in which the first time lies. Specifically, classified display may create different folders for different classifications, the first classification having its own folder into which all media files belonging to it are placed. The intervals divided by shooting time can also be reflected at different positions on a time axis; an interval may span an hour, a minute or a second, the first classification having a corresponding mark position on the axis, and the media files belonging to it being displayed at that mark, optionally as preview icons, video clips or the like. As shown in fig. 6, the media files are displayed along a time axis with a shooting time interval of one day, so one day can be understood as one category; for example, the media files shot on February 4 fall into the same category and are displayed at the February 4 mark on the time axis.
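Classification by shooting time with a one-day interval, as in fig. 6, amounts to grouping files by capture date. A minimal sketch, assuming each media record carries a `capture_time`:

```python
from collections import defaultdict
from datetime import datetime

def classify_by_day(media_files):
    """One classification (album folder / time-axis mark) per capture day."""
    folders = defaultdict(list)
    for m in media_files:
        folders[m["capture_time"].date()].append(m["path"])
    return dict(folders)

files = [
    {"path": "a.jpg", "capture_time": datetime(2022, 2, 4, 9, 0)},
    {"path": "b.mp4", "capture_time": datetime(2022, 2, 4, 18, 30)},
    {"path": "c.jpg", "capture_time": datetime(2022, 2, 5, 12, 0)},
]
albums = classify_by_day(files)
print({str(day): paths for day, paths in albums.items()})
```

Switching the interval to an hour or a minute is just a matter of truncating the timestamp to a coarser or finer key.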
Taking health monitoring of the elderly as an example, the corresponding video files are displayed at the corresponding shooting-time classification marks, and a vertical heart rate axis perpendicular to the time axis is further provided in the display area: video files positioned higher correspond to higher heart rate values. This is another way of displaying sign information in association. Each video is placed at a height matching its acquired heart rate sign information, so the corresponding heart rate value can be read from the height. This makes it easy to monitor the health of the elderly person, to analyse changes in their signs further, to identify the activities during which abnormal sign data occurred, and to provide a reference for corresponding medical advice.
In one embodiment, the media files may be displayed in classifications according to the first sign, the first media file being displayed in a second classification corresponding to the first sign interval in which the first sign information lies. Unlike the classification by shooting time described above, the classification intervals here relate directly to the sign information itself. As shown in fig. 7, taking division by heart rate as an example, the flow starts at step S701; step S702 searches for all media files carrying heart rate information, and step S703 obtains the heart rate maximum HeartRateMax and minimum HeartRateMin as references for determining the group distance of the heart rate intervals, where the group distance can be understood as the span of the heart rate range covered by the category corresponding to each interval. Step S704 determines the number N of data segments, that is, the number of albums to be generated, where N is recommended to be 5 to 20; the range from HeartRateMin to HeartRateMax is divided equally into N regions, the corresponding group distance being (HeartRateMax - HeartRateMin)/N. Alternatively, instead of equal division, the intervals may be determined according to the amount of sign information data at each heart rate. Step S705 saves the media files falling into the same heart rate region to the same album, that is, displays media files of the same category in the same album. Further, in step S706, a heartbeat animation matching the heart rate of each album's interval is displayed on the corresponding album cover, that is, displayed in association with the corresponding category.
Finally, the flow advances to step S707 and ends. As shown in fig. 8, media files in the same heart rate interval are clustered and displayed through album folders to form a heart rate map; the heart rate intervals are divided with a group distance of 20, and different media files are placed into album folders according to the heart rate interval in which their associated heart rate sign information lies, a single album folder being the classification for the corresponding interval. For each album folder preview, the corresponding heart rate data is displayed in association; optionally, the displayed value may be the median of the heart rate interval. Optionally, the heart rate data may be preceded by a beating heart symbol whose frequency corresponds to the displayed heart rate. It should be noted that division into intervals by sign is not limited to heart rate; it may also use other sign information such as blood pressure or blood oxygen saturation, so that the user can quickly browse the media files corresponding to a chosen sign range and summarize how the corresponding activity patterns affect physical health, going well beyond existing album management.
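Steps S702 to S705 (find HeartRateMin and HeartRateMax, divide the range into N equal groups, and file each media item into the album for its group) can be sketched as follows; the sample heart rates are invented for illustration.

```python
def heart_rate_albums(media_files, n=5):
    """Divide [HeartRateMin, HeartRateMax] into n equal regions whose
    span is the group distance, and file each media item into the album
    for the region containing its heart rate sign information."""
    rates = [m["heart_rate"] for m in media_files]
    lo, hi = min(rates), max(rates)
    group_distance = (hi - lo) / n or 1.0   # guard against all-equal rates
    albums = {i: [] for i in range(n)}
    for m in media_files:
        # clamp the top edge so HeartRateMax lands in the last album
        idx = min(int((m["heart_rate"] - lo) / group_distance), n - 1)
        albums[idx].append(m["path"])
    return albums, group_distance

files = [{"path": p, "heart_rate": r}
         for p, r in [("rest.jpg", 62), ("walk.jpg", 95),
                      ("run.jpg", 148), ("sprint.jpg", 160)]]
albums, span = heart_rate_albums(files, n=5)
print(span, albums)
```

The non-equal division mentioned as an alternative would replace the fixed `group_distance` with quantile-based edges computed from the data distribution.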
In one embodiment, the dimensions of classified display can be expanded further by introducing second sign information, which is a detection value corresponding to a second sign obtained through a second sensing detection device. As described above, some sensing devices can detect several types of sign information at once, so in this embodiment the second sensing detection device may be the same device as the first or a different one. Correspondingly, the media recording device that shoots the first media file has an association relationship with the second sensing detection device, for example membership in the same bound device group, and a second association relationship between the first media file and the second sign information is also established. Optionally, the second sign information may be displayed in association with the first media file according to this second association relationship. Further, the media files are displayed in classifications according to at least the first and second signs; further embodiments may introduce a third sign, a fourth sign, and so on. The first and second signs are cross-grouped to form several classifications; taking a third classification as an example, the first and second sign information associated with the first media file must satisfy both the first sign interval and the second sign interval corresponding to the third classification. As shown in fig. 9, a physiological map is displayed through the cross-grouped distribution of two kinds of sign information, blood pressure and heart rate: different heart rate intervals are arranged along the horizontal axis and different blood pressure intervals along the vertical axis, where an interval may be a heart rate or blood pressure value accurate to the unit, or a range divided by a certain group distance in the manner of fig. 7. Any position above the horizontal axis and to the right of the vertical axis must simultaneously satisfy a specific heart rate interval on the heart rate axis and a specific blood pressure interval on the blood pressure axis, and a preview icon of the corresponding media file can be displayed there. Taking fig. 9 as an example, a user can easily select media files with both high heart rate and high blood pressure, helping to quickly locate activities with dangerous sign patterns.
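The cross-grouping of fig. 9 can be sketched as a grid keyed by the pair (heart-rate interval, blood-pressure interval), so that each cell simultaneously satisfies one interval on each axis. The bin edges below are illustrative assumptions.

```python
import bisect

def cross_group(media_files, hr_edges, bp_edges):
    """Place each file into the cell that simultaneously satisfies its
    heart-rate interval (horizontal axis) and blood-pressure interval
    (vertical axis) of the physiological map."""
    grid = {}
    for m in media_files:
        cell = (bisect.bisect_right(hr_edges, m["heart_rate"]),
                bisect.bisect_right(bp_edges, m["blood_pressure"]))
        grid.setdefault(cell, []).append(m["path"])
    return grid

files = [
    {"path": "nap.jpg",    "heart_rate": 58,  "blood_pressure": 105},
    {"path": "stairs.jpg", "heart_rate": 132, "blood_pressure": 150},
]
grid = cross_group(files, hr_edges=[80, 120], bp_edges=[120, 140])
print(grid)
```

Selecting the top-right cell of the grid gives exactly the "high heart rate and high blood pressure" query from the text; adding a third or fourth sign just extends the cell key to a longer tuple.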
As shown in fig. 10, the sensing devices associated with the media recording device 101 include a smart watch 1021, a smart bracelet 1022, and a sphygmomanometer 1023; the media recording device 101 may also perform ranging with each of them to determine whether they are in its vicinity. Accordingly, when the media recording device 101 shoots, the sign information detected by the smart watch 1021, the smart bracelet 1022 and the sphygmomanometer 1023 at shooting time is acquired, and by establishing the association relationships the detected sign information can be applied to the display of the corresponding media file. In one example, a wearable sensing device such as the smart watch 1021 or smart bracelet 1022 generally has a one-to-one relationship with its user: a user identity can be set directly on the device to bind the user, and when its sign information is acquired, the association between the user and the detected sign information is established by default according to the set identity. For example, the default user of the smart watch 1021 is user 1031, only user 1031 wears it, and the sign information it detects is attributed to user 1031 by default; likewise, the default user of the smart bracelet 1022 is user 1032, only user 1032 wears it, and the sign information it detects is attributed to user 1032 by default.
However, for a sensing device such as the sphygmomanometer 1023, it is difficult to establish a stable association with a single user given its usage scenario. When placed in the home, the sphygmomanometer 1023 may be shared by all family members, so a blood pressure value it acquires cannot simply be attributed to one user, and even if an owner is registered for the device, it is unclear whether the person using it at a specific time is that owner. For example, although the default user of the sphygmomanometer 1023 is user 1033, not only user 1033 but also user 1034 may use it; because of this one-to-many relationship, the sign information it detects may belong to either user 1033 or user 1034. Under the display mode described above, only the acquired blood pressure value could then be shown with the corresponding media file, and the displayed value could not vary with whoever browses the file, because there would be no way to display the blood pressure value matching the browser's identity. Therefore, the type or attribute of the sensing detection device can be checked to determine the attribution of the first sign information: for devices with a strong private attribute, such as the smart watch 1021 and the smart bracelet 1022, the detected sign information can be attributed directly to the device's bound user.
Alternatively, for a device with a non-private attribute, such as the sphygmomanometer 1023 available to multiple persons, the attribution of the detected information may be confirmed with the aid of other information, for example by identifying the user through face recognition, or by requiring the user to set or confirm his or her identity, so as to determine the attribution person of the first sign information. Accordingly, when the first association relationship between the first media file and the first sign information is established, the user's identity information is associated as well. Further, when the first sign information is displayed in association with the first media file according to the first association relationship, the first sign information to display is selected according to the relationship between the identity of the browser and the identity of the attributed user.
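The attribution decision above, private-attribute devices defaulting to their bound user while shared devices need extra confirmation, can be sketched as follows. The `confirm` callback is a hypothetical stand-in for face recognition or an explicit user prompt.

```python
def resolve_attribution(device, confirm=None):
    """Private devices (smart watch, bracelet) attribute sign info to
    their bound user; non-private devices (a shared sphygmomanometer)
    defer to face recognition or user confirmation via `confirm`."""
    if device.get("private"):
        return device["bound_user"]
    return confirm(device) if confirm else None

watch = {"id": "watch-1021", "private": True, "bound_user": "user-1031"}
bpm = {"id": "bpm-1023", "private": False, "bound_user": "user-1033"}
print(resolve_attribution(watch))                               # bound user wins
print(resolve_attribution(bpm, confirm=lambda d: "user-1034"))  # confirmation wins
print(resolve_attribution(bpm))                                 # unresolved
```

Returning `None` for an unconfirmed shared device mirrors the text's point that a raw blood pressure value without an identity cannot be tailored to the browser.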
FIG. 11 is a schematic diagram of a media file management apparatus according to an embodiment of the application. As shown in fig. 11, the media file management apparatus 1101 includes a determining unit 1111, an acquiring unit 1112, and an associating unit 1113, wherein:
a determining unit 1111 configured to determine a first media file, where the first media file is captured by a media capturing device at a first time;
an obtaining unit 1112, configured to obtain first sign information of a first sensing detection device at the first time, where the media recording device has an association relationship with the first sensing detection device, and the first sign information is a detection value corresponding to a first sign;
the associating unit 1113 is configured to establish a first association relationship between the first media file and the first sign information.
Optionally, the media file is one of a photo, audio, video.
Optionally, the media recording device is one of a camera, a video camera, a sound recorder, and a mobile phone.
Optionally, the acquiring unit includes:
and when the media shooting device shoots the first media file at the first time, triggering the first sensing detection device to detect and obtain the first sign information.
Optionally, the media file management apparatus further includes:
And the display unit is used for displaying the first sign information in an associated mode when the first media file is displayed according to the first association relation.
Optionally, the media file management apparatus further includes:
and the clustering unit is used for displaying the media files in a classified manner according to the shooting time of the media files, and displaying the first media files in a first classification, wherein the first classification corresponds to the shooting time interval in which the first time is located.
Optionally, the media file management apparatus further includes:
and the clustering unit is used for displaying the media files in a classified manner according to the first sign, and displaying the first media files in a second classification corresponding to a first sign section where the first sign information is located.
Optionally, the first sign is heart rate, and the display unit further displays, in association with the second classification, a heartbeat animation corresponding to the heart rate of the second classification.
Optionally, the acquiring unit acquires second sign information of a second sensing detection device at the first time, the media recording device and the second sensing detection device have an association relationship, and the second sign information is a detection value corresponding to the second sign;
The association unit establishes a second association relationship between the first media file and the second sign information;
the media file management apparatus further includes:
and the clustering unit is used for displaying the media files in a classified manner at least according to the first and second signs, and displaying the first media files in a third classification, wherein the third classification simultaneously corresponds to the first sign interval where the first sign information is located and the second sign interval where the second sign information is located.
Optionally, the associating unit identifies the user of the first sensing detection device in the first media file through face recognition, so that the first association relationship includes the association between the first media file and the attribution person of the first sign information.
Optionally, the association between the media capturing device and the sensing detection device includes:
the media recording device and the sensing detection device are in the same device group in binding.
It should be noted that reference may be further made to the details of the media file management method and the alternative embodiments described herein and in fig. 1-10.
FIG. 12 is a schematic diagram of a data processing device according to an embodiment of the present application. As shown in Fig. 12, the data processing device 1201 may be a media capture device such as a camera, a video camera, a smart phone, or a sound recorder, or a sensing detection device such as a sphygmomanometer or a smart watch. The data processing device 1201 may also be a computing device such as a computer or a server, in which case the association and display of the first media file and the first sign information are processed on a back-end device. The data processing device 1201 includes a processor 1211, a memory 1212, and a computer program stored on the memory 1212 and executable on the processor 1211; the processor 1211 executes the computer program to perform the media file management method and the methods provided by the alternative embodiments described herein with reference to Figs. 1-10. In particular, to determine whether the sensing detection devices of a device group are in the vicinity of the corresponding media capture device, a data processing device serving as a media capture device or a sensing detection device may further include a Bluetooth module for distance detection.
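The combination of proximity gating and "at the first time" matching can be sketched as below. The RSSI threshold, the tolerance window, and the data layout are illustrative assumptions; real Bluetooth distance estimation depends on the radio hardware and environment.

```python
# Sketch: when a media file is captured at time t, take the sensor reading
# whose timestamp is nearest to t, but only if the sensing device's
# Bluetooth signal strength suggests it is near the media capture device.
RSSI_THRESHOLD_DBM = -70   # assumed "nearby" cutoff
TOLERANCE_S = 5            # assumed maximum clock skew between devices

def match_reading(capture_time, readings, rssi):
    """Return the reading closest to capture_time, or None if the device
    is out of range or no reading falls within the tolerance window."""
    if rssi < RSSI_THRESHOLD_DBM:
        return None  # sensing device not near the media capture device
    best = min(readings, key=lambda r: abs(r["t"] - capture_time), default=None)
    if best is None or abs(best["t"] - capture_time) > TOLERANCE_S:
        return None
    return best

readings = [{"t": 100, "hr": 70}, {"t": 103, "hr": 74}, {"t": 120, "hr": 88}]
print(match_reading(102, readings, rssi=-55))   # nearest in-window reading
print(match_reading(102, readings, rssi=-90))   # device too far away
```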
Optionally, a computer program product comprises instructions which, when run on a computer, cause the computer to perform the media file management method and the methods provided by the alternative embodiments described herein with reference to Figs. 1-10.
Optionally, a computer-readable storage medium has a computer program stored therein which, when executed, implements the media file management method and the methods provided by the alternative embodiments described herein with reference to Figs. 1-10.
In summary, the sensing detection device is used to acquire the user's sign information while the media capture device captures a media file, and the sign information is associated with the corresponding media file, so that the media file and the sign information together provide the user with an intelligent and convenient experience. The application expands the practicality of existing media files, increases their use value, and improves the user experience.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions which, when loaded and executed on a computer, produce, in whole or in part, the processes or functions according to the embodiments of the present application. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state disk (SSD)).
It is to be understood that the technical solutions disclosed in connection with the present application may be embodied directly in hardware, in a software module executed by a control unit, or in a combination of the two; that is, one or more steps and/or combinations of steps may correspond either to software modules of a computer program flow or to hardware modules, such as an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof. For convenience of description, the above apparatus is described as functionally divided into various modules; of course, when implementing the present application, the functions of the modules may be realized in one or more pieces of software and/or hardware.
It will be appreciated that the term "and/or" merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may represent A alone, B alone, or both A and B. In addition, the character "/" herein generally indicates that the associated objects are in an "or" relationship, while in a formula it generally indicates a "division" relationship.
It is to be understood that the terms "first", "second", etc. in the various embodiments are used for convenience of description only and do not denote an absolute distinction between structures or functions; the same terms or labels may be used in different embodiments without implying such a distinction, and they are not intended to limit the scope of the embodiments of the application. The numbering of steps and processes does not indicate an order of execution; the execution order should be determined by the functions and internal logic of the steps and processes and should not limit the implementation of the present application.
It is to be understood that although the application has been described in connection with specific features and embodiments, various modifications and combinations can obviously be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the application as defined by the appended claims, and are intended to cover any and all modifications, variations, combinations, or equivalents falling within its scope.
Claims (16)
1. A method of media file management comprising the steps of:
determining a first media file, wherein the first media file is captured at a first time by a media capture device;
acquiring first sign information of a first sensing detection device at the first time, wherein the media capture device and the first sensing detection device have an association relationship, and the first sign information is a detection value corresponding to a first sign;
and establishing a first association relationship between the first media file and the first sign information.
2. The method of claim 1, wherein the obtaining the first sign information of the first sensing device at the first time comprises:
and when the media capture device captures the first media file at the first time, triggering the first sensing detection device to perform detection to obtain the first sign information.
3. The media file management method according to any one of claims 1-2, wherein said media file management method further comprises:
and displaying, according to the first association relationship, the first sign information in association with the first media file when the first media file is displayed.
4. A media file management method according to any one of claims 1-3, wherein the media file management method further comprises:
and displaying the first media file in a first classification according to the shooting time of the media file, wherein the first classification corresponds to the shooting time interval in which the first time is located.
5. The media file management method according to any one of claims 1 to 4, wherein said media file management method further comprises:
and displaying the media files in a classified manner according to the first sign, and displaying the first media file in a second classification corresponding to the first sign interval in which the first sign information is located.
6. The media file management method according to claim 5, wherein said first sign is heart rate, and a heartbeat animation whose speed corresponds to the heart rate of said second classification is also displayed in association with said second classification.
7. The media file management method according to any one of claims 1 to 6, wherein said media file management method further comprises:
acquiring second sign information of a second sensing detection device at the first time, wherein the media capture device and the second sensing detection device have an association relationship, and the second sign information is a detection value corresponding to a second sign;
Establishing a second association relationship between the first media file and the second sign information;
and displaying the first media file in a third classification according to at least the first sign and the second sign, wherein the third classification corresponds both to the first sign interval in which the first sign information is located and to the second sign interval in which the second sign information is located.
8. The method of any one of claims 1-7, wherein the establishing of the first association relationship between the first media file and the first sign information comprises:
and identifying the user of the first sensing detection device in the first media file through face recognition, so that the first association relationship includes an association between the first media file and the person to whom the first sign information belongs.
9. The method of any one of claims 1-8, wherein the association relationship between the media capture device and the first sensing detection device comprises:
the media capture device and the first sensing detection device being bound to the same device group.
10. A media file management apparatus, comprising:
a determining unit, configured to determine a first media file, wherein the first media file is captured at a first time by a media capture device;
an acquisition unit, configured to acquire first sign information of a first sensing detection device at the first time, wherein the media capture device and the first sensing detection device have an association relationship, and the first sign information is a detection value corresponding to a first sign;
and the association unit is used for establishing a first association relation between the first media file and the first sign information.
11. The media file management apparatus of claim 10, wherein the media file management apparatus further comprises:
and the display unit is used for displaying, according to the first association relationship, the first sign information in association with the first media file when the first media file is displayed.
12. The media file management apparatus according to any one of claims 10 to 11, wherein said media file management apparatus further comprises:
and the clustering unit is used for displaying the media files in a classified manner according to the shooting time of the media files, and displaying the first media files in a first classification, wherein the first classification corresponds to the shooting time interval in which the first time is located.
13. The media file management apparatus according to any one of claims 10 to 12, wherein said media file management apparatus further comprises:
and the clustering unit is used for displaying the media files in a classified manner according to the first sign, and displaying the first media file in a second classification corresponding to the first sign interval in which the first sign information is located.
14. The media file management apparatus according to any one of claims 10-13, wherein the acquisition unit acquires second sign information of a second sensing detection device at the first time, the media capture device and the second sensing detection device having an association relationship, and the second sign information being a detection value corresponding to a second sign;
the association unit establishes a second association relationship between the first media file and the second sign information;
the media file management apparatus further includes:
and the clustering unit is used for displaying the media files in a classified manner according to at least the first sign and the second sign, and displaying the first media file in a third classification, wherein the third classification corresponds both to the first sign interval in which the first sign information is located and to the second sign interval in which the second sign information is located.
15. A data processing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any one of claims 1-9 when executing the computer program.
16. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed, implements the method according to any one of claims 1-9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210390982.4A CN116955662A (en) | 2022-04-14 | 2022-04-14 | Media file management method, device, equipment and storage medium |
PCT/CN2023/087725 WO2023198092A1 (en) | 2022-04-14 | 2023-04-12 | Media file management method and apparatus, and device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210390982.4A CN116955662A (en) | 2022-04-14 | 2022-04-14 | Media file management method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116955662A true CN116955662A (en) | 2023-10-27 |
Family
ID=88328974
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210390982.4A Pending CN116955662A (en) | 2022-04-14 | 2022-04-14 | Media file management method, device, equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116955662A (en) |
WO (1) | WO2023198092A1 (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150178915A1 (en) * | 2013-12-19 | 2015-06-25 | Microsoft Corporation | Tagging Images With Emotional State Information |
JP6379424B2 (en) * | 2014-10-20 | 2018-08-29 | シャープ株式会社 | Image recording device |
CN114079730B (en) * | 2020-08-19 | 2023-09-12 | 华为技术有限公司 | Shooting method and shooting system |
CN113505259A (en) * | 2021-06-28 | 2021-10-15 | 惠州Tcl云创科技有限公司 | Media file labeling method, device, equipment and medium based on intelligent identification |
- 2022-04-14: CN application CN202210390982.4A filed (publication CN116955662A, en) — active, Pending
- 2023-04-12: PCT application PCT/CN2023/087725 filed (publication WO2023198092A1, en) — unknown
Also Published As
Publication number | Publication date |
---|---|
WO2023198092A1 (en) | 2023-10-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021121236A1 (en) | Control method, electronic device, computer-readable storage medium, and chip | |
CN110618933B (en) | Performance analysis method and system, electronic device and storage medium | |
US9788065B2 (en) | Methods and devices for providing a video | |
EP3125135B1 (en) | Picture processing method and device | |
KR102091848B1 (en) | Method and apparatus for providing emotion information of user in an electronic device | |
US20100086204A1 (en) | System and method for capturing an emotional characteristic of a user | |
US8014573B2 (en) | Digital life recording and playback | |
KR20170019823A (en) | Method for processing image and electronic device supporting the same | |
CN105069083B (en) | The determination method and device of association user | |
US9953221B2 (en) | Multimedia presentation method and apparatus | |
CN111615003A (en) | Video playing control method, device, equipment and storage medium | |
CN105808635A (en) | Method and apparatus for image analysis | |
US20110096994A1 (en) | Similar image retrieval system and similar image retrieval method | |
KR20100003913A (en) | Method and apparatus for communication using 3-dimensional image display | |
JP7491867B2 (en) | VIDEO CLIP EXTRACTION METHOD, VIDEO CLIP EXTRACTION DEVICE AND STORAGE MEDIUM | |
Adams et al. | Extraction of social context and application to personal multimedia exploration | |
WO2013079124A1 (en) | Portable electronic equipment and method of recording media using a portable electronic equipment | |
KR20180121273A (en) | Method for outputting content corresponding to object and electronic device thereof | |
US11163822B2 (en) | Emotional experience metadata on recorded images | |
CN115512829A (en) | Method, device and medium for acquiring disease diagnosis related group | |
JP2008269411A (en) | Image keyword editing system, image keyword provision server and image keyword editing device | |
CN106533918A (en) | User addition prompting method and apparatus | |
KR20170098113A (en) | Method for creating image group of electronic device and electronic device thereof | |
KR20190117100A (en) | Method and apparatus for measuring biometric information in electronic device | |
JPWO2015178234A1 (en) | Image search system, search screen display method |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |