WO2021083004A1 - Photo display processing method and device, and video display processing method and device
- Publication number
- WO2021083004A1 (PCT/CN2020/122485)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- photo
- identification information
- video
- person
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/432—Query formulation
- G06F16/434—Query formulation using image data, e.g. images, photos, pictures taken by a user
- G06F16/435—Filtering based on additional data, e.g. user or group profiles
- G06F16/438—Presentation of query results
Definitions
- This application relates to the field of image processing technology, and in particular to a photo display processing method and device, and a video display processing method and device.
- At present, the shooting services provided by scenic spots are carried out semi-manually: after media information is generated, it is selected through manual comparison, and the selected media information is then delivered to tourists. Manual comparison is inefficient at selecting media information; if the amount of generated media information exceeds the manual selection workload, the quality of service to tourists suffers.
- The embodiments of the present application provide a photo display processing method and device, and a video display processing method and device, to at least solve the technical problem in the related art that the approach of taking pictures of a user in a predetermined area and showing the obtained photos to the user is relatively inefficient.
- A photo display processing method is provided, including: obtaining a photo, wherein the photo is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; identifying a person from the photo, and identifying, from the person, identification information for identifying the person, wherein the identification information is unique within the predetermined area; saving the photo in correspondence with the identification information identified from the photo; obtaining the user's identification information; searching for the corresponding photo according to the user's identification information; and displaying the found photo to the user.
- Recognizing the identification information for identifying the person from the person includes: recognizing, from the person, an attachment on the person and/or a biological characteristic of the person, and using the characteristic information of the attachment and/or the characteristic information of the biological characteristic as the identification information for identifying the person. Acquiring the user's identification information includes: acquiring the user's attachment and/or the user's biological characteristic, and using the corresponding characteristic information as the user's identification information. The attachment includes at least one of the following: clothing, accessories, and hand-held items, and is used to uniquely identify the person within the predetermined area;
- the biological characteristics of the person include one of the following: facial characteristics and posture characteristics.
- Searching for the corresponding photo according to the user's identification information includes: searching, according to the user's identification information, for the characteristic information of one or more persons corresponding to the identification information; and searching, according to the characteristic information of the one or more persons, for the photos of the one or more persons as the photos corresponding to the user's identification information.
- Storing the photo in correspondence with the identification information recognized from the photo includes: when the attachment on the person includes a white area, adjusting the white balance of the photo according to the white area; and saving the adjusted photo in correspondence with the identification information recognized from the photo.
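- The white-balance adjustment based on a known-white area can be sketched as follows. This is a hedged illustration only: the application does not specify the algorithm, and the pixel-list layout, the function name `white_patch_balance`, and the choice of scaling each channel by the white region's mean are assumptions.

```python
# Hedged sketch: scale each color channel so the attachment's known-white
# area averages to neutral white. All names and data layouts are
# illustrative assumptions, not part of the application.
def white_patch_balance(pixels, white_indices):
    """pixels: list of (r, g, b) floats in [0, 1];
    white_indices: indices of pixels lying on the white area."""
    n = len(white_indices)
    # Per-channel mean over the known-white region.
    ref = [sum(pixels[i][c] for i in white_indices) / n for c in range(3)]
    # Divide by the reference so the white region becomes (1, 1, 1).
    return [tuple(min(1.0, p[c] / max(ref[c], 1e-6)) for c in range(3))
            for p in pixels]


# A warm-tinted photo whose first pixel is the white badge area.
photo = [(0.8, 0.7, 0.5), (0.4, 0.35, 0.25), (0.2, 0.2, 0.2)]
balanced = white_patch_balance(photo, [0])
```

With the first pixel marked as the white area of the badge, the warm tint is removed and that pixel maps to pure white.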
- When the at least one collection device shoots a video triggered by a triggering condition, acquiring the photo includes extracting a predetermined frame from the video as the photo; and/or, in addition to showing the found photo to the user, part or all of the content of the video is also shown to the user.
- the predetermined condition is that it is detected that the person in the predetermined area has at least one of the following information: gesture information, mouth shape information, and body shape information.
- Presenting the found photos to the user includes: when the found photos exceed a predetermined number, sorting part or all of the photos, and displaying part or all of the sorted photos to the user.
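- The sort-then-display rule above can be sketched minimally. The `timestamp` sort key, the newest-first ordering, and the function name are illustrative assumptions, since the claim does not fix a sorting criterion.

```python
# Sketch of the display rule: when the found photos exceed a predetermined
# number, sort them and show only part of the sorted results. The timestamp
# key and newest-first order are assumptions for illustration.
def select_for_display(photos, limit):
    """photos: list of dicts with an assumed 'timestamp' field."""
    if len(photos) <= limit:
        return photos
    ranked = sorted(photos, key=lambda p: p["timestamp"], reverse=True)
    return ranked[:limit]


found = [{"name": "a.jpg", "timestamp": 10},
         {"name": "b.jpg", "timestamp": 30},
         {"name": "c.jpg", "timestamp": 20}]
shown = select_for_display(found, limit=2)
```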
- A video display processing method is provided, including: acquiring a video, wherein the video is acquired from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; identifying a person from the video, and identifying, from the person, identification information for identifying the person, wherein the identification information is unique within the predetermined area; saving the video in correspondence with the identification information identified from the video; obtaining the user's identification information; finding the corresponding video according to the user's identification information; and showing the found video to the user.
- A photo display processing device is provided, including: a first obtaining unit configured to obtain a photo, wherein the photo is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; a first recognition unit configured to recognize a person from the photo and recognize, from the person, identification information for identifying the person, wherein the identification information is unique within the predetermined area; a first storage unit used to store the photo in correspondence with the identification information identified from the photo; a second acquisition unit used to obtain the user's identification information; a first search unit used to search for corresponding photos according to the user's identification information; and a first display unit used to display the found photos to the user.
- The first recognition unit includes: a first recognition module for recognizing, from the person, attachments on the person and/or biological characteristics of the person; and a first determination module for using the characteristic information of the attachment and/or the characteristic information of the biological characteristic as the identification information for identifying the person. The second acquisition unit includes a second acquisition module for acquiring the user's attachment and/or the user's biological characteristics, the corresponding characteristic information being the user's identification information. The attachment includes at least one of the following: clothing, accessories, and hand-held articles, and is used to uniquely identify the person within the predetermined area; the biological characteristics of the person include one of the following: facial characteristics and posture characteristics.
- The first searching unit includes: a searching module configured to, after the user's identification information is obtained, search, according to the user's identification information, for the characteristic information of one or more persons corresponding to the identification information; and a second determination module configured to search, according to the characteristic information of the one or more persons, for the photos of the one or more persons as the photos corresponding to the user's identification information.
- The first storage unit includes: an adjustment module configured to adjust the white balance of the photo according to the white area when the attachment on the person includes a white area; and a storage module used to save the adjusted photo in correspondence with the identification information recognized from the photo.
- The first acquiring unit includes: a third determining module configured to extract a predetermined frame from the video as the photo when the at least one collection device shoots a video triggered by a triggering condition; and/or, a first display module used to display the found photos to the user and also to display part or all of the content of the video to the user.
- the predetermined condition is that it is detected that the person in the predetermined area has at least one of the following information: gesture information, mouth shape information, and body shape information.
- The first display unit includes: a sorting module used to sort part or all of the photos when the found photos exceed a predetermined number; and a second display module used to display part or all of the sorted photos to the user.
- A video display processing device is provided, including: a third acquisition unit configured to acquire a video, wherein the video is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; a second recognition unit configured to recognize a person from the video and recognize, from the person, identification information for identifying the person, wherein the identification information is unique within the predetermined area; a second storage unit used to store the video in correspondence with the identification information identified from the video; a fourth acquisition unit used to obtain the user's identification information; a second search unit used to search for the corresponding video according to the user's identification information; and a second display unit used to display the found video to the user.
- the storage medium includes a stored program, wherein the program executes the photo display processing method described in any one of the above, and the video display processing method.
- A processor is also provided, the processor being configured to run a program, wherein the program executes the photo display processing method described in any one of the above, and the video display processing method.
- An electronic device is provided, including: a processor; and a memory, connected to the processor, configured to provide the processor with instructions for the following processing steps: obtaining a photo, wherein the photo is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered by a predetermined condition to shoot; identifying a person from the photo, and identifying, from the person, identification information used to identify the person, wherein the identification information is unique within the predetermined area; saving the photo in correspondence with the identification information identified from the photo; obtaining the user's identification information; searching for a corresponding photo according to the user's identification information; and showing the found photo to the user; and/or, the memory is connected to the processor for providing the processor with instructions for the following processing steps: acquiring a video, wherein the video is acquired from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered by a predetermined condition to shoot; identifying from the video
- A photo is acquired, where the photo is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered by a predetermined condition to take the photo; a person is identified from the photo, and identification information used to identify the person is identified from the person, where the identification information is unique within the predetermined area; the photo is saved in correspondence with the identification information identified from it; the user's identification information is obtained; the corresponding photos are searched for according to the identification information; and the found photos are displayed to the user, so that the user's photos taken in the predetermined area are shown to the user.
- The photo display processing method provided by the embodiments of this application achieves the purpose of automatically showing users the photos taken of them in the predetermined area, achieves the technical effect of improving the efficiency of showing such photos to users, and thereby solves the technical problem in the related art that the approach of taking pictures of a user in a predetermined area and showing the obtained photos to the user is relatively inefficient.
- Fig. 1 is a flowchart of a photo display processing method according to an embodiment of the present application
- Figure 2 (a) is a schematic diagram of an accessory according to an embodiment of the present application.
- Figure 2(b) is a schematic diagram of optional accessories according to an embodiment of the present application.
- Figure 2(c) is a schematic diagram of optional accessories according to an embodiment of the present application.
- Fig. 3 is a schematic diagram of a hand-held article according to an embodiment of the present application.
- Figure 4(a) is a first schematic diagram of a registration interface according to an embodiment of the present application.
- Figure 4(b) is a second schematic diagram of a registration interface according to an embodiment of the present application.
- Figure 4(c) is a third schematic diagram of a registration interface according to an embodiment of the present application.
- Figure 5(a) is a first schematic diagram of clothing registration according to an embodiment of the present application.
- Figure 5(b) is a second schematic diagram of clothing registration according to an embodiment of the application.
- Figure 6 (a) is a schematic diagram of a user login interface according to an embodiment of the present application.
- Figure 6(b) is a schematic diagram of a user registration interface according to an embodiment of the present application.
- Figure 6(c) is a schematic diagram of a user login interface according to an embodiment of the present application.
- Fig. 7(a) is an interface diagram displayed by an individual user according to an embodiment of the present application.
- Figure 7(b) is an interface diagram displayed by a team user according to an embodiment of the present application.
- Figure 7(c) is an interface diagram for adding user display according to an embodiment of the present application.
- Fig. 7(d) is an interface diagram of user operation selection according to an embodiment of the present application.
- Fig. 8 is a schematic diagram of white balance according to an embodiment of the present application.
- Figure 9(a) shows a schematic diagram of a media service system in an embodiment of the present application.
- Figure 9(b) shows a schematic diagram of an optional media service system according to an embodiment of the present application.
- Fig. 10 is a flowchart of a video display processing method according to an embodiment of the present application.
- FIG. 11 is a schematic diagram of a photo display processing device according to an embodiment of the present application.
- Fig. 12 is a schematic diagram of a video display processing device according to an embodiment of the present application.
- Fig. 13 is a structural block diagram of an electronic device according to an embodiment of the present application.
- A method embodiment of a photo display processing method is provided. It should be noted that the steps shown in the flowcharts of the accompanying drawings can be executed in a computer system, for example as a set of computer-executable instructions, and although a logical sequence is shown in the flowchart, in some cases the steps shown or described may be performed in a different order than here.
- Fig. 1 is a flowchart of a photo display processing method according to an embodiment of the present application. As shown in Fig. 1, the photo display processing method includes the following steps:
- Step S102 Acquire a photo, where the photo is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered by a predetermined condition to shoot.
- the predetermined area here may be a scenic spot.
- the scenic area is a place to provide viewing, learning, leisure and entertainment for the public. It can be a paid scenic spot, a free scenic spot, a private scenic spot, a public scenic spot, a natural landscape, or a man-made facility.
- the aforementioned at least one collection device may be a collection device arranged in the aforementioned predetermined area.
- The specific location of the at least one collection device is not specifically limited. Taking the predetermined area as a scenic spot for illustration, the above-mentioned at least one collection device may be set at the entrance of the scenic spot, at a certain scenic point, and so on.
- Step S104 identifying a person from the photo, and identifying identification information for identifying the person from the person, wherein the identification information is unique within a predetermined area.
- recognizing the person from the photo herein may be using image recognition technology to perform image recognition on the image collected by at least one collection device to obtain the person in the photo.
- the identification information of the recognized persons is unique in the predetermined area.
- step S106 the photo and the identification information recognized from the photo are correspondingly saved.
- the photos are saved here in correspondence with the identification information recognized from the photos, which can facilitate subsequent display of the photos to the person corresponding to the identification information.
- Step S108 Obtain user identification information.
- Step S110 searching for a corresponding photo according to the identification information of the user.
- Step S112 Show the found photos to the user.
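- Steps S102 to S112 can be sketched as a simple store-and-lookup service. This is only an illustration under assumptions: identification information is represented as pre-extracted string tokens, and the class and method names are invented for the sketch, not taken from the application.

```python
# Minimal sketch of steps S102-S112, assuming identification information has
# already been extracted as string tokens; all names are illustrative only.
from collections import defaultdict


class PhotoDisplayService:
    def __init__(self):
        # identification info -> list of photo handles (step S106 storage)
        self._photos_by_id = defaultdict(list)

    def save_photo(self, photo, id_tokens):
        """Step S106: save the photo against every identification token
        recognized from the persons in it."""
        for token in id_tokens:
            self._photos_by_id[token].append(photo)

    def find_photos(self, user_id_token):
        """Steps S108-S110: look up photos by the user's identification
        information; step S112 would then display the result."""
        return list(self._photos_by_id.get(user_id_token, []))


service = PhotoDisplayService()
service.save_photo("entrance_001.jpg", ["badge-T-005", "badge-T-006"])
service.save_photo("viewpoint_002.jpg", ["badge-T-005"])
```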
- In this way, a photo can be obtained, a person can be identified from the photo, and identification information for identifying the person can be identified from the person, wherein the identification information is unique within the predetermined area; the photo is then saved in correspondence with the identification information identified from it. After the user's identification information is obtained, the corresponding photos are found according to the user's identification information and shown to the user. This realizes the purpose of automatically showing users the photos taken of them in the predetermined area, and at the same time achieves the technical effect of improving the efficiency of showing such photos to users.
- The photo display processing method provided by the embodiment of the present application thus solves the technical problem in the related art that the way of taking a picture of a user in a predetermined area and showing the obtained photo to the user is relatively inefficient.
- The above-mentioned attachments may be ornaments, clothes, and the like worn by the user, including the user's clothing, accessories, and hand-held items.
- Figure 2(a) is a schematic diagram of an accessory according to an embodiment of the application; the accessory shown in Figure 2(a) is a hat. Figure 2(b) is a schematic diagram of an optional accessory according to an embodiment of the application; the ornament shown in Figure 2(b) is a bracelet. Figure 2(c) is a schematic diagram of an optional ornament according to an embodiment of the present application; the ornament shown in Figure 2(c) is a necklace.
- FIG. 3 is a schematic diagram of a hand-held article according to an embodiment of the present application, and FIG. 3 shows a toy pistol.
- the identification information may also be the biological feature information of the person, for example, the facial features and posture features of the person.
- The recognition of photos can be implemented by an image feature generator, which is designed to use feature recognition technology to extract feature blocks from the photo and generate features according to preset rules.
- the image feature generator can generate recognition features through specific algorithms of dedicated software, can use recognition technology to extract recognition feature blocks in the image, and can train a feature extraction model to generate recognition features through convolutional neural networks.
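- Once recognition features have been generated (by whichever of the extractors above is used), matching a query feature against the identification feature database can be illustrated with cosine similarity. The feature vectors, the threshold, and the function names below are assumptions for the sketch; the application does not fix a particular matching rule.

```python
# Hedged sketch of matching a generated recognition feature against the
# identification feature database; the extractor itself is out of scope.
import math


def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def match_feature(query, feature_db, threshold=0.9):
    """Return the ids of registered features similar enough to the query."""
    return [fid for fid, feat in feature_db.items()
            if cosine_similarity(query, feat) >= threshold]


# Toy identification feature database: visitor id -> feature vector.
feature_db = {"visitor-A": [1.0, 0.0, 0.2], "visitor-B": [0.0, 1.0, 0.1]}
matches = match_feature([0.9, 0.1, 0.2], feature_db)
```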
- the feature recognizers are divided into recognition feature generators and additional feature generators according to different uses.
- the identification feature generator can identify and generate identification features, which are used to distinguish and confirm the identity of tourists (that is, people or users), so as to provide tourists with accurate media resources that need to be acquired;
- The additional feature generator can identify and generate additional features, to provide a basis for visitors to filter media resources and sort previews.
- Figure 4(a) is a first schematic diagram of a registration interface according to an embodiment of the application; Figure 4(b) is a second schematic diagram of a registration interface according to an embodiment of the application; Figure 4(c) is a third schematic diagram of a registration interface according to an embodiment of the application.
- the user can make an appointment registration.
- After the appointment, the reserved area (as shown in Figure 4(a): "***Museum welcomes you!"), the appointment date (as shown in Figure 4(a): "Your appointment date is: ****year**month**day"), and the number of reservations (for example, "You have reserved registration for 3 people") are shown to the user, and the image information of all the reserved users is stored in the user avatar database for subsequent user matching.
- Figure 4(b) shows a schematic diagram of post-registration. As shown in Fig. 4(b), it shows the name of the predetermined area, the date of post-registration, and the image information of the person who has been registered.
- the avatars of the registered users are stored in the user avatar library.
- Figure 4(c) shows a schematic diagram of post-supplement registration. As shown in Fig. 4(c), after post-supplement registration is submitted, registration feedback information will be sent to the user.
- the user registration method is to obtain the image containing the identification feature information, extract the identification feature information in the image, generate the identification feature, and send the user identification feature to the identification feature database.
- the registration operation is to determine the identity of the tourist in the process of receiving the service, and is used to screen and sort the image resources and video resources collected by the collection device according to the identity of the tourist, and provide the tourist with accurate media resources that need to be obtained.
- The image containing the identification feature information can be image information obtained by a client terminal collection device: for example, the collection device of the customer terminal equipment at the service counter of the scenic spot, the self-service counter machine of the scenic spot, or the collection device of the tourist's handheld mobile terminal can be used to collect the identification feature information.
- The client can be a counter thin client, a touch self-service terminal, a desktop computer, a portable computer, a mobile phone, or a tablet computer; the terminal can be installed with a client application, a client applet of an application platform, or a browser through which the web client is accessed. The application client and the webpage client are collectively referred to as the client, which will not be specifically stated below.
- The client exchanges data with the business support system, receives and displays the image output of the business support system, and sends business requests to the business support system.
- the business support system includes a feature recognizer, which can perform feature recognition on images.
- Image feature recognition is an existing image recognition technology widely used in other fields, such as the field of video surveillance.
- the current image recognition technology has been very mature and perfect. Whether it is for face recognition in images or for object recognition in images, the recognition efficiency and recognition accuracy are continuously improving.
- Through the application of neural networks with multi-layer deep convolution and pooling technology, the accuracy of machine recognition has reached or even exceeded the level of manual recognition.
- the aforementioned recognition feature recognizer can recognize wearing information or human body information in an image (ie, a photo).
- the wearing information may be a badge pattern or a printed pattern such as a transfer tattoo sticker distributed when the visitor registers.
- The pattern content may be a dot matrix, stripes, a character mark, an image mark, and the like. For example, if a registered visitor's name is "Tom", the visitor can be issued a badge printed with the characters "Tom", a badge printed with the characters "T-005", or a badge with a cartoon "Tom cat" image.
- The printed pattern issued to a visitor needs to be unique for a fixed period of time; printed patterns with the same or similar characteristics cannot be assigned to different users within the same period, so as to avoid system misjudgments that would affect the accuracy of the media services provided.
- A QR code is not suitable as the printed pattern in the actual scenic-area application scenario: the two-dimensional code is usually used for short-distance recognition, and its recognition rate at long distances is low; moreover, the daily passenger flow in a scenic area is usually limited, so the large-capacity, high-density coding of a two-dimensional code is not needed.
- The wearing information can be badges, epaulettes, shoulder straps, hats, cap badges, clothing stickers, barcode bracelets, hanging chains, pendants, walking sticks, hand flags, or children's hand-held toys and accessories, which can be distributed at tourist registration; tourists can also register their own accessories.
- When registering accessories, two or more accessories with the same or similar characteristics must not be registered within the same time period, to ensure the uniqueness of tourist identification and to avoid system misjudgments that would affect the accuracy of the media services.
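- The uniqueness rule for accessory registration can be sketched as a check against the features already registered in the current time period. The similarity measure (simple component overlap here) and the threshold are illustrative assumptions; any real system would use the feature comparison of its own recognizer.

```python
# Hedged sketch: reject registering an accessory whose feature is the same
# as or too similar to one already registered in the same period.
def overlap(a, b):
    # Fraction of matching components between two simple feature tuples.
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)


def try_register(new_feature, registered, threshold=0.95):
    """registered: dict of user -> accessory feature for the current period.
    Returns (accepted, clashing_user)."""
    for user, feat in registered.items():
        if overlap(new_feature, feat) >= threshold:
            return False, user  # clash: same/similar accessory in use
    return True, None


registered = {"guest-1": ("hat", "red", "star")}
ok1, clash = try_register(("hat", "red", "star"), registered)
ok2, _ = try_register(("badge", "blue", "cat"), registered)
```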
- the service counter stocks a large number of zodiac badges.
- Scenic media service providers can pre-customize a large number of printed patterns or accessories, perform feature recognition on the printed patterns and accessories in advance to generate identification features, group and number those identification features, and store them in the identification feature database on the server to establish the correspondence between each identification feature and its number.
- The wearing information can also be collected from the clothes worn by the tourists and used as their identity information; for example, the combined characteristics of hats, tops, trousers, or other clothing can serve as the identification features.
- In actual scenic-area applications, tourists' clothing suffers from problems such as identical outfits; in particular, for group tourists wearing uniform clothing, there is no possibility of distinguishing tourists' identities from clothing information.
- clothing information can be used with good results among photo studio customers. For example, a photo studio customer needs to organize newlyweds to go to the scenic spot to take a wedding photo. The photo studio customer can pre-collect and register the wedding dresses and dresses in their stores and assign them to different newlyweds.
- Fig. 5(a) is a schematic diagram of clothing registration according to an embodiment of the present application.
- As shown in Figure 5(a), the registered clothing can be used as identification information and stored in the clothing identification library; the registration date and other users' history are also shown, as is the jewelry identification library.
- Figure 5(b) is a second schematic diagram of clothing registration according to an embodiment of the application. As shown in Figure 5(b), when the clothing selected by the user is the same as or similar to clothing already selected by other persons, the user is prompted not to use it as identification information.
- the above-mentioned biological feature can be a human face. Face recognition is a mature technology that has been applied at scale, so its technical details need not be described here.
- the service provider can also store the user's face information on a cloud server; when the visitor logs into a registered account, the face information is extracted directly from the cloud without on-site collection. Using facial recognition to distinguish tourist identities shortens the service process and improves convenience for tourists. In actual scenic-spot applications, face recognition cannot distinguish twins whose facial features are highly similar; however, twins visiting a scenic spot usually receive services as family group customers, so this problem can be ignored.
- the use of personal identity information, especially face information, involves personal privacy laws and regulations; in certain regions this identification method is not permitted.
- when identification features are stored in the identification feature database, multiple identification features can be associated with a single piece of user information at registration. For example, a family group only needs to register one piece of user information when registering at a scenic spot, while identification feature information is collected for every family member; the multiple identification features all correspond to the same registered user information. As another example, a tourist may have specific service requirements: in the scenic area, media services are obtained not only for the front view but also for the side, back, and other dimensions. The customer service staff then collects separate identification feature information for the front, both sides, and back, and these identification features all correspond to one piece of registered user information.
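The many-to-one relationship between identification features and one registered user record can be sketched as below. This is an illustrative data structure, not the patent's actual implementation; all identifiers are hypothetical.

```python
# Sketch of the mapping: many identification features (family members' faces,
# or front/side/back features of one tourist) -> one registered user record.
class IdentificationFeatureDB:
    def __init__(self):
        self._feature_to_user = {}

    def associate(self, user_id, feature_ids):
        for fid in feature_ids:
            self._feature_to_user[fid] = user_id

    def lookup(self, feature_id):
        return self._feature_to_user.get(feature_id)

db = IdentificationFeatureDB()
# A family group registers once; every member's feature maps to the same record.
db.associate("family-42", ["face-dad", "face-mom", "badge-kid"])
# One tourist needing front/side/back media registers several view features.
db.associate("tourist-7", ["front-7", "left-7", "right-7", "back-7"])
assert db.lookup("face-mom") == "family-42"
assert db.lookup("back-7") == "tourist-7"
```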
- the method is as follows: receive the user login request from the client, receive event information, have the identification feature generator generate event identification features, match them against the identification feature database to generate an identification feature matching result, and determine the logged-in user from that result.
- the event information may be image information acquired by a client terminal collection component, may be an identification feature code entered by the client, or may be a user code entered by the client.
- Fig. 6(a) is a schematic diagram of a user login interface according to an embodiment of the present application.
- the name of the predetermined area and the login category (for example, individual user, family user, team user, post user, member user) are displayed in the login interface. When the user logs in, the user is also identified; for example, the badge worn by the user can be recognized. If identification is successful, a success prompt for the badge is displayed, along with its identification code.
- Fig. 6(b) is a schematic diagram of a user registration interface according to an embodiment of the present application. As shown in Fig. 6(b), registration can be achieved through face entry.
- Fig. 6(c) is a schematic diagram of a user login interface according to an embodiment of the present application. As shown in Fig. 6(c), in this interface, the user can choose to log in or browse a predetermined area.
- Figure 7(a) is an interface diagram displayed for an individual user according to an embodiment of the application. As shown in Figure 7(a), it shows the current number of users, the user's badge number, the package selected by the user, the photos selected by the user, the videos selected by the user, and so on.
- Figure 7(b) is an interface diagram displayed for team users according to an embodiment of the present application. As shown in Figure 7(b), it shows the current number of users, the package selected by the users, the photos selected by the users, the videos selected by the users, and so on.
- Figure 7(c) is an interface diagram displayed when adding users according to an embodiment of the present application. As shown in Figure 7(c), it shows the type of the current user and prompts the added user to place the identification item in the camera collection area for recognition.
- Fig. 7(d) is an interface diagram of user operation selection according to an embodiment of the application. As shown in Fig. 7(d), the user can select the operation needed, that is, the type of photo, for example, electronic photos, paper photos, music albums, travel albums, and so on.
- the methods for users to log in to the media service system include manual entry or scanning a QR code with a mobile terminal.
- the image acquisition device of the client is used to collect identification feature information for user login, which can bring visitors a better service experience.
- the fundamental spirit is to use image feature recognition technology to register or log in.
- with pre-coding, image feature recognition is first performed on the identifying items to generate identification features, and a correspondence is established with the pre-assigned codes.
- the basic spirit of registering or logging in through a pre-set network account is likewise to use image feature recognition technology. The registered or logged-in account needs to perform image feature recognition on recognition features such as human faces to generate identification features, store those features on the network, and establish the correspondence between the identification features and the account.
- the image data and video data collected by the collection devices need to be filtered by feature classification in order to present a sorted display interface satisfactory to tourists.
- in step S110, after obtaining the user's identification information, searching for the corresponding photo according to the user's identification information may include: according to the user's identification information, finding the characteristic information of one or more persons corresponding to the identification information; and, according to the characteristic information of the one or more persons, finding the photos of those persons as the photos corresponding to the user's identification information.
- correspondingly saving the photo and the identification information recognized from the photo may include: performing white balance adjustment of the photo according to the white area; and saving the adjusted photo in correspondence with the identification information recognized from the photo.
- the characteristic subject person is identified, and the shooting and acquisition system can perform white balance adjustment and focusing operations for the characteristic subject.
- the white balance adjustment information and focus information for the identified characteristic subject person can be stored in the image index database as additional characteristic information.
- white balance adjustment of an image is based on the color temperature of a white block in the image. In an actual scene, the same white block will exhibit different color temperatures under different lighting conditions, causing a color cast in the output image and making the colors of the captured main character unrealistic.
- conventionally, a model holds a white balance template while being photographed, and the color temperature information of the white block in that template is used for subsequent white balance adjustment to restore true colors. If instead a white block template is fixed in advance in the scenic spot's shooting and collection area, the template area and the person's area will have different spectral characteristics in practice, which also degrades the color reproduction of the main person.
- the embodiment of the present application also provides a method for providing white balance adjustment in media services in scenic spots.
- during pre-registration, the user receives a printed image logo or an accessory logo at the scenic-spot service counter.
- a white area is arranged during the production of the accessory logo.
- the photographing and collecting device extracts the color temperature value of the white block area in the mark, and uses the color temperature value of the white block area as the white balance adjustment criterion to adjust the white balance of the image.
- the color temperature value of the white block area is matched according to the user's skin color, so that the acquired image presents the user's best imaging effect.
- a skin color sample is made, and different skin colors in the skin color sample are correspondingly selected white blocks with different color temperatures.
- a user with a darker skin color is assigned a marker with a darker white block color temperature during pre-registration.
- the white balance is adjusted based on the collected white block color temperature.
- the adjusted image can show a whitening effect on the user's skin tone, maximally satisfying the user's consumption needs and improving the service experience.
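The white-block-based adjustment can be sketched with the classic white-patch method: the RGB value sampled from the worn white block serves as the reference, and each channel is scaled so that the block renders as neutral. This is a hedged, pure-Python illustration with no real camera pipeline; the function name and sample values are assumptions.

```python
# Sketch of white-patch white balance using the tourist's worn white block
# as the reference sample.
def white_balance(pixels, white_sample):
    """pixels: list of (r, g, b); white_sample: (r, g, b) read from the white block."""
    wr, wg, wb = white_sample
    gains = (255.0 / wr, 255.0 / wg, 255.0 / wb)
    return [tuple(min(255, round(c * g)) for c, g in zip(px, gains))
            for px in pixels]

# A warm-cast image: the "white" block reads reddish (240, 210, 180).
image = [(240, 210, 180), (120, 105, 90)]
balanced = white_balance(image, white_sample=(240, 210, 180))
assert balanced[0] == (255, 255, 255)  # the white block becomes neutral
```

Assigning a slightly darker white block to a darker-skinned user, as described above, effectively raises the gains and brightens the rendered skin tone.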
- the white balance processing unit needs to perform white balance adjustment for each white block region and store the images separately.
- for example, a travel team takes a group photo. Because the team members' skin colors differ, a single white balance adjustment scheme obviously cannot give every member an excellent presentation.
- each team member is pre-assigned and wears a white block logo matching their skin color. After the image is collected, the optimal white balance adjustment is generated from each white block logo and a separate image is produced, meeting each member's image optimization needs.
- skin color values can be collected to set white balance adjustment parameters.
- the method of collecting skin color value can record the tourist's facial skin color value into the support system in advance by comparing the skin color samples.
- the support system can then estimate, from the color temperature of the tourist's face in the acquired image together with the entered skin color parameters, a virtual white balance white block color temperature value, and adjust the white balance based on that value.
- by comparing skin color samples, tourists' facial skin color parameters can be entered into the support system in advance, and the support system can perform precise beautification operations based on these parameter values. For example, white people may prefer a wheat-colored skin tone, and the support system can automatically generate a darker skin tone imaging effect for them according to the pre-entered skin color information.
- FIG. 8 is a schematic diagram of white balance according to an embodiment of the present application.
- a skin color contrast color card is shown in the figure.
- the skin color contrast color card enables white balance for the user; white balance is also explained in this figure.
- the user can refer to the skin color contrast color card to select the white balance white block color card, and insert the white block into the reserved area of the badge, so as to present the user with more excellent works of art.
- images can be obtained by continuous shooting.
- the focus control unit can be controlled to achieve precise focus on the positions of different characters in the collection area and improve the shooting effect.
- the continuous shooting focusing step is to detect the identification feature information in the image, and the focus control unit uses the identification feature information area as the focus to capture and capture subsequent images.
- the unfocused shooting area is preferentially selected as the focus for focused shooting.
- the focus position is changed by detecting and identifying the characteristic area, and multiple people in the viewing area are separately focused and photographed to meet the service needs of different tourists.
- the identification feature information areas that match features in the identification database are preferentially used as focus points. Focused shooting is performed only for pre-registered customers, optimizing the resource utilization of the focus control unit.
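The continuous-shooting focus priority described above can be sketched as a simple selection policy: prefer regions not yet focus-shot, and among those prefer regions matching a pre-registered customer. The function and identifiers are illustrative assumptions, not the patent's implementation.

```python
# Sketch of focus-target selection among detected identification-feature regions.
def next_focus_target(regions, already_focused, registered):
    """regions: ordered list of region ids detected in the current frame."""
    candidates = [r for r in regions if r not in already_focused] or list(regions)
    preferred = [r for r in candidates if r in registered]
    return (preferred or candidates)[0]

regions = ["guest-a", "guest-b", "guest-c"]
registered = {"guest-b", "guest-c"}   # pre-registered customers
shot = set()
order = []
for _ in regions:
    t = next_focus_target(regions, shot, registered)
    order.append(t)
    shot.add(t)
assert order == ["guest-b", "guest-c", "guest-a"]  # registered guests first
```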
- acquiring the photo includes: extracting a predetermined frame from the video as a photo; and/or, in addition to displaying the found photos to the user, showing part or all of the video to the user.
- frame images can be extracted either directly from the video stream or based on the video index; these are the two extraction methods. After an image is acquired, identification feature information and additional feature information are recognized from it; the generated identification features and additional features are stored in the image index database, and the user can accurately obtain the desired image by searching that database.
- the steps of generating the image and the image index are to obtain the image, extract the identification feature information in the image, generate the identification feature, store the image, and send the identification feature of the image to the image index database.
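The image/index generation steps above can be sketched as follows. The feature extractor is a stand-in reading pre-annotated metadata; in a real system it would be a recognition pipeline. All names (`ingest`, `extract_features`, the dict-based stores) are illustrative.

```python
# Sketch: extract features from an acquired image, store the image, and send
# its identification features to the image index database.
def extract_features(image):
    # Hypothetical recognizer: here we just read pre-annotated metadata.
    return image["badge_ids"], {"time": image["time"], "camera": image["camera"]}

def ingest(image, image_store, image_index):
    ident_features, additional = extract_features(image)
    image_store[image["id"]] = image
    for fid in ident_features:
        image_index.setdefault(fid, []).append((image["id"], additional))

store, index = {}, {}
ingest({"id": "img-1", "badge_ids": ["badge-9"], "time": 100, "camera": "cam-3"},
       store, index)
assert [e[0] for e in index["badge-9"]] == ["img-1"]
```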
- the required images can be extracted from the real-time video stream, such as the video stream collected and generated by a surveillance camera in a scenic spot.
- to extract an image from a video stream, it is necessary to perform feature recognition on the image and extract both the identification features that distinguish users and the additional features that provide a basis for display screening and sorting.
- the additional feature information here may be the image acquisition device identification, image acquisition time, facial expression information of a person, gesture information, position information, eye information, defocus information, sharpness information, focus information, and white balance sampling information.
- the tourists can get excellent shooting results in some key areas in the camera collection area.
- additional feature information such as facial expression information, gesture information, position information, eye information, defocus information, and sharpness information.
- the feature collection of all the characters in the key area of the picture is convenient for the system to retrieve all the photos of a certain user and recommend ranking display.
- a method for generating a video index is also provided: successively obtain video frames, capture the time node information of each frame to generate a time feature, capture the identification feature information in the frame to generate identification features, send the frame's time feature and identification features to the video index database, and record the correspondence between the time feature and the identification features in the index database.
- additional feature information in the extracted frame image is captured to generate the frame's additional features; these are sent to the video index database, and the correspondence between the time feature and the additional features is recorded in the index database.
- the content of the additional feature part of the frame in the video is the same as the additional feature of the image, and will not be repeated here.
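The video-index generation can be sketched as below: for each frame, the time node and the identification features recognized in it are recorded, so a later search by feature returns the time nodes where that feature appears. This is a minimal illustration; the frame format and names are assumptions.

```python
# Sketch of video-index generation: identification feature -> time nodes.
def build_video_index(frames):
    """frames: iterable of (time_node, [identification feature ids])."""
    index = {}
    for t, features in frames:
        for fid in features:
            index.setdefault(fid, []).append(t)
    return index

frames = [(0.0, []), (1.0, ["badge-9"]),
          (2.0, ["badge-9", "badge-4"]), (3.0, ["badge-4"])]
index = build_video_index(frames)
assert index["badge-9"] == [1.0, 2.0]
assert index["badge-4"] == [2.0, 3.0]
```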
- the video index generating device can be divided into servers placed on the collection side and servers placed on the support system side.
- the video stream collected by the collection terminal is passed to the collection server, and the collection server first recognizes the characteristics of the video stream through the video index generating device, and then stores the video file.
- the collection server can be built into the camera collection device, or can be built into the camera collection device controller.
- the video index generating device extracts stored video files and then performs feature recognition.
- this application provides a method for extracting videos based on a video index: receive a video extraction request and the corresponding identification features, search the video index database based on those features to obtain the time node information of the video frames containing them, and extract video segments from the corresponding video files according to the acquired time node information and the video extraction rules.
- the aforementioned identification feature may be one identification feature or a group of identification features.
- for example, if a tour group consists of individuals and each member only needs their own video clip, extraction is set to use a single identification feature; if the members belong to the same organization and the organization needs all members in its video clip, extraction is set to use a group of identification features.
- the video extraction rules may include the following types, and of course, may also include other types of extraction rules.
- Video extraction rules include rules that set the starting frame of video extraction at a fixed time interval relative to the first appearance of the identification feature.
- if the starting frame is set a fixed time interval before the first appearance of the identification feature, the video clip is extracted from before the identified main character has appeared; if the starting frame is set a fixed time interval after the first appearance, the clip is extracted from when the main character has reached a better position in the viewing area; if the starting frame is set at the first appearance itself, the clip is extracted as soon as the main character is recognized.
- the determination of the starting frame for video extraction requires different settings according to different locations of the scenic spots, and different settings according to different presentation effects requirements.
- Video extraction rules include rules that set the end frame of video extraction at a fixed time interval relative to the last appearance of the identification feature.
- if the end frame is set a fixed time interval after the last appearance of the identification feature, the clip extends past the identified main character's disappearance; if the end frame is set a fixed time interval before the last appearance, extraction ends when the main character leaves the better position in the viewing area; if the end frame is set at the last appearance itself, extraction ends the last time the main character is recognized.
- the determination of the end frame of video extraction needs to adopt different settings according to different locations of the scenic spots, and different settings according to different presentation effects requirements.
- Video extraction rules also include rules for setting the end frame of video extraction to be located at a fixed time interval after the start frame of video extraction. Through this rule, short videos with a fixed duration can be generated, which is conducive to simplifying system management and pricing accounting.
- Video extraction rules include rules for skipping frame extraction. Through this rule, it is possible to display video clips in fast forward mode, reduce the length of the video, and increase the artistic effect of the video clips.
- Video extraction rules include repeated extraction rules. Through this rule, wonderful clips can be repeatedly presented, and slow motion presentation can also be realized, increasing the artistic effect of video clips.
- Video extraction rules include rules for extracting the identification features and corresponding regions of multiple video frames and splicing them into one video frame image. Through this rule, multiple frames containing the identified main character are spliced into a single frame, enhancing the artistic effect of the video clip.
- Video extraction rules include rules for extracting video frame images across multiple video files. Through this rule, video clips from different places of interest can be integrated into one video clip to enhance the artistic effect of presentation.
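The start/end-frame rules can be sketched as deriving a clip window from the time nodes where the identification feature appears. The lead-in/lead-out offsets and the fixed-duration cap mirror the rules described above; the numeric values and names are illustrative assumptions.

```python
# Sketch of clip-window computation from a feature's appearance time nodes.
def clip_window(times, lead_in=2.0, lead_out=2.0, max_duration=None):
    """times: time nodes (seconds) where the identification feature appears."""
    start = max(0.0, min(times) - lead_in)  # fixed interval before first appearance
    end = max(times) + lead_out             # fixed interval after last appearance
    if max_duration is not None:            # fixed-duration rule
        end = min(end, start + max_duration)
    return start, end

assert clip_window([10.0, 14.0, 18.0]) == (8.0, 20.0)
assert clip_window([10.0, 14.0, 18.0], max_duration=5.0) == (8.0, 13.0)
assert clip_window([1.0]) == (0.0, 3.0)  # start clamped to the file start
```

Skip-frame (fast-forward) and repeated (slow-motion) extraction would then operate on the frames inside this window.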
- an embodiment of the present application also provides a method for extracting images through a video index: receive an image extraction request and the corresponding identification features, retrieve the video index database according to those features to obtain the time node information of the video frames containing them, and extract the frame images from the corresponding video files according to the acquired time node information.
- the predetermined condition is that it is detected that the person in the predetermined area has at least one of the following information: gesture information, mouth shape information, and body shape information.
- the image can be obtained by snapping.
- An embodiment of the present application also provides a method for triggering a snapshot.
- the trigger signal is generated when the trigger signal generator detects trigger feature information in a video frame of the video stream, and a delay time is set between trigger signal generation and the start of the snapshot.
- the trigger feature information is at least one of character gesture information, mouth shape information, and body shape information in the key area.
- the service provider of the scenic spot defines the trigger feature in advance.
- the trigger feature can be an OK gesture, a snapping finger gesture, a clapping gesture, an OK voice mouth shape, or a standard standing posture.
- the trigger signal generator detects and generates a trigger signal. After a fixed delay time, the shooting collection device starts the capture operation.
- the focus control unit uses the trigger feature information area as the focus to perform focus capture, so as to achieve precise focus shooting of the trigger subject.
- the supplemental light unit is activated within the set delay time after the trigger signal is generated to realize the illumination compensation for a specific location in the viewing area and present a better image effect.
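The trigger flow above can be sketched as follows: a trigger feature detected in the stream schedules a capture after a fixed delay, during which the supplemental light unit can be activated. The feature names, delay value, and return convention are assumptions for illustration only.

```python
# Sketch of snapshot triggering: trigger feature -> (light-on time, capture time).
TRIGGER_FEATURES = {"ok_gesture", "snap_finger", "clap", "ok_mouth_shape"}
SNAP_DELAY = 1.5  # seconds between trigger signal and capture

def process_frame(t, detected_features):
    """Return (light_on_time, capture_time) if a trigger feature is present."""
    if TRIGGER_FEATURES & detected_features:
        return (t, t + SNAP_DELAY)  # light on at trigger, capture after delay
    return None

assert process_frame(10.0, {"walking"}) is None
assert process_frame(12.0, {"ok_gesture", "smiling"}) == (12.0, 13.5)
```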
- displaying the searched photos to the user may include: sorting part or all of the photos when the found photos exceed a predetermined number; and displaying some or all of the sorted photos to the user.
- the embodiment of this application also provides a method for extracting and displaying photos based on the image index in the scenic spot: obtain the image retrieval request and user information, query all the identification features corresponding to the user information in the identification feature database, search the image index database according to those features, generate image index data, extract the corresponding images according to the image index data, and send the extracted images to the display unit.
- after the user logs in with user information on the client, the user is identified by querying and extracting all the identification features corresponding to that user in the feature database; the image index database is searched according to those features, the corresponding images are extracted according to the generated image index, an image display sorting sequence is generated according to the additional features and a predetermined display sorting rule, and the image display unit sorts and displays the extracted images according to that sequence.
- the displayed image can be a low-quality thumbnail, which prevents users from obtaining medium-quality images by taking screenshots on the network client.
- the display ordering rules include rules for ordering by collection device. Different collection devices represent different scenic attractions; sorting by the popularity of those attractions can bring tourists a better presentation effect.
- Display ordering rules include rules for ordering by service outlet. Images collected in different scenic spots are stored on a unified cloud server and sorted by service outlet, which can serve visitors across scenic spots.
- the display ordering rules can include a rule that each collection device displays at most N items. Visitors may generate an unusually large number of images on the same collection device, and displaying all of them would make choosing difficult and harm the service experience.
- the display ordering rules include rules for ordering by different identification features. Team visitors need each team member to have an opportunity to display images, reducing missed selections for team users.
- the display ordering rules include the rules for prioritizing the optimized processed synthetic images. Optimized processing of images through various image processing templates can produce better presentation effects, such as adding text foreground, pattern foreground, expression foreground, blurred background, background replacement, multi-image stitching, using art filters, etc. It can increase the presentation effect of images and bring a higher service experience for tourists.
- Display sorting rules include rules for sorting by clarity. Prioritize images with good imaging effects and high definition, and improve service experience.
- the display ordering rules include a rule that prioritizes images in which the identified main character's eyes are open, or, under the same logged-in user, a rule that prioritizes the image with the largest number of identified main characters with open eyes. In actual scenic-spot applications, the same image may contain many people with different open- and closed-eye states; when displaying to the user, only the eye states of the identified main characters are screened. For a team user, the image with the largest number of open-eyed identified main characters among all features of the logged-in user is displayed, and images with closed-eyed main characters are excluded.
- the display sorting rules include a rule that prioritizes images in which the identified main character is in a preset best position, with second-best positions displayed in turn, or, under the same logged-in user, applies this position-priority rule to each identified main character. The same viewing area can be divided into different regions; a subject photographed in different positions produces different effects, so the regions are assigned a priority order and images are sorted according to the priority of the photographed main character's position, improving tourists' selection efficiency and enhancing the service experience.
- the display sorting rules include displaying in order of the identified main character's expression richness, or, under the same logged-in user, displaying by the expression richness of each identified main character. Using the richness of the captured subject's expression as a sorting basis filters out images with dull expressions and gives priority to wonderful moments with rich facial expressions, enhancing the service experience.
- the display ordering rules include the rules for prioritizing display according to the focus accuracy of the characteristic subject, or the rules for sequentially displaying the focus accuracy of each identified characteristic subject under the same login user. Prioritize images with a high degree of focus on the characteristic subject area to enhance the service experience.
- the display sorting rules include a rule that prioritizes images whose white balance sampling area is in the identified main character's region. Prioritizing images whose white balance was adjusted from the white block worn by the main character gives tourists the highest-quality image information and enhances the service experience.
- the combination display ordering of multiple display ordering rules is realized through the combination of the above display ordering rules. Filter and sort through various sorting rules, and display the best photos that tourists need first, improve the efficiency of tourists' selection, and enhance the service experience of tourists.
- the display sorting rules include rules for displaying according to the scoring points. Through the scoring method, the best photos that tourists need are displayed first, which improves the efficiency of tourists' selection and enhances the service experience of tourists.
- for example, a tourist collects image A and image B at the scenic spot entrance, where the collection device score for both is 70; image C and image D are collected in front of the landmark building in the scenic spot, where the collection device score for both is 100.
- image A does not use the beautification synthesis module, so its image synthesis template score is 60; image B uses an image synthesis template scored 70; image C uses a template scored 80; image D uses a template scored 100.
- the sharpness score of the A image is 60, the eye state score is 0, the expression richness score is 80, the focus accuracy score is 70, and the white balance state score is 50.
- the sharpness score of the B image is 80, the eye state score is 100, the expression richness score is 70, the focus accuracy score is 100, and the white balance state score is 100.
- the sharpness score of the C image is 80, the eye state score is 100, the expression richness score is 90, the focus accuracy score is 90, and the white balance state score is 100.
- the sharpness score of the D image is 100, the eye state score is 100, the expression richness score is 50, the focus accuracy score is 100, and the white balance state score is 100.
- The specific scores are shown in Table 2 below:
- image A rating: (70×1 + 60×2 + 60×1 + 0×5 + 70×0.5 + 80×0.5 + 50×1) / 7 ≈ 53
- image D rating: (100×1 + 100×2 + 100×1 + 100×5 + 50×0.5 + 100×0.5 + 100×1) / 7 ≈ 154
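The weighted-average calculation above can be sketched in code. The weights below are those implied by the sample formulas (device ×1, template ×2, sharpness ×1, eye state ×5, expression ×0.5, focus ×0.5, white balance ×1, divided by the 7 factors); they are illustrative assumptions, not values fixed by the application:

```python
# Weighted image scoring, using the weights implied by the sample
# calculation above (assumptions, not normative values).
WEIGHTS = {
    "device": 1, "template": 2, "sharpness": 1,
    "eye_state": 5, "expression": 0.5, "focus": 0.5, "white_balance": 1,
}

def image_score(scores):
    """Weighted sum of the factor scores divided by the number of factors."""
    total = sum(scores[factor] * weight for factor, weight in WEIGHTS.items())
    return total / len(WEIGHTS)

a_image = {"device": 70, "template": 60, "sharpness": 60,
           "eye_state": 0, "expression": 80, "focus": 70, "white_balance": 50}
d_image = {"device": 100, "template": 100, "sharpness": 100,
           "eye_state": 100, "expression": 50, "focus": 100, "white_balance": 100}
```

With these inputs image D scores far higher than image A, so under the score-first rule image D would be displayed before image A.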
- the display method includes a method of displaying the image scoring result.
- marking the system score of an image gives visitors an additional basis for selection when choosing images and enhances their service experience.
- an embodiment of the application also provides a method for extracting and displaying videos in a scenic spot according to the video index.
- the video retrieval request and user information are obtained, and all the identification features corresponding to the user information are queried in the identification feature database.
- the video index database is searched according to the identification features to generate video index data; the corresponding video segments are extracted from the video according to the video index data, and the extracted video covers are sent to the display unit.
- the video cover may be a static cover or a dynamic cover.
- the static cover is a frame image extracted from the video clip, and the dynamic cover is a preview video extracted from the video clip.
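As a minimal sketch of the lookup described above (the entry fields and sample data are illustrative assumptions, not part of the specification), an index entry links an identification feature to a time span within a stored video, and retrieval filters the index by the user's registered features:

```python
# Sketch of the video retrieval flow: query the index database for all
# segments matching the user's identification features (names are assumptions).
from dataclasses import dataclass

@dataclass
class IndexEntry:
    identification_feature: str  # e.g. a badge code recognized in the frame
    video_id: str
    start_s: float               # time feature: segment start within the video
    end_s: float                 # time feature: segment end within the video

def find_segments(index_db, id_features):
    """Return all index entries whose identification feature is one of the
    user's registered features; covers are then generated from these spans."""
    return [e for e in index_db if e.identification_feature in id_features]

index_db = [
    IndexEntry("B085", "cam1.mp4", 10.0, 25.0),
    IndexEntry("B267", "cam1.mp4", 30.0, 40.0),
    IndexEntry("B085", "cam2.mp4", 5.0, 12.0),
]
segments = find_segments(index_db, {"B085"})
```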
- multiple display ordering rules may be included; they are described in detail below.
- the display ordering rules include rules for sorting by collection device one by one. Different collection devices represent different shooting spots; sorting by the popularity of a shooting area can bring better presentation effects to tourists.
- the display sorting rules include rules for grouping and sorting by collection device.
- the collection devices are grouped by scenic spot area and arranged by group, which makes it convenient for tourists to preview by group.
- the display sorting rules include rules for sorting by service outlet.
- the videos collected in different scenic spots are stored uniformly on the cloud server and sorted by service outlet, which brings a better service experience to tourists.
- the display ordering rules include rules for prioritizing post-synthesis-processed videos. Optimizing videos through various video processing templates can produce better presentation effects, for example, adding text foregrounds, pattern foregrounds, expression foregrounds, blurred backgrounds, background replacement, multi-screen splicing transitions, art filters, special-effect enhancement, and so on. Post-synthesis processing enhances the presentation effect of the video and brings tourists a higher service experience.
- the display ordering rules include priority rules for frame-skip-extracted videos, for repeatedly extracted videos, and for spliced videos. Through these rules, videos with better presentation effects are presented first, improving the service experience of tourists.
- the display ordering rules can include a rule that each collection device displays at most N items. A visitor may generate multiple videos on the same collection device, and displaying all of them would make selection difficult and harm the service experience. Setting a rule to display at most N priority videos per collection device streamlines the selection, improves the efficiency of tourists' selection, and enhances the service experience.
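A sketch of the at-most-N-per-device rule combined with score-first ordering (the value of N and the score field are illustrative assumptions):

```python
# Keep at most n highest-scoring videos per collection device, then order
# the survivors best-first for display (a sketch; field names are assumed).
from collections import defaultdict

def limit_per_device(videos, n=3):
    by_device = defaultdict(list)
    for v in videos:
        by_device[v["device"]].append(v)
    kept = []
    for items in by_device.values():
        items.sort(key=lambda v: v["score"], reverse=True)
        kept.extend(items[:n])          # apply the per-device cap
    return sorted(kept, key=lambda v: v["score"], reverse=True)

videos = [{"device": "cam1", "score": s} for s in (90, 80, 70, 60)] + \
         [{"device": "cam2", "score": 85}]
shown = limit_per_device(videos, n=3)
```

The lowest-scoring video on the over-represented device is dropped, so the visitor browses a streamlined, score-ordered list.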
- the display sorting rules include rules for displaying according to scores.
- through the scoring method, the best videos that tourists need are displayed first, which improves the efficiency of tourists' selection and enhances their service experience.
- for details, refer to the image scoring display embodiment.
- the display method includes a method of displaying video scoring results.
- marking the system score of a video gives visitors an additional basis for choosing videos and enhances their service experience.
- the billing method for the media resource service of this application meets the characteristics of the scenic shooting service business and charges according to service content. Different billing rules can be customized for flexible and changeable billing requirements. In other words, this application can customize different charging formulas according to the charging rules, realizing flexible, easily extensible combinations of charging strategies, package strategies, discount strategies, reduction strategies, bonus strategies, and so on.
- one or more charging rules are customized according to the different charging requirements of each media service type, forming a charging rule pool.
- when customizing charging rules, the following factors are referred to:
- A. Videos and photos have different billing strategies;
- B. Different shooting cameras and different shooting locations have different billing strategies;
- C. Media shot and provided by the service provider and media provided by user selfies have different billing strategies;
- D. Registered member users and ordinary registered users have different billing strategies;
- E. Different membership levels have different billing strategies;
- F. Pre-registered users and post-registered billing users have different billing strategies;
- G. Single-user, double-user, family-user, and group-user combinations have different billing strategies;
- H. Different user introduction channels have different billing strategies;
- I. Single-item billing is supported, including single videos, single photos, and single electronic albums;
- J. Different acquisition methods have different billing strategies (such as photo printing, customized travel albums, copying, and network sharing);
- K. Different post-processing has different billing strategies (such as beauty processing, adding filters, and synthesizing music albums);
- L. Different collection periods have different billing strategies (for example, night collection needs additional means such as lighting compared with daytime collection);
- M. Package billing strategies are supported (such as a free package and 5 yuan, 10 yuan, 50 yuan, and 100 yuan packages). A package can include a fixed number of specific service items; for example, the 100 yuan package includes X photos, Y short videos, a synthesized electronic album, and a free travel album. A package can also include a maximum total amount; for example, the 100 yuan package includes photos and videos with a total cost of less than 200 yuan;
- N. Personalized billing is supported; different users can choose different tariffs or billing strategies according to their own situations;
- O. Capped fees are supported;
- P. Advertising reduction billing strategies are supported; for example, embedding scenic-spot advertisements when sharing music albums online earns certain advertising reductions;
- Q. Point reduction strategies are supported; points earned by users after consumption in different scenic spots can be used to deduct charges;
- R. Activity discount strategies are supported; corresponding reduction ranges are set during promotion periods;
- S. Incentive discount strategies are supported; when self-portrait photos or videos provided by a user are particularly effective, incentive reductions or billing exemptions are given;
- T. Unsettled users may apply for consolidated settlement billing of their media resource information history list;
- U. Fixed-number discounts per one-time consumption are supported (for example, one consumption can enjoy only one discounted charge or one billing deduction);
- V. Cross-scenic-spot billing is supported.
- multiple charging rules are customized based on the above factors.
- the charging of a service can be realized through one or more charging rules, and charging rules can be combined.
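A sketch of such rule combination (rule names, per-item prices, and the point exchange rate are illustrative assumptions, not values defined by the application): a basic per-item rule produces the base fee, and coefficient and reduction rules drawn from the pool are then applied in order:

```python
# Combinable charging rules: a base fee is computed, then each selected
# rule from the pool transforms it (all prices/rates are assumptions).
def basic_rule(order):
    return order["photos"] * 2.0 + order["videos"] * 5.0  # yuan per item

def discount_coefficient(fee, order):
    return fee * order.get("discount", 1.0)   # e.g. 0.5 for a 50% event discount

def point_reduction(fee, order):
    return max(0.0, fee - order.get("points", 0) / 100)  # 100 points = 1 yuan

def charge(order, rules):
    fee = basic_rule(order)
    for rule in rules:            # rules compose in the order chosen
        fee = rule(fee, order)
    return round(fee, 2)

order = {"photos": 10, "videos": 2, "discount": 0.5, "points": 500}
amount = charge(order, [discount_coefficient, point_reduction])
```

Here the base fee of 30 yuan is halved by the event discount and then reduced by 5 yuan of points, yielding 10 yuan; swapping rules in or out of the list is how new strategies extend the pool.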
- the basic charging rules applied to the shooting service business in scenic spots define the charging rules with the minimum charging factor of a single resource specification as the unit, as shown in Table 3:
- the package billing rules applied to the shooting service business in scenic spots define the rules for billing in units of package resource specifications.
- the package billing rules include a fixed number of photos and a fixed number of videos, as shown in Table 4:
- Package billing rules include billing rules for the maximum total cost of media resources, as shown in Table 5 as an example:
- the coefficient charging rules define charging rules that apply coefficient accounting to the basic charging rules and the package charging rules.
- the coefficient charging rules include discount coefficient charging rules and addition coefficient charging rules.
- the discount coefficient charging rules can be as shown in the example in Table 6:
- the addition coefficient charging rules can be as shown in the example in Table 7:
- the reduction and exemption billing rules define rules that use a deduction quota as the accounting method, as shown in the example in Table 8:
- a pre-registered user has 5 family members registered and is a VIP member user. He has taken 200 photos and 20 short videos on the shooting equipment in the scenic spot, and he participated in the best expression show shooting event and received a 50% discount.
- the 99 yuan package is selected, and 1,000 points (equivalent to 10 yuan in cash) are used in the settlement.
- the user agrees to embed scenic-spot advertisements in the shared electronic music album, and the 99 yuan package provides free customized photos, though the user bears the express delivery cost himself. The actual amount charged to the user is then as shown in the example in Table 9:
- a post-registered user has 5 family members registered and is a non-member user. He needs to extract 1 electronic photo of the front entrance of the scenic spot, 5 photos of scenic hot spots, and 2 special short videos from the shooting equipment of the scenic spot, and requires printed photographic-paper photos and an electronic music album, but does not agree to placing scenic-spot advertisements in the electronic music album. An extracted special short video wins the award in the best walking posture shooting activity, earning a 3,000-point reduction, and he additionally purchases 1 media-file storage disk. The actual amount charged to the user is then as shown in the example in Table 10:
- the device mainly includes an application information acquisition module, a charging module, and a charging result sending module.
- the application information acquisition module receives the user information and the media resource information requested by the user from the client; the charging module finds the corresponding charging rule in the charging rule pool according to the user information and the requested media resource information, and generates the charging result; the billing result sending module sends the billing result to the accounting processing system.
- a charging processing apparatus for scenic shooting services is also provided, which is located on a cloud server.
- the apparatus mainly includes an application information acquisition module, a charging module, and a charging result sending module.
- the charging module finds the corresponding charging rule in the charging rule pool according to the user information and the media resource information requested by the user, and generates the charging result.
- the billing result sending module sends the billing result to the accounting processing system.
- a service charging device includes one or more processors, a memory, a bus system, a transceiver, and one or more application programs; the one or more processors, the memory, and the transceiver are connected by the bus system; the one or more application programs are stored in the memory and include instructions.
- when the processor of the service charging device executes the instructions, the service charging device executes the service charging method in the foregoing method embodiment.
- for details of the service charging method, refer to the related description in the above method embodiment, which will not be repeated here.
- Figure 9(a) shows a schematic diagram of the media service system in an embodiment of the present application.
- collection terminals are set in different scenic spots; the collection device on the collection side is connected to the collection server, the collection server is connected to the local server, and the local server is connected to the cloud server. The photos required by the user are sent through the cloud server to the user terminal, that is, the client.
- Fig. 9(b) shows a schematic diagram of an optional media service system according to an embodiment of the present application.
- the processing includes the collection terminal, collection server, and local server as shown in Fig. 9(a).
- the large bead is defined as the white balance white block collection area; the middle bead is set to a dark color, constituting the second identification feature; the second- and third-ranked beads are set to light and dark colors, constituting the third identification feature; these three identification features together determine the identification feature of the bead necklace.
- the daughter received a custom-made sun hat.
- the sun hat is set with repeated arrangement of leaf patterns and flower patterns, which constitute the identification feature of the sun hat.
- the son received a customized toy gun. On the left and right sides of the toy gun, six sets of identical identification feature information are repeatedly arranged.
- the identification feature information consists of three five-pointed stars and a circle forming the first identification feature; the circle is set in the third position, constituting the second identification feature.
- the accessories can also be used as the basis for accounting for various consumption activities on the cruise ship. For example, when tourists spend on the cruise ship, they only need to present their accessories in front of the camera for the computer system to perform the accounting operation, an application scenario of cash-free consumption accounting.
- a family of four pre-registers their customized accessories at the media service counter.
- the customer service personnel collect and identify the four accessories through the counter client.
- the feature recognizer of the media service support system recognizes the accessories, generates their identification features, and sends them to the identification feature database, completing the visitor registration operation.
- the customer service staff sets the four registered accessories as a group of family group customers at the counter client.
- the main playgrounds of the cruise ship are equipped with shooting collection equipment as the monitoring system of the cruise ship, and the activities of tourists in the public areas on the cruise ship are collected and saved.
- the image index generating device of the business support system server obtains the video stream images in real time, extracts the identification feature information and additional feature information in each image, generates the identification features and additional features, stores the image, and sends the image's identification features and additional features to the image index database.
- this step also includes judging whether there is identification feature information in the image; if so, the image is extracted and saved; if not, the image is not saved.
- the video index generation device of the business support system server obtains the video frame, captures the time node information of the video frame to generate the time feature, captures the identification feature information in the video frame to generate the identification feature, sends the time feature and identification feature of the video frame to the video index database, and determines the corresponding relationship between the time feature and the identification feature in the index database.
- the additional feature information in the extracted frame image is captured, the additional feature of the extracted frame image is generated, the additional feature of the video frame is sent to the video index database, and the corresponding relationship between the time feature and the additional feature is determined in the index database.
- the business support system server here also has a step of saving the video stream, which is not related to this application and will not be repeated.
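The index-generation step above can be sketched as follows; the recognizer callback and the row layout are stand-in assumptions, not part of the specification:

```python
# For each frame, the time node and every identification feature recognized
# in it become index rows linking the time feature to the identification
# feature (a sketch; the recognizer below is a hard-coded stand-in).
def build_video_index(frames, recognize):
    """frames: iterable of (timestamp_s, frame). Returns index rows."""
    rows = []
    for t, frame in frames:
        for feature in recognize(frame):
            rows.append({"time_s": t, "identification": feature})
    return rows

def recognize(frame):
    # Stand-in recognizer: which badge codes appear in which frame.
    found = {"f1": ["B085"], "f2": ["B085", "B267"]}
    return found.get(frame, [])

frames = [(0.0, "f0"), (1.0, "f1"), (2.0, "f2")]
index = build_video_index(frames, recognize)
```

Frames with no identification feature contribute no rows, mirroring the rule that featureless images are not saved.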
- the counter customer service collects an image of the father's bead necklace on the counter customer service machine, recognizes its identification features, and selects and extracts the media information related to the group information through the client's interactive interface.
- the business support system extracts the four identification features corresponding to the group information, namely, the recognition features of the four accessories.
- the business support system queries the image index database and the video index database according to the four identification characteristics, and generates image index data and video index data.
- the images and videos are extracted according to the image index data and video index data, and presented to customers in the display unit for selection according to the sorting rules.
- a company organizes 30 employees and their families to travel to outdoor scenic spot A, including 15 employees without families and 15 employees and family members who form a combination of five three-person families.
- the person in charge of the company's tourism activities went to the service counter of the scenic spot to receive the 30 badges, with badge numbers B066–B086.
- each of the numbers B066–B081 corresponds to 1 badge, assigned to employees without family members; each of the numbers B082–B086 corresponds to 3 badges, and the 3 badges of each number are assigned to the members of the same family. That is, the three members of one family all use badges with the same number.
- the person in charge of the tourism activity received the sample pattern matching the skin color from the service counter.
- each member compared his skin color against the white balance white block and the skin-color-matching sample patterns, selected the corresponding white block sample, and inserted the white block sample into the specific area of his own badge used to hold it.
- this outdoor scenic spot has installed dedicated high-definition continuous-shooting collection devices in the scenic hotspot areas, which realize uninterrupted continuous shooting. The collection server automatically recognizes the badges worn by tourists, automatically stores images containing recognition features, and discards images that do not contain recognition features.
- the collection-side server can also control the focus unit of the continuous-shooting collection device: after acquiring the identification feature, the focus unit is controlled to use the area where the identification feature is located as the focus point for shooting.
- the collection server can also adjust the white balance of the image: after acquiring the white block sample area in the badge, it automatically adjusts the white balance of the image to the color temperature of the white block sample area and stores the result.
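A sketch of that white-balance step (pure Python, with illustrative pixel values): per-channel gains are computed so that the sampled white block region becomes neutral, and the whole image is then scaled by those gains:

```python
# White balance from a sampled white block: compute per-channel gains that
# neutralize the patch, then apply them to every pixel (values are examples).
def white_balance(pixels, white_patch):
    """pixels / white_patch: lists of (r, g, b) tuples in 0..255."""
    n = len(white_patch)
    means = [sum(p[c] for p in white_patch) / n for c in range(3)]
    target = sum(means) / 3                    # neutral gray level
    gains = [target / m for m in means]        # per-channel correction gains
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

patch = [(200, 180, 160)] * 4   # warm color cast sampled from the white block
balanced = white_balance([(200, 180, 160), (100, 90, 80)], patch)
```

After correction the sampled white block reads as neutral gray, and the same gains remove the cast from the rest of the image.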
- the process of generating image recognition features and additional features on the collection server and sending them to the image index database will not be repeated, nor will the processes of shooting and generating video in this scenic spot and generating its identification features and additional features.
- the service support system is equipped with a post-processing unit that can intelligently replace specific content in an image.
- the badge worn by the tourist in this embodiment affects the overall beauty of the generated images and videos.
- the post-processing unit captures the badge in the image and replaces it.
- smart replacement is performed to remove the badge from the image; for example, a fixed pattern may be set for smart replacement, replacing the badge with another, more beautiful fixed image.
- the person in charge of the tourism activity went to the service counter and logged in to the "B5" group user information through the customer service personnel.
- the customer service terminal showed the person in charge all the image data and video data generated during the tour in the scenic spot under the 20 identification features coded B066–B086 of the "B5" group user.
- the person in charge selected a number of employee group photos and individual highlight images, chose the customized photo album service, and requested 25 customized copies of "XXX Company XX Years Excellent Employee Travel Memorial Book". The expenses incurred under the group information are settled by the person in charge.
- the company's employees and their family members captured many wonderful moments in the scenic spot, and each goes to the service hall to use the customer self-service counter to select his or her own media materials.
- Employee A places one of the badges in front of the self-service counter machine's collection device.
- the counter machine recognizes that the identification feature of the badge is B085.
- the computer interactive interface prompts two login options, "B5 Group" and "B085 Individual". After the "B085 Individual" user is selected, the display interface shows all the image data and video data generated under the B085 identification feature during the tour of the scenic spot.
- Employee A selects the corresponding media information according to the package and makes a settlement.
- employee Xiaofeng used the individual-user free package to extract media files from the self-service counter.
- at the self-service counter, employee Xiaofeng met Mr. Hu (badge code B267) and Ms. Wang (badge code B875) from another travel team.
- Xiaofeng, Mr. Hu, and Ms. Wang are classmates in the university.
- the three took many wonderful photos together during the trip.
- the three wanted to share the electronic photo album made during the tour of the scenic spot with the social group of their university classmates. Therefore, after employee Xiaofeng logged in to his personal user information again, he clicked "Add User" in the interactive interface and used the self-service counter collection device to identify the badge with the B267 identification code.
- the interactive interface prompts login options such as "Add Individual User" and "Add Group User"; the "Add Individual User" login option is selected. Clicking "Add User" in the interactive interface again and repeating the above operation adds the badge coded B875 to the login interface.
- the interactive interface then displays all the image data and video data generated during the tour of the scenic spot under the three identification features coded B073, B267, and B875.
- Xiaofeng, Mr. Hu, and Ms. Wang chose to obtain an electronic album.
- the system generated the electronic album and shared it with the university classmates' social group in the social software.
- the customer service staff's counter machine can complete the user's login operation by directly entering the code value.
- the self-service counter machine, however, does not have the authority to perform the login operation by directly entering a code value.
- the self-service counter machine can only complete the login operation by collecting identification feature information, which prevents malicious operations that could follow a login via simple input.
- one of the main businesses of the photography company is taking wedding photos for newlyweds.
- newlyweds are organized to take pictures in scenic spots or parks from time to time.
- the outdoor shooting locations of a wedding photo studio in a city are generally in scenic spots such as artificial lake parks, city central parks, amusement parks, XX mountain scenic spots, XX cathedrals, and magic castles.
- media service providers have installed ultra-high-definition shooting and capture devices in these scenic spots.
- the media service provider provides wedding photo studios with monthly and annual high-definition image capture services.
- a wedding photo studio is a monthly subscription user of the media service provider, and plans to organize newlyweds to go to the city park to take pictures on July 23, 2019.
- the photo studio took multi-angle photos of the wedding dresses and dresses in the store and uploaded them to the business support system of the media service provider.
- the business support system automatically generated clothing identification information and stored it in the clothing identification library.
- the staff of the photo studio uses a computer terminal to log in to the account on the service provider's website and selects the appointment date, the appointment address, and the wedding dresses and gowns to be used on the appointment date. If, for example, wedding dress No. 4 has already been reserved by another photo studio at the same location on that day, the system detects that the identification feature database already contains the identification features of that clothing and prompts through the network interactive interface that the reservation request cannot be completed.
- a Wedding Photo Studio logs in to the account through the computer website, extracts the original files of all the images taken, and conducts subsequent classification and further processing.
- the method of recording video in the scenic spot is basically similar to the process of shooting images and will not be repeated.
- after Mr. Tang used his mobile terminal to download the service provider's app, he registered and logged in to his account and uploaded his family members' facial photos to the service provider's cloud service support system.
- the recognition feature device completes the facial feature recognition of the three people, generates the recognition features, stores them in the recognition feature database of the museum, and completes the pre-registration operation.
- Mr. Tang used the mobile terminal to check the images and videos taken by his parents and son at the museum but could not find any. He inquired by telephone and learned that, to meet his son's request to spend a day at the capital children's park of country A, they had not visited the museum.
- Mr. Tang used the mobile terminal APP to enter the interactive interface of the capital children's park of country A, selected the facial photos of his parents and son, and confirmed the post-registration of the three persons for the children's park service on Monday.
- after receiving the post-registration instruction, the business support system starts the media data appending program, and the recognition feature device completes the recognition of the three people's facial features and generates the recognition features.
- the local server of the children's playground extracts all the video data collected on Monday, reads the video files one by one to recognize tourists' facial information, generates image index data and video index data matching the three people's recognition features, generates the corresponding preview images and video covers, and pushes them to the APP on Mr. Tang's mobile terminal device for display. After selecting the required images and videos, Mr. Tang chose the option of temporary storage in the cloud, and the selected image files and video files were transferred from the local server of the children's playground to the cloud server for storage.
- Mr. Tang selected the desired images and videos from the temporary image library and the temporary video library, customized a travel commemorative film called "European Grandfather and Grandson" and a set of "Tour Europe" travel albums, and completed the payment using his handheld mobile terminal.
- FIG. 10 is a flowchart of a video display processing method according to an embodiment of the present application. As shown in FIG. 10, the video display processing method includes the following steps:
- Step S1002 Obtain a video, where the video is obtained from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered by a predetermined condition to shoot.
- the predetermined area here may be a scenic spot.
- the scenic area is a place to provide viewing, learning, leisure and entertainment for the public. It can be a paid scenic spot, a free scenic spot, a private scenic spot, a public scenic spot, a natural landscape, or a man-made facility.
- the aforementioned at least one collection device may be a collection device arranged in the aforementioned predetermined area.
- the specific location of the at least one collection device is not limited here. Taking the predetermined area as a scenic spot for illustration, the above-mentioned at least one collection device may be set at the entrance of the scenic spot, at a certain scenic viewpoint, and so on.
- Step S1004 identifying a person from the video, and identifying identification information for identifying the person from the person, wherein the identification information is unique within a predetermined area.
- recognizing the person from the video may be using image recognition technology to perform image recognition on the video frame collected by at least one collection device to obtain the person in the video.
- the identification information of the recognized persons is unique in the predetermined area.
- Step S1006: the video and the identification information recognized from the video are correspondingly saved.
- the video is saved here in correspondence with the identification information recognized from the video, which can facilitate subsequent display of the video to the person corresponding to the identification information.
- Step S1008 Obtain user identification information.
- Step S1010 Find the corresponding video according to the user's identification information.
- Step S1012: the found video is displayed to the user.
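Steps S1002 to S1012 can be sketched as a save-then-lookup pipeline; the recognizer callback and the in-memory store below are illustrative stand-ins for the real units:

```python
# Save side (S1002-S1006): key each video by every identification recognized
# in it. Display side (S1008-S1012): look up videos by the user's identification.
video_store = {}  # identification information -> list of saved videos

def save_video(video, recognize_ids):
    """recognize_ids(video) returns the identification information of each
    person recognized in the video (a stand-in for the recognition unit)."""
    for ident in recognize_ids(video):
        video_store.setdefault(ident, []).append(video)

def show_videos(user_ident):
    """Find the videos saved under the user's identification information."""
    return video_store.get(user_ident, [])

save_video("clip_entrance.mp4", lambda v: ["B085", "B267"])
save_video("clip_lake.mp4", lambda v: ["B085"])
```

Because each video is saved under every identification recognized in it, a shared clip is later found by each person who appears in it.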
- through the above steps, a video can be obtained, a person can be identified from the video, and identification information for identifying the person can be identified from the person, wherein the identification information is unique within the predetermined area; the video and the identification information identified from it are then correspondingly saved. After the user's identification information is obtained, the corresponding video is found according to the user's identification information and displayed to the user, which achieves the purpose of automatically showing the user videos of the user taken in the predetermined area.
- saving the video in correspondence with the identification information identified from it, then obtaining the user's identification information, searching for the corresponding video based on that identification information, and displaying the video to the user achieves this purpose automatically and, at the same time, achieves the technical effect of improving the efficiency of showing the user videos of the user taken in the predetermined area.
- the video display processing method provided in the embodiments of the present application solves the technical problem of low efficiency in the related art that takes a picture of a user in a predetermined area and displays the obtained video to the user.
- recognizing the identification information used to identify the person from the person, saving the video in correspondence with the identification information recognized from it, obtaining the user's identification information, finding the corresponding video according to the user's identification information, and showing the found video to the user can all be implemented in the above-mentioned manner, which will not be repeated here.
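The save-then-look-up flow recapped above can be sketched as a simple index from identification information to saved media. This is a minimal illustration, not the patented implementation; `identify_persons` is a hypothetical placeholder for the recognition step, and all file names and identifiers are invented.

```python
from collections import defaultdict

def identify_persons(media):
    # Hypothetical placeholder: a real system would run image recognition
    # here and return identification info that is unique within the
    # predetermined area (e.g. derived from clothing or facial features).
    return media.get("person_ids", [])

class MediaStore:
    def __init__(self):
        # identification info -> list of saved photos/videos
        self._index = defaultdict(list)

    def save(self, media):
        # Save the media once per person recognized in it.
        for person_id in identify_persons(media):
            self._index[person_id].append(media)

    def find_for_user(self, user_id):
        # Look up all media saved under the user's identification info.
        return list(self._index.get(user_id, []))

store = MediaStore()
store.save({"file": "clip_001.mp4", "person_ids": ["red-jacket-17"]})
found = store.find_for_user("red-jacket-17")
```

A user presenting the identification info "red-jacket-17" would be shown `clip_001.mp4`; an unknown identifier simply yields an empty result.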
- the photo display processing method and the video display processing method provided by the embodiments of the present application achieve the purpose of automatically showing the user the photos or videos taken of them in a predetermined area, achieve the technical effect of improving the efficiency of showing such photos or videos to the user, and greatly improve the user's experience.
- FIG. 11 is a schematic diagram of the photo display processing device according to an embodiment of the present application.
- the photo display processing device includes: a first acquisition unit 1101, a first identification unit 1102, a first storage unit 1103, a second acquisition unit 1104, a first search unit 1105, and a first display unit 1106.
- the photo display processing device will be described in detail below.
- the first acquisition unit 1101 is configured to acquire a photo, where the photo is acquired from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to take a photo by a predetermined condition.
- the first identification unit 1102 is used to identify a person from a photo, and identify identification information for identifying the person from the person, wherein the identification information is unique within a predetermined area.
- the first saving unit 1103 is configured to save the photo and the identification information recognized from the photo correspondingly.
- the second obtaining unit 1104 is configured to obtain user identification information.
- the first search unit 1105 is configured to search for a corresponding photo according to the user's identification information.
- the first display unit 1106 is used to display the found photos to the user.
- the first acquisition unit 1101, first identification unit 1102, first storage unit 1103, second acquisition unit 1104, first search unit 1105, and first display unit 1106 correspond to steps S102 to S112 in the embodiment. The examples and application scenarios implemented by the foregoing modules and corresponding steps are the same, but are not limited to the content disclosed in the foregoing embodiments. It should be noted that, as part of the device, the above-mentioned modules can be executed in a computer system, such as a set of computer-executable instructions.
- the first acquisition unit can be used to acquire photos, where the photos are acquired from at least one collection device distributed in a predetermined area and the at least one collection device is triggered to shoot by a predetermined condition. The first identification unit is then used to recognize the person from the photo and to recognize, from the person, the identification information used to identify that person, where the identification information is unique within the predetermined area. The first storage unit then saves the photo in correspondence with the identification information recognized from it; the second acquisition unit obtains the user's identification information; the first search unit finds the corresponding photo according to the user's identification information; and finally the first display unit shows the found photo to the user.
- with the photo display processing device provided by the embodiment of the present application, the purpose of automatically showing the user the photos taken of them in a predetermined area is realized, and at the same time the technical effect of improving the efficiency of showing such photos to the user is achieved. Furthermore, it solves the technical problem, in the related art, of the low efficiency of the approach of taking a picture of the user in a predetermined area and showing the obtained picture to the user.
- the first identification unit includes: a first identification module for recognizing, from the person, attachments on the person's body and/or biological characteristics of the person; and a first determination module for using the feature information of the attachments and/or the feature information of the biological characteristics as the identification information for identifying the person.
- the second acquisition unit includes: a second acquisition module for acquiring the user's attachments and/or the user's biological characteristics and using the corresponding feature information as the user's identification information. The attachments include at least one of the following: clothing, accessories, and hand-held items; an attachment is used to uniquely identify a person in the predetermined area. The person's biological characteristics include one of the following: facial features and body-shape features.
- the first search unit includes: a search module, configured to search, after the user's identification information is obtained, for the feature information of one or more persons corresponding to that identification information;
- and a second determination module, configured to find, according to the feature information of the one or more persons, the photos of those persons as the photos corresponding to the user's identification information.
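The two-step search described by these modules (user identification info, then feature information of one or more persons, then their photos) can be illustrated with plain dictionaries; all identifiers and file names below are hypothetical.

```python
# Step 1: user identification info -> feature information of matching persons.
id_to_features = {"user-42": ["face-vec-a", "blue-scarf"]}

# Step 2: feature information -> photos saved under that feature.
features_to_photos = {
    "face-vec-a": ["photo_001.jpg", "photo_007.jpg"],
    "blue-scarf": ["photo_003.jpg"],
}

def photos_for_user(user_id):
    photos = []
    for feature in id_to_features.get(user_id, []):
        photos.extend(features_to_photos.get(feature, []))
    # Deduplicate while keeping first-seen order.
    return list(dict.fromkeys(photos))

result = photos_for_user("user-42")
```

Here a single user identifier can map to several feature records (an attachment and a facial feature), and all photos found under any of them are returned as the photos corresponding to the user's identification information.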
- the first storage unit includes: an adjustment module, configured to adjust the white balance of the photo according to a white area when the attachments on the person include a white area; and a storage module, configured to save the adjusted photo in correspondence with the identification information recognized from the photo.
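The adjustment module's white-balance step can be sketched as a per-channel gain computed from a region known to be white (for example, a white area on the person's clothing). This per-channel-gain approach is a common simple technique assumed here, not the patented implementation; the region coordinates and image data are illustrative.

```python
import numpy as np

def white_balance_from_region(image, region):
    """Adjust white balance so the given region averages to pure white.

    image: H x W x 3 float array with values in [0, 1].
    region: (y0, y1, x0, x1) bounds of a patch known to be white.
    """
    y0, y1, x0, x1 = region
    patch_mean = image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    # Scale each channel so the white patch averages to (1, 1, 1).
    gains = 1.0 / np.clip(patch_mean, 1e-6, None)
    return np.clip(image * gains, 0.0, 1.0)

# A uniformly warm-cast image: every pixel is (0.8, 0.7, 0.6).
img = np.ones((4, 4, 3)) * np.array([0.8, 0.7, 0.6])
balanced = white_balance_from_region(img, (0, 2, 0, 2))
```

Because the sample image has the same cast as the white patch, the correction maps every pixel to pure white, which is the intended behavior of anchoring white balance on a known-white attachment.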
- the first acquisition unit includes: a third determination module, configured to extract a predetermined frame from the video as the photo when the at least one collection device is triggered by a trigger condition to shoot a video; and/or a first display module, configured to show the found photos to the user and, in addition, to show part or all of the video content to the user.
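The third determination module's frame extraction can be sketched as taking every Nth decoded frame as a candidate photo. Frames are modeled as an in-memory sequence here; a real system would decode them from the captured video file, and the stride value is an illustrative assumption.

```python
def extract_predetermined_frames(frames, every_nth=30):
    """Return every Nth frame (e.g. one per second at 30 fps) as photos."""
    return frames[::every_nth]

# 90 decoded frames, labeled for illustration.
frames = [f"frame_{i}" for i in range(90)]
photos = extract_predetermined_frames(frames, every_nth=30)
```

The selected frames can then be saved in correspondence with the recognized identification information exactly as ordinary photos are.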
- the predetermined condition is that a person in the predetermined area is detected to exhibit at least one of the following kinds of information: gesture information, mouth-shape information, and body-shape information.
- the first display unit includes: a sorting module, configured to sort part or all of the found photos when they exceed a predetermined number; and a second display module, configured to show some or all of the sorted photos to the user.
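The sorting and display modules can be sketched as ranking the found photos by a quality score and keeping at most the predetermined number. The score values below are illustrative, and the particular metric is an assumption; the source leaves the sorting criterion open here.

```python
def select_photos(photos, scores, predetermined_number=3):
    """Sort photos by descending score; keep at most the predetermined number."""
    ranked = sorted(zip(photos, scores), key=lambda pair: pair[1], reverse=True)
    return [photo for photo, _ in ranked[:predetermined_number]]

photos = ["a.jpg", "b.jpg", "c.jpg", "d.jpg"]
scores = [60, 95, 80, 70]
best = select_photos(photos, scores)
```

With four photos found and a predetermined number of three, only the three highest-scoring photos are shown to the user.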
- FIG. 12 is a schematic diagram of the video display processing device according to an embodiment of the present application.
- the video display processing device includes: a third acquisition unit 1201, a second identification unit 1202, a second storage unit 1203, a fourth acquisition unit 1204, a second search unit 1205, and a second display unit 1206.
- the video display processing device will be described in detail below.
- the third acquisition unit 1201 is configured to acquire a video, where the video is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition.
- the second recognition unit 1202 is used for recognizing a person from the video, and recognizing identification information for identifying the person from the person, wherein the identification information is unique within a predetermined area.
- the second saving unit 1203 is configured to correspondingly save the video and the identification information recognized from the video.
- the fourth obtaining unit 1204 is configured to obtain user identification information.
- the second searching unit 1205 is configured to search for a corresponding video according to the user's identification information.
- the second display unit 1206 is used to display the found video to the user.
- the above-mentioned third acquisition unit 1201, second identification unit 1202, second storage unit 1203, fourth acquisition unit 1204, second search unit 1205, and second display unit 1206 correspond to steps S1002 to S1012 in the embodiment. The examples and application scenarios implemented by the foregoing modules and corresponding steps are the same, but are not limited to the content disclosed in the foregoing embodiments. It should be noted that, as part of the device, the above-mentioned modules can be executed in a computer system, such as a set of computer-executable instructions.
- the third acquisition unit can be used to acquire the video, where the video is acquired from at least one collection device distributed in a predetermined area and the at least one collection device is triggered to shoot by a predetermined condition. The second identification unit is then used to recognize the person from the video and to recognize, from the person, the identification information used to identify that person, where the identification information is unique within the predetermined area. The second storage unit saves the video in correspondence with the identification information recognized from it; the fourth acquisition unit obtains the user's identification information; the second search unit finds the corresponding video according to the user's identification information; and finally the second display unit shows the found video to the user.
- with the video display processing device provided by the embodiment of the present application, the purpose of automatically showing the user the videos taken of them in a predetermined area is realized, and at the same time the technical effect of improving the efficiency of showing such videos to the user is achieved. Furthermore, it solves the technical problem, in the related art, of the low efficiency of the approach of filming the user in a predetermined area and showing the obtained video to the user.
- a storage medium includes a stored program, where the program, when running, executes any one of the above-mentioned photo display processing methods or video display processing methods.
- a processor is configured to run a program, where the program, when running, executes any one of the above-mentioned photo display processing methods or video display processing methods.
- an electronic device may also be provided, and the electronic device may be any terminal device in a group of electronic terminal devices.
- the above-mentioned electronic device terminal may also be replaced with a terminal device such as a computer.
- the above-mentioned electronic device may be located in at least one network device among a plurality of network devices in the electronic device network.
- the above-mentioned electronic device can execute the program code of the following steps of the photo display processing method: obtaining a photo, where the photo is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; recognizing a person from the photo, and recognizing from the person the identification information used to identify that person, where the identification information is unique within the predetermined area; saving the photo in correspondence with the identification information recognized from it; obtaining the user's identification information; finding the corresponding photo according to the user's identification information; and showing the found photo to the user.
- FIG. 13 is a structural block diagram of an electronic device according to an embodiment of the present application.
- the electronic device 1301 may include: one or more processors 1302 (only one is shown in the figure), a memory 1303, and a transmission device 1304.
- the memory can be used to store software programs and modules, such as the program instructions/modules corresponding to the photo display processing method and device in the embodiments of the present application.
- the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, that is, it implements the above-mentioned photo display processing method.
- the memory may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
- the memory may further include a memory remotely provided with respect to the processor, and these remote memories may be connected to the electronic device 1301 through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
- the processor may call the information and application programs stored in the memory through the transmission device to perform the following steps: obtain a photo, where the photo is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; recognize a person from the photo, and recognize from the person the identification information used to identify that person, where the identification information is unique within the predetermined area; save the photo in correspondence with the identification information recognized from it; obtain the user's identification information; find the corresponding photo according to the user's identification information; and show the found photo to the user. And/or, the memory is connected to the processor and provides the processor with instructions for the following processing steps: obtain a video, where the video is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; recognize a person from the video, and recognize from the person the identification information used to identify that person, where the identification information is unique within the predetermined area; save the video in correspondence with the identification information recognized from it; obtain the user's identification information; find the corresponding video according to the user's identification information; and show the found video to the user.
- the photo or video can be saved in correspondence with the identification information recognized from it; the user's identification information is then obtained, the corresponding photo or video is found based on it, and the photo or video is shown to the user. This realizes the purpose of automatically showing the user the photos or videos taken of them in the predetermined area, and at the same time achieves the technical effect of improving the efficiency of showing such photos or videos to the user.
- the photo or video display processing method provided in the embodiments of the present application solves the technical problem of low efficiency in the related art of taking a picture of a user in a predetermined area and displaying the obtained photo or video to the user.
- the structure shown in Fig. 13 is only for illustration, and the electronic device may also be a terminal device such as a smart phone (for example, an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
- FIG. 13 does not limit the structure of the above electronic device.
- the electronic device 1301 may also include more or fewer components than shown in Fig. 13 (such as a network interface or a display device), or have a configuration different from that shown in Fig. 13. For example, it may also include a display, a user interface, and different network interfaces, such as the IEEE 802.11 network interface, the IEEE 802.16 network interface, and the 3GPP interface shown in Fig. 13.
- the program can be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, and the like.
- the disclosed technical content can be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units may be a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of this application in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product, and the computer software product is stored in a storage medium.
- a computer device which may be a personal computer, a server, or a network device, etc.
- the aforementioned storage media include: a USB flash drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, an optical disk, and other media that can store program code.
Description
Item | Image A | Image B | Image C | Image D | Weight |
---|---|---|---|---|---|
Collection device | 70 | 70 | 100 | 100 | 1 |
Image composition template | 60 | 70 | 80 | 100 | 2 |
Sharpness | 60 | 80 | 80 | 100 | 1 |
Eye state of the featured subject | 0 | 100 | 100 | 100 | 5 |
Expression richness of the featured subject | 70 | 70 | 90 | 50 | 0.5 |
Focus accuracy of the featured subject | 80 | 100 | 90 | 100 | 0.5 |
White-balance state of the featured subject | 50 | 100 | 100 | 100 | 1 |
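Assuming the scoring table above feeds a weighted sum (each item's score multiplied by its weight, then summed), a per-image quality score can be computed as follows. The aggregation rule is an assumption; the item scores and weights are those listed for Image A.

```python
# Weights per quality item, as given in the table's Weight column.
weights = {
    "collection device": 1, "composition template": 2, "sharpness": 1,
    "eye state": 5, "expression richness": 0.5,
    "focus accuracy": 0.5, "white balance": 1,
}

# Item scores for Image A from the table.
image_a = {
    "collection device": 70, "composition template": 60, "sharpness": 60,
    "eye state": 0, "expression richness": 70,
    "focus accuracy": 80, "white balance": 50,
}

def weighted_score(scores, weights):
    # Sum of score x weight over all items.
    return sum(scores[item] * weights[item] for item in weights)

score_a = weighted_score(image_a, weights)
```

Under this rule, Image A's closed eyes (eye state 0 with the heaviest weight, 5) drag its total down sharply, which matches the table's apparent intent of penalizing closed-eye shots when ranking candidate photos.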
Claims (11)
- A photo display processing method, comprising: obtaining a photo, wherein the photo is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; recognizing a person from the photo, and recognizing from the person identification information used to identify the person, wherein the identification information is unique within the predetermined area; saving the photo in correspondence with the identification information recognized from the photo; obtaining identification information of a user; finding a corresponding photo according to the identification information of the user; and showing the found photo to the user.
- The method according to claim 1, wherein recognizing from the person the identification information used to identify the person comprises: recognizing from the person attachments on the person's body and/or biological characteristics of the person; and using feature information of the attachments and/or feature information of the biological characteristics as the identification information used to identify the person; and obtaining the identification information of the user comprises: obtaining attachments of the user and/or biological characteristics of the user, and using the feature information corresponding to the biological characteristics as the identification information of the user; wherein the attachments comprise at least one of the following: clothing, accessories, and hand-held items; the attachments are used to uniquely identify the person in the predetermined area; and the biological characteristics of the person comprise one of the following: facial features and body-shape features.
- The method according to claim 1, wherein, after the identification information of the user is obtained, finding the corresponding photo according to the identification information of the user comprises: finding, according to the identification information of the user, feature information of one or more persons corresponding to the identification information; and finding, according to the feature information of the one or more persons, photos of the one or more persons as the photos corresponding to the identification information of the user.
- The method according to claim 2, wherein, when the attachments on the person's body include a white area, saving the photo in correspondence with the identification information recognized from the photo comprises: adjusting the white balance of the photo according to the white area; and saving the adjusted photo in correspondence with the identification information recognized from the photo.
- The method according to claim 1, wherein, when the at least one collection device shoots a video under a trigger condition, obtaining the photo comprises: extracting a predetermined frame from the video as the photo; and/or, in addition to showing the found photo to the user, part or all of the content of the video is also shown to the user.
- The method according to claim 1, wherein the predetermined condition is that a person in the predetermined area is detected to exhibit at least one of the following kinds of information: gesture information, mouth-shape information, and body-shape information.
- The method according to any one of claims 1 to 6, wherein showing the found photos to the user comprises: sorting part or all of the found photos when the found photos exceed a predetermined number; and showing some or all of the sorted photos to the user.
- A video display processing method, comprising: obtaining a video, wherein the video is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; recognizing a person from the video, and recognizing from the person identification information used to identify the person, wherein the identification information is unique within the predetermined area; saving the video in correspondence with the identification information recognized from the video; obtaining identification information of a user; finding a corresponding video according to the identification information of the user; and showing the found video to the user.
- A photo display processing device, comprising: a first acquisition unit configured to obtain a photo, wherein the photo is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; a first identification unit configured to recognize a person from the photo and to recognize from the person identification information used to identify the person, wherein the identification information is unique within the predetermined area; a first storage unit configured to save the photo in correspondence with the identification information recognized from the photo; a second acquisition unit configured to obtain identification information of a user; a first search unit configured to find a corresponding photo according to the identification information of the user; and a first display unit configured to show the found photo to the user.
- A video display processing device, comprising: a third acquisition unit configured to obtain a video, wherein the video is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; a second identification unit configured to recognize a person from the video and to recognize from the person identification information used to identify the person, wherein the identification information is unique within the predetermined area; a second storage unit configured to save the video in correspondence with the identification information recognized from the video; a fourth acquisition unit configured to obtain identification information of a user; a second search unit configured to find a corresponding video according to the identification information of the user; and a second display unit configured to show the found video to the user.
- An electronic device, comprising: a processor; and a memory connected to the processor and configured to provide the processor with instructions for the following processing steps: obtaining a photo, wherein the photo is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; recognizing a person from the photo, and recognizing from the person identification information used to identify the person, wherein the identification information is unique within the predetermined area; saving the photo in correspondence with the identification information recognized from the photo; obtaining identification information of a user; finding a corresponding photo according to the identification information of the user; and showing the found photo to the user; and/or, the memory, connected to the processor, is further configured to provide the processor with instructions for the following processing steps: obtaining a video, wherein the video is obtained from at least one collection device distributed in a predetermined area, and the at least one collection device is triggered to shoot by a predetermined condition; recognizing a person from the video, and recognizing from the person identification information used to identify the person, wherein the identification information is unique within the predetermined area; saving the video in correspondence with the identification information recognized from the video; obtaining identification information of the user; finding a corresponding video according to the identification information of the user; and showing the found video to the user.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911045830.5A CN112749290A (zh) | 2019-10-30 | 2019-10-30 | Photo display processing method and apparatus, and video display processing method and apparatus |
CN201911045830.5 | 2019-10-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021083004A1 true WO2021083004A1 (zh) | 2021-05-06 |
Family
ID=75641760
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/122485 WO2021083004A1 (zh) | 2019-10-30 | 2020-10-21 | Photo display processing method and apparatus, and video display processing method and apparatus |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112749290A (zh) |
WO (1) | WO2021083004A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI804421B (zh) * | 2022-08-23 | 2023-06-01 | 李玟鴻 | Wedding photography service system |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113837114A (zh) * | 2021-09-27 | 2021-12-24 | 浙江力石科技股份有限公司 | Method and system for collecting face video clips in a scenic area |
CN115103206B (zh) * | 2022-06-16 | 2024-02-13 | 北京字跳网络技术有限公司 | Video data processing method, apparatus, device, system, and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108419027A (zh) * | 2018-02-28 | 2018-08-17 | 深圳春沐源控股有限公司 | Intelligent photographing method and server |
CN108777764A (zh) * | 2018-06-27 | 2018-11-09 | 合肥草木皆兵环境科技有限公司 | Intelligent scenic-spot photographing and uploading system and method |
CN109948423A (zh) * | 2019-01-18 | 2019-06-28 | 特斯联(北京)科技有限公司 | UAV tourism companion service method using face and posture recognition, and UAV |
CN110033345A (zh) * | 2019-03-13 | 2019-07-19 | 庄庆维 | Tourist video service method and system |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SG10201504080WA (en) * | 2015-05-25 | 2016-12-29 | Trakomatic Pte Ltd | Method and System for Facial Recognition |
CN106559654A (zh) * | 2016-11-18 | 2017-04-05 | 广州炫智电子科技有限公司 | Face recognition monitoring and collection system and control method thereof |
CN106708994A (zh) * | 2016-12-16 | 2017-05-24 | 维沃移动通信有限公司 | Picture selection method and mobile terminal |
CN108388672B (zh) * | 2018-03-22 | 2020-11-10 | 西安艾润物联网技术服务有限责任公司 | Video search method and apparatus, and computer-readable storage medium |
CN109087157A (zh) * | 2018-06-08 | 2018-12-25 | 成都第二记忆科技有限公司 | Sales service system and method for photographic and video works, and business model |
CN109905595B (zh) * | 2018-06-20 | 2021-07-06 | 成都市喜爱科技有限公司 | Shooting and playback method, apparatus, device, and medium |
-
2019
- 2019-10-30 CN CN201911045830.5A patent/CN112749290A/zh active Pending
-
2020
- 2020-10-21 WO PCT/CN2020/122485 patent/WO2021083004A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN112749290A (zh) | 2021-05-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20882019; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20882019; Country of ref document: EP; Kind code of ref document: A1 |
 | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.10.2022) |