CN110049094B - Information pushing method and offline display terminal - Google Patents

Information pushing method and offline display terminal

Info

Publication number
CN110049094B
Authority
CN
China
Prior art keywords
information
image
user
offline
push
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910153386.2A
Other languages
Chinese (zh)
Other versions
CN110049094A (en)
Inventor
孙健康
林锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd
Priority to CN201910153386.2A
Publication of CN110049094A
Application granted
Publication of CN110049094B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 - Methods for optical code recognition
    • G06K7/1408 - Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417 - 2D bar codes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/50 - Network services
    • H04L67/55 - Push-based network services
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W76/00 - Connection management
    • H04W76/10 - Connection setup
    • H04W76/14 - Direct-mode setup
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/70 - Reducing energy consumption in communication networks in wireless communication networks

Abstract

The embodiments of this specification provide an information pushing method and an offline display terminal. After the offline display terminal establishes a connection, via near field communication, with a first client corresponding to a first user, it feeds the acquired identity information of the first user back to the server, and the server determines push information for the first user according to the first user's online information; meanwhile, the offline display terminal acquires live image information of the first user through an offline image acquisition unit and generates display information from it. The push information and the display information are then displayed in combination, fusing online and offline information, which makes the information pushing process more engaging and more personalized, and can therefore improve the effectiveness of information pushing.

Description

Information pushing method and offline display terminal
Technical Field
One or more embodiments of the present disclosure relate to the field of computer technology, and in particular to a computer-implemented method for pushing information for an application, and to an offline display terminal.
Background
With the development of internet technology, information push is applied more and more widely. Information push, which may also be referred to as "web broadcasting," is a technology for reducing information overload by periodically transmitting, over the internet and according to a certain technical standard or protocol, the information that users require. Push technology reduces the time spent searching the network by automatically delivering information to the user.
Information pushing channels are becoming increasingly diverse. When information is pushed through offline devices, adding some engaging interaction can improve the user experience, increase user stickiness, and achieve a better pushing effect. Accordingly, improved schemes are desired that combine offline information push with online information to provide more effective information pushing.
Disclosure of Invention
The method and the device for pushing information, which are described in one or more embodiments of the present specification, can be used for solving one or more problems mentioned in the background section.
According to a first aspect, a method for pushing information is provided, which is applied to an offline display terminal, and the method includes: establishing connection with a first client corresponding to a first user by using a near field communication mode, wherein the first user interacts with a server through the first client; receiving identity information of the first user from the first client; sending the identity information to the server side, so that the server side can determine at least one piece of pushing information aiming at the first user from a plurality of pieces of information to be pushed according to the data record corresponding to the identity information; acquiring the scene image information of the first user; generating presentation information for the first user based on the live image information; and receiving the push information from the server side, and combining the push information with the display information for displaying.
In some embodiments, the near field communication mode is any one of the following modes: bluetooth low energy, Bluetooth, wifi, nfc.
In some embodiments, the live image information comprises a plurality of image frames, wherein the generating presentation information for the first user based on the live image information comprises: generating presentation information for the first user by fusing the plurality of image frames.
In some embodiments, the display information is a target image for displaying a target part of a human body; the generating presentation information for the first user based on the live image information comprises: detecting the human body target part aiming at each image frame in the field image information to obtain each state image of the human body target part; fusing each state image into a preset number of target images through a predetermined image fusion algorithm, and taking the target images as the display information.
In some embodiments, the human target site comprises one or more of: human face parts and limb parts.
In some embodiments, said fusing the respective state images into a predetermined number of target images by a predetermined image fusion algorithm comprises: extracting a plurality of feature points of the human body target part from each state image respectively; and mapping each state image to the target image according to the correspondence among the plurality of feature points.
In some embodiments, said acquiring live image information of said first user further comprises: acquiring first field image information, and detecting a preset identification part of a field user from the first field image information; sending the preset identification part of the field user to the server, so that the server can verify the preset identification part of the field user on the basis of the preset identification part of the first user which is prestored; and determining the first live image information as the live image information of the first user when the verification is passed.
In some embodiments, in the event that a plurality of faces are detected from the live image information, the generating presentation information for the first user based on the live image information comprises: matching each detected face image with a first face image which is received from the server and corresponds to the first user; and selecting the face image matched with the first face image as the face image to be fused.
In some embodiments, said generating presentation information for said first user based on said live image information comprises: filtering one or more image frames contained in the live image information according to a predetermined condition, the predetermined condition including at least one of: the image frame contains the human body target part; the image frame is a clear image; and generating the presentation information for the first user based on the screened image frames.
In some embodiments, before generating presentation information for the first user based on the live image information, the method further comprises: performing, on one or more image frames in the live image information, at least one of cropping that retains the human body target part and background removal.
In some embodiments, said presenting said push information in combination with said presentation information comprises at least one of: displaying the push information and the display information in a superposition manner; and displaying the push information and the display information in a carousel manner.
In some embodiments, the push information includes a two-dimensional code picture, so that the first user scans the two-dimensional code picture through the first client to accept information push for the push information.
According to a second aspect, an offline display terminal for information push is provided, the offline display terminal comprising: the communication device comprises a first communication unit, a second communication unit and a server, wherein the first communication unit is configured to establish connection with a first client corresponding to a first user in a near field communication mode, and the first user interacts with the server through the first client;
the first communication unit is further configured to receive identity information of the first user from the first client;
the second communication unit is configured to send the identity information to the server, so that the server determines at least one piece of push information for the first user from a plurality of pieces of information to be pushed according to a data record corresponding to the identity information;
an image acquisition unit configured to acquire live image information of the first user;
an image processing unit configured to generate presentation information for the first user based on the live image information;
the display unit is configured to combine the push information received from the server through the second communication unit with the display information for displaying.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect described above.
According to a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has stored therein executable code, and wherein the processor, when executing the executable code, implements the method of the first aspect.
With the information pushing method and the offline display terminal provided in the embodiments of this specification, after the offline display terminal establishes a connection with the first client corresponding to the first user via near field communication, on the one hand it acquires the identity information of the first user and feeds it back to the server, which determines the push information for the first user according to the first user's online information; on the other hand, an offline image acquisition unit acquires live image information of the first user, from which display information is generated. The combined display of the push information and the display information then fuses online and offline information and makes the pushing process more engaging, so that information push becomes more personalized and more effective.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 illustrates a schematic diagram of an application scenario of an embodiment of the present description;
fig. 2 shows a flow diagram of the execution of an information push according to one embodiment;
FIG. 3 illustrates a push information presentation diagram according to one embodiment;
FIG. 4 shows a push information presentation diagram according to yet another embodiment;
FIG. 5 is a flowchart illustrating interaction between terminals according to a specific example of information pushing;
fig. 6 shows a schematic block diagram of an offline presentation terminal for information push according to one embodiment.
Detailed Description
The scheme provided by this specification is described below with reference to the accompanying drawings. For convenience of explanation, the specific application scenario of an embodiment of this specification shown in fig. 1 is described first. Fig. 1 shows a scenario in which a user interacts with an offline display terminal 103 through a connection established by a first client 102. The server 101 may communicate with the first client 102 and the offline display terminal 103 through a network.
Here, it may be assumed that the first client 102 is a client of a first application. The first client 102 may be understood as a terminal (e.g., a smart phone) on which a first application (e.g., a shopping APP, a Web application, etc.) runs, or as the first application itself running on the terminal, which is not limited herein. The server 101 provides services for the first application. In one embodiment, where the first client 102 is understood as the first application itself, the first client 102 may be a client identified by the user connected with the server 101. For example, a first client may be identified by the username "Zhang San" and a second client by the username "Li Si". In this specification, the client that establishes a connection with the offline presentation terminal 103 is referred to as the first client.
The offline presentation terminal 103 may be one of the presentation terminals in a distributed presentation system of the server 101. For example, in a specific scenario involving a clothing brand, the server 101 is the server of an APP developed by the brand, and an offline display terminal 103 is deployed in each of the brand's offline stores. A user can download the APP through a mobile phone and register, so that the APP serves as the first client 102, and the server 101 can display information about the brand or about the current store through the offline display terminal 103 (such as an electronic shopping guide board) of that store.
The offline presentation terminal 103 may also be another client of the server 101. For example, in a specific scenario involving a shopping APP, the user of the first client 102 is a consumer and the user of the offline presentation terminal 103 is a merchant. The offline display terminal 103 may then be, for example, a billboard installed by the merchant at a promotion site.
In other specific scenarios, the offline presentation terminal 103 may take other forms. In short, the offline display terminal 103 can communicate with the server 101 through a network and push information while interacting with the user corresponding to the first client 102.
The implementation of the technical solution is described in detail below.
Fig. 2 shows a process of pushing information to a first user corresponding to a first client through an offline presentation device. The information pushing method shown in fig. 2 is performed by an offline display terminal and specifically includes the following steps: step 202, establishing a connection with a first client corresponding to a first user by means of near field communication, wherein the first user is connected with a server through the first client; step 204, receiving identity information of the first user from the first client; step 206, sending the identity information to the server, so that the server determines at least one piece of push information for the first user from a plurality of pieces of information to be pushed according to the data record corresponding to the identity information; step 208, collecting live image information of the first user; step 210, generating display information for the first user according to the live image information; and step 212, receiving the push information from the server and combining it with the display information for display.
First, in step 202, a connection is established with the first client corresponding to the first user by means of near field communication. The first user interacts with the server through the first client. The offline display terminal and the first client may support the same near field communication mode, such as Bluetooth, Bluetooth Low Energy, wifi, nfc, and the like.
When the near field communication mode is wifi, the first client and the offline display terminal are in the same local area network. The offline display terminal can therefore broadcast its identity identifier, or a connection request, to nearby clients within the local area network. The identity identifier can be used to identify the current offline display terminal.
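For illustration only, the LAN broadcast of the terminal's identity identifier could be sketched in Python roughly as follows; the broadcast port, the message format and the terminal identifier are assumptions of this sketch rather than part of the scheme.

```python
import json
import socket
import time

TERMINAL_ID = "offline-terminal-001"          # assumed identifier format
BROADCAST_ADDR = ("255.255.255.255", 40000)   # assumed LAN broadcast port

def broadcast_identity(interval_s: float = 1.0, rounds: int = 10) -> None:
    """Periodically broadcast this terminal's identity identifier on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    message = json.dumps({"terminal_id": TERMINAL_ID, "type": "connect_request"}).encode()
    try:
        for _ in range(rounds):
            sock.sendto(message, BROADCAST_ADDR)  # nearby clients on the LAN can receive this
            time.sleep(interval_s)
    finally:
        sock.close()

if __name__ == "__main__":
    broadcast_identity()
```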
Bluetooth Low Energy (Bluetooth LE, BLE) is a personal area network technology designed and marketed by the Bluetooth Special Interest Group; compared with classic Bluetooth, it aims to significantly reduce power consumption and cost while maintaining a similar communication range. When the near field communication mode is Bluetooth Low Energy, the offline display terminal can broadcast its identity identifier outward. When a first user located within the signal range of the offline display terminal (for example, within a straight-line distance of 10 meters) opens the first client, the first client receives the broadcast information of the offline display terminal and establishes a connection with it.
It should be noted that, while the first client is establishing a connection with the offline display terminal, other clients may also receive the broadcast information of the offline display terminal and request connections. One push may serve only a limited number of users, so the offline display terminal may stop broadcasting its identity identifier while establishing the connection with the first client; alternatively, when establishing a connection with the first client, it may first detect whether it is already connected to a predetermined number (e.g., 1) of other clients and, if so, indicate a busy or queuing state. This ensures a normal connection between the offline display terminal and whichever client connected first.
After the offline display terminal establishes a connection with the first client, on the one hand it receives the identity information of the first user from the first client in step 204 and sends the received identity information to the server in step 206. The identity information is used to identify the first user and may include at least one of the following: an account number (ID), a user name, an avatar, and so on. After receiving the identity information, the server may determine the push information for the first user according to the data record corresponding to the identity information, such as the user's historical behavior record or registration information record. The server may store a plurality of pieces of information to be pushed, such as commodity information, in advance.
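As a rough sketch of how the offline display terminal might forward the identity information to the server, the following Python snippet uses an HTTP POST; the endpoint URL and the field names are assumptions made only for this example.

```python
import requests

SERVER_URL = "https://server.example.com/api/push-candidates"  # assumed endpoint

def request_push_info(identity: dict) -> list:
    """Send the first user's identity information and return the server's push information."""
    # identity may contain, e.g., {"account_id": "...", "user_name": "...", "avatar_url": "..."}
    resp = requests.post(SERVER_URL, json=identity, timeout=5)
    resp.raise_for_status()
    return resp.json().get("push_items", [])
```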
In one embodiment, the server may select a predetermined number of pieces of information that are relatively popular from the pieces of information to be pushed, or select a predetermined number of pieces of information from at least one preference category set by the first user, respectively, as the pieces of information to be pushed for the first user.
In another embodiment, the server may further obtain, according to the identity information, historical operation data of the first user (e.g., browsing and ordering records) and/or registration information (e.g., age, gender, preferences), or build a user portrait from keywords extracted from that data, and then select, in a targeted manner, a predetermined number of pieces of information, or at least the pieces of information to be pushed whose matching degree with the user portrait of the first user exceeds a predetermined threshold, as the push information for the first user. In this way, the determined push information can be made more personalized and more targeted.
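One simplified way for the server to keep only candidates whose match with the user portrait exceeds a threshold is sketched below; the keyword-overlap score is merely a stand-in for whatever matching model is actually used, and the threshold is an assumed value.

```python
def select_push_info(user_keywords: set, candidates: list,
                     threshold: float = 0.3, top_k: int = 3) -> list:
    """Score each candidate by keyword overlap with the user portrait and keep the best matches."""
    scored = []
    for item in candidates:
        item_keywords = set(item["keywords"])
        if not item_keywords:
            continue
        score = len(user_keywords & item_keywords) / len(item_keywords)  # simple overlap ratio
        if score >= threshold:
            scored.append((score, item))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:top_k]]

# Example: portrait keywords extracted from history and registration records
push_items = select_push_info(
    {"scarf", "winter", "female"},
    [{"name": "wool scarf", "keywords": ["scarf", "winter"]},
     {"name": "sunglasses", "keywords": ["summer"]}],
)
```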
The push information may be text information, e.g., "West Hill"; character information, e.g., a purchase link http://……; or image information, such as a picture of clothes, which is not limited herein.
On the other hand, the offline terminal may also collect the live image information of the first user through step 208. Wherein the live image information may comprise one or more image frames. The image frame here can be understood as a picture, and can also be understood as an image frame in a video. The live image information may be, for example, a video for a certain period of time (e.g., 5 seconds), a plurality of images captured at predetermined time intervals, or the like. The field image information may be acquired by an image acquisition device such as a camera, or may be acquired from the first client, which is not limited herein.
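A minimal OpenCV sketch of collecting the live image information as a handful of frames captured at fixed intervals is given below; the camera index and the 0.5-second interval are assumed values.

```python
import time
import cv2

def capture_frames(num_frames: int = 10, interval_s: float = 0.5, camera_index: int = 0) -> list:
    """Grab a fixed number of frames from the terminal's camera at regular intervals."""
    cap = cv2.VideoCapture(camera_index)
    frames = []
    try:
        for _ in range(num_frames * 3):   # allow a few failed reads before giving up
            ok, frame = cap.read()
            if ok:
                frames.append(frame)
            if len(frames) >= num_frames:
                break
            time.sleep(interval_s)
    finally:
        cap.release()
    return frames
```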
After the live image information of the first user is collected, the display information may be generated for the first user based on the live image information in step 210. It is to be understood that the presentation information herein is information associated with the first user that can be presented via a display device such as a display screen. The presentation information may include, for example, a target image for presenting a human target part of the first user, a motion picture in accordance with the motion of the first user, and the like.
According to an embodiment of an aspect, the live image information includes an image frame (picture), and in this case, the image frame may be directly used as the presentation information, or the image frame may be subjected to predetermined processing, such as background removal, cutting of a predetermined target (e.g., a predetermined portion of a human body), and the like, and the processed image frame may be used as the presentation information.
According to another embodiment, the live image information includes a plurality of image frames, and the plurality of image frames may be fused to generate a predetermined number of pictures, or a video or animated picture of a certain duration, as the presentation information of the first user. An image fusion algorithm combines multiple images into one image, for example by spatial-domain fusion or transform-domain fusion. Taking transform-domain fusion as an example, the process of fusing two face images I and J into an image M is described below.
First, a predetermined number of feature points, and the triangular mesh formed by them, are marked in image I and image J respectively. The feature points may be points outlining key parts of a human face, such as the face contour, the eyes and the nose. The triangles are delineated by feature points at predetermined locations.
Next, a transformation relationship is determined. Suppose a feature point in image I has coordinates (x_i, y_i) and the corresponding feature point in image J has coordinates (x_j, y_j). The coordinates (x_m, y_m) of the corresponding feature point in the fused image M can be determined by a fusion weight a, e.g., x_m = (1 - a)·x_i + a·x_j and y_m = (1 - a)·y_i + a·y_j. It can be seen that when a = 0 the fused image coincides with image I, and when a = 1 the fused image coincides with image J. Then, according to the correspondence between triangle vertices, an affine transformation matrix between image I and image M, and an affine transformation matrix between image J and image M, are determined respectively. The affine transformation matrices may be determined by calling a method such as getAffineTransform, which is not described in detail here. With the affine transformation matrices, each pixel within the corresponding triangular region can be processed separately.
Then, based on the above affine transformation results, the fused image M is determined, i.e., (x_m, y_m) = (1 - a)·(x_i, y_i) + a·(x_j, y_j). Thus, a fused face image can be obtained.
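The per-triangle affine mapping and weighted blending described above can be sketched with OpenCV roughly as follows. The sketch assumes that corresponding feature-point triangles have already been obtained for image I and image J, and for brevity it warps the whole images rather than only the cropped triangles.

```python
import cv2
import numpy as np

def fuse_triangle(img_i, img_j, tri_i, tri_j, a, out):
    """Warp one triangle from image I and image J to the blended position and mix them with weight a."""
    tri_i = np.float32(tri_i)                    # 3 corresponding vertices in image I
    tri_j = np.float32(tri_j)                    # 3 corresponding vertices in image J
    tri_m = (1 - a) * tri_i + a * tri_j          # blended vertex positions in the fused image M

    h, w = out.shape[:2]
    m_i = cv2.getAffineTransform(tri_i, tri_m)   # affine matrix: image I -> image M
    m_j = cv2.getAffineTransform(tri_j, tri_m)   # affine matrix: image J -> image M
    warp_i = cv2.warpAffine(img_i, m_i, (w, h))
    warp_j = cv2.warpAffine(img_j, m_j, (w, h))

    # Restrict the blend to this triangle's region using a mask
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(tri_m), 255)
    blended = cv2.addWeighted(warp_i, 1 - a, warp_j, a, 0)
    out[mask > 0] = blended[mask > 0]
    return out
```

Calling fuse_triangle once for every triangle of the facial mesh reproduces, vertex by vertex, the relationship (x_m, y_m) = (1 - a)·(x_i, y_i) + a·(x_j, y_j) given above.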
In one embodiment, the display information is a target image for displaying a target portion of a human body. At this time, for each image frame in the live image information, the human target part in the image frame can be detected first to obtain each state image of the human target part; and fusing each state image into one or more target images through a predetermined image fusion algorithm, and taking the target images as display information.
The target part of the human body can be a human face part, an extremity part and the like. Taking the human target region as a human face region as an example, the respective state images may be, for example, a side face image, a front face image, a blink image, a smile image, and the like.
It can be understood that, while the offline display terminal acquires live image information through the camera, a plurality of faces may be detected in one acquired image because of the distance and angle between the first user and the offline display terminal, and because of other people in the surrounding scene. Therefore, in one embodiment, the offline display terminal may first obtain the first face image corresponding to the first user from the server. The first face image may be, for example, an authentication image that the first user stored at the server during registration. Then, for each image frame, the offline display terminal may match the detected face images against the first face image and select the successfully matched face image as the face image to be fused.
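Selecting, among several detected faces, the one that matches the first face image received from the server might be sketched as below; embed_face is a hypothetical helper standing in for whatever face-recognition model is actually used, and the similarity threshold is an assumed value.

```python
import numpy as np

def embed_face(face_img) -> np.ndarray:
    """Hypothetical helper: map a face crop to a feature vector with some face-recognition model."""
    raise NotImplementedError("replace with a real face-embedding model")

def pick_matching_face(detected_faces: list, first_face_img, min_similarity: float = 0.6):
    """Return the detected face most similar to the first user's reference face, if similar enough."""
    ref = embed_face(first_face_img)
    best_face, best_sim = None, -1.0
    for face in detected_faces:
        emb = embed_face(face)
        sim = float(np.dot(ref, emb) / (np.linalg.norm(ref) * np.linalg.norm(emb)))  # cosine similarity
        if sim > best_sim:
            best_face, best_sim = face, sim
    return best_face if best_sim >= min_similarity else None
```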
In another embodiment, the image frames contained in the live image information may be further filtered according to predetermined conditions, which include at least one of: the image frame contains a human target part; the image frame is a clear (non-blurred) image. The screened image frames are then fused to generate the display information. In this way, invalid frames that do not contain a human target part (such as a face or a palm) and blurred frames can be filtered out, making the image fusion more accurate.
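Filtering out blurred frames is commonly done with the variance of the Laplacian, as in the following sketch; the threshold value is an assumption that would need tuning for a real deployment.

```python
import cv2

def keep_sharp_frames(frames: list, blur_threshold: float = 100.0) -> list:
    """Discard frames whose Laplacian variance is low, i.e. frames that look blurred."""
    sharp = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_threshold:
            sharp.append(frame)
    return sharp
```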
In an alternative implementation, the selected image frames may be further processed and then fused, for example, by cropping to retain only the desired target portion of the human body, background removal, and so on.
According to an alternative embodiment, the presentation information may be a presentation image. In the image fusion process, all image frames may be fused to generate a display image, or a plurality of image frames having a similar state (such as a smile state) may be fused to generate a plurality of display images, or adjacent predetermined number of image frames may be fused to generate a display image, which is not limited herein.
It can be understood that the server may store identification information of the user in advance, such as a face image or a fingerprint image. According to one possible design, before step 210, the identity of the first user may be verified with the help of the server. Specifically, the offline display terminal may detect a predetermined recognition part of the on-site user, such as a human face, from acquired first live image information. The first live image information may be a picture or a video and, accordingly, may include one or more image frames. The image of the predetermined recognition part of the on-site user can then be sent to the server, so that the server compares it with the pre-stored predetermined recognition part of the first user and verifies whether the acquired image information belongs to the first user. If the two are consistent, the verification passes. When the verification passes, the first live image information is determined to be the live image information of the first user, and step 210 is performed.
In step 212, the push information is received from the server and combined with the display information for display. It can be understood that, in step 206, the offline display terminal sends the identity information of the first user to the server, and the server may send the push information back once it has been determined. As such, the offline presentation terminal may receive the push information from the server at any time before, during, or after steps 208 and 210, which is not limited herein. The offline display terminal then combines the push information with the display information for display.
The push information may include text information, two-dimensional code information, a hyperlink or website information, which is not limited herein. The user can follow such information to accept the corresponding information push. For example, when the push information includes a two-dimensional code picture, the first user can scan the two-dimensional code picture through the first client, so that a commodity purchase page is displayed on the first client, or store reservation information is obtained, and so on.
In one embodiment, the push information and the presentation information can be presented as an overlay. As an example, as shown in fig. 3, the human face 301 is presentation information generated by the offline presentation terminal for the first user, and the scarf 302 is push information determined by the server for the first user. The push information may further include a two-dimensional code picture 303. The display information and the push information, superimposed, can be displayed in the form of fig. 3. This form suits scenes such as shopping guide boards: because the face 301 is an image collected on site from the first user, and the scarf 302 is push information determined by the server according to the first user's information, the push information can take into account characteristics of the user such as age and preferences and is therefore highly targeted; superimposing the two gives a rough preview of the commodity on the user. The two-dimensional code picture 303 can contain a purchase page link for the scarf 302, and can also indicate where the scarf 302 is placed in the offline physical store. In this way, the merchant's products can be pre-screened for the user, achieving smarter and more efficient offline recommendation.
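The superimposed display of fig. 3 (face image, pushed commodity picture and two-dimensional code picture) could be composed roughly as in the following Pillow sketch; the file paths, placement and sizes are illustrative assumptions.

```python
from PIL import Image

def compose_overlay(face_path: str, item_path: str, qr_path: str, out_path: str) -> None:
    """Paste the pushed item picture and its two-dimensional code picture onto the user's face image."""
    canvas = Image.open(face_path).convert("RGBA")
    item = Image.open(item_path).convert("RGBA").resize((canvas.width // 2, canvas.height // 4))
    qr = Image.open(qr_path).convert("RGBA").resize((canvas.width // 5, canvas.width // 5))

    # Place the pushed commodity (e.g. a scarf) over the lower part of the face image
    canvas.alpha_composite(item, (canvas.width // 4, canvas.height * 3 // 4))
    # Place the two-dimensional code picture in the bottom-right corner
    canvas.alpha_composite(qr, (canvas.width - qr.width - 10, canvas.height - qr.height - 10))

    canvas.save(out_path)  # out_path should be a .png since the canvas keeps an alpha channel
```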
In another embodiment, the push information and the presentation information may be presented in a carousel. As an example, refer to fig. 4. In fig. 4, image 401 is presentation information of the first user, namely a first image generated by fusing video of the first user, and image 402 is push information determined by the server according to the first user's information; in this example, since it is detected that the first user is married and the current day is Valentine's Day, the push information is a picture of roses, and the two-dimensional code in the image may contain a purchase link or the address of an offline store. The offline display terminal can alternate between one piece of display information and one piece of push information in carousel fashion, and there may be multiple pieces of each. It can be understood that in the display information of image 401 the first user is smiling with both hands relaxed, while in display information shown later the first user may strike other poses, for example making a funny face or a V sign, and the push information shown later may be clothes and so on. This increases the fun and variety of the information pushing process.
In order to more clearly describe the interaction process among the offline presentation terminal, the first client, and the server in the foregoing various embodiments, fig. 5 shows an interaction flowchart of a specific example of information pushing. As shown in fig. 5, in step S501, the first client and the offline presentation terminal establish a connection through a near field communication manner.
Then, on the one hand, the push information for the first user is determined through steps S502, S503, S504, S505, S506. Specifically, the method comprises the following steps: in step S502, the offline display terminal receives user information of the first user, such as a mobile phone number, a user name, and the like, from the first client in a near field communication manner; next, in step S503, the offline display terminal uploads the identity information of the first user to the server; then, through step S504, the server receives the identity information; in step S505, the server determines, according to the data record corresponding to the identity information, push information for the first user, such as pushed commodity information, from the multiple pieces of information to be pushed; in step S506, the server feeds back the determined push information to the offline display terminal.
On the other hand, presentation information for the first user is determined through steps S502' and S503'. As shown in fig. 5, in step S502', the offline presentation terminal may collect live image information, which may include, for example, one or more pictures or a video. Then, in step S503', the offline presentation terminal processes the live image information, for example by cropping and fusion, and locally generates presentation information for the first user.
Then, in step S507, the offline display terminal receives the push information determined by the server from the server, and displays the push information together with locally generated display information.
It is understood that steps S501, S502, S503, S502', S503' and S507 in fig. 5 are performed by the offline display terminal. These steps may correspond to steps 202 to 212 shown in fig. 2, respectively, and are not described again here.
Reviewing the above process: during information pushing, the offline display terminal establishes a connection with the first client via near field communication; on the one hand it sends the first user's user information to the server so that the server can determine the push information from online data, and on the other hand it collects live image information of the first user offline and fuses it to generate display information for the first user, which is then displayed together with the push information. Combining online and offline data in this way makes the information pushing process interactive, engaging and targeted, and improves the effectiveness of information pushing.
Fig. 6 illustrates an offline presentation terminal 600 for information push, where the offline presentation terminal 600 may include:
the first communication unit 61 is configured to establish a connection with a first client corresponding to a first user in a near field communication manner, where the first user interacts with a server through the first client;
the first communication unit 61 is further configured to receive identity information of the first user from the first client;
the second communication unit 62 is configured to send the identity information received through the first communication unit 61 to the server, so that the server determines at least one piece of push information for the first user from the plurality of pieces of information to be pushed according to the data record corresponding to the identity information;
an image acquisition unit 63 configured to acquire live image information of the first user;
an image processing unit 64 configured to generate presentation information for the first user based on the live image information;
and the display unit 65 is configured to combine the push information received from the server through the second communication unit 62 with the display information for displaying.
According to a specific embodiment, the first communication unit 61 may be any one of the following modules: Bluetooth Low Energy, Bluetooth, wifi, nfc. In one implementation, the communication units may include different modules, for example a near field communication module (e.g., Bluetooth Low Energy, Bluetooth, nfc) and a remote communication module (e.g., wifi, wired network). The near field communication module establishes a connection with the first client corresponding to the first user by means of near field communication and acquires the user information of the first user, and the remote communication module feeds the user information back to the server and receives the push information from the server.
In an alternative implementation, the live image information may include a plurality of image frames, and the image processing unit 64 may be further configured to: and generating presentation information for the first user by fusing the plurality of image frames.
According to one possible design, the display information is a target image for displaying a target portion of a human body, and the image processing unit 64 is further configured to:
detecting human body target parts (such as human face parts, limb parts and the like) aiming at each image frame in the field image information to obtain each state image of the human body target parts;
and fusing the state images into a preset number of target images through a predetermined image fusion algorithm, and taking the target images as display information.
It is to be understood that the target image may be one or more sheets. When the target image includes a plurality of images, the target image may be displayed in a motion picture format.
In an alternative implementation, the image processing unit 64 may be further configured to:
extracting a plurality of characteristic points of the human body target part from each state image respectively;
and mapping each state image to a target image according to the correspondence among the plurality of feature points.
According to one embodiment, the image acquisition unit 63 may be further configured to:
acquiring first field image information, and detecting a preset identification part of a field user from the first field image information;
sending the preset identification part of the field user to the server, so that the server can verify the preset identification part of the field user on the basis of the preset identification part of the first user which is prestored;
and determining the first live image information as the live image information of the first user when the verification is passed.
In some cases, a plurality of faces may be detected from the live image information, and at this time, the image processing unit 64 may further perform screening, specifically, matching each detected face image with a first face image corresponding to the first user received from the server; and selecting the face image matched with the first face image as the face image to be fused.
In one embodiment, the image processing unit 64 may also perform a preliminary filtering on the images in the user live image information:
filtering one or more image frames contained in the live image information according to a predetermined condition, the predetermined condition including at least one of: the image frame contains a human target part; the image frame is a clear image;
and generating display information for the first user based on the screened image frames.
According to one possible design, the image processing unit 64 may also perform at least one of the following on the image frames: cropping that retains the human target part, and background removal.
In one embodiment, the presentation unit 65 may be further configured to present the push information in combination with the presentation information by at least one of:
displaying the push information and the display information in a superposition manner;
and displaying the push information and the display information in a carousel manner.
According to an optional design, the push information includes a two-dimensional code picture, so that the first user scans the two-dimensional code picture through the first client, and receives information push aiming at the push information.
In a specific implementation, the image capturing unit is, for example, a camera, and the display unit is, for example, an electronic display screen, a touch screen, or the like.
It should be noted that the offline display terminal 600 shown in fig. 6 is an embodiment of an apparatus corresponding to the embodiment of the method shown in fig. 2, and the corresponding description in the embodiment of the method shown in fig. 2 is also applicable to the offline display terminal 600, and is not described herein again.
According to an embodiment of another aspect, a computer-readable storage medium is also provided, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the respectively described method.
According to an embodiment of yet another aspect, there is also provided a computing device comprising a memory and a processor, the memory having stored therein executable code, the processor implementing the correspondingly described method when executing the executable code.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.

Claims (26)

1. An information push method is applied to an offline display terminal, wherein the offline display terminal is an advertisement terminal corresponding to a predetermined server, and the method comprises the following steps:
broadcasting its own identity identifier by means of short-range wireless communication, so as to establish a connection with a first client corresponding to a first user, wherein the first user interacts with a server through the first client;
receiving identity information of the first user from the first client;
sending the identity information to the server side, so that the server side can determine at least one piece of pushing information aiming at the first user from a plurality of pieces of information to be pushed according to the data record corresponding to the identity information;
acquiring the scene image information of the first user;
generating presentation information for the first user based on the live image information under the condition that the live image information is matched with the identity authentication image corresponding to the first user;
and receiving the push information from the server side, and combining the push information with the display information for displaying.
2. The method according to claim 1, wherein the short-range wireless communication method is any one of: bluetooth low energy, Bluetooth, wifi, nfc.
3. The method of claim 1, wherein the live image information comprises a plurality of image frames, wherein the generating presentation information for the first user based on the live image information comprises:
generating presentation information for the first user by fusing the plurality of image frames.
4. The method according to claim 3, wherein the presentation information is a target image for presenting a target portion of a human body;
the generating presentation information for the first user based on the live image information comprises:
detecting the human body target part aiming at each image frame in the field image information to obtain each state image of the human body target part;
fusing each state image into a preset number of target images through a predetermined image fusion algorithm, and taking the target images as the display information.
5. The method of claim 4, wherein the human target site comprises one or more of:
human face parts and limb parts.
6. The method of claim 4, wherein said fusing the respective state images into a predetermined number of target images by a predetermined image fusion algorithm comprises:
extracting a plurality of characteristic points of the human body target part from each state image respectively;
and mapping each state image to the target image according to the correspondence among the plurality of feature points.
7. The method of claim 1, wherein said acquiring live image information of the first user further comprises:
acquiring first field image information, and detecting a preset identification part of a field user from the first field image information;
sending the preset identification part of the field user to the server, so that the server can verify the preset identification part of the field user on the basis of the preset identification part of the first user which is prestored;
and determining the first live image information as the live image information of the first user when the verification is passed.
8. The method of claim 1, wherein, in the event that a plurality of faces are detected from the live image information, the generating presentation information for the first user based on the live image information comprises:
matching each detected face image with a first face image which is received from the server and corresponds to the first user;
and selecting the face image matched with the first face image as the face image to be fused.
9. The method of claim 1, wherein the generating presentation information for the first user based on the live image information comprises:
filtering one or more image frames contained in the live image information according to a predetermined condition, the predetermined condition including at least one of: the image frame contains a human target part; the image frame is a clear image;
generating the presentation information for the first user based on the screened image frames.
10. The method of claim 1, wherein prior to generating presentation information for the first user based on the live image information, further comprising:
and performing, on one or more image frames in the live image information, at least one of cropping that retains the human body target part and background removal.
11. The method of claim 1, wherein the presenting the push information in conjunction with the presentation information comprises at least one of:
displaying the push information and the display information in a superposition manner;
and displaying the push information and the display information in a carousel manner.
12. The method of claim 1, wherein the push information includes a two-dimensional code picture, so that the first user scans the two-dimensional code picture through the first client to accept information push for the push information.
13. An offline display terminal for information push, wherein the offline display terminal is an advertisement terminal corresponding to a predetermined server, and the offline display terminal comprises:
the first communication unit is configured to broadcast an identity of the first communication unit by using a short-distance wireless communication mode so as to establish connection with a first client corresponding to a first user, wherein the first user interacts with the server through the first client;
the first communication unit is further configured to receive identity information of the first user from the first client;
the second communication unit is configured to send the identity information to the server, so that the server determines at least one piece of push information for the first user from a plurality of pieces of information to be pushed according to a data record corresponding to the identity information;
an image acquisition unit configured to acquire live image information of the first user;
the image processing unit is configured to generate display information for the first user based on the field image information under the condition that the field image information is matched with the identity authentication image corresponding to the first user;
the display unit is configured to combine the push information received from the server through the second communication unit with the display information for displaying.
14. The offline presentation terminal of claim 13, wherein the first communication unit is a communication module comprising any one of the following short-range communication methods: bluetooth low energy, Bluetooth, wifi, nfc.
15. The offline presentation terminal of claim 13, wherein said live image information comprises a plurality of image frames, said image processing unit being further configured to:
generating presentation information for the first user by fusing the plurality of image frames.
16. The offline presentation terminal of claim 15, wherein the presentation information is a target image for presenting a target portion of a human body;
the image processing unit is configured to:
detecting the human body target part aiming at each image frame in the field image information to obtain each state image of the human body target part;
fusing each state image into a preset number of target images through a predetermined image fusion algorithm, and taking the target images as the display information.
17. The offline display terminal of claim 16, wherein the human target site comprises one or more of:
human face parts and limb parts.
18. The offline presentation terminal of claim 16, wherein the image processing unit is further configured to:
extracting a plurality of characteristic points of the human body target part from each state image respectively;
and mapping each state image to the target image according to the correspondence among the plurality of feature points.
19. The offline presentation terminal of claim 13, wherein the image acquisition unit is further configured to:
acquiring first field image information, and detecting a preset identification part of a field user from the first field image information;
sending the preset identification part of the field user to the server, so that the server can verify the preset identification part of the field user on the basis of the preset identification part of the first user which is prestored;
and determining the first live image information as the live image information of the first user when the verification is passed.
20. The offline presentation terminal of claim 13, wherein, in the event that a plurality of faces are detected from the live-image information, the image processing unit is further configured to:
matching each detected face image with a first face image which is received from the server and corresponds to the first user;
and selecting the face image matched with the first face image as the face image to be fused.
21. The offline presentation terminal of claim 13, wherein the image processing unit is further configured to:
filtering one or more image frames contained in the live image information according to a predetermined condition, the predetermined condition including at least one of: the image frame contains a human target part; the image frame is a clear image;
generating the presentation information for the first user based on the screened image frames.
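Illustrative note (not part of the claims): the screening condition of claim 21 can be approximated with a face detector plus a simple sharpness test (variance of the Laplacian); both are example choices, not the patented method.

import cv2

part_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_clear(frame, threshold=100.0) -> bool:
    # A frame counts as "clear" when the variance of its Laplacian exceeds a
    # threshold -- a common, cheap focus measure.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

def has_target_part(frame) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(part_detector.detectMultiScale(gray, 1.1, 5)) > 0

def screen_frames(frames):
    return [f for f in frames if has_target_part(f) and is_clear(f)]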
22. The offline display terminal of claim 13, wherein the image processing unit is further configured to:
perform, on each screened image frame, at least one of cropping and background removal so as to retain the human body target part.
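Illustrative note (not part of the claims): cropping and background removal as in claim 22 could, for example, combine a bounding-box crop with GrabCut.

import cv2
import numpy as np

def crop_and_remove_background(frame, bbox):
    # Crop to the body-part bounding box, then remove the background with
    # GrabCut so that only the human body target part is retained.
    x, y, w, h = bbox
    crop = frame[y:y + h, x:x + w]
    mask = np.zeros(crop.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    rect = (1, 1, crop.shape[1] - 2, crop.shape[0] - 2)
    cv2.grabCut(crop, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    keep = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
    return crop * keep[:, :, None]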
23. The offline display terminal of claim 13, wherein the display unit is further configured to display the push information in combination with the display information in at least one of the following manners:
displaying the push information and the display information in an overlaid manner; and
displaying the push information and the display information in a carousel manner.
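Illustrative note (not part of the claims): the two display manners of claim 23 map naturally onto alpha blending (overlay) and cycling through images (carousel); the window name and timing below are arbitrary.

import itertools
import cv2

def overlay(display_img, push_img, alpha=0.35):
    # Superimpose the push information picture onto the display information.
    push_resized = cv2.resize(push_img, (display_img.shape[1], display_img.shape[0]))
    return cv2.addWeighted(display_img, 1.0 - alpha, push_resized, alpha, 0)

def carousel(display_img, push_imgs, seconds=3):
    # Alternate between the display information and each push picture.
    for img in itertools.cycle([display_img, *push_imgs]):
        cv2.imshow("offline display terminal", img)
        if cv2.waitKey(seconds * 1000) == 27:  # press ESC to stop
            break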
24. The offline display terminal of claim 13, wherein the push information comprises a two-dimensional code picture, so that the first user can scan the two-dimensional code picture through the first client to accept the information push corresponding to the push information.
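Illustrative note (not part of the claims): generating the two-dimensional code picture of claim 24 is straightforward with the qrcode package; the push URL and file name are placeholders.

import qrcode  # assumed installed, e.g. pip install "qrcode[pil]"

def make_push_qr(push_url: str, path: str = "push_qr.png") -> str:
    # Encode the push link as a two-dimensional code picture that the first
    # user can scan with the first client to accept the information push.
    img = qrcode.make(push_url)
    img.save(path)
    return path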
25. A computer-readable storage medium, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the method of any one of claims 1-12.
26. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that, when executed by the processor, performs the method of any of claims 1-12.
CN201910153386.2A 2019-02-28 2019-02-28 Information pushing method and offline display terminal Active CN110049094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910153386.2A CN110049094B (en) 2019-02-28 2019-02-28 Information pushing method and offline display terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910153386.2A CN110049094B (en) 2019-02-28 2019-02-28 Information pushing method and offline display terminal

Publications (2)

Publication Number Publication Date
CN110049094A CN110049094A (en) 2019-07-23
CN110049094B true CN110049094B (en) 2022-03-04

Family

ID=67274411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910153386.2A Active CN110049094B (en) 2019-02-28 2019-02-28 Information pushing method and offline display terminal

Country Status (1)

Country Link
CN (1) CN110049094B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533463A * 2019-08-16 2019-12-03 深圳供电局有限公司 Background management system and background management method for a slogan screen
CN111506798A (en) * 2020-03-04 2020-08-07 平安科技(深圳)有限公司 User screening method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
CN103493068A (en) * 2011-04-11 2014-01-01 英特尔公司 Personalized advertisement selection system and method
WO2015129987A1 (en) * 2014-02-26 2015-09-03 에스케이플래닛 주식회사 Service apparatus for providing object recognition-based advertisement, user equipment for receiving object recognition-based advertisement, system for providing object recognition-based advertisement, method therefor and recording medium therefor in which computer program is recorded
CN108460622A * 2018-01-30 2018-08-28 深圳冠思大数据服务有限公司 An offline interactive advertising system
CN108804064A * 2018-05-31 2018-11-13 北京爱国小男孩科技有限公司 An intelligent display system and application method thereof
CN108985836A * 2018-07-09 2018-12-11 京东方科技集团股份有限公司 An intelligent dressing method and system
CN109117779A * 2018-08-06 2019-01-01 百度在线网络技术(北京)有限公司 An outfit recommendation method, apparatus and electronic device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2344713B (en) * 1998-02-10 2003-05-07 Furuno Electric Co Display system
US20090018926A1 (en) * 2007-07-13 2009-01-15 Divina Buehlman Web-based virtual clothing coordinator including personal mannequin with customer-directed measurements, image modification and clothing
US20110093780A1 (en) * 2009-10-16 2011-04-21 Microsoft Corporation Advertising avatar
CN105844502A (en) * 2016-05-20 2016-08-10 广州市莱麦互联网科技有限公司 Information release method, device and system based on cloud and positioning technology
CN107784515A * 2016-08-31 2018-03-09 上海阳淳电子股份有限公司 An information publishing system with AR (augmented reality) enhancement
US10282772B2 (en) * 2016-12-22 2019-05-07 Capital One Services, Llc Systems and methods for wardrobe management
CN107507017A * 2017-07-07 2017-12-22 阿里巴巴集团控股有限公司 An offline shopping guide method and device
CN108769262B (en) * 2018-07-04 2023-11-17 厦门声连网信息科技有限公司 Large-screen information pushing system, large-screen equipment and method
CN109191585B (en) * 2018-08-21 2020-08-25 百度在线网络技术(北京)有限公司 Information processing method, device, equipment and computer readable storage medium
CN109299973A * 2018-08-29 2019-02-01 中国建设银行股份有限公司 An advertisement pushing method based on face recognition and related device


Also Published As

Publication number Publication date
CN110049094A (en) 2019-07-23

Similar Documents

Publication Publication Date Title
CN106454407B (en) Video live broadcasting method and device
CN110139121B (en) Live broadcast-based article publishing method and device, electronic equipment and storage medium
US9529902B2 (en) Hand held bar code readers or mobile computers with cloud computing services
US9576195B2 (en) Integrated image searching system and service method thereof
CN111277849B (en) Image processing method and device, computer equipment and storage medium
KR102053128B1 (en) Live streaming image generating method and apparatus, live streaming service providing method and apparatus, live streaming system
WO2013153718A1 (en) Information provision system
CN105139074B (en) Online reservation method and device
CN102467661A (en) Multimedia device and method for controlling the same
US20120194548A1 (en) System and method for remotely sharing augmented reality service
CN110049094B (en) Information pushing method and offline display terminal
US20120244891A1 (en) System and method for enabling a mobile chat session
CN111479119A (en) Method, device and system for collecting feedback information in live broadcast and storage medium
US20130142444A1 (en) Hand held bar code readers or mobile computers with cloud computing services
CN105072567B (en) Information processing method and electronic equipment
CN112866577B (en) Image processing method and device, computer readable medium and electronic equipment
CN108848404B (en) Two-dimensional code information sharing system of mobile terminal
CN112995779B (en) Method and device for establishing live broadcast room
CN106911948B (en) Display control method and device, control equipment and electronic equipment
JP6318289B1 (en) Related information display system
CN114501051B (en) Method and device for displaying marks of live objects, storage medium and electronic equipment
CN109905878B (en) Information pushing method and device
US9817849B2 (en) Image recognition method for offline and online synchronous operation
KR102208916B1 (en) System for recognizing broadcast program based on image recognition
JP6829391B2 (en) Information processing equipment, information distribution method, and information distribution program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201019

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20201019

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Fourth Floor, One Capital Place, P.O. Box 847, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant