CN115631523A - Data processing method and device for business place, computer equipment and storage medium - Google Patents


Info

Publication number
CN115631523A
CN115631523A
Authority
CN
China
Prior art keywords
personnel
person
video
environment
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211305598.6A
Other languages
Chinese (zh)
Inventor
孙浩鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Bank Co Ltd
Priority to CN202211305598.6A
Publication of CN115631523A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of this application disclose a data processing method and apparatus for a business place, a computer device, and a storage medium. The scheme performs portrait recognition and portrait tracking on an environment video of the surroundings of the business place to obtain a face image of each person in each video frame of the environment video; inputs the face image of each person in each video frame into a neural network model to predict the face orientation of each person in each frame; determines, from the face orientations across the video frames, the accumulated time each person spends facing the business place; blurs the face region in each person's image to obtain a person-search image for each person; and arranges the person-search images in descending order of accumulated time to obtain a potential-customer list, which is sent to the service staff of the business place. This improves both the accuracy and the efficiency of potential-customer identification.

Description

Data processing method and device for business place, computer equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data processing method and apparatus for a business location, a computer device, and a storage medium.
Background
With the continuous development of the Internet, daily life has become increasingly convenient, and when handling banking business, people pay more and more attention to the time cost involved. As a result, when faced with a crowded business place, people sometimes give up entering it to handle a banking transaction at all.
To address this, the existing approach is to manually judge which people outside the business place are potential customers. Such manual judgment inevitably yields low identification accuracy; and if people are observed manually for a long time to improve that accuracy, identification efficiency inevitably drops.
Disclosure of Invention
The embodiment of the application provides a data processing method and device for a business place, computer equipment and a storage medium, and can improve the accuracy and efficiency of potential customer identification.
The embodiment of the application provides a data processing method for a service place, which comprises the following steps:
acquiring environmental data of the surrounding environment of a service place to obtain an environmental video of the service place;
carrying out portrait recognition and portrait tracking on the environment video to acquire a face image of each person in each video frame of the environment video;
respectively inputting the face images of all the people in each video frame into a preset neural network model to predict the face orientation, so as to obtain the face orientation of all the people in each video frame;
determining the accumulated time length of each person facing the service place in the environment video according to the face orientation of each person in each video frame of the environment video;
acquiring a person image of each person, and blurring the face region in each person image to obtain a person-search image of each person;
and arranging the person-search images in descending order of each person's accumulated time facing the business place to obtain a potential-customer list, and sending the potential-customer list to the service staff of the business place.
Correspondingly, an embodiment of the present application further provides a data processing apparatus for a service site, including:
the data acquisition module is used for acquiring environmental data of the surrounding environment of the business place to obtain an environmental video of the business place;
the image acquisition module is used for carrying out portrait recognition and portrait tracking on the environment video to acquire a face image of each person in each video frame of the environment video;
the orientation prediction module is used for respectively inputting the face images of all the persons in each video frame into a preset neural network model to carry out face orientation prediction so as to obtain the face orientation of all the persons in each video frame;
the time length determining module is used for determining the accumulated time length of each person in the environment video towards the service place according to the face orientation of each person in each video frame of the environment video;
the blurring module is used for acquiring a person image of each person, and blurring the face region in each person image to obtain a person-search image of each person;
and the list sending module is used for arranging the person-search images in descending order of each person's accumulated time facing the business place to obtain a potential-customer list, and sending the potential-customer list to the service staff of the business place.
Correspondingly, the embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the data processing method of the service site provided in any of the embodiments of the present application.
Correspondingly, the embodiment of the application also provides a storage medium, wherein the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by the processor to execute the data processing method of the business place.
In the embodiments of this application, environmental data of the surroundings of the business place is acquired to obtain an environment video of the business place; portrait recognition and portrait tracking are performed on the environment video to obtain a face image of each person in each video frame; the face images in each video frame are input into a preset neural network model to predict each person's face orientation in each frame; the accumulated time each person spends facing the business place is determined from these orientations; a person image of each person is acquired and its face region blurred to obtain a person-search image; and the person-search images are arranged in descending order of accumulated time to obtain a potential-customer list, which is sent to the service staff of the business place. A potential-customer list is thus generated from the accumulated time that the people around the business place spend facing it, helping the service staff identify potential customers and improving both the accuracy and the efficiency of potential-customer identification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a data processing method for a business location according to an embodiment of the present application.
Fig. 2 is a block diagram of a data processing apparatus of a business location according to an embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a data processing method and device for a business place, a storage medium and computer equipment. Specifically, the data processing method of the service location in the embodiment of the present application may be executed by a computer device, where the computer device may be a server or a terminal. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN, big data and artificial intelligence platform. The terminal may be, but is not limited to, a smart phone, a desktop computer, a notebook computer, a tablet computer, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
For example, the computer device may be a terminal, and the terminal may acquire environmental data of an environment around a service location to obtain an environmental video of the service location; carrying out portrait recognition and portrait tracking on the environment video to acquire a face image of each person in each video frame of the environment video; respectively inputting the face images of all the people in each video frame into a preset neural network model to predict the face orientation, so as to obtain the face orientation of all the people in each video frame; determining the accumulated time length of each person facing the service place in the environment video according to the face orientation of each person in each video frame of the environment video; acquiring personnel images of all personnel, and carrying out fuzzy processing on the face parts in the personnel images to obtain personnel searching images of all personnel; and arranging the personnel searching images of the personnel from large to small according to the accumulated time length of the personnel facing the business place to obtain a potential customer list, and sending the potential customer list to the business personnel of the business place.
Based on the above problems, embodiments of the present application provide a data processing method and apparatus, a computer device, and a storage medium for a business location, which can improve accuracy and efficiency of potential customer identification.
The following are detailed descriptions. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The embodiments of the present application provide a data processing method for a business location, where the method may be executed by a terminal or a server, and the data processing method for a business location is described as an example executed by a terminal.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a data processing method for a business location according to an embodiment of the present disclosure. The specific flow of the data processing method of the business place can be as follows:
101. and acquiring environmental data of the surrounding environment of the service place to obtain the environmental video of the service place.
The business place may be a place related to a bank, such as a business hall of a bank outlet, and a camera is usually arranged around the business place to improve security by a monitoring method.
In this embodiment, the terminal may continuously obtain environmental data of the surroundings of the business place through the cameras arranged around it, obtaining an environment video that contains the surroundings, so that data processing and analysis can be performed on the video and potential customers of the business place can be mined from the results.
It can be understood that the surroundings of the business place contain environment data in different directions, while a camera generally records only one fixed direction at a time. To avoid incomplete coverage of the surroundings, the terminal may determine the installation positions and recording ranges of all cameras deployed outside the business place, select at least two target cameras that together cover the surroundings according to those positions and ranges, and then control the at least two target cameras to acquire environmental data, thereby obtaining the environment video of the business place.
Specifically, if a target camera is fixed at a position outside the business place and always records environmental data in one direction, the terminal needs to continuously control at least two target cameras to acquire environmental data, obtaining the environment videos corresponding to the at least two cameras respectively.
Specifically, if a target camera is slidably mounted at a position outside the business place, so that sliding the camera records environmental data in different directions, or the camera itself can rotate its viewing angle to record (for example, a rotary camera), the terminal can control the recording directions of the at least two target cameras at any given time to achieve comprehensive recording of the surroundings of the business place.
102. And carrying out portrait recognition and portrait tracking on the environment video to obtain the face image of each person in each video frame of the environment video.
In this embodiment, the terminal may perform face recognition on the person appearing in the environment video through a preset algorithm, and perform face tracking on the person based on a face recognition result of the person, so as to obtain a face image of each person in each video frame of the environment video, that is, obtain a face image of each person appearing in the environment video in each video frame.
Specifically, the performing of the portrait recognition and the portrait tracking on the environment video to obtain the facial image of each person in each video frame of the environment video may include: extracting human image features of an initial video frame in the environment video to obtain human image recognition results of people, wherein the human image recognition results comprise human face images of the people; and tracking the face of each video frame behind the initial video frame in the environment video based on the face recognition result, and acquiring the face image of the same person in each video frame behind the initial video frame.
The initial video frame is generally a video frame in which a new person appears in the environment video; the first frame of the environment video is typically an initial video frame. Because some persons appear only from a certain frame onward, each frame in which a new person appears must be determined as an initial video frame. Portrait recognition is then performed on the person in that initial frame, portrait tracking is performed in each subsequent frame based on the recognition result, and the person's face image in each subsequent frame is obtained from the tracking result.
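As a minimal illustration of this step, locating each person's initial video frame can be sketched as follows (the function name and the per-frame sets of tracked person IDs are illustrative, not part of the disclosure):

```python
# Hypothetical sketch: find each person's "initial video frame", i.e. the
# first frame index in which that person ID appears, so that portrait
# recognition starts there and tracking proceeds in subsequent frames.
def initial_frames(frames_person_ids):
    """frames_person_ids: one set of person IDs per video frame.
    Returns {person_id: index of the frame where the person first appears}."""
    first_seen = {}
    for idx, ids in enumerate(frames_person_ids):
        for pid in ids:
            first_seen.setdefault(pid, idx)  # keep only the first occurrence
    return first_seen
```

For example, `initial_frames([{"a"}, {"a", "b"}, {"b"}])` maps person "a" to frame 0 and person "b" to frame 1.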
Specifically, the terminal can perform target detection through a YOLO algorithm, such as YOLO-v3, to realize portrait recognition, and perform target feature modeling through the SORT algorithm to realize portrait tracking.
Illustratively, when the terminal performs target detection through the YOLO-v3 algorithm, it may extract features through the backbone network Darknet53, perform up-sampling and feature fusion, and run regression analysis on the fused features to obtain the prediction-box information, that is, the portrait recognition result. When the terminal performs target feature modeling through the SORT algorithm, the person's position can be updated through matching and tracking, with a Kalman filter predicting the motion of the detection box during this process, finally yielding the portrait tracking result.
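The motion-prediction idea behind SORT's Kalman filtering can be illustrated with a much-simplified constant-velocity tracker for a detection-box centre. This fixed-gain sketch is an assumption for illustration only; the real SORT tracker also models box scale and aspect ratio and maintains full covariance matrices:

```python
# Simplified constant-velocity prediction for a tracked detection-box centre,
# illustrating the motion-prediction step SORT performs between detections.
class BoxTracker:
    def __init__(self, cx, cy):
        self.cx, self.cy = cx, cy      # last observed centre (pixels)
        self.vx, self.vy = 0.0, 0.0    # estimated velocity (pixels/frame)

    def predict(self):
        """Predict the next-frame centre from the constant-velocity model."""
        return self.cx + self.vx, self.cy + self.vy

    def update(self, cx, cy, gain=0.5):
        """Blend a new detection with the prediction (fixed-gain correction)."""
        px, py = self.predict()
        self.vx += gain * (cx - px)
        self.vy += gain * (cy - py)
        self.cx, self.cy = cx, cy
```

After observing a box moving right, the tracker extrapolates that motion: a tracker started at (0, 0) and updated with a detection at (2, 0) predicts the next centre at (3.0, 0.0).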
103. And respectively inputting the face images of the persons in each video frame into a preset neural network model to predict the face orientation, so as to obtain the face orientation of the persons in each video frame.
In this embodiment, the terminal trains in advance to obtain a neural network model capable of predicting the face orientation of the person, so that the obtained face image of the person is input into the neural network model for prediction to obtain the face orientation of the person.
In some embodiments, for convenience of processing, the brightness value of each pixel in the face image of the person may be scaled, for example, to 0 to 1, so that the face image with the scaled brightness values of the pixels is input into a preset neural network model.
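A sketch of that pre-processing step, assuming 8-bit grey-level input (the function name is illustrative):

```python
# Scale 8-bit pixel brightness values (0-255) into the range [0.0, 1.0]
# before feeding a face crop to the orientation model.
def scale_brightness(pixels):
    return [[v / 255.0 for v in row] for row in pixels]
```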
In some embodiments, in order to improve the accuracy of the face orientation of the person in the obtained video frames, the above-mentioned inputting the face image of each person in each video frame into a preset neural network model for face orientation prediction to obtain the face orientation of each person in each video frame may include:
the terminal can input the face image of the person into a preset neural network model to predict the face orientation, and predicted values of the face orientation of the person in at least two directions are obtained, namely the neural network model can have at least two output units, the predicted values of the face orientation in at least two directions are output by the at least two output units, and therefore the predicted values of the face orientation in the corresponding directions of the output units can be obtained through the values output by the output units.
Finally, the face orientation of the person is determined according to the predicted values of the face orientation of the person in at least two directions, for example, the direction with the largest predicted value is selected from the at least two directions, and the direction with the largest predicted value is determined as the face orientation of the person.
The direction may be a relative direction with the business place as the reference, for example directly in front of the business place, to its left, to its right, or behind it.
Illustratively, the range of predicted values output by the neural network model can be set to 0.1 to 0.9, with four output units corresponding to straight ahead, left, right and behind. Inputting a person's face image into the preset neural network model then yields predicted values such as 0.9, 0.1, 0.1 and 0.1 for straight ahead, left, right and behind respectively. Since the value 0.9 for straight ahead is the largest of these, the person's face orientation in this example is determined to be straight ahead.
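That selection rule, picking the direction with the largest predicted value, can be sketched as follows (the direction labels mirror the four-output example above):

```python
# Choose the face orientation as the direction whose output unit produced
# the largest predicted value.
DIRECTIONS = ["front", "left", "right", "back"]

def face_orientation(predicted_values):
    best = max(range(len(predicted_values)), key=predicted_values.__getitem__)
    return DIRECTIONS[best]
```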
In some embodiments, the terminal may obtain a preset number of person images in advance and adjust them to preset pixel dimensions, then randomly divide the adjusted images into a training set, a verification set and a test set according to a preset ratio. For example, 400 person images are acquired, their dimensions are adjusted to 60 × 64 pixels, and the 400 adjusted images are randomly divided into a training set, a verification set and a test set according to a preset ratio.
When training the neural network model, a hidden layer may be further set, where the number of input units of the hidden layer is 3, the learning rate is set to 0.3, the weight of the input unit is 0.0, and the weight of the output unit is a smaller random value, for example, a random value smaller than a preset value.
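The dataset split can be sketched as below; the exact split ratio is truncated in the source text, so the 6:2:2 default here is purely an assumption for illustration:

```python
import random

# Randomly divide person images into training, verification and test sets.
# The 6:2:2 ratio is an assumed example (the ratio in the source is truncated).
def split_dataset(images, ratios=(0.6, 0.2, 0.2), seed=0):
    shuffled = list(images)
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for the sketch
    n = len(shuffled)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```

With the 400 images of the example, this yields sets of 240, 80 and 80 images.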
104. And determining the accumulated time length of each person facing the service place in the environment video according to the face orientation of each person in each video frame of the environment video.
In this embodiment, the terminal obtains the face orientation of each person in each video frame by the above means, collates the face orientations of each person appearing in at least one frame of the environment video, and determines the accumulated time each person spends facing the business place based on the duration of a single video frame.
Specifically, the number of video frames in which a person's face is oriented toward the business place is determined, the product of that frame count and the duration of one video frame is calculated, and the product is taken as the person's accumulated time facing the business place in the environment video.
For example, if each video frame lasts 1/12 second and a person's face is oriented toward the business place in 48 frames of the environment video, the person's accumulated time facing the business place in the environment video is 48 × 1/12 = 4 seconds.
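The frame-counting and duration computation just described can be sketched directly (function names and the per-frame orientation labels are illustrative):

```python
# Count the frames in which a person faces the business place, then convert
# that count to seconds using the per-frame duration (1/12 s in the example).
def frames_facing(orientations, target="front"):
    return sum(1 for o in orientations if o == target)

def accumulated_duration(orientations, frame_seconds=1.0 / 12, target="front"):
    return frames_facing(orientations, target) * frame_seconds
```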
105. And acquiring a person image of each person, and blurring the face region in the person image to obtain a person-search image of each person.
In this embodiment, the terminal acquires the person image of each person appearing in the environment video and blurs the face region in the person image to prevent disclosure of the privacy of the people appearing around the business place, finally obtaining each person's person-search image, so that the service staff of the business place can conveniently be reminded on the basis of the person-search images.
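A minimal sketch of the face-blurring step on a grey-level image, using a 3x3 mean (box) blur restricted to the face rectangle. A production system would more likely call a library Gaussian-blur routine; all names here are illustrative:

```python
# Apply a 3x3 mean (box) blur only inside the given face rectangle,
# leaving the rest of the person image untouched.
def blur_region(image, top, left, bottom, right):
    blurred = [row[:] for row in image]  # copy so the input stays intact
    for y in range(top, bottom):
        for x in range(left, right):
            neigh = [image[j][i]
                     for j in range(max(0, y - 1), min(len(image), y + 2))
                     for i in range(max(0, x - 1), min(len(image[0]), x + 2))]
            blurred[y][x] = sum(neigh) // len(neigh)  # neighbourhood mean
    return blurred
```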
In some embodiments, the person-search image can be further processed according to the relevant requirements, so that the service staff of the business place can be notified accurately, shortening the time needed for the staff to find a potential customer while still avoiding any impact on the privacy of the people appearing around the business place.
Specifically, after the person-search image of each person is obtained, the method may further include: the terminal extracts features from the person-search image, that is, extracts the person features in the image, to obtain person-identification feature information, which includes the person's position and person attribute features. The person position is the position of the person relative to the business place, such as directly in front or to the front left; the person attribute features are features of the person in the environment video that help the service staff of the business place find the person, including but not limited to the person's clothing, accessories, and the means of transport the person is using.
106. And arranging the person-search images in descending order of each person's accumulated time facing the business place to obtain a potential-customer list, and sending the potential-customer list to the service staff of the business place.
In this embodiment, the terminal uses the obtained accumulated time of each person facing the business place to arrange the corresponding person-search images in descending order, so that images with longer accumulated time appear nearer the front of the potential-customer list. The arranged potential-customer list is then sent to the service staff of the business place as a recommendation, so that the relevant staff can actively contact the corresponding customers according to the list, provide help, and assist in promoting the business.
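The list-building step reduces to a descending sort keyed by accumulated duration (the function and variable names are illustrative):

```python
# Arrange person-search images in descending order of each person's
# accumulated time facing the business place.
def build_potential_customer_list(search_images, durations):
    """Both arguments are dicts keyed by person ID; longest duration first."""
    order = sorted(durations, key=durations.get, reverse=True)
    return [search_images[pid] for pid in order]
```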
It can be understood that, to avoid disclosing personal privacy while recommending potential customers to the service staff, the potential-customer list may be deleted once the person images of the people currently being analyzed have been processed and the corresponding prompt has been given; that is, the list used for prompting is not saved, preventing privacy disclosure and avoiding persistence of sensitive information.
In some embodiments, to make the search easier for the relevant service staff, the potential-customer list may also store the person position corresponding to each person-search image, or the person position may be added to the corresponding person-search image, so that the staff can find the corresponding potential customer from the person position together with the person-search image.
In some embodiments, when the terminal performs feature extraction on the person search images to obtain person identification feature information, arranging the person search images of each person in descending order of the accumulated duration for which the person faces the business place to obtain the potential customer list may include: arranging the person identification feature information of each person in descending order of that accumulated duration to obtain a potential customer list, where the potential customer list contains the sorted person identification feature information corresponding to each person.
In some embodiments, at least two cameras in a business place simultaneously acquire environment data of the surroundings, or the same business place continuously acquires environment videos, so that the same person may appear in different environment videos of the same time period, or in at least two adjacent environment videos. Therefore, before the person search images are arranged in descending order of accumulated duration to obtain the potential customer list, the accumulated duration for which each person faces the business place may first be updated, so as to obtain the total, updated accumulated duration for which the person faces the business place.
Specifically, based on each person in the environment video and each historical person in the previous environment video of the business place, the terminal judges whether the same person appears in both videos, and determines the target persons that exist in both the environment video and the previous environment video; acquires the historical accumulated duration for which each target person faced the business place in the previous environment video; and updates the accumulated duration for which the target person faces the business place according to the corresponding historical accumulated duration, that is, computes the sum of the historical accumulated duration and the current accumulated duration and takes that sum as the updated accumulated duration for which the target person faces the business place.
Illustratively, let the historical accumulated duration for which a target person faced the business place in the previous environment video be t1, and the accumulated duration for which the target person faces the business place in the current environment video be t2; the accumulated duration for the target person is then updated to the sum of t1 and t2, namely t1 + t2.
It can be understood that the historical accumulated duration corresponding to the previous environment video is not limited to the duration for which the target person faced the business place in that one video: if the target person also appeared in environment videos before the previous one, the historical accumulated duration already includes the durations for which the target person faced the business place in those earlier videos.
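This running-total update across adjacent environment videos can be sketched as follows (the person identifiers and the dictionary representation of durations are illustrative assumptions):

```python
def update_accumulated_durations(current, previous):
    """Merge per-person accumulated durations (seconds) across adjacent
    environment videos: persons present in both get t1 + t2, while persons
    seen in only one video keep their single-video duration."""
    merged = dict(previous)              # historical accumulated durations (t1)
    for person_id, t2 in current.items():
        merged[person_id] = merged.get(person_id, 0.0) + t2
    return merged

previous = {"p1": 8.0, "p2": 3.0}        # t1 values from the previous video
current = {"p2": 5.0, "p3": 2.0}         # t2 values from the current video
print(update_accumulated_durations(current, previous))
```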
In some embodiments, after the potential customer list is obtained, an intelligent prompt can speed up the search for potential customers by the business personnel of the business place. In this embodiment, the method may further include: determining, according to the accumulated duration for which each person faces the business place, the positions in the surrounding environment of those persons whose accumulated duration meets a preset duration condition; dividing the surrounding environment of the business place into at least two distribution areas; determining, from these distribution areas, the target distribution area containing the largest number of such persons; and finally highlighting, in the potential customer list, the person search images corresponding to the persons in the target area, so that the relevant business personnel can serve several interested potential customers at the same time, improving service efficiency.
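These steps can be sketched as follows (the quadrant-based area division, the duration threshold, and all names are illustrative assumptions; the original does not specify how areas are drawn):

```python
from collections import Counter

def densest_area(positions, durations, min_seconds, area_of):
    """Among persons whose accumulated facing duration meets the preset
    condition, find the distribution area holding the most of them."""
    counts = Counter(
        area_of(positions[pid])
        for pid, t in durations.items()
        if t >= min_seconds and pid in positions
    )
    area, _ = counts.most_common(1)[0]
    return area

# Illustrative area division: four quadrants around the entrance at (0, 0).
def quadrant(pos):
    x, y = pos
    return (x >= 0, y >= 0)

positions = {"p1": (1, 2), "p2": (3, 1), "p3": (-2, 1), "p4": (2, -1)}
durations = {"p1": 12.0, "p2": 9.0, "p3": 15.0, "p4": 2.0}
print(densest_area(positions, durations, 8.0, quadrant))
```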
In some embodiments, after the potential customer list is obtained, the method may further include: according to the accumulated duration for which each person faces the business place, highlighting in the potential customer list the person search images corresponding to persons whose accumulated duration exceeds a preset duration threshold.
The embodiment of the present application discloses a data processing method for a business place, which includes: acquiring environment data of the surroundings of a business place to obtain an environment video of the business place; performing portrait recognition and portrait tracking on the environment video to acquire the face image of each person in each video frame of the environment video; inputting the face images of each person in each video frame into a preset neural network model for face orientation prediction to obtain the face orientation of each person in each video frame; determining, according to the face orientation of each person in each video frame, the accumulated duration for which each person in the environment video faces the business place; acquiring the person image of each person and blurring the face portion of the person image to obtain the person search image of each person; and arranging the person search images of each person in descending order of accumulated duration to obtain a potential customer list, and sending the potential customer list to the business personnel of the business place. In this way, a potential customer list is generated from the accumulated duration for which the faces of people around the business place are oriented toward it, assisting the business personnel in identifying potential customers and improving the accuracy and efficiency of potential customer identification.
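The face-blurring step in the summary above can be sketched as a simple mosaic (pixelation) filter — a stand-in for whatever blur the original uses; the function name and `(x, y, w, h)` box format are assumptions:

```python
import numpy as np

def blur_face(image, box, block=8):
    """Pixelate the face region `box = (x, y, w, h)` of an H x W x C image
    so the person remains findable by clothing but not identifiable by face."""
    x, y, w, h = box
    face = image[y:y + h, x:x + w].astype(float)
    # replace each block x block tile with its mean colour (mosaic blur)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = face[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1))
    image[y:y + h, x:x + w] = face.astype(image.dtype)
    return image

img = np.arange(32 * 32 * 3, dtype=np.uint8).reshape(32, 32, 3)
out = blur_face(img.copy(), (8, 8, 16, 16))  # blur a 16x16 face region
```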
In order to better implement the data processing method for a business place provided by the embodiment of the present application, the embodiment of the present application further provides a data processing apparatus for a business place based on that method. The meanings of the terms are the same as in the data processing method above; for specific implementation details, refer to the description in the method embodiment.
Referring to fig. 2, fig. 2 is a block diagram of a data processing apparatus of a business location according to an embodiment of the present application, where the apparatus includes:
a data acquisition module 201, configured to acquire environmental data of an environment around a service location to obtain an environmental video of the service location;
an image obtaining module 202, configured to perform portrait recognition and portrait tracking on the environment video, and obtain a face image of each person in each video frame of the environment video;
the orientation prediction module 203 is configured to input the face images of the persons in each video frame into a preset neural network model to perform face orientation prediction, so as to obtain the face orientations of the persons in each video frame;
a duration determining module 204, configured to determine, according to the face orientation of each person in each video frame of the environment video, an accumulated duration that each person in the environment video faces the service location;
a blurring processing module 205, configured to obtain a person image of each person, and perform blurring processing on a face portion in the person image to obtain a person search image of each person;
and a list sending module 206, configured to arrange the person search images of each person in descending order of the accumulated duration for which the person faces the business place, obtain a potential customer list, and send the potential customer list to the business personnel of the business place.
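The duration determining module 204 above can be sketched as counting, per person, the frames in which the face is oriented toward the business place and converting the count to seconds (the orientation labels, dictionary format, and frame rate are illustrative assumptions):

```python
def accumulated_facing_seconds(orientations_per_frame, fps=25.0):
    """Given, for each video frame, a dict {person_id: orientation},
    return the accumulated seconds each person spent facing the
    business place ("front" marks a face oriented toward it)."""
    seconds = {}
    for frame in orientations_per_frame:
        for person_id, orientation in frame.items():
            if orientation == "front":
                seconds[person_id] = seconds.get(person_id, 0.0) + 1.0 / fps
    return seconds

frames = [
    {"p1": "front", "p2": "left"},
    {"p1": "front", "p2": "front"},
    {"p1": "back"},               # p2 has left the frame
]
totals = accumulated_facing_seconds(frames, fps=1.0)  # 1 fps for readability
print(totals)
```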
In some embodiments, the data processing apparatus of the service site further includes:
a person determination module, configured to determine target persons existing in both the environment video and a previous environment video of the service location based on each person in the environment video and each historical person in the previous environment video;
the duration acquisition module is used for acquiring the historical accumulated duration of the target person facing the service place in the previous environment video;
and the duration updating module is used for updating the accumulated duration of the target person towards the business place according to the historical accumulated duration corresponding to the target person.
In some embodiments, the orientation prediction module 203 comprises:
the orientation prediction unit is used for inputting the face image of the person into a preset neural network model to predict the face orientation so as to obtain predicted values of the face orientation of the person in at least two directions;
and the orientation determining unit is used for determining the face orientation of the person according to the predicted values of the face orientation of the person in at least two directions.
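The two orientation units above can be sketched as taking the argmax over per-direction predicted values (the direction set and score format are assumptions; a real implementation would take the scores from the neural network's output layer):

```python
DIRECTIONS = ("front", "left", "right", "back")

def face_orientation(scores):
    """Determine the face orientation as the direction whose predicted
    value is highest among the at-least-two candidate directions."""
    return max(zip(DIRECTIONS, scores), key=lambda pair: pair[1])[0]

print(face_orientation((0.7, 0.1, 0.15, 0.05)))  # highest score wins
```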
In some embodiments, the data processing apparatus of the service site further includes:
the position determining module is used for determining the positions of the personnel with the accumulated time length meeting the preset time length condition in the surrounding environment of the business place according to the accumulated time length of each personnel facing the business place;
the area division module is used for carrying out area division on the surrounding environment of the business place to obtain at least two distribution areas;
the region determining module is used for determining a target distribution region with the largest number of people from each distribution region according to the positions of the people with the accumulated time length meeting the preset time length condition in the surrounding environment of the business place;
and the first prompting module is used for carrying out highlighting prompting operation on the personnel searching image corresponding to the personnel in the target area in the potential customer list.
In some embodiments, the image acquisition module 202 comprises:
the system comprises a feature extraction unit, a face recognition unit and a face recognition unit, wherein the feature extraction unit is used for extracting face features of an initial video frame in the environment video to obtain a face recognition result of a person, and the face recognition result comprises a face image of the person;
and the human image tracking unit is used for tracking the human images of all video frames behind the initial video frame in the environment video based on the human image recognition result and acquiring the human face images of the same person in all video frames behind the initial video frame.
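One common way to realize such portrait tracking across the frames after the initial frame (an assumption — the original does not name the algorithm) is to match face boxes between consecutive frames by intersection-over-union:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_tracks(prev_boxes, new_boxes, threshold=0.3):
    """Assign each new box the id of the best-overlapping previous box,
    so the same person keeps the same id across video frames."""
    assignments = {}
    for i, nb in enumerate(new_boxes):
        best = max(prev_boxes, key=lambda pid: iou(prev_boxes[pid], nb),
                   default=None)
        if best is not None and iou(prev_boxes[best], nb) >= threshold:
            assignments[i] = best
    return assignments

prev = {"p1": (0, 0, 10, 10), "p2": (50, 50, 60, 60)}
new = [(1, 1, 11, 11), (100, 100, 110, 110)]
print(match_tracks(prev, new))  # box 0 continues track p1; box 1 is unmatched
```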
In some embodiments, the data processing apparatus of the service site further includes:
the characteristic extraction module is used for extracting the characteristics of the personnel search image to obtain personnel identification characteristic information, and the personnel identification characteristic information comprises personnel positions and personnel attribute characteristics;
the list sending module 206 is further specifically configured to:
and arranging the personnel identification characteristic information of each personnel from big to small according to the accumulated time length of each personnel towards the service place to obtain a potential customer list.
In some embodiments, the data processing apparatus of the service site further includes:
and a second prompting module, configured to highlight, in the potential customer list, the person search images corresponding to persons whose accumulated duration of facing the business place exceeds a preset duration threshold.
The embodiment of the present application discloses a data processing apparatus for a business place. The data acquisition module 201 acquires environment data of the surroundings of the business place to obtain an environment video of the business place. The image acquisition module 202 performs portrait recognition and portrait tracking on the environment video and acquires the face image of each person in each video frame. The orientation prediction module 203 inputs the face images of each person in each video frame into a preset neural network model for face orientation prediction to obtain the face orientation of each person in each frame. The duration determining module 204 determines, according to those face orientations, the accumulated duration for which each person in the environment video faces the business place. The blurring processing module 205 acquires the person image of each person and blurs the face portion of the person image to obtain the person search image of each person. The list sending module 206 arranges the person search images of each person in descending order of accumulated duration to obtain a potential customer list and sends it to the business personnel of the business place. In this way, a potential customer list is generated from the accumulated duration for which the faces of people around the business place are oriented toward it, assisting the business personnel in identifying potential customers and improving the accuracy and efficiency of potential customer identification.
The embodiment of the application also provides computer equipment, and the computer equipment can be a terminal. As shown in fig. 3, fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer apparatus 300 includes a processor 301 having one or more processing cores, a memory 302 having one or more computer-readable storage media, and a computer program stored on the memory 302 and executable on the processor. The processor 301 is electrically connected to the memory 302. Those skilled in the art will appreciate that the computer device configurations illustrated in the figures are not meant to be limiting of computer devices and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The processor 301 is the control center of the computer device 300. It connects the various parts of the entire computer device 300 through various interfaces and lines, and performs the various functions of the computer device 300 and processes data by running or loading the software programs and/or modules stored in the memory 302 and invoking the data stored in the memory 302, thereby monitoring the computer device 300 as a whole.
In this embodiment, the processor 301 in the computer device 300 loads instructions corresponding to processes of one or more application programs into the memory 302, and the processor 301 executes the application programs stored in the memory 302 according to the following steps, so as to implement various functions:
acquiring environmental data of the surrounding environment of a service place to obtain an environmental video of the service place;
carrying out portrait recognition and portrait tracking on the environment video to acquire a face image of each person in each video frame of the environment video;
respectively inputting the face images of all the people in each video frame into a preset neural network model for face orientation prediction to obtain the face orientation of all the people in each video frame;
determining the accumulated time length of each person facing the service place in the environment video according to the face orientation of each person in each video frame of the environment video;
acquiring personnel images of all personnel, and carrying out fuzzy processing on the face parts in the personnel images to obtain personnel searching images of all personnel;
and according to the accumulated time length of each person towards the service place, arranging the person searching images of each person from large to small to obtain a potential customer list, and sending the potential customer list to the service person of the service place.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 3, the computer device 300 further includes: a touch display 303, a radio frequency circuit 304, an audio circuit 305, an input unit 306, and a power source 307. The processor 301 is electrically connected to the touch display 303, the radio frequency circuit 304, the audio circuit 305, the input unit 306, and the power source 307. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 3 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The touch display screen 303 may be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 303 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect touch operations of the user on or near it (for example, operations performed on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions that execute corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 301, and can receive and execute commands sent by the processor 301. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 301 to determine the type of the touch event, and the processor 301 then provides a corresponding visual output on the display panel according to the type of the touch event.
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 303 to realize the input and output functions. However, in some embodiments, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display screen 303 may also serve as part of the input unit 306 to implement the input function.
The radio frequency circuit 304 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device, and to exchange signals with that network device or other computer device.
The audio circuit 305 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. On one hand, the audio circuit 305 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 305 and converted into audio data; the audio data is then processed by the processor 301 and, for example, transmitted to another computer device via the radio frequency circuit 304, or output to the memory 302 for further processing. The audio circuit 305 may also include an earphone jack to provide communication between a peripheral headset and the computer device.
The input unit 306 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprints, irises, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 307 is used to power the various components of the computer device 300. Optionally, the power supply 307 may be logically connected to the processor 301 through a power management system, so that charging, discharging, and power consumption management functions are implemented through the power management system. The power supply 307 may also include one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown in fig. 3, the computer device 300 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment acquires environment data of the surroundings of a business place to obtain an environment video of the business place; performs portrait recognition and portrait tracking on the environment video to acquire the face image of each person in each video frame; inputs the face images of each person in each video frame into a preset neural network model for face orientation prediction to obtain the face orientation of each person in each frame; determines, according to those face orientations, the accumulated duration for which each person in the environment video faces the business place; acquires the person image of each person and blurs the face portion of the person image to obtain the person search image of each person; and arranges the person search images in descending order of accumulated duration to obtain a potential customer list, which is sent to the business personnel of the business place.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a plurality of computer programs are stored; the computer programs can be loaded by a processor to execute the steps of any data processing method for a business place provided by the embodiments of the present application. For example, the computer program may perform the following steps:
acquiring environmental data of the surrounding environment of a service place to obtain an environmental video of the service place;
carrying out portrait recognition and portrait tracking on the environment video to acquire a face image of each person in each video frame of the environment video;
respectively inputting the face images of all the people in each video frame into a preset neural network model for face orientation prediction to obtain the face orientation of all the people in each video frame;
determining the accumulated time length of each person facing the service place in the environment video according to the face orientation of each person in each video frame of the environment video;
acquiring personnel images of all personnel, and carrying out fuzzy processing on the face parts in the personnel images to obtain personnel searching images of all personnel;
and arranging the person search images of each person in descending order of the accumulated duration for which the person faces the business place to obtain a potential customer list, and sending the potential customer list to the business personnel of the business place.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein, the storage medium may include: a read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
Since the computer programs stored in the storage medium can execute the steps of any data processing method for a business place provided in the embodiments of the present application, they can achieve the beneficial effects achievable by any such method, which are detailed in the foregoing embodiments and not repeated here.
The data processing method, the data processing device, the storage medium, and the computer device in the service place provided in the embodiments of the present application are described in detail above, and specific examples are applied in this document to explain the principles and embodiments of the present application, and the description of the above embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, the specific implementation manner and the application scope may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method for processing data at a business location, the method comprising:
acquiring environmental data of the surrounding environment of a service place to obtain an environmental video of the service place;
carrying out portrait recognition and portrait tracking on the environment video to acquire a facial image of each person in each video frame of the environment video;
respectively inputting the face images of all the people in each video frame into a preset neural network model for face orientation prediction to obtain the face orientation of all the people in each video frame;
determining the accumulated time length of each person facing the service place in the environment video according to the face orientation of each person in each video frame of the environment video;
acquiring personnel images of all personnel, and carrying out fuzzy processing on human face parts in the personnel images to obtain personnel searching images of all personnel;
and arranging the person search images of each person in descending order of the accumulated duration for which the person faces the business place to obtain a potential customer list, and sending the potential customer list to business personnel of the business place.
2. The method of claim 1, wherein before the person search images of each person are arranged in descending order of the accumulated duration for which the person faces the business place to obtain the potential customer list, the method further comprises:
determining target persons existing in the environment video and the previous environment video based on the persons in the environment video and the historical persons in the previous environment video of the service site;
acquiring historical accumulated time length of a target person facing the service place in the previous environment video;
and updating the accumulated time length of the target person facing the service place according to the historical accumulated time length corresponding to the target person.
3. The method of claim 1, wherein the step of inputting the face images of the people in each video frame into a preset neural network model for face orientation prediction to obtain the face orientation of the people in each video frame comprises:
inputting a face image of a person into a preset neural network model to predict the face orientation, so as to obtain predicted values of the face orientation of the person in at least two directions;
and determining the face orientation of the person according to the predicted values of the face orientation of the person in at least two directions.
4. The method of claim 1, after obtaining the list of potential customers, further comprising:
determining the positions of the personnel with the accumulated time length meeting the preset time length condition in the surrounding environment of the service place according to the accumulated time length of each personnel facing the service place;
performing regional division on the surrounding environment of the business place to obtain at least two distribution regions;
determining a target distribution area with the largest number of people from each distribution area according to the positions of the people with the accumulated time length meeting the preset time length condition in the surrounding environment of the business place;
and carrying out highlighting prompt operation on the person searching image corresponding to the person in the target area in the potential customer list.
5. The method according to claim 1, wherein the performing portrait recognition and portrait tracking on the environment video to obtain facial images of people in each video frame of the environment video comprises:
extracting human image features of an initial video frame in the environment video to obtain human image recognition results of people, wherein the human image recognition results comprise human face images of the people;
and tracking the face of each video frame behind the initial video frame in the environment video based on the face recognition result, and acquiring the face image of the same person in each video frame behind the initial video frame.
6. The method of claim 1, further comprising, after obtaining the person-search image of each person:
extracting features from the person-search image to obtain person identification feature information, the person identification feature information comprising a person position and person attribute features;
wherein arranging the person-search images of the persons in descending order of the accumulated duration for which each person faces the business place, to obtain the potential customer list, comprises:
arranging the person identification feature information of each person in descending order of the accumulated duration for which the person faces the business place, to obtain the potential customer list.
7. The method of any one of claims 1 to 6, further comprising, after obtaining the potential customer list:
performing, according to the accumulated duration for which each person faces the business place, a highlighting prompt operation on the person-search images corresponding to the persons whose accumulated duration exceeds a preset duration threshold in the potential customer list.
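Claims 1 and 7 together amount to a descending sort plus a threshold flag; a minimal sketch (the threshold value and return shape are illustrative assumptions):

```python
def potential_customer_list(durations, threshold=30.0):
    """Rank persons by accumulated facing duration, descending, and
    flag those exceeding the preset threshold for highlighting.

    `durations` maps a person ID to accumulated seconds facing the
    business place; `threshold` (seconds) is an assumed value.
    """
    ranked = sorted(durations.items(), key=lambda kv: kv[1], reverse=True)
    return [(pid, secs, secs > threshold) for pid, secs in ranked]


print(potential_customer_list({"a": 10.0, "b": 45.0}))
# [('b', 45.0, True), ('a', 10.0, False)]
```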
8. A data processing apparatus for a business place, the apparatus comprising:
a data acquisition module, configured to acquire environmental data of the surrounding environment of the business place to obtain an environment video of the business place;
an image acquisition module, configured to perform portrait recognition and portrait tracking on the environment video to obtain a face image of each person in each video frame of the environment video;
an orientation prediction module, configured to input the face image of each person in each video frame into a preset neural network model for face orientation prediction, to obtain the face orientation of each person in each video frame;
a duration determination module, configured to determine, according to the face orientation of each person in each video frame of the environment video, the accumulated duration for which each person in the environment video faces the business place;
a blurring module, configured to acquire a person image of each person and blur the face portion of the person image to obtain a person-search image of each person; and
a list sending module, configured to arrange the person-search images of the persons in descending order of the accumulated duration for which each person faces the business place, to obtain a potential customer list, and to send the potential customer list to service staff of the business place.
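The duration determination module's step can be sketched by counting per-frame orientation labels and converting to seconds; the label name and frame rate below are assumptions:

```python
def facing_duration(frame_orientations, fps=25.0):
    """Accumulate the time one person spends facing the business place.

    `frame_orientations` is the per-frame orientation label sequence
    for a single tracked person; frames labelled "toward" (a
    hypothetical label) count toward the total, divided by the
    assumed frame rate to yield seconds.
    """
    facing_frames = sum(1 for o in frame_orientations if o == "toward")
    return facing_frames / fps


print(facing_duration(["toward"] * 50 + ["away"] * 25))  # 2.0
```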
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the data processing method for a business place of any one of claims 1 to 7.
10. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the data processing method for a business place of any one of claims 1 to 7.
CN202211305598.6A 2022-10-24 2022-10-24 Data processing method and device for business place, computer equipment and storage medium Pending CN115631523A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211305598.6A CN115631523A (en) 2022-10-24 2022-10-24 Data processing method and device for business place, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115631523A true CN115631523A (en) 2023-01-20

Family

ID=84906999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211305598.6A Pending CN115631523A (en) 2022-10-24 2022-10-24 Data processing method and device for business place, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115631523A (en)

Similar Documents

Publication Publication Date Title
US11386698B2 (en) Method and device for sending alarm message
CN111432245B (en) Multimedia information playing control method, device, equipment and storage medium
CN108833262B (en) Session processing method, device, terminal and storage medium
CN113987326A (en) Resource recommendation method and device, computer equipment and medium
CN112818733B (en) Information processing method, device, storage medium and terminal
CN112235629A (en) Bullet screen shielding method and device, computer equipment and storage medium
CN112766406A (en) Article image processing method and device, computer equipment and storage medium
CN116542740A (en) Live broadcasting room commodity recommendation method and device, electronic equipment and readable storage medium
CN111353513B (en) Target crowd screening method, device, terminal and storage medium
CN116307394A (en) Product user experience scoring method, device, medium and equipment
CN115171222A (en) Behavior detection method and device, computer equipment and storage medium
CN116342940A (en) Image approval method, device, medium and equipment
CN115631523A (en) Data processing method and device for business place, computer equipment and storage medium
CN115633195A (en) Data security protection method and device, computer equipment and storage medium
CN114844985A (en) Data quality inspection method, device, equipment and storage medium
CN111143441A (en) Gender determination method, device, equipment and storage medium
CN111243605A (en) Service processing method, device, equipment and storage medium
CN113591958B (en) Method, device and equipment for fusing internet of things data and information network data
CN114140864B (en) Trajectory tracking method and device, storage medium and electronic equipment
CN115798059A (en) Living body detection method and device, computer equipment and storage medium
CN115422517A (en) Identity authentication method, device, medium and equipment based on credit card
CN115205023A (en) Bill data monitoring method, device, medium and equipment
CN118025014A (en) Driving assistance method, device, medium and equipment
CN115002496A (en) Information processing method and device for live broadcast platform, computer equipment and storage medium
CN114428581A (en) Early warning data processing method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination