WO2021128682A1 - Machine learning-based web page rendering method and apparatus, and computer device - Google Patents

Machine learning-based web page rendering method and apparatus, and computer device

Info

Publication number
WO2021128682A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
preset
information
image
linear classifier
Prior art date
Application number
PCT/CN2020/088015
Other languages
French (fr)
Chinese (zh)
Inventor
温桂龙
Original Assignee
深圳壹账通智能科技有限公司
Priority date: 2019-12-23 (per the priority claim to Chinese application 201911342654.1; the priority date is an assumption and is not a legal conclusion)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2021128682A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification

Definitions

  • This application relates to the computer field, and in particular to a web page rendering method, device, computer equipment and storage medium based on machine learning.
  • Traditional web pages (and web page rendering methods) offer only a few fixed-style settings, such as a day mode, a night mode, or a theme mode; the number of such settings is limited, they must be configured manually by the user, and they cannot be adjusted automatically according to the user's characteristics.
  • For the elderly, physiological factors such as reduced eyesight make web pages with a simple layout and a larger font size more suitable.
  • However, web pages rendered with traditional technologies cannot be adjusted automatically according to the characteristics of the user population, resulting in low efficiency for users of those web pages.
  • the main purpose of this application is to provide a web page rendering method, device, computer equipment, and storage medium based on machine learning, aiming to make web page rendering adaptive to special groups of people (such as elderly people) and improve web page usage efficiency.
  • This application proposes a machine learning-based web page rendering method, which includes the following steps:
  • collecting the user's first face image with a preset camera;
  • calculating, according to a preset image similarity calculation method, an image similarity value between the first face image and a prestored designated face image, and determining whether the similarity value is greater than a preset image similarity threshold;
  • if the similarity value is not greater than the preset image similarity threshold, inputting the first face image into a linear classifier in a preset prediction model for calculation, so as to obtain feature information output by the linear classifier, the feature information including at least a predicted age range; the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier, and the linear classifier is trained in advance on sample data comprising facial images and the feature information corresponding to those facial images;
  • inputting the feature information into the non-linear classifier for calculation, so as to obtain web page rendering information composed of multiple pieces of sub-information output by the non-linear classifier, the multiple pieces of sub-information including at least color sub-information, icon style sub-information, and layout style sub-information of the web page;
  • the non-linear classifier is trained in advance on sample data comprising feature information and the multiple pieces of sub-information corresponding to that feature information;
  • sending the multiple pieces of sub-information, together with request information for returning the sub-data packets used for web page rendering, to the corresponding servers; and
  • receiving the sub-data packets correspondingly returned by the servers, combining them into a rendering data packet, and rendering the web page with the rendering data packet.
  • This application provides a web page rendering device based on machine learning, including:
  • the first face image acquisition unit is configured to use a preset camera to collect the user's first face image
  • The image similarity value judgment unit is used to calculate, according to a preset image similarity calculation method, the image similarity value between the first face image and a pre-stored designated face image, and to determine whether the similarity value is greater than the preset image similarity threshold;
  • The feature information acquiring unit is configured to, if the similarity value is not greater than the preset image similarity threshold, input the first face image into the linear classifier in the preset prediction model for calculation, so as to obtain the feature information output by the linear classifier, the feature information including at least a predicted age range; the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier, and the linear classifier is trained in advance on sample data comprising facial images and the feature information corresponding to those facial images;
  • the web page rendering information acquisition unit is configured to input the feature information into the nonlinear classifier for calculation, so as to obtain web page rendering information composed of multiple sub-information output by the nonlinear classifier, wherein the multiple sub-information Including at least color sub-information, icon style sub-information, and layout style sub-information of the webpage; the non-linear classifier is pre-trained through sample data including feature information and multiple sub-information corresponding to the feature information;
  • a sub-information sending unit, configured to send the multiple pieces of sub-information, together with request information for returning the sub-data packets used for web page rendering, to the corresponding servers;
  • the web page rendering unit is configured to receive the multiple sub-data packets correspondingly returned by the multiple servers, combine the multiple sub-data packets into a rendering data packet, and render the web page by using the rendering data packet.
  • the present application provides a computer device including a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above-mentioned machine learning-based webpage rendering method when the computer program is executed.
  • the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the above-mentioned method for rendering a webpage based on machine learning are realized.
  • This application realizes the adaptation of web page rendering to special people (such as the elderly) and improves the efficiency of web page use.
  • FIG. 1 is a schematic flowchart of a webpage rendering method based on machine learning according to an embodiment of the application
  • FIG. 2 is a schematic block diagram of the structure of a web page rendering apparatus based on machine learning according to an embodiment of the application;
  • FIG. 3 is a schematic block diagram of the structure of a computer device according to an embodiment of the application.
  • Referring to FIG. 1, an embodiment of the present application provides a machine learning-based web page rendering method, including the following steps:
  • the feature information includes at least a predicted age range; the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier; the linear classifier is trained in advance on sample data comprising facial images and the feature information corresponding to those facial images;
  • As described in step S1 above, a preset camera is used to collect the user's first face image.
  • This application provides a suitable web page according to the user's actual situation, so as to improve the efficiency with which the user uses the web page. Accordingly, the preset camera is used to collect the user's first face image.
  • The first face image serves as the basis for obtaining user information and is analyzed accurately; a capture sketch follows.
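  • As a concrete illustration of step S1, the sketch below captures one frame from a preset camera with OpenCV and crops the first detected face using the Haar cascade bundled with OpenCV. The camera index, the detector, and the output size are illustrative assumptions, not details taken from this application.

```python
import cv2

def capture_first_face_image(camera_index: int = 0, size=(224, 224)):
    """Capture one frame from the preset camera and return the cropped face, or None."""
    cam = cv2.VideoCapture(camera_index)          # preset camera (index is an assumption)
    ok, frame = cam.read()
    cam.release()
    if not ok:
        return None
    # Face detector bundled with OpenCV; any face detector could be substituted here.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                         # take the first detected face region
    return cv2.resize(frame[y:y + h, x:x + w], size)
```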
  • As described in step S2 above, according to the preset image similarity calculation method, the image similarity value between the first face image and a pre-stored designated face image is calculated, and it is determined whether the similarity value is greater than the preset image similarity threshold. The designated face image can be acquired in any feasible way, for example: acquiring the user's login information (such as a user account or a user ID number), and retrieving the designated face image corresponding to that login information from a preset face image database according to a preset correspondence between login information and face images.
  • One purpose of judging whether the similarity value is greater than the preset image similarity threshold is at least to screen out mismatched users (for example, natural person A logging in with the account of user B), so as to improve the accuracy of the web page rendering of this application.
  • There is one designated face image.
  • The image similarity value is used to measure whether the first face image is similar to the pre-stored designated face image.
  • If the similarity value is high (that is, greater than the preset image similarity threshold), the designated face image is the user's own face image, meaning the user's data has been obtained in advance. There is then no need to analyze the first face image further: the designated data packet corresponding to the designated face image is obtained directly according to the preset correspondence between face images and data packets, where the designated data packet is used to render the web page, and rendering the web page with that packet provides a suitable page.
  • This application pre-collects the face images of multiple users and their customary web page rendering data (i.e. data packets), so as to form the correspondence between face images and data packets; once a user's face image is known, the corresponding data packet can be obtained from that correspondence. If the similarity value is low, the first face image needs to be analyzed further.
  • The image similarity calculation method is, for example: scaling the first face image according to a preset scaling method to obtain a scaled picture of the same size as the designated face image (the same size means the same length and width, which facilitates comparison of the pictures); acquiring multiple first feature lengths from the scaled picture and multiple second feature lengths from the designated face image, where both sets of feature lengths include at least the face length and the face width, and the first feature lengths correspond one-to-one to the second feature lengths; generating a first matrix [U1, U2, ..., Un] and a second matrix [P1, P2, ..., Pn], where U1, U2, ..., Un are the multiple first feature lengths, P1, P2, ..., Pn are the multiple second feature lengths, and there are n first feature lengths and n second feature lengths in total; and calculating the image similarity value M between the first face image and the designated face image according to the formula M = 1/| ||[U1, U2, ..., Un][P1, P2, ..., Pn]^T|| - n |.
  • As described in step S3 above, if the similarity value is not greater than the preset image similarity threshold, the first face image is input into the linear classifier in the preset prediction model for calculation, so as to obtain the feature information output by the linear classifier; the feature information includes at least a predicted age range, and the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier.
  • This application adopts a segmented prediction model, that is, a prediction model formed by sequentially connecting the linear classifier and the preset nonlinear classifier to process the first face image. Since the segmented setting is adopted, it is beneficial to quickly analyze the cause of the error when the prediction model makes an error.
  • the linear classifier can be any feasible model, for example, a support vector machine (linear kernel), a Bayesian classifier, a single-layer perceptron, and so on.
  • the non-linear classifier can be any feasible model, such as a neural network-based classifier, decision tree, etc.
  • As described in step S4 above, the feature information is input into the non-linear classifier for calculation, so as to obtain the web page rendering information output by the non-linear classifier, which is composed of multiple pieces of sub-information including at least the web page's color sub-information, icon style sub-information, and layout style sub-information.
  • the non-linear classifier may adopt a supervised learning method and be trained using pre-collected sample data, the sample data including feature information and multiple sub-information corresponding to the feature information.
  • the non-linear classifier is used to predict appropriate web page rendering information for the user according to the feature information.
  • This application composes the web page rendering information from multiple pieces of sub-information in order to diversify the possible web page settings; a sketch of the two-stage prediction follows.
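  • The sketch below is a hedged illustration of the segmented prediction model: a linear classifier (a linear SVM trained with hinge loss, one of the options mentioned above) maps face-image features to a predicted age range, and a non-linear classifier (a decision tree, also mentioned above) maps that feature information to the color, icon style, and layout style sub-information. The feature dimensions, label encodings, and random training data are placeholders, not the application's actual data.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier     # linear classifier (hinge loss = linear SVM)
from sklearn.tree import DecisionTreeClassifier    # non-linear classifier

rng = np.random.default_rng(0)

# Stage 1: linear classifier, face-image features -> predicted age range.
face_features = rng.normal(size=(200, 128))        # stand-in for flattened face images
age_ranges = rng.integers(0, 4, size=200)          # e.g. 0: <18, 1: 18-40, 2: 40-60, 3: 60+
linear_clf = SGDClassifier(loss="hinge").fit(face_features, age_ranges)

# Stage 2: non-linear classifier, feature information -> web page sub-information.
# Each target column is one piece of sub-information: color, icon style, layout style.
feature_info = age_ranges.reshape(-1, 1)
sub_info = np.column_stack([rng.integers(0, 3, size=200) for _ in range(3)])
nonlinear_clf = DecisionTreeClassifier().fit(feature_info, sub_info)

# Inference: the two classifiers are connected sequentially.
new_face = rng.normal(size=(1, 128))
predicted_age_range = linear_clf.predict(new_face)
color, icon_style, layout_style = nonlinear_clf.predict(
    predicted_age_range.reshape(-1, 1))[0]
```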
  • As described in step S5, the multiple pieces of sub-information, together with request information asking for the return of the sub-data packets used for web page rendering, are sent to the corresponding servers.
  • This application uses multiple servers to provide the sub-data packets separately, so that the data required for web page rendering is distributed across different servers. Since each server only needs to provide sub-data packets, the computational pressure is relieved, and the number of available sub-data packets can grow by an order of magnitude (each server can focus on a certain kind of sub-data packet, so the number of sub-data packets increases, and the variety of web pages that can be rendered from their combinations also increases greatly).
  • the sub-data package contains data required to render a webpage, such as webpage parameters, pictures of icons, and so on.
  • The web page is generated by staged rendering, for example rendering of the sidebar of the web page, rendering of the top menu bar, and rendering of the icons of the whole page. Staged rendering only means that rendering is carried out item by item; it does not prescribe a rendering order. For example, the rendering of the sidebar and the rendering of the top menu bar can be performed at the same time or at different times. Each sub-data packet is used for only one microstructure (item) of the rendered web page, where a microstructure is the smallest unit that can be rendered separately, such as the sidebar.
  • As described in step S6, the multiple sub-data packets correspondingly returned by the multiple servers are received, the sub-data packets are combined into a rendering data packet, and the web page is rendered using the rendering data packet.
  • Rendering a web page requires such a rendering data packet: the computer extracts data from it when rendering the web page, and then renders the web page, as illustrated by the sketch below.
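  • The sketch below illustrates steps S5 and S6 under stated assumptions: each piece of sub-information is posted to its own server, each server answers with a sub-data packet as JSON, and combining is reduced to merging the packets into one dictionary. The URLs, the packet format, and the merge rule are placeholders rather than the protocol of this application.

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib import request

# Hypothetical mapping: each piece of sub-information has its own server (placeholder URLs).
SUB_INFO_SERVERS = {
    "color": "http://color-server.example/render-packet",
    "icon_style": "http://icon-server.example/render-packet",
    "layout_style": "http://layout-server.example/render-packet",
}

def request_sub_packet(name: str, sub_info: dict) -> dict:
    """Send one piece of sub-information and ask the server to return its sub-data packet."""
    body = json.dumps({"sub_information": sub_info, "return_sub_packet": True}).encode()
    req = request.Request(SUB_INFO_SERVERS[name], data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

def build_rendering_packet(sub_infos: dict) -> dict:
    """Fetch all sub-data packets in parallel and combine them into one rendering data packet."""
    with ThreadPoolExecutor() as pool:
        packets = pool.map(lambda item: request_sub_packet(*item), sub_infos.items())
    rendering_packet = {}
    for packet in packets:
        rendering_packet.update(packet)   # each sub-data packet covers one microstructure
    return rendering_packet
```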
  • the step S2 of calculating the image similarity value between the first face image and a pre-stored designated face image according to a preset image similarity calculation method includes:
  • S201 Perform scaling processing on the first face image according to a preset scaling method to obtain a scaled picture, wherein the eyebrow and eye spacing of the scaled picture is equal to the eyebrow and eye spacing of the designated face image;
  • S202 Acquire multiple first feature lengths from the scaled picture, and acquire multiple second feature lengths from the designated face image, where both the first feature lengths and the second feature lengths include at least the face length and the face width, and the first feature lengths correspond one-to-one to the second feature lengths;
  • Eyebrow spacing refers to the distance between a person's eyebrows and eyes. Different people may have different eyebrow spacing.
  • This application scales the first face image to obtain a scaled picture whose eyebrow-eye spacing equals that of the designated face image, so that the feature lengths of the first face image can be compared with those of the pre-stored designated face image.
  • When the two images are similar, the value of ||[U1, U2, ..., Un][P1, P2, ..., Pn]^T|| will be close to n, so the value of M will be close to infinity; the image similarity value M can therefore be used as a standard for measuring image similarity. A calculation sketch follows.
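  • A minimal sketch of the similarity value M defined above, assuming the first and second feature lengths have already been measured and paired one-to-one; the example lengths and the threshold are placeholders only.

```python
import numpy as np

def image_similarity(first_lengths, second_lengths):
    """M = 1 / | ||[U1..Un][P1..Pn]^T|| - n |, the formula given in the method."""
    u = np.asarray(first_lengths, dtype=float)    # n first feature lengths (scaled picture)
    p = np.asarray(second_lengths, dtype=float)   # n second feature lengths (designated image)
    n = len(u)
    product = abs(float(u @ p))                   # norm of the 1x1 product [U][P]^T
    denom = abs(product - n)
    return float("inf") if denom == 0 else 1.0 / denom

# Decision against a preset threshold (placeholder value and placeholder feature lengths):
SIMILARITY_THRESHOLD = 10.0
m = image_similarity([1.02, 0.98, 1.01], [1.0, 1.0, 1.0])
needs_prediction_model = m <= SIMILARITY_THRESHOLD   # steps S3 and S4 run only in this branch
```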
  • The step S201 of scaling the first face image according to the preset scaling method to obtain a scaled picture includes:
  • S2011 Use a preset facial feature point detection model to detect the first face image, so as to obtain multiple first facial feature points; and use the facial feature point detection model to detect the designated face image, so as to obtain multiple second facial feature points;
  • S2012 Generate a first minimum circumscribed rectangle of the multiple first facial feature points, and generate a second minimum circumscribed rectangle of the multiple second facial feature points;
  • S2013 Scale the first face image proportionally so that the area of the first minimum circumscribed rectangle is equal to the area of the second minimum circumscribed rectangle, thereby obtaining the scaled picture.
  • As described above, the first face image is scaled according to the preset scaling method to obtain the scaled picture.
  • Accurate scaling is a prerequisite for the image similarity calculation and is extremely important.
  • This application uses a preset facial feature point detection model to detect the first face image, obtaining multiple first facial feature points, and uses the facial feature point detection model to detect the designated face image, obtaining multiple second facial feature points; it then generates the first minimum circumscribed rectangle of the first facial feature points and the second minimum circumscribed rectangle of the second facial feature points, and scales the first face image proportionally so that the area of the first minimum circumscribed rectangle equals the area of the second minimum circumscribed rectangle, which improves the accuracy of the scaling.
  • The facial feature point detection model may be any model, for example a model trained on the basis of a neural network.
  • The facial feature point detection model is obtained, for example, in the following manner: calling sample data and dividing the sample data into training data and test data, where the sample data is composed only of face images placed in a standard posture.
  • the facial feature point is, for example, the midpoint of the pupil.
  • The present application generates the first minimum circumscribed rectangle of the multiple first facial feature points and the second minimum circumscribed rectangle of the multiple second facial feature points, thereby using all of the facial feature points; the first face image is then scaled proportionally so that the area of the first minimum circumscribed rectangle equals the area of the second minimum circumscribed rectangle, yielding the scaled picture and achieving accurate scaling. A sketch of this scaling follows.
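  • A minimal sketch of the proportional scaling in steps S2011 to S2013, assuming the facial feature points of both images have already been detected by some landmark model and using an axis-aligned bounding rectangle as the minimum circumscribed rectangle for simplicity; the equal scale factor is chosen so that the two rectangle areas become equal.

```python
import math
import cv2
import numpy as np

def scale_to_matching_rect_area(first_image, first_points, second_points):
    """Scale first_image proportionally so its landmark bounding-rect area matches the second's."""
    pts1 = np.asarray(first_points, dtype=np.float32)   # first facial feature points
    pts2 = np.asarray(second_points, dtype=np.float32)  # second facial feature points
    _, _, w1, h1 = cv2.boundingRect(pts1)                # first minimum circumscribed rectangle
    _, _, w2, h2 = cv2.boundingRect(pts2)                # second minimum circumscribed rectangle
    # Equal scaling in both directions so that (w1 * s) * (h1 * s) == w2 * h2.
    s = math.sqrt((w2 * h2) / (w1 * h1))
    new_size = (round(first_image.shape[1] * s), round(first_image.shape[0] * s))
    return cv2.resize(first_image, new_size)
```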
  • In a case where the facial feature point detection model is trained only on face images placed in a preset standard posture and the designated face image is placed in the standard posture, the step S2011 of using the preset facial feature point detection model to detect the first face image to obtain multiple first facial feature points, and using the facial feature point detection model to detect the designated face image to obtain multiple second facial feature points, includes:
  • S20112. Determine whether the absolute value of the difference between the first quantity and the second quantity is less than a preset difference threshold.
  • The first face image is detected using the preset facial feature point detection model to obtain multiple first facial feature points, and the facial feature point detection model is used to detect the designated face image to obtain multiple second facial feature points.
  • The facial feature point detection model used in this application is trained only on face images placed in a preset standard posture, so that it can be trained in a short time (only face images in the standard posture are considered, not other poses, which effectively reduces the number of training samples); correspondingly, the model can only detect face images placed in the standard posture, so the first face image needs to be rotated into the standard posture.
  • The facial feature point detection model may be any model, for example a model trained on the basis of a neural network.
  • The facial feature point detection model is obtained, for example, in the following manner: calling sample data and dividing the sample data into training data and test data, where the sample data is composed only of face images placed in a standard posture.
  • the rotation processing may be any feasible rotation processing, for example, taking the center of the face image as the rotation center, rotating clockwise or counterclockwise by a preset angle.
  • the preset angle is, for example, 1-180 degrees, preferably 90 degrees.
  • If the absolute value of the difference between the first quantity and the second quantity is not less than the preset difference threshold, the first face image is rotated one or more times; the rotated images are detected in turn using the facial feature point detection model until the absolute value of the difference between the number of detected facial feature points and the second quantity is less than the preset difference threshold, and the facial feature points last output by the facial feature point detection model are recorded as the first facial feature points. The first facial feature points are thereby obtained, and the facial feature point detection model is used to detect the designated face image to generate the second facial feature points. On the basis of still achieving facial feature point extraction, this reduces the training time of the facial feature point detection model; a sketch of the rotate-and-redetect loop follows.
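  • A sketch of the rotate-and-redetect loop described above. `detect_feature_points` is a stand-in for the preset facial feature point detection model (assumed to be trained only on standard-posture faces), the 90-degree step follows the preferred angle mentioned above, and the difference threshold is a placeholder.

```python
import cv2

PRESET_ANGLE = 90          # preferred rotation step mentioned above
DIFF_THRESHOLD = 3         # preset difference threshold (placeholder value)

def detect_feature_points(image):
    """Stand-in for the preset facial feature point detection model."""
    raise NotImplementedError("plug in a landmark detector trained on standard-posture faces")

def rotate(image, angle):
    """Rotate the image around its center by the given angle."""
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(image, m, (w, h))

def first_and_second_feature_points(first_image, designated_image, max_turns=4):
    second_points = detect_feature_points(designated_image)   # second facial feature points
    image = first_image
    for _ in range(max_turns):
        first_points = detect_feature_points(image)
        if abs(len(first_points) - len(second_points)) < DIFF_THRESHOLD:
            return first_points, second_points                # counts close enough: accept
        image = rotate(image, PRESET_ANGLE)                   # otherwise rotate and redetect
    raise RuntimeError("no rotation produced a standard-posture detection")
```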
  • In an embodiment, after the step S2 of calculating the image similarity value between the first face image and the pre-stored designated face image according to the preset image similarity calculation method and determining whether the similarity value is greater than the preset image similarity threshold, the method includes: if the similarity value is greater than the preset image similarity threshold, acquiring, according to the preset correspondence between face images and data packets, the designated data packet corresponding to the designated face image, where the designated data packet is used to render the web page; and rendering the web page with the designated data packet.
  • Rendering the web page with the designated data packet is thereby realized. If the similarity value is greater than the preset image similarity threshold, the first face image is similar to the designated face image, that is, the designated face image can represent the user to whom the first face image belongs; it is therefore sufficient to use the designated data packet associated with the designated face image directly.
  • By pre-storing the designated face image, this application saves the user's information in advance, so that the prediction model only needs to process the first face image the first time a web page is rendered for that user; on subsequent renderings, the pre-stored designated data packet can be called directly, which reduces unnecessary computation and at the same time improves the rendering speed of the web page. A lookup sketch follows.
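  • A short sketch of this shortcut, assuming the pre-stored correspondence is keyed by a face-image identifier (for example, derived from the login information); the packet contents are placeholders.

```python
# Hypothetical pre-stored correspondence: face-image identifier -> designated data packet.
FACE_TO_PACKET = {
    "user_b_face": {"color": "high-contrast", "icon_style": "large", "layout_style": "simple"},
}

def rendering_packet_for(face_id: str, similarity: float, threshold: float, predict_packet):
    """Use the designated data packet when the face matches; otherwise fall back to the model."""
    if similarity > threshold and face_id in FACE_TO_PACKET:
        return FACE_TO_PACKET[face_id]      # no model inference needed on repeat visits
    return predict_packet(face_id)          # steps S3 and S4 (prediction model)
```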
  • In an embodiment, before the step S3 of inputting the first face image into the linear classifier in the preset prediction model for calculation when the similarity value is not greater than the preset image similarity threshold, so as to obtain the feature information output by the linear classifier (the feature information including at least a predicted age range, and the prediction model being formed by sequentially connecting the linear classifier and the preset non-linear classifier), the method includes: acquiring pre-collected training data and test data, both of which include facial images and the feature information corresponding to those facial images; training a preset initial linear classifier on the training data with a stochastic gradient descent method to obtain an intermediate linear classifier; verifying the intermediate linear classifier with the test data and determining whether it passes the verification; and, if the verification passes, using the intermediate linear classifier as the final linear classifier.
  • the linear classifier can be any feasible model, for example, a support vector machine (linear kernel), a Bayesian classifier, a single-layer perceptron, and so on.
  • This application uses a supervised learning method to train the linear classifier, and uses training data and test data that both include a facial image and feature information corresponding to the facial image, which are used for training and verification, respectively.
  • The stochastic gradient descent method is used to train a preset initial linear classifier on the training data, yielding an intermediate linear classifier.
  • The stochastic gradient descent method refers to randomly selecting a subset of the training data in place of all of the training data at each training step, so as to improve the training speed.
  • The test data is then used to verify the intermediate linear classifier. If the intermediate linear classifier passes the verification, it is usable, and it is therefore taken as the final linear classifier, as sketched below.
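  • A hedged sketch of this training procedure: a preset initial linear classifier is updated by stochastic gradient descent on randomly selected mini-batches of the training data, then verified on the test data, and accepted as the final linear classifier only if it passes; the feature dimensions, batch size, iteration count, and pass criterion are all assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Stand-ins for the pre-collected data: facial-image feature vectors and age-range labels.
X_train, y_train = rng.normal(size=(800, 128)), rng.integers(0, 4, size=800)
X_test, y_test = rng.normal(size=(200, 128)), rng.integers(0, 4, size=200)

classes = np.unique(y_train)
intermediate = SGDClassifier(loss="hinge")      # preset initial linear classifier

# Stochastic gradient descent: each step uses a random subset instead of all training data.
for _ in range(50):
    idx = rng.choice(len(X_train), size=32, replace=False)
    intermediate.partial_fit(X_train[idx], y_train[idx], classes=classes)

# Verification with the test data; the pass threshold is a placeholder.
if intermediate.score(X_test, y_test) >= 0.8:
    linear_classifier = intermediate            # verification passed: use as the final classifier
else:
    linear_classifier = None                    # verification failed: retrain or adjust
```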
  • the web page rendering method is applied to a rendering terminal, the rendering terminal and the multiple servers are connected to each other through a gateway, and the step S5 of correspondingly sending the multiple sub-information to the multiple servers, include:
  • S501 Encrypt the multiple pieces of sub-information separately, using a first encryption method agreed in advance with the multiple servers respectively, so as to obtain multiple first sub-ciphertexts;
  • S502 Combine the multiple first sub-ciphertexts into an intermediate ciphertext;
  • S503 Encrypt the intermediate ciphertext using a second encryption method agreed in advance with the gateway, so as to obtain a final ciphertext;
  • S504 Send the final ciphertext to the gateway, and request the gateway to decrypt and split the final ciphertext into the multiple first sub-ciphertexts and send them correspondingly to the multiple servers.
  • Both the first encryption method and the second encryption method may be any feasible encryption method, including but not limited to symmetric encryption and asymmetric encryption.
  • The rendering terminal and the multiple servers respectively agree on the first encryption method in advance, so the multiple servers can decrypt the first sub-ciphertexts; the rendering terminal and the gateway agree on the second encryption method in advance, so the gateway can decrypt the final ciphertext.
  • In this way the gateway cannot learn the content of the sub-ciphertexts, and each server can decrypt only its corresponding first sub-ciphertext; a sketch of the layered encryption follows.
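  • A sketch of the two-layer protection, assuming symmetric keys throughout: each piece of sub-information is first encrypted with a key agreed with its target server (first encryption method), the first sub-ciphertexts are combined into an intermediate ciphertext, and that is encrypted again with a key agreed with the gateway (second encryption method). Fernet from the `cryptography` package stands in for both agreed methods; key distribution and the gateway's forwarding protocol are outside this sketch.

```python
import json
from cryptography.fernet import Fernet

# Keys agreed in advance: one per server (first method) and one with the gateway (second method).
SERVER_KEYS = {"color": Fernet.generate_key(),
               "icon_style": Fernet.generate_key(),
               "layout_style": Fernet.generate_key()}
GATEWAY_KEY = Fernet.generate_key()

def encrypt_for_gateway(sub_infos: dict) -> bytes:
    # First encryption: each piece of sub-information with the key agreed with its server.
    first_sub_ciphertexts = {
        name: Fernet(SERVER_KEYS[name]).encrypt(json.dumps(info).encode()).decode()
        for name, info in sub_infos.items()
    }
    # Combine into an intermediate ciphertext, then apply the second encryption for the gateway.
    intermediate = json.dumps(first_sub_ciphertexts).encode()
    return Fernet(GATEWAY_KEY).encrypt(intermediate)

# The gateway can decrypt only the outer layer, split it, and forward each inner ciphertext.
final_ciphertext = encrypt_for_gateway({"color": {"scheme": "high-contrast"}})
split_sub_ciphertexts = json.loads(Fernet(GATEWAY_KEY).decrypt(final_ciphertext))
```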
  • This application realizes the adaptation of web page rendering to special people (such as the elderly) and improves the efficiency of web page use.
  • an embodiment of the present application provides a web page rendering device based on machine learning, including:
  • the first facial image acquisition unit 10 is configured to use a preset camera to acquire a user's first facial image
  • The image similarity value judgment unit 20 is configured to calculate, according to a preset image similarity calculation method, the image similarity value between the first face image and a pre-stored designated face image, and to determine whether the similarity value is greater than the preset image similarity threshold;
  • The feature information acquiring unit 30 is configured to, if the similarity value is not greater than a preset image similarity threshold, input the first face image into a linear classifier in a preset prediction model for calculation, thereby obtaining the feature information output by the linear classifier, the feature information including at least a predicted age range; the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier, and the linear classifier is pre-trained through sample data including facial images and the feature information corresponding to the facial images;
  • The web page rendering information acquisition unit 40 is configured to input the feature information into the non-linear classifier for calculation, thereby obtaining web page rendering information composed of multiple pieces of sub-information output by the non-linear classifier, the multiple pieces of sub-information including at least color sub-information, icon style sub-information, and layout style sub-information of the web page;
  • the non-linear classifier is pre-trained through sample data including feature information and multiple sub-information corresponding to the feature information;
  • the sub-information sending unit 50 is configured to send the multiple sub-information and return the request information of the sub-data package used for web page rendering to multiple corresponding servers;
  • the web page rendering unit 60 is configured to receive the multiple sub-data packets correspondingly returned by the multiple servers, combine the multiple sub-data packets into a rendering data packet, and render the web page using the rendering data packet.
  • the image similarity value judgment unit 20 includes:
  • a zooming subunit configured to perform zooming processing on the first face image according to a preset zooming method to obtain a zoomed picture, wherein the eyebrow and eye spacing of the zoomed picture is equal to the eyebrow and eye spacing of the designated face image;
  • The feature length acquisition subunit is used to acquire multiple first feature lengths from the scaled picture and multiple second feature lengths from the designated face image, where both the first feature lengths and the second feature lengths include at least the face length and the face width, and the first feature lengths correspond one-to-one to the second feature lengths;
  • The matrix generation subunit is used to generate the first matrix [U1, U2, ..., Un] and the second matrix [P1, P2, ..., Pn], where U1, U2, ..., Un are the multiple first feature lengths, P1, P2, ..., Pn are the multiple second feature lengths, and there are n first feature lengths and n second feature lengths in total;
  • the zoom subunit includes:
  • The facial feature point detection module is configured to use a preset facial feature point detection model to detect the first face image, so as to obtain multiple first facial feature points, and to use the facial feature point detection model to detect the designated face image, so as to obtain multiple second facial feature points;
  • a circumscribed rectangle generating module configured to generate a first minimum circumscribed rectangle of the plurality of first facial feature points, and generate a second minimum circumscribed rectangle of the plurality of second facial feature points;
  • the scaling module is configured to perform scaling processing on the first face image by adopting an equal scaling method, so that the area of the first minimum circumscribed rectangle is equal to the area of the second minimum circumscribed rectangle, thereby obtaining a scaled picture.
  • In a case where the facial feature point detection model is trained only on face images placed in a preset standard posture and the designated face image is placed in the standard posture, the facial feature point detection module includes:
  • a quantity detection sub-module, configured to detect the first face image using the preset facial feature point detection model to obtain a first quantity, and to detect the designated face image using the facial feature point detection model to obtain a second quantity, where the first quantity is the number of facial feature points in the first face image and the second quantity is the number of facial feature points in the designated face image;
  • a difference threshold judging sub-module, used to judge whether the absolute value of the difference of the first quantity minus the second quantity is less than a preset difference threshold;
  • a rotation sub-module, configured to perform rotation processing on the first face image one or more times if the absolute value of the difference between the first quantity and the second quantity is not less than the preset difference threshold, so as to obtain one or more rotated images;
  • a first facial feature point marking sub-module, used to detect the rotated images in turn with the facial feature point detection model until the absolute value of the difference between the number of detected facial feature points and the second quantity is less than the preset difference threshold, and to record the facial feature points last output by the facial feature point detection model as the first facial feature points;
  • a feature point acquisition sub-module, configured to acquire the first facial feature points, and to use the facial feature point detection model to detect the designated face image, thereby generating the second facial feature points.
  • the device includes:
  • a designated data packet acquisition unit, configured to, if the similarity value is greater than a preset image similarity threshold, acquire the designated data packet corresponding to the designated face image according to the preset correspondence between face images and data packets, where the designated data packet is used to render a web page;
  • the designated data packet rendering unit is used to render the web page using the designated data packet.
  • the device includes:
  • a data acquisition unit for acquiring pre-collected training data and test data, where the training data and test data both include facial images and feature information corresponding to the facial images;
  • An intermediate linear classifier acquiring unit configured to adopt a stochastic gradient descent method and use the training data to train a preset initial linear classifier to obtain an intermediate linear classifier;
  • a verification unit configured to verify the intermediate linear classifier using the test data, and determine whether the intermediate linear classifier passes the verification
  • the linear classifier marking unit is configured to use the intermediate linear classifier as the final linear classifier if the verification of the intermediate linear classifier passes.
  • the web page rendering method is applied to a rendering terminal, the rendering terminal and the multiple servers are connected to each other through a gateway, and the sub-information sending unit 50 includes:
  • the first encryption subunit is configured to separately encrypt the plurality of sub-information by adopting the first encryption method respectively agreed with the multiple servers to obtain multiple first sub-ciphertexts;
  • the second encryption subunit is used to encrypt the intermediate ciphertext by adopting the second encryption method agreed with the gateway in advance, so as to obtain the final ciphertext;
  • The final ciphertext sending subunit is used to send the final ciphertext to the gateway, and to request the gateway to decrypt and split the final ciphertext into the multiple first sub-ciphertexts and send them correspondingly to the multiple servers.
  • the webpage rendering device based on machine learning of the present application realizes webpage rendering adaptive to special people (for example, elderly people), and improves webpage usage efficiency.
  • an embodiment of the present application also provides a computer device.
  • the computer device may be a server, and its internal structure may be as shown in the figure.
  • The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium.
  • the database of the computer device is used to store data used in the web page rendering method based on machine learning.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program is executed by the processor to realize a web page rendering method based on machine learning.
  • the above-mentioned processor executes the above-mentioned machine learning-based webpage rendering method, wherein the steps included in the method respectively correspond to the steps of executing the machine learning-based webpage rendering method of the aforementioned embodiment, and will not be repeated here.
  • An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, a machine learning-based web page rendering method is implemented, and the steps of that method correspond one-to-one to the steps of the machine learning-based web page rendering method of the foregoing embodiment, which are not repeated here.
  • the computer-readable storage medium may be non-volatile or volatile.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Disclosed by the present application are a machine learning-based web page rendering method and apparatus, a computer device and a storage medium. The method comprises: collecting a first face image of a user by using a preset camera; calculating an image similarity value between the first face image and a specified face image; if the similarity value is not greater than a preset image similarity threshold, then inputting the first face image into a linear classifier in a preset predictive model for calculation to obtain feature information outputted by the linear classifier; inputting the feature information into a non-linear classifier for calculation to obtain web page rendering information composed of multiple pieces of sub-information; correspondingly sending the multiple pieces of sub-information to a plurality of servers; and receiving a plurality of sub-data packets correspondingly returned by the plurality of servers and combining same into a rendering data packet, and using the rendering data packet to render a web page. Therefore, web page rendering is adaptive to special groups of people (such as elderly people), and the efficiency of web page usage is increased.

Description

Machine learning-based web page rendering method, apparatus, and computer device
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 23, 2019, with application number 201911342654.1 and entitled "Machine Learning-based Web Page Rendering Method, Apparatus, and Computer Equipment", the entire content of which is incorporated herein by reference.
Technical field
This application relates to the computer field, and in particular to a machine learning-based web page rendering method, apparatus, computer device, and storage medium.
Background
With the rapid development of the Internet industry, people are now accustomed to browsing web pages on the Internet using computer equipment. The inventor realized that traditional web pages (and web page rendering methods) offer only a few fixed-style settings, such as a day mode, a night mode, or a theme mode; the number of such settings is limited, they must be configured manually by the user, and they cannot be adjusted automatically according to the user's characteristics. In addition, for the elderly, physiological factors such as reduced eyesight make web pages with a simple layout and a larger font size more suitable; however, web pages rendered with traditional technologies cannot be adjusted automatically according to the characteristics of the user population, resulting in low efficiency for users of those web pages.
Technical problem
The main purpose of this application is to provide a machine learning-based web page rendering method, apparatus, computer device, and storage medium, aiming to make web page rendering adapt to special groups of users (such as elderly people) and to improve web page usage efficiency.
Technical solution
In order to achieve the above purpose, this application proposes a machine learning-based web page rendering method, which includes the following steps:
collecting the user's first face image with a preset camera;
calculating, according to a preset image similarity calculation method, an image similarity value between the first face image and a prestored designated face image, and determining whether the similarity value is greater than a preset image similarity threshold;
if the similarity value is not greater than the preset image similarity threshold, inputting the first face image into a linear classifier in a preset prediction model for calculation, so as to obtain feature information output by the linear classifier, the feature information including at least a predicted age range; the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier, and the linear classifier is trained in advance on sample data comprising facial images and the feature information corresponding to those facial images;
inputting the feature information into the non-linear classifier for calculation, so as to obtain web page rendering information composed of multiple pieces of sub-information output by the non-linear classifier, the multiple pieces of sub-information including at least color sub-information, icon style sub-information, and layout style sub-information of the web page; the non-linear classifier is trained in advance on sample data comprising feature information and the multiple pieces of sub-information corresponding to that feature information;
sending the multiple pieces of sub-information, together with request information for returning the sub-data packets used for web page rendering, to the corresponding servers;
receiving the sub-data packets correspondingly returned by the servers, combining the sub-data packets into a rendering data packet, and rendering the web page with the rendering data packet.
This application provides a machine learning-based web page rendering apparatus, including:
a first face image acquisition unit, configured to collect the user's first face image with a preset camera;
an image similarity value judgment unit, configured to calculate, according to a preset image similarity calculation method, the image similarity value between the first face image and a prestored designated face image, and to determine whether the similarity value is greater than a preset image similarity threshold;
a feature information acquisition unit, configured to, if the similarity value is not greater than the preset image similarity threshold, input the first face image into the linear classifier in the preset prediction model for calculation, so as to obtain the feature information output by the linear classifier, the feature information including at least a predicted age range; the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier, and the linear classifier is trained in advance on sample data comprising facial images and the feature information corresponding to those facial images;
a web page rendering information acquisition unit, configured to input the feature information into the non-linear classifier for calculation, so as to obtain web page rendering information composed of multiple pieces of sub-information output by the non-linear classifier, the multiple pieces of sub-information including at least color sub-information, icon style sub-information, and layout style sub-information of the web page; the non-linear classifier is trained in advance on sample data comprising feature information and the multiple pieces of sub-information corresponding to that feature information;
a sub-information sending unit, configured to send the multiple pieces of sub-information, together with request information for returning the sub-data packets used for web page rendering, to the corresponding servers;
a web page rendering unit, configured to receive the sub-data packets correspondingly returned by the servers, combine the sub-data packets into a rendering data packet, and render the web page with the rendering data packet.
This application provides a computer device including a memory and a processor; the memory stores a computer program, and the processor implements the steps of the above machine learning-based web page rendering method when executing the computer program.
This application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above machine learning-based web page rendering method are realized.
Beneficial effects
This application makes web page rendering adapt to special groups of users (such as elderly people) and improves web page usage efficiency.
Description of the drawings
FIG. 1 is a schematic flowchart of a machine learning-based web page rendering method according to an embodiment of this application;
FIG. 2 is a schematic structural block diagram of a machine learning-based web page rendering apparatus according to an embodiment of this application;
FIG. 3 is a schematic structural block diagram of a computer device according to an embodiment of this application.
The realization, functional characteristics, and advantages of the purpose of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Best mode of the present invention
In order to make the purpose, technical solutions, and advantages of this application clearer, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
Referring to FIG. 1, an embodiment of the present application provides a machine learning-based web page rendering method, including the following steps:
S1. Collect the user's first face image with a preset camera;
S2. Calculate, according to a preset image similarity calculation method, the image similarity value between the first face image and a prestored designated face image, and determine whether the similarity value is greater than a preset image similarity threshold;
S3. If the similarity value is not greater than the preset image similarity threshold, input the first face image into the linear classifier in the preset prediction model for calculation, so as to obtain the feature information output by the linear classifier, the feature information including at least a predicted age range; the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier, and the linear classifier is trained in advance on sample data comprising facial images and the feature information corresponding to those facial images;
S4. Input the feature information into the non-linear classifier for calculation, so as to obtain the web page rendering information composed of multiple pieces of sub-information output by the non-linear classifier, the multiple pieces of sub-information including at least color sub-information, icon style sub-information, and layout style sub-information of the web page; the non-linear classifier is trained in advance on sample data comprising feature information and the multiple pieces of sub-information corresponding to that feature information;
S5. Send the multiple pieces of sub-information, together with request information for returning the sub-data packets used for web page rendering, to the corresponding servers;
S6. Receive the sub-data packets correspondingly returned by the servers, combine the sub-data packets into a rendering data packet, and render the web page with the rendering data packet.
As described in step S1 above, a preset camera is used to collect the user's first face image. This application provides a suitable web page according to the user's actual situation, so as to improve the efficiency with which the user uses the web page; accordingly, the preset camera is used to collect the user's first face image, which serves as the basis for obtaining user information and is analyzed accurately.
As described in step S2 above, the image similarity value between the first face image and one pre-stored designated face image is calculated according to a preset image similarity calculation method, and whether the similarity value is greater than a preset image similarity threshold is judged. The designated face image may be obtained in any feasible way, for example: the user's login information (such as a user account or ID number) is acquired, and the designated face image corresponding to that login information is retrieved from a preset face image database according to a preset correspondence between login information and face images. One purpose of judging whether the similarity value exceeds the preset image similarity threshold is at least to screen out mismatched users (for example, natural person A logging in with user B's account), thereby improving the accuracy of the web page rendering of this application. The number of designated face images is one. The image similarity value measures whether the first face image is similar to the pre-stored designated face image. If the similarity value is high (that is, greater than the preset image similarity threshold), the designated face image is the user's face image, meaning the user's data has already been obtained in advance; there is then no need to analyze the first face image further, and the designated data packet corresponding to the designated face image is obtained directly according to the preset correspondence between face images and data packets, the designated data packet being used to render the web page, so that rendering the web page with it yields a suitable page. The present application pre-collects the face images of multiple users and their customary web page rendering data (that is, data packets), so that a correspondence between face images and data packets can be formed; once a user's face image is known, the corresponding data packet can be obtained from this correspondence. If the similarity value is low, the first face image needs further analysis.
The image similarity calculation method is, for example: scale the first face image according to a preset scaling method to obtain a scaled picture of the same size as the designated face image (same size meaning the same length and width, which facilitates comparison of the pictures); acquire multiple first feature lengths from the scaled picture and multiple second feature lengths from the designated face image, where both sets include at least the face length and the face width and the first feature lengths correspond one to one to the second feature lengths; generate a first matrix [U1, U2, ..., Un] and a second matrix [P1, P2, ..., Pn], where U1, U2, ..., Un are the n first feature lengths and P1, P2, ..., Pn are the n second feature lengths; and calculate the image similarity value M between the first face image and the designated face image according to the formula M = 1/|‖[U1, U2, ..., Un]·[P1, P2, ..., Pn]^T‖ − n|.
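For illustration only, the following Python sketch (not part of the original disclosure) shows one way the branch just described could be organized: reuse a stored data packet when the similarity check passes, otherwise fall back to the prediction model. The threshold value, the lookup table keyed by image identifier, and the callables passed in are all assumptions.

```python
# Hypothetical sketch of the step S2 decision flow; names and values are illustrative only.
from typing import Any, Callable, Dict

SIMILARITY_THRESHOLD = 5.0  # assumed value of the preset image similarity threshold


def choose_rendering_packet(
    first_face: Any,
    designated_face: Any,
    similarity: Callable[[Any, Any], float],  # e.g. the M formula described above
    face_to_packet: Dict[int, dict],          # preset face image -> data packet correspondence
    predict_packet: Callable[[Any], dict],    # the two-stage prediction path (steps S3-S6)
) -> dict:
    """Return a data packet for rendering, reusing a pre-stored one when possible."""
    m = similarity(first_face, designated_face)
    if m > SIMILARITY_THRESHOLD:
        # High similarity: the designated face image represents this user, reuse its packet.
        return face_to_packet[id(designated_face)]
    # Otherwise the first face image must be analyzed further by the prediction model.
    return predict_packet(first_face)
```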
As described in step S3 above, if the similarity value is not greater than the preset image similarity threshold, the first face image is input into the linear classifier of a preset prediction model for calculation, so as to obtain the feature information output by the linear classifier, the feature information including at least a predicted age bracket; the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier. The present application adopts a segmented prediction model, that is, a prediction model formed by sequentially connecting the linear classifier and the preset non-linear classifier, to process the first face image. The segmented arrangement makes it easier to quickly locate the cause of an error when the prediction model fails. The linear classifier may be any feasible model, for example a support vector machine (linear kernel), a Bayesian classifier, or a single-layer perceptron. The non-linear classifier may be any feasible model, for example a neural-network-based classifier or a decision tree.
As described in step S4 above, the feature information is input into the non-linear classifier for calculation, so as to obtain the web page rendering information output by the non-linear classifier, which is composed of multiple pieces of sub-information including at least the web page's color sub-information, icon style sub-information and layout style sub-information. The non-linear classifier may be trained by supervised learning on pre-collected sample data comprising feature information and the multiple pieces of sub-information corresponding to it. The non-linear classifier predicts web page rendering information suitable for the user from the feature information, and composing the rendering information from multiple pieces of sub-information allows diversified web page settings.
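As an illustrative sketch only, the two-stage arrangement can be prototyped with scikit-learn, which is an assumed choice of library and not named in this disclosure; the toy data, the 64-dimensional face vectors, and the integer encodings of the age brackets and sub-information are likewise assumptions.

```python
# Hedged sketch of the segmented prediction model: a linear classifier maps a face-image
# feature vector to an age bracket, and a non-linear classifier (a decision tree here)
# maps that feature information to web-page sub-information.
import numpy as np
from sklearn.linear_model import SGDClassifier    # linear stage (hinge loss ~ linear SVM)
from sklearn.tree import DecisionTreeClassifier   # non-linear stage

rng = np.random.default_rng(0)

# Toy training data: 64-dimensional face vectors, age brackets, and sub-information
# triples (color scheme, icon style, layout style) encoded as integers.
X_faces = rng.normal(size=(200, 64))
y_age_bracket = rng.integers(0, 4, size=200)       # e.g. 0: <18, 1: 18-40, 2: 40-60, 3: 60+
y_sub_info = rng.integers(0, 3, size=(200, 3))     # [color, icon style, layout style]

linear_stage = SGDClassifier(loss="hinge").fit(X_faces, y_age_bracket)
nonlinear_stage = DecisionTreeClassifier().fit(y_age_bracket.reshape(-1, 1), y_sub_info)

# Inference for one new face image vector.
new_face = rng.normal(size=(1, 64))
feature_info = linear_stage.predict(new_face).reshape(-1, 1)   # predicted age bracket
sub_info = nonlinear_stage.predict(feature_info)[0]            # color / icon / layout codes
print("feature info:", feature_info.ravel(), "sub info:", sub_info)
```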
As described in step S5 above, the multiple pieces of sub-information, together with request information asking the corresponding servers to return the sub-data packets used for web page rendering, are sent to the corresponding multiple servers. The present application has multiple servers each provide its own sub-data packet, so that the data needed for rendering is distributed across different servers. Since each server only needs to supply a sub-data packet, the computational load on each server is relieved and more kinds of sub-data packets can be offered, which in turn makes many more kinds of web pages possible: because each server can specialize in one kind of sub-data packet, the number of sub-data packets grows, and the number of page variants obtainable by combining them increases by orders of magnitude. A sub-data packet contains the data needed to render the web page, such as page parameters and icon images. Further, the web page is generated by staged rendering, for example rendering of the page sidebar, rendering of the top menu bar, and icon rendering for the whole page; "staged" here only means rendering item by item and does not imply an order, so the sidebar and the top menu bar may be rendered at the same time or at different times. Correspondingly, each sub-data packet is used for only one microstructure (item) of the rendered page, where a microstructure is the smallest unit of the page that can be rendered independently, such as the function implementation of a specified sidebar button or the icon rendering of the whole page.
As described in step S6 above, the multiple sub-data packets correspondingly returned by the multiple servers are received and combined into a rendering data packet, and the web page is rendered with the rendering data packet. Rendering a web page requires a rendering data packet; by receiving the sub-data packets returned by the servers and combining them into a rendering data packet, the computer can extract data from the rendering data packet when rendering and thus render the web page.
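A minimal sketch of steps S5 and S6 follows; the three server endpoints are simulated with local functions (in practice these would be network requests to the corresponding servers), and the packet contents are invented for illustration.

```python
# Sketch of steps S5/S6: request each sub-data packet from its server and merge the
# returned packets into one rendering data packet. Server endpoints are stand-in functions.
from concurrent.futures import ThreadPoolExecutor


def color_server(sub_info: str) -> dict:        # stand-in for the color-scheme server
    return {"colors": {"background": "#FFF8E7", "text": "#222222", "scheme": sub_info}}


def icon_server(sub_info: str) -> dict:         # stand-in for the icon-style server
    return {"icons": {"style": sub_info, "size": "large"}}


def layout_server(sub_info: str) -> dict:       # stand-in for the layout-style server
    return {"layout": {"style": sub_info, "sidebar": "left"}}


def fetch_rendering_packet(sub_info: dict) -> dict:
    servers = {"color": color_server, "icon": icon_server, "layout": layout_server}
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(servers[key], value) for key, value in sub_info.items()]
        sub_packets = [f.result() for f in futures]
    # Combine the sub-data packets into a single rendering data packet (step S6).
    rendering_packet: dict = {}
    for packet in sub_packets:
        rendering_packet.update(packet)
    return rendering_packet


print(fetch_rendering_packet({"color": "high-contrast", "icon": "flat", "layout": "simple"}))
```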
In one embodiment, step S2 of calculating the image similarity value between the first face image and one pre-stored designated face image according to the preset image similarity calculation method includes:
S201: scaling the first face image according to a preset scaling method to obtain a scaled picture, wherein the eyebrow-eye distance of the scaled picture is equal to the eyebrow-eye distance of the designated face image;
S202: acquiring multiple first feature lengths from the scaled picture and multiple second feature lengths from the designated face image, wherein the multiple first feature lengths and the multiple second feature lengths each include at least a face length and a face width, and the multiple first feature lengths correspond one to one to the multiple second feature lengths;
S203: generating a first matrix [U1, U2, ..., Un] and a second matrix [P1, P2, ..., Pn], wherein U1, U2, ..., Un are the multiple first feature lengths, P1, P2, ..., Pn are the multiple second feature lengths, and there are n first feature lengths and n second feature lengths in total;
S204: calculating the image similarity value M between the first face image and the designated face image according to the formula M = 1/|‖[U1, U2, ..., Un]·[P1, P2, ..., Pn]^T‖ − n|.
As described above, the calculation of the image similarity value between the first face image and a pre-stored designated face image is realized. The eyebrow-eye distance is the distance between a person's eyebrows and eyes, and may differ from person to person. The present application scales the first face image according to the preset scaling method to obtain a scaled picture whose eyebrow-eye distance equals that of the designated face image, so that the feature lengths of the first face image and the pre-stored designated face image can be compared. It then acquires the multiple first feature lengths from the scaled picture and the multiple second feature lengths from the designated face image, generates the first matrix [U1, U2, ..., Un] and the second matrix [P1, P2, ..., Pn], and calculates the image similarity value M according to the formula M = 1/|‖[U1, U2, ..., Un]·[P1, P2, ..., Pn]^T‖ − n|. If the first face image is similar to the pre-stored designated face image, the value of ‖[U1, U2, ..., Un]·[P1, P2, ..., Pn]^T‖ will be close to n, so the value of M will be very large (approaching infinity), which makes M usable as a measure of image similarity. In addition, owing to the structural characteristics of the human body, if one feature length of face A is greater than the corresponding feature length of face B, then in general all feature lengths of face A are greater than those of face B; the formula therefore amplifies differences in feature length and improves the accuracy of the image similarity calculation.
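The formula of steps S201-S204 can be transcribed directly; the NumPy sketch below is illustrative, the feature-length values are toy numbers, and the guard against a zero denominator is an addition of this sketch rather than part of the formula above.

```python
# Illustrative transcription of the image similarity formula M described above.
import numpy as np


def image_similarity(first_lengths, designated_lengths) -> float:
    """M = 1 / | ||[U1..Un]·[P1..Pn]^T|| - n |; a larger M indicates more similar images."""
    u = np.asarray(first_lengths, dtype=float)       # U1..Un from the scaled picture
    p = np.asarray(designated_lengths, dtype=float)  # P1..Pn from the designated face image
    n = u.size
    inner = abs(float(u @ p))                        # the 1xn by nx1 product is a scalar
    denom = abs(inner - n)
    # Guard added only for this sketch: the formula itself does not cover the
    # degenerate case where the product equals n exactly.
    return float("inf") if denom == 0 else 1.0 / denom


# Toy example: near-identical length vectors give a large M, different ones a small M.
print(image_similarity([1.01, 0.99, 1.00], [1.00, 1.00, 1.00]))   # large
print(image_similarity([1.40, 0.70, 1.20], [1.00, 1.00, 1.00]))   # small
```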
In one embodiment, step S201 of scaling the first face image according to the preset scaling method to obtain the scaled picture includes:
S2011: detecting the first face image with a preset facial feature point detection model to obtain multiple first facial feature points, and detecting the designated face image with the facial feature point detection model to obtain multiple second facial feature points;
S2012: generating a first minimum circumscribed rectangle of the multiple first facial feature points, and generating a second minimum circumscribed rectangle of the multiple second facial feature points;
S2013: scaling the first face image by equal-proportion scaling so that the area of the first minimum circumscribed rectangle is equal to the area of the second minimum circumscribed rectangle, thereby obtaining the scaled picture.
As described above, the first face image is scaled according to the preset scaling method to obtain the scaled picture. Accurate scaling is a prerequisite for the image similarity calculation and is therefore extremely important. The present application detects the first face image with a preset facial feature point detection model to obtain multiple first facial feature points, and detects the designated face image with the same model to obtain multiple second facial feature points; it then generates the first minimum circumscribed rectangle of the first facial feature points and the second minimum circumscribed rectangle of the second facial feature points, and scales the first face image by equal-proportion scaling so that the area of the first minimum circumscribed rectangle equals the area of the second minimum circumscribed rectangle, which improves the accuracy of the scaling. The facial feature point detection model may be any model, for example a model obtained by training a neural network. It may be obtained, for example, as follows: call sample data and divide it into training data and test data, where the sample data consists only of face images placed in a standard posture and the feature points labeled in those images; train a preset neural network model on the training data by stochastic gradient descent to obtain an intermediate facial feature point detection model; verify the intermediate model with the test data; and, if the verification passes, record the intermediate model as the facial feature point detection model. A facial feature point is, for example, the midpoint of a pupil. Because every face has a similar structure, every face has the same set of feature points, although the positions of the feature points vary from person to person. By generating the first minimum circumscribed rectangle of the first facial feature points and the second minimum circumscribed rectangle of the second facial feature points, all facial feature points are put to use; scaling the first face image proportionally so that the two rectangle areas are equal then yields an accurately scaled picture.
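As a sketch only, the rectangle-and-scale step can be written as below; the bounding rectangles are taken as axis-aligned for simplicity, the point coordinates are invented, and feature point detection itself (step S2011) is assumed to have been done elsewhere.

```python
# Sketch of steps S2012-S2013: compute the minimum bounding rectangle of each set of
# facial feature points and derive a uniform scale factor that equalizes the two areas.
import numpy as np


def min_bounding_rect_area(points: np.ndarray) -> float:
    """Area of the axis-aligned minimum bounding rectangle of (x, y) points."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    width, height = maxs - mins
    return float(width * height)


def uniform_scale_factor(first_points: np.ndarray, designated_points: np.ndarray) -> float:
    """Scale to apply to the first face image so the two rectangle areas match."""
    area_first = min_bounding_rect_area(first_points)
    area_designated = min_bounding_rect_area(designated_points)
    # Equal-proportion scaling multiplies the rectangle area by the square of the factor.
    return (area_designated / area_first) ** 0.5


first_pts = np.array([[30, 40], [90, 42], [60, 80], [58, 120]], dtype=float)
designated_pts = np.array([[15, 20], [45, 21], [30, 40], [29, 60]], dtype=float)
s = uniform_scale_factor(first_pts, designated_pts)
print(f"scale the first face image by {s:.3f} in both dimensions")
```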
In one embodiment, the facial feature point detection model is trained only from face images placed in a preset standard posture, the designated face image is placed in the standard posture, and step S2011 of detecting the first face image with the preset facial feature point detection model to obtain multiple first facial feature points and detecting the designated face image with the facial feature point detection model to obtain multiple second facial feature points includes:
S20111: detecting the first face image with the preset facial feature point detection model to obtain a first number, and detecting the designated face image with the facial feature point detection model to obtain a second number, wherein the first number is the number of facial feature points in the first face image and the second number is the number of facial feature points in the designated face image;
S20112: judging whether the absolute value of the difference obtained by subtracting the second number from the first number is less than a preset difference threshold;
S20113: if the absolute value of the difference obtained by subtracting the second number from the first number is not less than the preset difference threshold, rotating the first face image one or more times to obtain one or more rotated images;
S20114: detecting the rotated images in turn with the facial feature point detection model until the absolute value of the difference between the number of detected facial feature points and the second number is less than the preset difference threshold, and recording the facial feature points last output by the facial feature point detection model as the first facial feature points;
S20115: acquiring the first facial feature points, and acquiring the second facial feature points generated by the facial feature point detection model detecting the designated face image.
As described above, the first face image is detected with the preset facial feature point detection model to obtain multiple first facial feature points, and the designated face image is detected with the same model to obtain multiple second facial feature points. The facial feature point detection model used in this application is trained only from face images placed in a preset standard posture, so the model can be trained in a short time (only standard-posture images are considered, which effectively reduces the number of training samples); correspondingly, however, the model can only detect face images placed in the standard posture, so the first face image must first be rotated into the standard posture. The facial feature point detection model may be any model, for example a model obtained by training a neural network, and may be obtained, for example, as follows: call sample data and divide it into training data and test data, where the sample data consists only of face images placed in the standard posture and the feature points labeled in those images; train a preset neural network model on the training data by stochastic gradient descent to obtain an intermediate facial feature point detection model; verify the intermediate model with the test data; and, if the verification passes, record the intermediate model as the facial feature point detection model. The rotation processing may be any feasible rotation, for example rotating clockwise or counterclockwise by a preset angle about the center of the face image, the preset angle being, for example, 1-180 degrees and preferably 90 degrees. Precisely because the facial feature point detection model is trained only from standard-posture face images, a first face image placed in a non-standard posture will yield no facial feature points (or only a few); therefore, by detecting the first face image with the preset facial feature point detection model and obtaining the first number, the placement posture of the first face image can be determined. If the absolute value of the difference obtained by subtracting the second number from the first number is not less than the preset difference threshold, the placement posture of the first face image is not the standard posture, so the first face image is rotated one or more times; the rotated images are detected in turn with the facial feature point detection model until the absolute value of the difference between the number of detected facial feature points and the second number is less than the preset difference threshold, and the facial feature points last output by the model are recorded as the first facial feature points; the first facial feature points are then acquired, together with the second facial feature points generated by detecting the designated face image. In this way the training time of the facial feature point detection model is reduced while facial feature point extraction is still guaranteed.
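The rotate-and-retry loop can be sketched as follows; the stand-in detector, the 90-degree step, and the difference threshold of 3 are assumptions for illustration, and a real implementation would call the trained facial feature point detection model on the actual rotated image instead.

```python
# Sketch of steps S20111-S20114: rotate the first face image by the preset angle until the
# detector finds roughly as many feature points as it finds in the designated face image.
from typing import Callable, Tuple

PRESET_ANGLE = 90      # example rotation step, within the 1-180 degree range mentioned above
DIFF_THRESHOLD = 3     # assumed preset difference threshold


def find_standard_pose(
    image_angle: int,
    detect_count: Callable[[int], int],
    designated_count: int,
) -> Tuple[int, int]:
    """Return (total rotation applied, detected count) once the counts are close enough."""
    applied = 0
    count = detect_count(image_angle)
    while abs(count - designated_count) >= DIFF_THRESHOLD and applied < 360:
        applied += PRESET_ANGLE                          # one more rotation of the first image
        count = detect_count((image_angle + applied) % 360)
    return applied, count


def toy_detector(angle: int) -> int:
    """Stand-in detector: many points at the standard pose, almost none otherwise."""
    return 68 if angle % 360 == 0 else 2


print(find_standard_pose(image_angle=270, detect_count=toy_detector, designated_count=68))
```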
In one embodiment, after step S2 of calculating the image similarity value between the first face image and one pre-stored designated face image according to the preset image similarity calculation method and judging whether the similarity value is greater than the preset image similarity threshold, the method includes:
S21: if the similarity value is greater than the preset image similarity threshold, acquiring, according to a preset correspondence between face images and data packets, a designated data packet corresponding to the designated face image, wherein the designated data packet is used to render the web page;
S22: rendering the web page with the designated data packet.
As described above, the web page is rendered with the designated data packet. If the similarity value is greater than the preset image similarity threshold, the first face image is similar to the designated face image, that is, the designated face image can represent the user to whom the first face image belongs, so the designated data packet associated with the designated face image can be used directly. By pre-storing designated face images, this application saves the user's information in advance, so the prediction model only needs to process the first face image the first time a web page is rendered for that user; on subsequent renderings the pre-stored designated data packet is called directly, which avoids unnecessary computation and speeds up web page rendering.
In one embodiment, before step S3 of, if the similarity value is not greater than the preset image similarity threshold, inputting the first face image into the linear classifier in the preset prediction model for calculation so as to obtain the feature information output by the linear classifier, the feature information including at least a predicted age bracket, wherein the prediction model is formed by sequentially connecting the linear classifier and the preset non-linear classifier, the method includes:
S211: acquiring pre-collected training data and test data, the training data and the test data each including facial images and the feature information corresponding to the facial images;
S212: training a preset initial linear classifier on the training data by stochastic gradient descent to obtain an intermediate linear classifier;
S213: verifying the intermediate linear classifier with the test data, and judging whether the intermediate linear classifier passes the verification;
S214: if the intermediate linear classifier passes the verification, taking the intermediate linear classifier as the final linear classifier.
As described above, the final linear classifier is obtained. The linear classifier may be any feasible model, for example a support vector machine (linear kernel), a Bayesian classifier, or a single-layer perceptron. The present application trains the linear classifier by supervised learning, using training data and test data that each include facial images and the feature information corresponding to them, for training and verification respectively. The preset initial linear classifier is trained on the training data by stochastic gradient descent, which randomly selects a subset of the training data in place of all of it at each update, thereby speeding up training. The intermediate linear classifier is then verified with the test data; if it passes the verification, the intermediate linear classifier is usable and is taken as the final linear classifier.
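A compact sketch of steps S211-S214 with scikit-learn (an assumed library choice) is given below; the synthetic data, the accuracy-based verification, and the pass threshold of 0.9 are illustrative assumptions.

```python
# Sketch of steps S211-S214: train an initial linear classifier by stochastic gradient
# descent, verify it on held-out test data, and accept it only if the verification passes.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))           # stand-in facial image feature vectors
y = (X[:, 0] > 0).astype(int)            # stand-in feature information (e.g. age bracket)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

intermediate = SGDClassifier(loss="hinge", random_state=0).fit(X_train, y_train)  # S212
accuracy = intermediate.score(X_test, y_test)                                     # S213

PASS_THRESHOLD = 0.9                     # assumed verification criterion
linear_classifier = intermediate if accuracy >= PASS_THRESHOLD else None          # S214
print(f"validation accuracy: {accuracy:.2f}, accepted: {linear_classifier is not None}")
```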
In one embodiment, the web page rendering method is applied to a rendering terminal, the rendering terminal and the multiple servers are communicatively connected through a gateway, and step S5 of correspondingly sending the multiple pieces of sub-information to the multiple servers includes:
S501: encrypting the multiple pieces of sub-information separately, each with a first encryption method pre-agreed with the corresponding server, so as to obtain multiple first sub-ciphertexts;
S502: combining the multiple first sub-ciphertexts into an intermediate ciphertext;
S503: encrypting the intermediate ciphertext with a second encryption method pre-agreed with the gateway, so as to obtain a final ciphertext;
S504: sending the final ciphertext to the gateway, and requesting the gateway to decrypt and split the final ciphertext to obtain the multiple first sub-ciphertexts and to send the multiple first sub-ciphertexts correspondingly to the multiple servers.
As described above, the multiple pieces of sub-information are correspondingly sent to the multiple servers. The present application improves information security through double encryption. The first and second encryption methods may each be any feasible encryption method, including but not limited to symmetric encryption (for example, hash-based encryption) and asymmetric encryption. The rendering terminal pre-agrees a first encryption method with each server, so each server can decrypt its first sub-ciphertext; the rendering terminal pre-agrees the second encryption method with the gateway, so the gateway can decrypt the final ciphertext. However, the gateway cannot learn the specific content of the sub-ciphertexts, and each server can only decrypt its own first sub-ciphertext. Therefore neither a third party intending to steal information (who would have to intercept the final ciphertext and also obtain both the first and second encryption methods, making the final ciphertext hard to crack), nor the gateway, nor the individual servers can obtain the complete information, which improves information security.
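For illustration, the layered scheme can be sketched with symmetric Fernet keys from the `cryptography` package standing in for the pre-agreed encryption methods; the server names, key handling, and JSON framing are assumptions of this sketch, not details of the disclosure.

```python
# Sketch of steps S501-S504: per-server inner encryption, then an outer layer for the gateway.
import json
from cryptography.fernet import Fernet

server_keys = {name: Fernet(Fernet.generate_key()) for name in ("color", "icon", "layout")}
gateway_key = Fernet(Fernet.generate_key())

sub_info = {"color": "high-contrast", "icon": "flat", "layout": "simple"}

# S501: encrypt each sub-information item with the key agreed with its server.
first_sub_ciphertexts = {
    name: server_keys[name].encrypt(value.encode()).decode()
    for name, value in sub_info.items()
}

# S502 + S503: combine into an intermediate ciphertext and encrypt it for the gateway.
intermediate = json.dumps(first_sub_ciphertexts).encode()
final_ciphertext = gateway_key.encrypt(intermediate)

# S504 (gateway side): decrypt the outer layer, split, and forward each inner ciphertext;
# only the matching server can decrypt its own first sub-ciphertext.
recovered = json.loads(gateway_key.decrypt(final_ciphertext))
print(server_keys["color"].decrypt(recovered["color"].encode()).decode())
```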
The present application enables web page rendering to adapt itself to special groups of users (for example, elderly users), improving the efficiency of web page use.
Referring to FIG. 2, an embodiment of the present application provides a machine learning-based web page rendering apparatus, including:
a first face image collection unit 10, configured to collect a user's first face image with a preset camera;
an image similarity value judgment unit 20, configured to calculate, according to a preset image similarity calculation method, the image similarity value between the first face image and one pre-stored designated face image, and to judge whether the similarity value is greater than a preset image similarity threshold;
a feature information acquisition unit 30, configured to, if the similarity value is not greater than the preset image similarity threshold, input the first face image into the linear classifier in a preset prediction model for calculation, so as to obtain the feature information output by the linear classifier, the feature information including at least a predicted age bracket, wherein the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier, and the linear classifier is pre-trained on sample data comprising facial images and the feature information corresponding to the facial images;
a web page rendering information acquisition unit 40, configured to input the feature information into the non-linear classifier for calculation, so as to obtain web page rendering information output by the non-linear classifier and composed of multiple pieces of sub-information, the multiple pieces of sub-information including at least the web page's color sub-information, icon style sub-information and layout style sub-information, wherein the non-linear classifier is pre-trained on sample data comprising feature information and the multiple pieces of sub-information corresponding to it;
a sub-information sending unit 50, configured to send the multiple pieces of sub-information, and the request information for returning the sub-data packets used for web page rendering, to the corresponding multiple servers;
a web page rendering unit 60, configured to receive the multiple sub-data packets correspondingly returned by the multiple servers, combine the multiple sub-data packets into a rendering data packet, and render the web page with the rendering data packet.
The operations performed by the above units correspond one to one to the steps of the machine learning-based web page rendering method of the foregoing embodiments, and are not described again here.
In one embodiment, the image similarity value judgment unit 20 includes:
a scaling subunit, configured to scale the first face image according to a preset scaling method to obtain a scaled picture, wherein the eyebrow-eye distance of the scaled picture is equal to the eyebrow-eye distance of the designated face image;
a feature length acquisition subunit, configured to acquire multiple first feature lengths from the scaled picture and multiple second feature lengths from the designated face image, wherein the multiple first feature lengths and the multiple second feature lengths each include at least a face length and a face width, and the multiple first feature lengths correspond one to one to the multiple second feature lengths;
a matrix generation subunit, configured to generate a first matrix [U1, U2, ..., Un] and a second matrix [P1, P2, ..., Pn], wherein U1, U2, ..., Un are the multiple first feature lengths, P1, P2, ..., Pn are the multiple second feature lengths, and there are n first feature lengths and n second feature lengths in total;
an image similarity value M calculation subunit, configured to calculate the image similarity value M between the first face image and the designated face image according to the formula M = 1/|‖[U1, U2, ..., Un]·[P1, P2, ..., Pn]^T‖ − n|.
The operations performed by the above subunits correspond one to one to the steps of the machine learning-based web page rendering method of the foregoing embodiments, and are not described again here.
In one embodiment, the scaling subunit includes:
a facial feature point detection module, configured to detect the first face image with a preset facial feature point detection model to obtain multiple first facial feature points, and to detect the designated face image with the facial feature point detection model to obtain multiple second facial feature points;
a circumscribed rectangle generation module, configured to generate a first minimum circumscribed rectangle of the multiple first facial feature points and a second minimum circumscribed rectangle of the multiple second facial feature points;
a scaling module, configured to scale the first face image by equal-proportion scaling so that the area of the first minimum circumscribed rectangle is equal to the area of the second minimum circumscribed rectangle, thereby obtaining the scaled picture.
The operations performed by the above modules correspond one to one to the steps of the machine learning-based web page rendering method of the foregoing embodiments, and are not described again here.
In one embodiment, the facial feature point detection model is trained only from face images placed in a preset standard posture, the designated face image is placed in the standard posture, and the facial feature point detection module includes:
a number detection submodule, configured to detect the first face image with the preset facial feature point detection model to obtain a first number, and to detect the designated face image with the facial feature point detection model to obtain a second number, wherein the first number is the number of facial feature points in the first face image and the second number is the number of facial feature points in the designated face image;
a difference threshold judgment submodule, configured to judge whether the absolute value of the difference obtained by subtracting the second number from the first number is less than a preset difference threshold;
a rotation submodule, configured to, if the absolute value of the difference obtained by subtracting the second number from the first number is not less than the preset difference threshold, rotate the first face image one or more times to obtain one or more rotated images;
a first facial feature point marking submodule, configured to detect the rotated images in turn with the facial feature point detection model until the absolute value of the difference between the number of detected facial feature points and the second number is less than the preset difference threshold, and to record the facial feature points last output by the facial feature point detection model as the first facial feature points;
a feature point acquisition submodule, configured to acquire the first facial feature points, and to acquire the second facial feature points generated by the facial feature point detection model detecting the designated face image.
The operations performed by the above submodules correspond one to one to the steps of the machine learning-based web page rendering method of the foregoing embodiments, and are not described again here.
In one embodiment, the apparatus includes:
a designated data packet acquisition unit, configured to, if the similarity value is greater than the preset image similarity threshold, acquire, according to a preset correspondence between face images and data packets, a designated data packet corresponding to the designated face image, wherein the designated data packet is used to render the web page;
a designated data packet rendering unit, configured to render the web page with the designated data packet.
The operations performed by the above units correspond one to one to the steps of the machine learning-based web page rendering method of the foregoing embodiments, and are not described again here.
In one embodiment, the apparatus includes:
a data acquisition unit, configured to acquire pre-collected training data and test data, the training data and the test data each including facial images and the feature information corresponding to the facial images;
an intermediate linear classifier acquisition unit, configured to train a preset initial linear classifier on the training data by stochastic gradient descent to obtain an intermediate linear classifier;
a verification unit, configured to verify the intermediate linear classifier with the test data and to judge whether the intermediate linear classifier passes the verification;
a linear classifier marking unit, configured to, if the intermediate linear classifier passes the verification, take the intermediate linear classifier as the final linear classifier.
The operations performed by the above units correspond one to one to the steps of the machine learning-based web page rendering method of the foregoing embodiments, and are not described again here.
In one embodiment, the web page rendering method is applied to a rendering terminal, the rendering terminal and the multiple servers are communicatively connected through a gateway, and the sub-information sending unit 50 includes:
a first encryption subunit, configured to encrypt the multiple pieces of sub-information separately, each with a first encryption method pre-agreed with the corresponding server, so as to obtain multiple first sub-ciphertexts;
a ciphertext combination subunit, configured to combine the multiple first sub-ciphertexts into an intermediate ciphertext;
a second encryption subunit, configured to encrypt the intermediate ciphertext with a second encryption method pre-agreed with the gateway, so as to obtain a final ciphertext;
a final ciphertext sending subunit, configured to send the final ciphertext to the gateway and to request the gateway to decrypt and split the final ciphertext to obtain the multiple first sub-ciphertexts and to send the multiple first sub-ciphertexts correspondingly to the multiple servers.
The operations performed by the above subunits correspond one to one to the steps of the machine learning-based web page rendering method of the foregoing embodiments, and are not described again here.
The machine learning-based web page rendering apparatus of the present application enables web page rendering to adapt to special groups of users (for example, elderly users), improving the efficiency of web page use.
Referring to FIG. 3, an embodiment of the present application further provides a computer device, which may be a server and whose internal structure may be as shown in the figure. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The database of the computer device stores the data used by the machine learning-based web page rendering method. The network interface of the computer device communicates with an external terminal through a network connection. When executed by the processor, the computer program implements a machine learning-based web page rendering method. The steps of the method executed by the processor correspond one to one to the steps of the machine learning-based web page rendering method of the foregoing embodiments, and are not described again here.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the machine learning-based web page rendering method, whose steps correspond one to one to the steps of the machine learning-based web page rendering method of the foregoing embodiments and are not described again here. The computer-readable storage medium may be non-volatile or volatile.
Those of ordinary skill in the art will understand that all or part of the processes of the above method embodiments can be carried out by a computer program instructing the relevant hardware, and that the computer program, when executed, may include the processes of the above method embodiments. It should be noted that, in this document, the terms "include", "comprise" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, apparatus, article or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article or method. Any equivalent structural or process transformation made using the contents of the specification and drawings of this application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of this application.

Claims (20)

1. A machine learning-based web page rendering method, comprising:
collecting a user's first face image with a preset camera;
calculating, according to a preset image similarity calculation method, an image similarity value between the first face image and one pre-stored designated face image, and judging whether the similarity value is greater than a preset image similarity threshold;
if the similarity value is not greater than the preset image similarity threshold, inputting the first face image into a linear classifier in a preset prediction model for calculation, so as to obtain feature information output by the linear classifier, the feature information including at least a predicted age bracket, wherein the prediction model is formed by sequentially connecting the linear classifier and a preset non-linear classifier, and the linear classifier is pre-trained on sample data comprising facial images and feature information corresponding to the facial images;
inputting the feature information into the non-linear classifier for calculation, so as to obtain web page rendering information output by the non-linear classifier and composed of multiple pieces of sub-information, the multiple pieces of sub-information including at least color sub-information, icon style sub-information and layout style sub-information of the web page, wherein the non-linear classifier is pre-trained on sample data comprising feature information and the multiple pieces of sub-information corresponding to the feature information;
sending the multiple pieces of sub-information, and request information for returning sub-data packets used for web page rendering, to the corresponding multiple servers;
receiving the multiple sub-data packets correspondingly returned by the multiple servers, combining the multiple sub-data packets into a rendering data packet, and rendering the web page with the rendering data packet.
2. The machine learning-based web page rendering method according to claim 1, wherein the step of calculating, according to the preset image similarity calculation method, the image similarity value between the first face image and the pre-stored designated face image comprises:
scaling the first face image according to a preset scaling method to obtain a scaled picture, wherein the eyebrow-eye distance of the scaled picture is equal to the eyebrow-eye distance of the designated face image;
acquiring multiple first feature lengths from the scaled picture and multiple second feature lengths from the designated face image, wherein the multiple first feature lengths and the multiple second feature lengths each include at least a face length and a face width, and the multiple first feature lengths correspond one to one to the multiple second feature lengths;
generating a first matrix [U1, U2, ..., Un] and a second matrix [P1, P2, ..., Pn], wherein U1, U2, ..., Un are the multiple first feature lengths, P1, P2, ..., Pn are the multiple second feature lengths, and there are n first feature lengths and n second feature lengths in total;
calculating the image similarity value M between the first face image and the designated face image according to the formula M = 1/|‖[U1, U2, ..., Un]·[P1, P2, ..., Pn]^T‖ − n|.
3. The machine learning-based web page rendering method according to claim 2, wherein the step of scaling the first face image according to the preset scaling method to obtain the scaled picture comprises:
detecting the first face image with a preset facial feature point detection model to obtain multiple first facial feature points, and detecting the designated face image with the facial feature point detection model to obtain multiple second facial feature points;
generating a first minimum circumscribed rectangle of the multiple first facial feature points, and generating a second minimum circumscribed rectangle of the multiple second facial feature points;
scaling the first face image by equal-proportion scaling so that the area of the first minimum circumscribed rectangle is equal to the area of the second minimum circumscribed rectangle, thereby obtaining the scaled picture.
4. The machine learning-based web page rendering method according to claim 3, wherein the facial feature point detection model is trained only from face images placed in a preset standard posture, the designated face image is placed in the standard posture, and the step of detecting the first face image with the preset facial feature point detection model to obtain multiple first facial feature points and detecting the designated face image with the facial feature point detection model to obtain multiple second facial feature points comprises:
detecting the first face image with the preset facial feature point detection model to obtain a first number, and detecting the designated face image with the facial feature point detection model to obtain a second number, wherein the first number is the number of facial feature points in the first face image and the second number is the number of facial feature points in the designated face image;
judging whether the absolute value of the difference obtained by subtracting the second number from the first number is less than a preset difference threshold;
if the absolute value of the difference obtained by subtracting the second number from the first number is not less than the preset difference threshold, rotating the first face image one or more times to obtain one or more rotated images;
detecting the rotated images in turn with the facial feature point detection model until the absolute value of the difference between the number of detected facial feature points and the second number is less than the preset difference threshold, and recording the facial feature points last output by the facial feature point detection model as the first facial feature points;
acquiring the first facial feature points, and acquiring the second facial feature points generated by the facial feature point detection model detecting the designated face image.
5. The machine learning-based web page rendering method according to claim 1, wherein after the step of calculating, according to the preset image similarity calculation method, the image similarity value between the first face image and the pre-stored designated face image and judging whether the similarity value is greater than the preset image similarity threshold, the method comprises:
if the similarity value is greater than the preset image similarity threshold, acquiring, according to a preset correspondence between face images and data packets, a designated data packet corresponding to the designated face image, wherein the designated data packet is used for rendering the web page;
rendering the web page with the designated data packet.
  6. The machine learning-based web page rendering method according to claim 1, wherein before the step of, if the similarity value is not greater than the preset image similarity threshold, inputting the first face image into the linear classifier of the preset prediction model for computation to obtain the feature information output by the linear classifier, the feature information including at least a predicted age range, and the prediction model being formed by connecting the linear classifier and a preset non-linear classifier in sequence, the method comprises:
    acquiring pre-collected training data and test data, both of which include facial images and feature information corresponding to the facial images;
    training a preset initial linear classifier with the training data using stochastic gradient descent, thereby obtaining an intermediate linear classifier;
    validating the intermediate linear classifier with the test data, and determining whether the intermediate linear classifier passes the validation;
    if the intermediate linear classifier passes the validation, using the intermediate linear classifier as the final linear classifier.
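A minimal sketch of the train-then-validate flow in claim 6, using scikit-learn's `SGDClassifier` as a stand-in for the "initial linear classifier" trained by stochastic gradient descent. The flattened-pixel feature representation and the 0.80 pass threshold are assumptions of this sketch; the claim specifies neither.

```python
# Sketch of claim 6: SGD training on the training data, then validation on the
# test data; the intermediate classifier is adopted only if validation passes.
import numpy as np
from sklearn.linear_model import SGDClassifier

def train_linear_classifier(train_images, train_labels, test_images, test_labels,
                            pass_threshold: float = 0.80):
    # Images are assumed to be equal-sized NumPy arrays; flatten them into feature vectors.
    X_train = np.asarray([img.ravel() for img in train_images], dtype=np.float32)
    X_test = np.asarray([img.ravel() for img in test_images], dtype=np.float32)

    intermediate = SGDClassifier(max_iter=1000, tol=1e-3)   # stochastic gradient descent
    intermediate.fit(X_train, train_labels)

    accuracy = intermediate.score(X_test, test_labels)      # validation on the test data
    if accuracy >= pass_threshold:
        return intermediate                                  # used as the final linear classifier
    raise ValueError(f"validation failed: accuracy {accuracy:.2f} < {pass_threshold}")
```

If validation fails, the claim leaves the remedy open; retraining with more data or different hyperparameters would be the usual next step.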
  7. The machine learning-based web page rendering method according to claim 1, wherein the web page rendering method is applied to a rendering terminal, the rendering terminal is communicatively connected to the plurality of servers through a gateway, and the step of correspondingly sending the plurality of pieces of sub-information to the plurality of servers comprises:
    encrypting the plurality of pieces of sub-information separately, each with a first encryption method agreed in advance with the corresponding server, thereby obtaining a plurality of first sub-ciphertexts;
    combining the plurality of first sub-ciphertexts into an intermediate ciphertext;
    encrypting the intermediate ciphertext with a second encryption method agreed in advance with the gateway, thereby obtaining a final ciphertext;
    sending the final ciphertext to the gateway, and requesting the gateway to decrypt and split the final ciphertext to obtain the plurality of first sub-ciphertexts and to correspondingly send the plurality of first sub-ciphertexts to the plurality of servers.
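A minimal sketch of the two-layer encryption in claim 7. Fernet (from the `cryptography` package) stands in for both the per-server "first encryption method" and the gateway-level "second encryption method", and the JSON framing used to combine the first sub-ciphertexts is an assumption of this sketch.

```python
# Sketch of claim 7: per-server inner encryption, then a gateway-level outer
# layer; the gateway removes only the outer layer before forwarding.
import json
from cryptography.fernet import Fernet

def send_sub_information(sub_infos: dict, server_keys: dict, gateway_key: bytes) -> bytes:
    """sub_infos maps server id -> sub-information string;
    server_keys maps server id -> Fernet key agreed with that server."""
    # First layer: one ciphertext per server, using that server's agreed key.
    first_sub_ciphertexts = {
        server_id: Fernet(server_keys[server_id]).encrypt(info.encode()).decode()
        for server_id, info in sub_infos.items()
    }
    # Combine the first sub-ciphertexts into a single intermediate ciphertext.
    intermediate = json.dumps(first_sub_ciphertexts).encode()
    # Second layer: encrypt the combined payload with the key agreed with the gateway.
    return Fernet(gateway_key).encrypt(intermediate)   # final ciphertext sent to the gateway

def gateway_split(final_ciphertext: bytes, gateway_key: bytes) -> dict:
    # The gateway decrypts the outer layer and splits the payload; the inner
    # per-server ciphertexts remain encrypted until they reach their servers.
    intermediate = Fernet(gateway_key).decrypt(final_ciphertext)
    return json.loads(intermediate)                    # server id -> first sub-ciphertext
```

In practice the per-server and gateway keys would be agreed out of band; with Fernet they can be generated via `Fernet.generate_key()`.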
  8. A machine learning-based web page rendering apparatus, comprising:
    a first face image acquisition unit, configured to acquire a first face image of a user with a preset camera;
    an image similarity value judgment unit, configured to calculate an image similarity value between the first face image and a pre-stored designated face image according to a preset image similarity calculation method, and to determine whether the similarity value is greater than a preset image similarity threshold;
    a feature information acquisition unit, configured to, if the similarity value is not greater than the preset image similarity threshold, input the first face image into a linear classifier of a preset prediction model for computation to obtain feature information output by the linear classifier, the feature information including at least a predicted age range, wherein the prediction model is formed by connecting the linear classifier and a preset non-linear classifier in sequence, and the linear classifier is trained in advance on sample data comprising facial images and feature information corresponding to the facial images;
    a web page rendering information acquisition unit, configured to input the feature information into the non-linear classifier for computation to obtain web page rendering information composed of a plurality of pieces of sub-information output by the non-linear classifier, wherein the plurality of pieces of sub-information include at least color sub-information, icon style sub-information, and layout style sub-information of the web page, and the non-linear classifier is trained in advance on sample data comprising feature information and the plurality of pieces of sub-information corresponding to the feature information;
    a sub-information sending unit, configured to send the plurality of pieces of sub-information, together with request information for returning sub-data packages used for web page rendering, to the corresponding plurality of servers;
    a web page rendering unit, configured to receive the plurality of sub-data packages correspondingly returned by the plurality of servers, combine the plurality of sub-data packages into a rendering data package, and render the web page with the rendering data package.
  9. A computer device, comprising a memory and a processor, wherein the memory stores a computer program and the processor, when executing the computer program, implements the following steps:
    acquiring a first face image of a user with a preset camera;
    calculating an image similarity value between the first face image and a pre-stored designated face image according to a preset image similarity calculation method, and determining whether the similarity value is greater than a preset image similarity threshold;
    if the similarity value is not greater than the preset image similarity threshold, inputting the first face image into a linear classifier of a preset prediction model for computation to obtain feature information output by the linear classifier, the feature information including at least a predicted age range, wherein the prediction model is formed by connecting the linear classifier and a preset non-linear classifier in sequence, and the linear classifier is trained in advance on sample data comprising facial images and feature information corresponding to the facial images;
    inputting the feature information into the non-linear classifier for computation to obtain web page rendering information composed of a plurality of pieces of sub-information output by the non-linear classifier, wherein the plurality of pieces of sub-information include at least color sub-information, icon style sub-information, and layout style sub-information of the web page, and the non-linear classifier is trained in advance on sample data comprising feature information and the plurality of pieces of sub-information corresponding to the feature information;
    sending the plurality of pieces of sub-information, together with request information for returning sub-data packages used for web page rendering, to the corresponding plurality of servers;
    receiving the plurality of sub-data packages correspondingly returned by the plurality of servers, combining the plurality of sub-data packages into a rendering data package, and rendering the web page with the rendering data package.
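The steps above (and the mirrored units of the apparatus in claim 8) describe one control flow. The sketch below shows only that flow: every camera, classifier, server call, and rendering routine is a hypothetical callable supplied through a context object, not an API defined by the application.

```python
# Sketch of the control flow in claim 9 (with the claim 5 branch for a
# recognized face): similarity check, two-stage prediction, fan-out to the
# servers, recombination, rendering. All callables on `ctx` are hypothetical.
def render_web_page(ctx):
    first_face = ctx.capture_face()                              # preset camera
    similarity = ctx.image_similarity(first_face, ctx.designated_face)

    if similarity > ctx.similarity_threshold:
        # Recognized face: reuse the data package mapped to the designated image.
        return ctx.render(ctx.package_for(ctx.designated_face))

    # Unrecognized face: two-stage prediction model (linear -> non-linear classifier).
    features = ctx.linear_classifier(first_face)                 # e.g. predicted age range
    sub_infos = ctx.nonlinear_classifier(features)               # color / icon / layout sub-information

    # Fan the sub-information out to the corresponding servers and collect sub-packages.
    sub_packages = [ctx.request_sub_package(server, info)
                    for server, info in zip(ctx.servers, sub_infos)]

    rendering_package = ctx.combine(sub_packages)
    return ctx.render(rendering_package)
```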
  10. The computer device according to claim 9, wherein the calculating the image similarity value between the first face image and a pre-stored designated face image according to the preset image similarity calculation method comprises:
    performing scaling processing on the first face image according to a preset scaling method to obtain a scaled picture, wherein the eyebrow-to-eye spacing of the scaled picture is equal to the eyebrow-to-eye spacing of the designated face image;
    acquiring a plurality of first feature lengths from the scaled picture, and acquiring a plurality of second feature lengths from the designated face image, wherein the plurality of first feature lengths and the plurality of second feature lengths each include at least a face length and a face width, and the plurality of first feature lengths correspond one-to-one to the plurality of second feature lengths;
    generating a first matrix [U1, U2, ..., Un] and a second matrix [P1, P2, ..., Pn], where U1, U2, ..., Un are the plurality of first feature lengths, P1, P2, ..., Pn are the plurality of second feature lengths, and there are n first feature lengths and n second feature lengths in total;
    calculating the image similarity value M between the first face image and the designated face image according to the formula M = 1 / | ‖[U1, U2, ..., Un]·[P1, P2, ..., Pn]^T‖ − n |.
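Read literally, [U1, ..., Un]·[P1, ..., Pn]^T is a 1×n by n×1 product, i.e. a dot product, so the norm reduces to the absolute value of a scalar. The NumPy transcription below follows the claimed formula as written; the example feature lengths in the usage comment are invented purely for illustration.

```python
# Literal transcription of the similarity formula in claim 10.
import numpy as np

def image_similarity(first_lengths, designated_lengths) -> float:
    U = np.asarray(first_lengths, dtype=float)       # first feature lengths (scaled picture)
    P = np.asarray(designated_lengths, dtype=float)  # second feature lengths (designated image)
    n = U.size
    denom = abs(abs(float(U @ P)) - n)               # | ||U P^T|| - n |
    return float("inf") if denom == 0.0 else 1.0 / denom

# Usage with n = 2 feature lengths (face length, face width), illustrative only:
# image_similarity([1.01, 0.62], [1.00, 0.60])
```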
  11. The computer device according to claim 10, wherein the performing scaling processing on the first face image according to the preset scaling method to obtain the scaled picture comprises:
    detecting the first face image with a preset facial feature point detection model to obtain a plurality of first facial feature points, and detecting the designated face image with the facial feature point detection model to obtain a plurality of second facial feature points;
    generating a first minimum bounding rectangle of the plurality of first facial feature points, and generating a second minimum bounding rectangle of the plurality of second facial feature points;
    performing proportional scaling on the first face image so that the area of the first minimum bounding rectangle equals the area of the second minimum bounding rectangle, thereby obtaining the scaled picture.
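A minimal sketch of the equal-area scaling in claim 11, assuming the facial feature points are already available as (x, y) pairs and that axis-aligned bounding rectangles are acceptable; PIL is used only as a convenient stand-in for whatever image library the rendering terminal actually employs.

```python
# Sketch of claim 11: choose a uniform scale factor so that the landmark
# bounding rectangle of the scaled picture matches the area of the designated
# image's landmark bounding rectangle.
import math
from PIL import Image

def bounding_rect_area(points) -> float:
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def scale_to_match(first_image: Image.Image, first_points, designated_points) -> Image.Image:
    # Under uniform scaling the rectangle area grows with the square of the
    # linear factor, hence the square root; landmarks scale with the image.
    factor = math.sqrt(bounding_rect_area(designated_points) / bounding_rect_area(first_points))
    new_size = (round(first_image.width * factor), round(first_image.height * factor))
    return first_image.resize(new_size)
```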
  12. The computer device according to claim 11, wherein the facial feature point detection model is trained only on face images placed in a preset standard pose, the designated face image is placed in the standard pose, and the detecting the first face image with the preset facial feature point detection model to obtain a plurality of first facial feature points, and detecting the designated face image with the facial feature point detection model to obtain a plurality of second facial feature points, comprises:
    detecting the first face image with the preset facial feature point detection model to obtain a first number, and detecting the designated face image with the facial feature point detection model to obtain a second number, wherein the first number is the number of facial feature points in the first face image and the second number is the number of facial feature points in the designated face image;
    determining whether the absolute value of the difference between the first number and the second number is less than a preset difference threshold;
    if the absolute value of the difference between the first number and the second number is not less than the preset difference threshold, performing one or more rotation operations on the first face image to obtain one or more rotated images;
    detecting the rotated images in turn with the facial feature point detection model until the absolute value of the difference between the number of detected facial feature points and the second number is less than the preset difference threshold, and recording the facial feature points last output by the facial feature point detection model as the first facial feature points;
    acquiring the first facial feature points, and acquiring the second facial feature points generated by detecting the designated face image with the facial feature point detection model.
  13. The computer device according to claim 9, wherein after the calculating the image similarity value between the first face image and a pre-stored designated face image according to the preset image similarity calculation method and determining whether the similarity value is greater than the preset image similarity threshold, the following steps are further implemented:
    if the similarity value is greater than the preset image similarity threshold, acquiring, according to a preset correspondence between face images and data packages, a designated data package corresponding to the designated face image, wherein the designated data package is used for rendering the web page;
    rendering the web page with the designated data package.
  14. The computer device according to claim 9, wherein before the step of, if the similarity value is not greater than the preset image similarity threshold, inputting the first face image into the linear classifier of the preset prediction model for computation to obtain the feature information output by the linear classifier, the feature information including at least a predicted age range, and the prediction model being formed by connecting the linear classifier and a preset non-linear classifier in sequence, the following steps are implemented:
    acquiring pre-collected training data and test data, both of which include facial images and feature information corresponding to the facial images;
    training a preset initial linear classifier with the training data using stochastic gradient descent, thereby obtaining an intermediate linear classifier;
    validating the intermediate linear classifier with the test data, and determining whether the intermediate linear classifier passes the validation;
    if the intermediate linear classifier passes the validation, using the intermediate linear classifier as the final linear classifier.
  15. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the following steps:
    acquiring a first face image of a user with a preset camera;
    calculating an image similarity value between the first face image and a pre-stored designated face image according to a preset image similarity calculation method, and determining whether the similarity value is greater than a preset image similarity threshold;
    if the similarity value is not greater than the preset image similarity threshold, inputting the first face image into a linear classifier of a preset prediction model for computation to obtain feature information output by the linear classifier, the feature information including at least a predicted age range, wherein the prediction model is formed by connecting the linear classifier and a preset non-linear classifier in sequence, and the linear classifier is trained in advance on sample data comprising facial images and feature information corresponding to the facial images;
    inputting the feature information into the non-linear classifier for computation to obtain web page rendering information composed of a plurality of pieces of sub-information output by the non-linear classifier, wherein the plurality of pieces of sub-information include at least color sub-information, icon style sub-information, and layout style sub-information of the web page, and the non-linear classifier is trained in advance on sample data comprising feature information and the plurality of pieces of sub-information corresponding to the feature information;
    sending the plurality of pieces of sub-information, together with request information for returning sub-data packages used for web page rendering, to the corresponding plurality of servers;
    receiving the plurality of sub-data packages correspondingly returned by the plurality of servers, combining the plurality of sub-data packages into a rendering data package, and rendering the web page with the rendering data package.
  16. The computer-readable storage medium according to claim 15, wherein the calculating the image similarity value between the first face image and a pre-stored designated face image according to the preset image similarity calculation method comprises:
    performing scaling processing on the first face image according to a preset scaling method to obtain a scaled picture, wherein the eyebrow-to-eye spacing of the scaled picture is equal to the eyebrow-to-eye spacing of the designated face image;
    acquiring a plurality of first feature lengths from the scaled picture, and acquiring a plurality of second feature lengths from the designated face image, wherein the plurality of first feature lengths and the plurality of second feature lengths each include at least a face length and a face width, and the plurality of first feature lengths correspond one-to-one to the plurality of second feature lengths;
    generating a first matrix [U1, U2, ..., Un] and a second matrix [P1, P2, ..., Pn], where U1, U2, ..., Un are the plurality of first feature lengths, P1, P2, ..., Pn are the plurality of second feature lengths, and there are n first feature lengths and n second feature lengths in total;
    calculating the image similarity value M between the first face image and the designated face image according to the formula M = 1 / | ‖[U1, U2, ..., Un]·[P1, P2, ..., Pn]^T‖ − n |.
  17. The computer-readable storage medium according to claim 16, wherein the performing scaling processing on the first face image according to the preset scaling method to obtain the scaled picture comprises:
    detecting the first face image with a preset facial feature point detection model to obtain a plurality of first facial feature points, and detecting the designated face image with the facial feature point detection model to obtain a plurality of second facial feature points;
    generating a first minimum bounding rectangle of the plurality of first facial feature points, and generating a second minimum bounding rectangle of the plurality of second facial feature points;
    performing proportional scaling on the first face image so that the area of the first minimum bounding rectangle equals the area of the second minimum bounding rectangle, thereby obtaining the scaled picture.
  18. The computer-readable storage medium according to claim 17, wherein the facial feature point detection model is trained only on face images placed in a preset standard pose, the designated face image is placed in the standard pose, and the detecting the first face image with the preset facial feature point detection model to obtain a plurality of first facial feature points, and detecting the designated face image with the facial feature point detection model to obtain a plurality of second facial feature points, comprises:
    detecting the first face image with the preset facial feature point detection model to obtain a first number, and detecting the designated face image with the facial feature point detection model to obtain a second number, wherein the first number is the number of facial feature points in the first face image and the second number is the number of facial feature points in the designated face image;
    determining whether the absolute value of the difference between the first number and the second number is less than a preset difference threshold;
    if the absolute value of the difference between the first number and the second number is not less than the preset difference threshold, performing one or more rotation operations on the first face image to obtain one or more rotated images;
    detecting the rotated images in turn with the facial feature point detection model until the absolute value of the difference between the number of detected facial feature points and the second number is less than the preset difference threshold, and recording the facial feature points last output by the facial feature point detection model as the first facial feature points;
    acquiring the first facial feature points, and acquiring the second facial feature points generated by detecting the designated face image with the facial feature point detection model.
  19. The computer-readable storage medium according to claim 15, wherein after the calculating the image similarity value between the first face image and a pre-stored designated face image according to the preset image similarity calculation method and determining whether the similarity value is greater than the preset image similarity threshold, the following steps are further implemented:
    if the similarity value is greater than the preset image similarity threshold, acquiring, according to a preset correspondence between face images and data packages, a designated data package corresponding to the designated face image, wherein the designated data package is used for rendering the web page;
    rendering the web page with the designated data package.
  20. The computer-readable storage medium according to claim 15, wherein before the step of, if the similarity value is not greater than the preset image similarity threshold, inputting the first face image into the linear classifier of the preset prediction model for computation to obtain the feature information output by the linear classifier, the feature information including at least a predicted age range, and the prediction model being formed by connecting the linear classifier and a preset non-linear classifier in sequence, the following steps are implemented:
    acquiring pre-collected training data and test data, both of which include facial images and feature information corresponding to the facial images;
    training a preset initial linear classifier with the training data using stochastic gradient descent, thereby obtaining an intermediate linear classifier;
    validating the intermediate linear classifier with the test data, and determining whether the intermediate linear classifier passes the validation;
    if the intermediate linear classifier passes the validation, using the intermediate linear classifier as the final linear classifier.
PCT/CN2020/088015 2019-12-23 2020-04-30 Machine learning-based web page rendering method and apparatus, and computer device WO2021128682A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911342654.1 2019-12-23
CN201911342654.1A CN111177622A (en) 2019-12-23 2019-12-23 Webpage rendering method and device based on machine learning and computer equipment

Publications (1)

Publication Number Publication Date
WO2021128682A1 true WO2021128682A1 (en) 2021-07-01

Family

ID=70646312

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/088015 WO2021128682A1 (en) 2019-12-23 2020-04-30 Machine learning-based web page rendering method and apparatus, and computer device

Country Status (2)

Country Link
CN (1) CN111177622A (en)
WO (1) WO2021128682A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113360819A (en) * 2021-05-14 2021-09-07 山东英信计算机技术有限公司 Page layout method and device
CN116127587B (en) * 2023-04-17 2023-06-16 矩阵纵横设计股份有限公司 Rendering method and system in indoor design

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104683337A (en) * 2015-02-13 2015-06-03 小米科技有限责任公司 Webpage access method, device and system
CN105373628A (en) * 2015-12-15 2016-03-02 广州唯品会信息科技有限公司 Webpage display method and system
CN107045622A (en) * 2016-12-30 2017-08-15 浙江大学 The face age estimation method learnt based on adaptive age distribution
US20190287120A1 (en) * 2018-03-19 2019-09-19 Target Brands, Inc. Content management of digital retail displays

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780228A (en) * 2021-09-18 2021-12-10 中科海微(北京)科技有限公司 Method, system, terminal and medium for comparing testimony of a witness
CN113780228B (en) * 2021-09-18 2023-07-11 中科海微(北京)科技有限公司 Person evidence comparison method, system, terminal and medium

Also Published As

Publication number Publication date
CN111177622A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
WO2021128682A1 (en) Machine learning-based web page rendering method and apparatus, and computer device
US10268910B1 (en) Authentication based on heartbeat detection and facial recognition in video data
CN106599872A (en) Method and equipment for verifying living face images
US11263441B1 (en) Systems and methods for passive-subject liveness verification in digital media
JP2018532188A (en) Image-based CAPTCHA challenge
WO2022033220A1 (en) Face liveness detection method, system and apparatus, computer device, and storage medium
JP5856330B2 (en) Authentication using video signature
TWI711305B (en) Method, device and electronic apparatus for video abstraction generation and storage medium thereof
JP2007072703A (en) Content provision device, content provision method and content provision program
US11373449B1 (en) Systems and methods for passive-subject liveness verification in digital media
EP3001343B1 (en) System and method of enhanced identity recognition incorporating random actions
CN110008664A (en) Authentication information acquisition, account-opening method, device and electronic equipment
CN108985133A (en) A kind of the age prediction technique and device of facial image
WO2021082562A1 (en) Spoofing detection method and apparatus, electronic device, storage medium and program product
CN110049309A (en) The Detection of Stability method and apparatus of picture frame in video flowing
CN108511066A (en) information generating method and device
CN113989156A (en) Method, apparatus, medium, device, and program for reliability verification of desensitization method
CN105407069B (en) Living body authentication method, apparatus, client device and server
US11710353B2 (en) Spoof detection based on challenge response analysis
CN110633677A (en) Face recognition method and device
JP6651038B1 (en) Age privacy protection method and system for face recognition
WO2021072883A1 (en) Picture correction method and apparatus based on facial feature point detection, and computer device
Prabhu et al. Design of multiple share creation with optimal signcryption based secure biometric authentication system for cloud environment
CN108460811B (en) Face image processing method and device and computer equipment
CN115830668A (en) User authentication method and device based on facial recognition, computing equipment and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20905302

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 28/10/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20905302

Country of ref document: EP

Kind code of ref document: A1