CN115661447B - Product image adjustment method based on big data - Google Patents

Info

Publication number
CN115661447B
CN115661447B (application CN202211471120.0A)
Authority
CN
China
Prior art keywords
region
image
visual
characteristic
visual key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211471120.0A
Other languages
Chinese (zh)
Other versions
CN115661447A (en)
Inventor
任红萍
陈波名
文武
曹凯奇
张慧聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Haixie Technology Co ltd
Shanghai Xingyun Information Technology Co ltd
Original Assignee
Shanghai Xingyun Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xingyun Information Technology Co ltd filed Critical Shanghai Xingyun Information Technology Co ltd
Priority to CN202211471120.0A priority Critical patent/CN115661447B/en
Publication of CN115661447A publication Critical patent/CN115661447A/en
Application granted granted Critical
Publication of CN115661447B publication Critical patent/CN115661447B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a product image adjustment method based on big data, which comprises the following steps: the information acquisition module of the intelligent e-commerce platform sends a plurality of display images corresponding to the target product to each user terminal and acquires image feedback information from each test user; the interest recognition module analyzes all test users' interest degrees in each image feature region, together with the region information quantity of the corresponding image feature region, to obtain the visual priority of each image feature region; the parameter enhancement module determines the region processing parameters of each visual key region in the corresponding display image according to the first parameter interval, the second parameter interval and the visual priority of each visual key region; and the image processing module enhances the contrast of each visual key region in the corresponding display image according to the region processing parameters, so as to obtain an effect-enhanced image of the corresponding display image.

Description

Product image adjustment method based on big data
Technical Field
The invention relates to the field of electronic commerce, in particular to a product image adjustment method based on big data.
Background
Electronic commerce generally refers to a novel business model in which, in the open network environment of the Internet and based on client/server applications, consumers shop online, merchants transact with one another and settle payments electronically, and a wide range of business, trading, financial and related comprehensive service activities are conducted globally.
As the demand for accurate information transmission grows, images, as an effective carrier of information, have become an indispensable element of daily life. Although human vision covers a wide field of view, the scope of attention when observing an image is very small, and a relatively large share of viewing time is invested at the points of attention, so these regions influence perceived image quality far more than other regions do.
The display images of conventional products contain overly complex scene content, and their display effect is therefore unsatisfactory: the user's attention is dispersed, the image struggles to highlight the product details that most interest consumers, and it fails to attract the user's attention.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a product image adjusting method based on big data, which comprises the following steps:
the intelligent electronic commerce platform information acquisition module responds to an image optimization request sent by the management terminal, sends a plurality of display images corresponding to the target product to user terminals of all test users, and acquires image feedback information of all the test users on different display images;
the interest recognition module analyzes the image feedback information of each test user to obtain the test user's interest degree in each image feature region of the corresponding display image, and analyzes all test users' interest degrees in each image feature region together with the region information quantity of the corresponding image feature region to obtain the visual priority of each image feature region;
the parameter enhancement module identifies visual key regions in the corresponding display image according to the visual priority of each image characteristic region, and determines region processing parameters of each visual key region in the corresponding display image according to the first parameter interval, the second parameter interval and the visual priority of each visual key region;
and the image processing module enhances the contrast of each visual key area in the corresponding display image according to the area processing parameters so as to obtain an effect enhanced image of the corresponding display image.
According to a preferred embodiment, the image optimization request comprises a device identifier, a product number, a number of presentation images of the target product and format information of the presentation images; the display image is used for displaying the shape and structure of the target product.
According to a preferred embodiment, the image feedback information is used to characterize the gaze information of the test user for different image regions of the display image, including the position information of the image gaze points, the dwell time, the test user's gaze scan path and the number of eye jumps (saccades).
According to a preferred embodiment, the interest recognition module analyzing the test user's interest degree in each image feature region of the corresponding display image according to the test user's image feedback information comprises:
the interest recognition module analyzes the eye movement state of the test user according to the image feedback information of the test user to obtain eye movement characteristics of the corresponding test user, and obtains the interest degree of the test user in each image characteristic area according to the eye movement characteristic analysis, wherein the eye movement characteristics are used for representing the stay time, the eye jump data and the scanning track of the corresponding test user in each image characteristic area in the corresponding display image.
According to a preferred embodiment, the interest recognition module obtaining the visual priority of each image feature region by analyzing all test users' interest degrees in each image feature region and the region information quantity of the corresponding image feature region comprises:
the interest identification module determines an area weight value of each image characteristic area according to an area information quantity corresponding to each image characteristic area, wherein the area information quantity is used for representing the quantity of product characteristics of a target product contained in the corresponding image characteristic area;
and the interest recognition module performs weighted fusion on different interestingness of all test users corresponding to the same image characteristic region according to the region weight value of each image characteristic region so as to obtain the visual priority of the corresponding image characteristic region.
According to a preferred embodiment, the parameter enhancement module determines the region processing parameters of the respective visual key regions in the display image according to the first parameter interval, the second parameter interval and the visual priority of each visual key region, including:
the parameter enhancement module acquires pixel characteristics of each visual key region to compare the pixel characteristics with global pixel characteristics of a corresponding display image to obtain first region difference characteristics of each visual key region, and analyzes the first characteristic difference degree corresponding to the first region difference characteristics of each visual key region to obtain a first parameter interval corresponding to the visual key region;
the parameter enhancement module compares the pixel characteristics of each visual key region with the pixel characteristics of the related adjacent image characteristic regions to obtain second region difference characteristics of each visual key region, and analyzes the second characteristic difference degree corresponding to the second region difference characteristics of each visual key region to obtain a second parameter interval corresponding to the visual key region;
the parameter enhancement module determines, for each visual key region in the display image whose visual priority is greater than a preset priority threshold, a weight coefficient for each pixel point of that visual key region according to its first parameter interval and second parameter interval, and obtains the region processing parameter of the corresponding visual key region by fusing the pixel values and weight coefficients of these pixel points.
According to a preferred embodiment, the analyzing the second feature difference degree corresponding to the second region difference feature of each visual key region to obtain the second parameter interval of the corresponding visual key region includes:
the parameter enhancement module compares the difference of each characteristic value in the second region difference characteristic corresponding to each visual key region with a second difference threshold value to obtain a second characteristic difference degree between each visual key region and the related adjacent image characteristic region, wherein the second characteristic difference degree is used for representing the local chromaticity contrast and the local brightness contrast between each visual key region and the related adjacent image characteristic region;
the parameter enhancement module determines a transformable range of each characteristic value difference in the difference characteristics of the second area according to the minimum visual difference of the human eyes and the second characteristic difference degree of the corresponding visual key area to obtain a second parameter interval of the corresponding visual key area.
According to a preferred embodiment, the parameter enhancement module comparing the pixel characteristics of each visual key region with the pixel characteristics of its related adjacent image feature regions to obtain the second region difference characteristic of each visual key region comprises:
the parameter enhancement module establishes a corresponding first key feature matrix for the corresponding visual key region according to the pixel feature of each visual key region, and establishes a corresponding second key feature matrix according to the pixel feature of each adjacent image feature region related to the first key feature matrix;
the parameter enhancement module obtains a first key neighborhood entropy of each visual key region according to matrix variance of a first key feature matrix corresponding to each visual key region and a first matrix neighborhood entropy corresponding to each first key feature matrix, and obtains a second key neighborhood entropy of each adjacent image feature region according to matrix variance of a second key feature matrix corresponding to each adjacent image feature region corresponding to each visual key region and a second matrix neighborhood entropy corresponding to each second key feature matrix, wherein the first matrix neighborhood entropy is used for representing weight coefficients of feature vectors in the first key feature matrix;
the parameter enhancement module projects the characteristic component of the first key neighborhood entropy of each visual key region and the characteristic component of the second key neighborhood entropy of each adjacent image characteristic region related to the first key neighborhood entropy into characteristic subspaces of different scales to obtain a plurality of characteristic value differences between each visual key region and the adjacent image characteristic regions related to the visual key region, and generates a second region difference characteristic of the corresponding visual key region according to the plurality of characteristic value differences, wherein the second region difference characteristic is used for representing pixel average value differences and gray level aggregation differences between each visual key region and the adjacent image characteristic regions related to the visual key region.
According to a preferred embodiment, the first parameter interval is used to characterize the range over which the pixel values of each visual key region may be enhanced relative to the corresponding display image; the second parameter interval is used to characterize the range over which the pixel values of each visual key region may be enhanced relative to its related adjacent image feature regions.
According to a preferred embodiment, the minimum visual difference is a priori knowledge of the human body pre-stored by the system, which represents the minimum pixel value difference perceivable to the human eye.
The invention has the following beneficial effects:
according to the big data-based product image adjustment method provided by the invention, the interest degree of each test user on different image areas in the display image of the product is obtained through eye movement feature analysis when the test user views different display images of the product, the visual key areas in the corresponding display image are obtained according to the interest degree analysis of each test user, and then the contrast of each visual key area is enhanced according to the area processing parameters of each visual key area. The invention obviously improves the display effect of the commodity image by identifying the visual interest areas of a plurality of users in the commodity image and enhancing the contrast of the visual interest areas, is beneficial to the commodity image to show the product details which are more interesting to the consumers, and improves the purchasing desire of the consumers.
Drawings
Fig. 1 is a flowchart of a method for adjusting a product image based on big data according to an exemplary embodiment.
Detailed Description
The objects, technical solutions and advantages of the present invention will become more apparent by the following detailed description of the present invention with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the invention. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Referring to fig. 1, in one embodiment, a big data based product image adjustment method may include:
s1, an information acquisition module of the intelligent electronic commerce platform responds to an image optimization request sent by a management terminal, sends a plurality of display images corresponding to a target product to user terminals of all test users, and acquires image feedback information of all the test users on different display images.
Optionally, the image optimization request includes a device identifier, a product number, a plurality of display images of the target product, and format information of the display images; the display image is used for displaying the shape and structure of the target product.
Optionally, the device identifier is used for uniquely identifying the management terminal; the product number is used for uniquely identifying the product; the format information is used to characterize the compression format and decoding format of the corresponding presentation image.
Optionally, the image feedback information is used to characterize the gaze information of the test user for different image regions of the display image; the gaze information comprises the position information of the image gaze points, the dwell time, the test user's gaze scan path and the number of eye jumps, and is obtained by capturing the test user's eye movement behavior in real time through a camera device externally connected to the corresponding user terminal.
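As a concrete illustration of what such an image feedback record might look like, the following minimal Python structure is offered purely as a sketch; all field names are assumptions, not the patent's specification:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class GazeFeedback:
    """One test user's gaze record for one display image (illustrative)."""
    user_id: str
    image_id: str
    fixations: List[Tuple[float, float]] = field(default_factory=list)  # (x, y) gaze points
    dwell_times: List[float] = field(default_factory=list)  # seconds spent at each gaze point
    saccade_count: int = 0  # number of eye jumps recorded by the external camera device

    def total_dwell(self) -> float:
        # Total time the user's gaze rested on this image
        return sum(self.dwell_times)

fb = GazeFeedback("user1", "img7", [(120.0, 80.0), (130.0, 95.0)], [0.4, 0.6], saccade_count=1)
```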
Optionally, the management terminal is a device with computing, storage and communication functions used by the product seller, including: smart phones, desktop computers, and notebook computers.
S2, the interest recognition module analyzes the image feedback information of each test user to obtain the test user's interest degree in each image feature region of the corresponding display image, and analyzes all test users' interest degrees in each image feature region together with the region information quantity of the corresponding image feature region to obtain the visual priority of each image feature region.
Specifically, the interest recognition module analyzes and obtains the interest degree of the test user in the corresponding test image on each image characteristic area according to the image feedback information of the test user, and the interest degree comprises the following steps:
the interest recognition module analyzes the eye movement state of the test user according to the image feedback information of the test user to obtain eye movement characteristics of the corresponding test user, and obtains the interest degree of the test user in each image characteristic area according to the eye movement characteristic analysis, wherein the eye movement characteristics are used for representing the stay time, the eye jump data and the scanning track of the corresponding test user in each image characteristic area in the corresponding display image.
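The text above does not disclose the exact mapping from eye movement features to interest degree. One plausible sketch, in which longer dwell time and more fixations raise the score while frequent eye jumps (scattered attention) lower it, is given below; the weights and saturating transforms are assumptions:

```python
def interest_degree(dwell_time, fixation_count, saccade_count,
                    w_dwell=0.6, w_fix=0.3, w_sacc=0.1):
    """Illustrative interest score for one image feature region, in [0, 1]."""
    # Normalise each eye movement cue into [0, 1] with simple saturating transforms
    dwell_score = dwell_time / (dwell_time + 1.0)       # longer stays -> higher interest
    fix_score = fixation_count / (fixation_count + 3.0)  # more fixations -> higher interest
    sacc_penalty = 1.0 / (1.0 + saccade_count)           # many eye jumps -> lower interest
    return w_dwell * dwell_score + w_fix * fix_score + w_sacc * sacc_penalty
```

A region viewed for 2 s with 5 fixations thus scores higher than one glanced at briefly.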
Specifically, the interest recognition module analyzes the interest degree of each image feature area and the area information quantity of the corresponding image feature area according to all the test users to obtain the visual priority of each image feature area, which comprises the following steps:
the interest identification module determines an area weight value of each image characteristic area according to an area information quantity corresponding to each image characteristic area, wherein the area information quantity is used for representing the quantity of product characteristics of a target product contained in the corresponding image characteristic area;
and the interest recognition module performs weighted fusion on different interestingness of all test users corresponding to the same image characteristic region according to the region weight value of each image characteristic region so as to obtain the visual priority of the corresponding image characteristic region.
Optionally, the size of the region weight value is used for representing the importance degree of the corresponding image feature region, that is, the more product features contained in the image feature region, the larger the occupied region weight value. Optionally, the visual priority is used for representing the attraction degree of the corresponding image feature region to human eye vision, that is, the larger the visual priority is, the larger the user attention degree of the corresponding image feature region is.
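A minimal sketch of the weighted fusion described above, assuming the region weight value is simply the fraction of the target product's features that fall inside the region (the patent leaves the exact weighting unspecified):

```python
def visual_priority(user_interests, feature_count, total_features):
    """Fuse all test users' interest degrees for one image feature region
    into its visual priority (illustrative scheme)."""
    if not user_interests:
        return 0.0
    # Region weight value derived from the region information quantity
    region_weight = feature_count / max(total_features, 1)
    # Weighted fusion: region weight times the mean interest across test users
    mean_interest = sum(user_interests) / len(user_interests)
    return region_weight * mean_interest
```

Under this scheme a region containing more product features receives a higher visual priority for the same user interest, matching the paragraph above.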
And S3, the parameter enhancement module identifies the visual key region in the corresponding display image according to the visual priority of each image characteristic region, and determines the region processing parameters of each visual key region in the corresponding display image according to the first parameter interval, the second parameter interval and the visual priority of each visual key region.
Optionally, the region processing parameter is used for adjusting pixels of each pixel point in the corresponding visual key region.
Specifically, the determining, by the parameter enhancement module, the region processing parameters of the respective visual key regions in the display image according to the first parameter interval, the second parameter interval and the visual priority of each visual key region includes:
the parameter enhancement module obtains pixel characteristics of each visual key region to compare the pixel characteristics with global pixel characteristics of a corresponding display image to obtain first region difference characteristics of each visual key region, and obtains a first parameter interval of the corresponding visual key region according to first characteristic difference degree analysis corresponding to the first region difference characteristics of each visual key region, wherein the first region difference characteristics are used for representing pixel average value difference and gray level aggregation difference between each visual key region and the corresponding display image; the first feature difference is used for representing global chromaticity contrast and global brightness contrast between each visual key region and the corresponding display image;
the parameter enhancement module compares the pixel characteristics of each visual key region with the pixel characteristics of the related adjacent image characteristic regions to obtain second region difference characteristics of each visual key region, and analyzes the second characteristic difference degree corresponding to the second region difference characteristics of each visual key region to obtain a second parameter interval corresponding to the visual key region;
the parameter enhancement module determines, for each visual key region in the display image whose visual priority is greater than a preset priority threshold, a weight coefficient for each pixel point of that visual key region according to its first parameter interval and second parameter interval, and obtains the region processing parameter of the corresponding visual key region by fusing the pixel values and weight coefficients of these pixel points.
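As an illustration of the first of the three steps above, a first parameter interval could be derived from global pixel statistics as follows; the mapping from the feature difference degree to the interval width is an assumption, not the patent's formula:

```python
def first_parameter_interval(region_pixels, global_pixels, max_gain=0.5):
    """Derive a multiplicative enhancement range for a visual key region
    from its contrast against the whole display image (illustrative)."""
    region_mean = sum(region_pixels) / len(region_pixels)
    global_mean = sum(global_pixels) / len(global_pixels)
    # First feature difference degree: normalised global contrast of the region
    diff_degree = abs(region_mean - global_mean) / 255.0
    # A region that already stands out globally gets less enhancement headroom
    gain = max_gain * (1.0 - min(diff_degree, 1.0))
    return (1.0, 1.0 + gain)
```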
Optionally, the preset priority threshold is a value preset by the system and used for judging whether the attraction degree of the corresponding image feature area to the human eye sight line is larger.
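Building on this threshold, the fusion into a single region processing parameter might be sketched as below. The per-pixel weighting scheme and the clipping of the fused value into the overlap of the two parameter intervals are assumptions, since the patent leaves the fusion unspecified:

```python
def region_processing_parameter(pixels, interval1, interval2,
                                priority, priority_threshold=0.5):
    """Fuse per-pixel weight coefficients into one region processing
    parameter (a gain) for regions above the priority threshold (sketch)."""
    if priority <= priority_threshold:
        return None  # region does not attract enough attention to be enhanced
    mean = sum(pixels) / len(pixels)
    # Weight coefficient per pixel point: farther from the region mean -> larger weight
    weights = [abs(p - mean) + 1.0 for p in pixels]
    fused = sum(w * p for w, p in zip(weights, pixels)) / sum(weights) / max(mean, 1.0)
    # Clip into the overlap of the first and second parameter intervals
    lo = max(interval1[0], interval2[0])
    hi = min(interval1[1], interval2[1])
    return max(lo, min(hi, fused))
```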
Specifically, the analyzing the second feature difference degree corresponding to the second region difference feature of each visual key region to obtain the second parameter interval corresponding to the visual key region includes:
the parameter enhancement module compares the difference of each characteristic value in the second region difference characteristic corresponding to each visual key region with a second difference threshold value to obtain a second characteristic difference degree between each visual key region and the related adjacent image characteristic region, wherein the second characteristic difference degree is used for representing the local chromaticity contrast and the local brightness contrast between each visual key region and the related adjacent image characteristic region;
the parameter enhancement module determines a transformable range of each characteristic value difference in the difference characteristics of the second area according to the minimum visual difference of the human eyes and the second characteristic difference degree of the corresponding visual key area to obtain a second parameter interval of the corresponding visual key area.
Optionally, the minimum visual difference is human body priori knowledge pre-stored by the system, which represents a minimum pixel value difference perceptible to human eyes.
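A sketch of how this minimum visual difference could bound the second parameter interval, read here as a range of allowable pixel-value differences between a visual key region and an adjacent region; the JND value of 3 gray levels and the upper cap are assumptions:

```python
JND = 3.0  # assumed minimum pixel-value difference perceivable by the human eye

def second_parameter_interval(region_mean, neighbor_mean):
    """Transformable range of the region/neighbour pixel difference (sketch)."""
    # Second feature difference degree: the current local contrast
    local_diff = abs(region_mean - neighbor_mean)
    lower = local_diff + JND                     # enhancement must remain perceivable
    upper = min(local_diff * 2.0 + JND, 255.0)   # assumed cap against halo artefacts
    return (lower, upper)
```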
Specifically, the parameter enhancement module comparing the pixel characteristics of each visual-critical region with the pixel characteristics of its associated adjacent image-characteristic region to obtain a second region-difference characteristic for each visual-critical region comprises:
the parameter enhancement module establishes a corresponding first key feature matrix for the corresponding visual key region according to the pixel feature of each visual key region, and establishes a corresponding second key feature matrix according to the pixel feature of each adjacent image feature region related to the first key feature matrix;
the parameter enhancement module obtains a first key neighborhood entropy of each visual key region according to matrix variance of a first key feature matrix corresponding to each visual key region and a first matrix neighborhood entropy corresponding to each first key feature matrix, and obtains a second key neighborhood entropy of each adjacent image feature region according to matrix variance of a second key feature matrix corresponding to each adjacent image feature region corresponding to each visual key region and a second matrix neighborhood entropy corresponding to each second key feature matrix, wherein the first matrix neighborhood entropy is used for representing weight coefficients of feature vectors in the first key feature matrix;
the parameter enhancement module projects the characteristic component of the first key neighborhood entropy of each visual key region and the characteristic component of the second key neighborhood entropy of each adjacent image characteristic region related to the first key neighborhood entropy into characteristic subspaces of different scales to obtain a plurality of characteristic value differences between each visual key region and the adjacent image characteristic regions related to the visual key region, and generates a second region difference characteristic of the corresponding visual key region according to the plurality of characteristic value differences, wherein the second region difference characteristic is used for representing pixel average value differences and gray level aggregation differences between each visual key region and the adjacent image characteristic regions related to the visual key region.
Optionally, the first parameter interval is used to characterize the range over which the pixel values of each visual key region may be enhanced relative to the corresponding display image; the second parameter interval is used to characterize the range over which the pixel values of each visual key region may be enhanced relative to its related adjacent image feature regions.
Optionally, the second matrix neighborhood entropy is used for representing the weight coefficient of each feature vector in the second key feature matrix; the first key neighborhood entropy is used for representing pixel dispersion of each pixel point in the corresponding visual key region; the second key neighborhood entropy is used for representing pixel dispersion of each pixel point in the adjacent image characteristic region corresponding to the visual key region.
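The "key neighbourhood entropy" is not defined in closed form in the text above. One plausible reading, offered purely as an assumption, is the Shannon entropy of the region's value histogram scaled by its variance, so that the result grows with the pixel dispersion the paragraph describes:

```python
import math

def neighborhood_entropy(matrix):
    """Illustrative 'key neighbourhood entropy' of a feature matrix:
    histogram entropy scaled by variance, measuring pixel dispersion."""
    values = [v for row in matrix for v in row]
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    # Shannon entropy of the value histogram
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy * (1.0 + variance / 255.0)
```

A perfectly uniform region scores zero; a high-contrast region scores strictly higher.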
And S4, the image processing module enhances the contrast of each visual key area in the corresponding display image according to the area processing parameters so as to obtain an effect enhanced image of the corresponding display image.
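Step S4's contrast enhancement could, for instance, stretch each visual key region about a pivot using the region processing parameter as the gain. The pivot-based stretch below is a standard contrast technique, not necessarily the patent's exact transform:

```python
def enhance_region(pixels, gain, pivot=128.0):
    """Stretch a visual key region's contrast about a pivot (sketch).

    `gain` plays the role of the region processing parameter: values above 1
    push pixels away from the pivot, increasing local contrast.
    """
    out = []
    for p in pixels:
        q = pivot + gain * (p - pivot)          # expand differences from the pivot
        out.append(max(0, min(255, round(q))))  # clamp to the valid pixel range
    return out
```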
According to the big data-based product image adjustment method provided by the invention, the interest degree of each test user on different image areas in the display image of the product is obtained through eye movement feature analysis when the test user views different display images of the product, the visual key areas in the corresponding display image are obtained according to the interest degree analysis of each test user, and then the contrast of each visual key area is enhanced according to the area processing parameters of each visual key area. The invention obviously improves the display effect of the commodity image by identifying the visual interest areas of a plurality of users in the commodity image and enhancing the contrast of the visual interest areas, is beneficial to the commodity image to show the product details which are more interesting to the consumers, and improves the purchasing desire of the consumers.
In one embodiment, a big data based product image adjustment system for performing the method of the present invention includes a management terminal, a user terminal, and an intelligent e-commerce platform. The intelligent e-commerce platform is in communication connection with the management terminal and the user terminal respectively. The user terminal is a device with computing, storage and communication functions for use by a consumer of a product, comprising: smart phones, desktop computers, and notebook computers.
The intelligent electronic commerce platform comprises an information acquisition module, an interest identification module, a parameter enhancement module and an image processing module.
The information acquisition module is used for responding to the image optimization request sent by the management terminal, sending a plurality of display images corresponding to the target product to the user terminals of all the test users, and acquiring image feedback information of all the test users on different display images.
The interest recognition module is used for analyzing and obtaining the interest degree of the test user in each image characteristic region in the corresponding test image according to the image feedback information of the test user, and analyzing and obtaining the visual priority of each image characteristic region according to the interest degree of all the test users in each image characteristic region and the region information quantity of the corresponding image characteristic region.
The parameter enhancement module is used for identifying visual key regions in the corresponding display image according to the visual priority of each image characteristic region, and determining region processing parameters of each visual key region in the corresponding display image according to the first parameter interval, the second parameter interval and the visual priority of each visual key region.
The image processing module is used for enhancing the contrast of each visual key area in the corresponding display image according to the area processing parameters so as to obtain an effect enhanced image of the corresponding display image.
In addition, while specific functions are discussed above with reference to specific modules, the functions of any module discussed herein may be divided among multiple modules, and the functions of multiple modules may be combined into a single module. Moreover, a particular module performing an action includes the case where the module performs the action itself, as well as the case where it invokes or otherwise accesses another component or module that performs the action (or that performs the action in conjunction with the particular module).
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various devices, elements, or components, these devices, elements, and components should not be limited by those terms, which serve only to distinguish one device, element, or component from another.
The foregoing description covers preferred embodiments of the invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention falls within its scope of protection.

Claims (7)

1. A method for adjusting a product image based on big data, the method comprising:
the intelligent electronic commerce platform information acquisition module responds to an image optimization request sent by the management terminal, sends a plurality of display images corresponding to the target product to user terminals of all test users, and acquires image feedback information of all the test users on different display images;
the interest recognition module analyzes the image feedback information of each test user to obtain that user's degree of interest in each image characteristic region of the corresponding test image, and analyzes the interest levels of all test users in each image characteristic region, together with the region information amount of the corresponding image characteristic region, to obtain the visual priority of each image characteristic region;
the parameter enhancement module identifies visual key regions in the corresponding display image according to the visual priority of each image characteristic region, and determines region processing parameters of each visual key region in the corresponding display image according to the first parameter interval, the second parameter interval and the visual priority of each visual key region;
the parameter enhancement module determines the region processing parameters of each visual key region in the corresponding display image according to the first parameter interval, the second parameter interval and the visual priority of each visual key region, and the region processing parameters comprise: the parameter enhancement module acquires pixel characteristics of each visual key region to compare the pixel characteristics with global pixel characteristics of a corresponding display image to obtain first region difference characteristics of each visual key region, and analyzes the first characteristic difference degree corresponding to the first region difference characteristics of each visual key region to obtain a first parameter interval corresponding to the visual key region; the parameter enhancement module compares the pixel characteristics of each visual key region with the pixel characteristics of the related adjacent image characteristic regions to obtain second region difference characteristics of each visual key region, and analyzes the second characteristic difference degree corresponding to the second region difference characteristics of each visual key region to obtain a second parameter interval corresponding to the visual key region; the parameter enhancement module determines a weight coefficient of each pixel point of the visual key region, which corresponds to the visual key region with the visual priority greater than a preset priority threshold value in the display image, according to the first parameter interval and the second parameter interval of each visual key region, and fuses the pixel value of each pixel point and the weight coefficient to obtain a region processing parameter of the corresponding visual key region;
the parameter enhancement module comparing the pixel characteristics of each visual-key region with the pixel characteristics of its associated adjacent image-characteristic region to obtain a second region-difference characteristic for each visual-key region comprises: the parameter enhancement module establishes a corresponding first key feature matrix for the corresponding visual key region according to the pixel feature of each visual key region, and establishes a corresponding second key feature matrix according to the pixel feature of each adjacent image feature region related to the first key feature matrix; the parameter enhancement module obtains a first key neighborhood entropy of each visual key region according to matrix variance of a first key feature matrix corresponding to each visual key region and a first matrix neighborhood entropy corresponding to each first key feature matrix, and obtains a second key neighborhood entropy of each adjacent image feature region according to matrix variance of a second key feature matrix corresponding to each adjacent image feature region corresponding to each visual key region and a second matrix neighborhood entropy corresponding to each second key feature matrix, wherein the first matrix neighborhood entropy is used for representing weight coefficients of feature vectors in the first key feature matrix; the parameter enhancement module projects the characteristic component of the first key neighborhood entropy of each visual key region and the characteristic component of the second key neighborhood entropy of each adjacent image characteristic region related to the first key neighborhood entropy into characteristic subspaces of different scales to obtain a plurality of characteristic value differences between each visual key region and the adjacent image characteristic region related to the visual key region, and generates a second region difference characteristic of the corresponding visual key region according 
to the plurality of characteristic value differences, wherein the second region difference characteristic is used for representing pixel average value differences and gray level aggregation differences between each visual key region and the adjacent image characteristic region related to the visual key region;
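The claim does not define how the neighborhood entropy is computed; as a loose illustration, the sketch below compares a key region with one adjacent region using a variance-weighted histogram entropy and reports the two stated difference components (pixel average difference and gray-level aggregation difference). The function names, bin count, and variance weighting are all assumptions.

```python
import numpy as np

def neighborhood_entropy(patch, bins=16):
    """Shannon entropy of a patch's gray-level histogram, scaled by its
    variance -- a simple stand-in for the patent's 'key neighborhood entropy'."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return entropy * (1.0 + patch.var())      # variance acts as a weight coefficient

def second_region_difference(key_patch, neighbor_patch):
    """Feature-value differences between a visual key region and one adjacent
    region: mean (pixel average) difference and entropy (gray-level
    aggregation) difference."""
    mean_diff = abs(key_patch.mean() - neighbor_patch.mean())
    entropy_diff = abs(neighborhood_entropy(key_patch)
                       - neighborhood_entropy(neighbor_patch))
    return np.array([mean_diff, entropy_diff])
```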
the analyzing the second feature difference degree corresponding to the second region difference feature of each visual key region to obtain a second parameter interval corresponding to the visual key region includes: the parameter enhancement module compares the difference of each characteristic value in the second region difference characteristic corresponding to each visual key region with a second difference threshold value to obtain a second characteristic difference degree between each visual key region and the related adjacent image characteristic region, wherein the second characteristic difference degree is used for representing the local chromaticity contrast and the local brightness contrast between each visual key region and the related adjacent image characteristic region; the parameter enhancement module determines a transformable range of each characteristic value difference in the difference characteristics of the second area according to the minimum visual difference of the human eyes and the second characteristic difference degree of the corresponding visual key area to obtain a second parameter interval of the corresponding visual key area;
and the image processing module enhances the contrast of each visual key area in the corresponding display image according to the area processing parameters so as to obtain an effect enhanced image of the corresponding display image.
2. The method of claim 1, wherein the image optimization request includes a device identifier, a product number, a number of presentation images of the target product, and format information of the presentation images; the display image is used for displaying the shape and structure of the target product.
3. The method of claim 2, wherein the interest recognition module analyzing the visual priority of each image feature region according to the interest level of all test users in each image feature region and the region information amount of the corresponding image feature region comprises:
the interest identification module determines an area weight value of each image characteristic area according to an area information quantity corresponding to each image characteristic area, wherein the area information quantity is used for representing the quantity of product characteristics of a target product contained in the corresponding image characteristic area;
and the interest recognition module performs weighted fusion on different interestingness of all test users corresponding to the same image characteristic region according to the region weight value of each image characteristic region so as to obtain the visual priority of the corresponding image characteristic region.
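A minimal sketch of this weighted fusion, assuming the region weight is the normalized product-feature count and per-user interest values are simply averaged; both choices are illustrative, since the claim fixes neither.

```python
import numpy as np

def visual_priority(interest_by_user, region_feature_counts):
    """Weighted fusion of per-user interest into one priority per region.

    interest_by_user      -- array of shape (n_users, n_regions)
    region_feature_counts -- product features visible in each region
    """
    interest = np.asarray(interest_by_user, dtype=float)
    counts = np.asarray(region_feature_counts, dtype=float)
    weights = counts / counts.sum()            # region weight from information amount
    return interest.mean(axis=0) * weights     # fuse users, then weight by region
```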
4. A method according to claim 3, wherein the image feedback information is used to characterize the gaze information of the test user for different image regions of the display image, including the position information of the test user's image gaze points, dwell time, gaze scan path, and eye-jump count.
5. The method of claim 4, wherein the interest recognition module analyzing the interest level of the test user in each image characteristic region of the corresponding test image according to the image feedback information of the test user comprises:
the interest recognition module analyzes the eye movement state of the test user according to the image feedback information of the test user to obtain eye movement characteristics of the corresponding test user, and obtains the interest degree of the test user in each image characteristic area according to the eye movement characteristic analysis, wherein the eye movement characteristics are used for representing the stay time, the eye jump data and the scanning track of the corresponding test user in each image characteristic area in the corresponding display image.
6. The method of claim 5, wherein the first parameter interval is used to characterize an enhanced range of pixel values for each visual key region as compared to a corresponding presentation image; the second parameter interval is used for representing pixel value enhanced range of each visual key region compared with the adjacent image characteristic region related to the visual key region.
7. The method of claim 6, wherein the minimum visual difference is pre-stored a priori knowledge about human vision, representing the minimum pixel value difference perceptible to the human eye.
CN202211471120.0A 2022-11-23 2022-11-23 Product image adjustment method based on big data Active CN115661447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211471120.0A CN115661447B (en) 2022-11-23 2022-11-23 Product image adjustment method based on big data


Publications (2)

Publication Number Publication Date
CN115661447A CN115661447A (en) 2023-01-31
CN115661447B true CN115661447B (en) 2023-08-04

Family

ID=85019192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211471120.0A Active CN115661447B (en) 2022-11-23 2022-11-23 Product image adjustment method based on big data

Country Status (1)

Country Link
CN (1) CN115661447B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778130B (en) * 2023-08-25 2023-12-05 江苏盖睿健康科技有限公司 Intelligent recognition method and system for test result based on image processing

Citations (1)

Publication number Priority date Publication date Assignee Title
CN112052809A (en) * 2020-09-10 2020-12-08 四川创客知佳科技有限公司 Facility monitoring and protecting method based on intelligent park

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN105023253A (en) * 2015-07-16 2015-11-04 上海理工大学 Visual underlying feature-based image enhancement method
WO2017053871A2 (en) * 2015-09-24 2017-03-30 Supereye, Inc. Methods and devices for providing enhanced visual acuity
CN108052973B (en) * 2017-12-11 2020-05-05 中国人民解放军战略支援部队信息工程大学 Map symbol user interest analysis method based on multiple items of eye movement data
CN110163219B (en) * 2019-04-17 2023-05-16 安阳师范学院 Target detection method based on image edge recognition
CN113313650B (en) * 2021-06-09 2023-10-13 北京百度网讯科技有限公司 Image quality enhancement method, device, equipment and medium
CN114298921B (en) * 2021-12-10 2024-06-21 苏州创捷传媒展览股份有限公司 Objective content driving-based method for evaluating visual attention effect of audience
CN115035114B (en) * 2022-08-11 2022-11-11 高密德隆汽车配件制造有限公司 Hay crusher state monitoring method based on image processing
CN115345256B (en) * 2022-09-16 2023-10-27 北京国联视讯信息技术股份有限公司 Industrial product testing system applied to intelligent manufacturing




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230713

Address after: Room 202, Floor 2, No. 11, Lane 1500, Kongjiang Road, Yangpu District, Shanghai, 200092

Applicant after: Shanghai Xingyun Information Technology Co.,Ltd.

Address before: No. 508-2A, Baoli Tianji North Block, Qiandenghu, Guicheng Street, Nanhai District, Foshan City, Guangdong Province, 528200

Applicant before: Foshan Haixie Technology Co.,Ltd.

Effective date of registration: 20230713

Address after: No. 508-2A, Baoli Tianji North Block, Qiandenghu, Guicheng Street, Nanhai District, Foshan City, Guangdong Province, 528200

Applicant after: Foshan Haixie Technology Co.,Ltd.

Address before: No.24, Section 1, Xuefu Road, Southwest Airport Economic Development Zone, Chengdu, Sichuan 610200

Applicant before: CHENGDU University OF INFORMATION TECHNOLOGY

GR01 Patent grant