CN117237786B - Evaluation data acquisition method, device, system, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117237786B
CN117237786B
Authority
CN
China
Prior art keywords
target
sample
time
data
evaluation data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311509109.3A
Other languages
Chinese (zh)
Other versions
CN117237786A (en)
Inventor
孙源
顾行发
杨健
王栋
闻建光
李莘莘
Current Assignee
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS
Priority to CN202311509109.3A
Publication of CN117237786A
Application granted
Publication of CN117237786B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an evaluation data acquisition method, apparatus, system, electronic device and storage medium in the technical field of remote sensing. The method comprises the following steps: when an evaluation data acquisition task corresponding to a target area is triggered, acquiring, based on target pupil data and target optical data, the change amplitude value of the pupil diameter of a target person and the change amplitude value of the ambient light brightness in the target area during a target process; and inputting the two change amplitude values into an evaluation data acquisition model to obtain the uniformity of ground object distribution in the target area as the evaluation data of the target area. The method, apparatus, system, electronic device and storage medium provided by the invention can acquire, without using unmanned aerial vehicle technology, more objective evaluation data for judging whether an area is suitable for remote sensing product authenticity verification, and can therefore judge more objectively whether the area is suitable for such verification.

Description

Evaluation data acquisition method, device, system, electronic equipment and storage medium
Technical Field
The present invention relates to the field of remote sensing technologies, and in particular, to a method, an apparatus, a system, an electronic device, and a storage medium for acquiring evaluation data.
Background
With the development of satellite remote sensing technology, various remote sensing products can be obtained from satellite remote sensing images acquired by satellite sensors, and these products provide data support for resource and environment monitoring and sustainable development. Authenticity verification is an important way of evaluating remote sensing products.
In remote sensing product authenticity verification work, whether a certain area is suitable for remote sensing product authenticity verification can be determined based on evaluation data, such as at least one of the uniformity of ground object distribution, the vegetation coverage and the vegetation greenness index. In the related art, unmanned aerial vehicle technology may be used to collect image data of the area, from which the evaluation data of the area can then be obtained.
However, because unmanned aerial vehicles are bulky and expensive, technicians cannot carry one as a standard item during field investigations. If a technician judges during a field investigation that an area is suitable for remote sensing product authenticity verification but has no unmanned aerial vehicle at hand, the technician can only rely on subjective judgment, and it is difficult to determine objectively whether the area is suitable for remote sensing product authenticity verification.
Disclosure of Invention
The invention provides an evaluation data acquisition method, apparatus, system, electronic device and storage medium to overcome the defect in the prior art that, without unmanned aerial vehicle technology, it is difficult to objectively acquire evaluation data for judging whether an area is suitable for remote sensing product authenticity verification. The invention acquires such evaluation data more objectively without using unmanned aerial vehicle technology, so that whether an area is suitable for remote sensing product authenticity verification can be determined more objectively.
The invention provides an evaluation data acquisition method, which comprises the following steps:
under the condition that an evaluation data acquisition task corresponding to a target area is triggered, acquiring target pupil data and target optical data, wherein the target pupil data comprise change data of pupil diameters of target personnel in a target process, the target process comprises a process that a gaze point of a sight of the target personnel moves in the target area according to a preset path at a preset speed, and the target optical data comprise change data of ambient light brightness in the target area in the target process;
acquiring a change amplitude value of the pupil diameter of the target person during the target process based on the target pupil data, and acquiring a change amplitude value of the ambient light brightness in the target area during the target process based on the target optical data;
inputting the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness into an evaluation data acquisition model, and acquiring the uniformity of ground object distribution in the target area output by the evaluation data acquisition model as evaluation data of the target area;
the evaluation data acquisition model is obtained based on first sample data and a first sample label corresponding to the first sample data; the first sample data includes sample pupil data and sample optical data; the sample pupil data comprise a variation amplitude value of the pupil diameter of a sample person in a sample process, and the sample process comprises a process that a fixation point of the sight of the sample person moves in a sample area according to the preset path at the preset speed; the sample optical data includes a magnitude of change in ambient light brightness within the sample region during the sample; the first sample tag includes uniformity of distribution of features within the sample region.
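By way of a non-limiting illustration, the change amplitude value of each recorded series can be read as the difference between its maximum and minimum over the target process. The sketch below assumes that reading; the function name and the sample values are hypothetical and not taken from the patent:

```python
def change_amplitude(series):
    """Amplitude of variation: max minus min of a recorded time series."""
    return max(series) - min(series)

# Hypothetical pupil-diameter (mm) and ambient-light (lux) series recorded
# while the gaze point traverses the preset path at the preset speed.
pupil_mm = [3.1, 3.4, 2.9, 3.8, 3.2]
light_lux = [820, 910, 780, 1005, 860]

pupil_amp = change_amplitude(pupil_mm)   # 3.8 - 2.9
light_amp = change_amplitude(light_lux)  # 1005 - 780
```

The two amplitude values would then be the inputs handed to the evaluation data acquisition model.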
According to the evaluation data acquisition method provided by the invention, after the target pupil data and the target optical data are acquired, the method further comprises the following steps:
for any moment in the target process, determining that moment as a first target moment when, based on the target pupil data, the rate of change between the pupil diameter of the target person at that moment and the pupil diameter at the immediately preceding moment exceeds a rate-of-change threshold;
for any first target moment, determining, based on the pupil diameter of the target person at that moment and the ambient light brightness in the target area at that moment, whether the ground object seen by the target person at that moment is vegetation; determining that first target moment as a second target moment when the ground object is vegetation, and as a third target moment when it is not;
in the case that the first first-target time in the target process is a second target time, determining the vegetation coverage of the target area, as evaluation data of the target area, based on the total duration of the target process and the duration between any second target time in the target process and the next third target time after it, and/or the duration between the last second target time in the target process and the termination time of the target process;
in the case that the first and the last first-target times in the target process are both second target times, calculating the vegetation coverage of the target area, as evaluation data of the target area, based on the total duration of the target process, the duration between any second target time and the next third target time after it, and the duration between the last second target time and the termination time of the target process;
in the case that the first first-target time in the target process is a second target time and the last first-target time is a third target time, calculating the vegetation coverage of the target area, as evaluation data of the target area, based on the total duration of the target process and the duration between any second target time and the next third target time after it;
in the case that the target process includes only one second target time, calculating the vegetation coverage of the target area, as evaluation data of the target area, based on the total duration of the target process and the duration between that second target time and the termination time of the target process;
in the case that the first and the last first-target times in the target process are both third target times, determining the vegetation coverage of the target area, as evaluation data of the target area, based on the total duration of the target process, the duration between the first third target time and the start time of the target process, and the duration between any second target time and the next third target time after it;
in the case that the first first-target time in the target process is a third target time and the last first-target time is a second target time, determining the vegetation coverage of the target area, as evaluation data of the target area, based on the total duration of the target process, the duration between the first third target time and the start time of the target process, the duration between any second target time and the next third target time after it, and the duration between the last second target time and the termination time of the target process;
and in the case that the target process includes only one third target time, calculating the vegetation coverage of the target area, as evaluation data of the target area, based on the total duration of the target process and the duration between that third target time and the start time of the target process.
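Taken together, the cases above amount to reconstructing, from the alternating second and third target times, the fraction of the target process during which vegetation was in view. The sketch below follows that reading; the event encoding and all names are assumptions made for illustration, not the claimed formulas:

```python
def vegetation_coverage(t_start, t_end, events):
    """Fraction of the target process during which vegetation was in view.

    events: time-sorted (time, kind) pairs, where kind 'veg' marks a second
    target time (vegetation comes into view) and 'nonveg' marks a third
    target time (vegetation leaves view). Before the first event, the class
    in view is taken to be the opposite of what that event switches to.
    """
    if not events:
        raise ValueError("no first target moments detected")
    total = t_end - t_start
    veg_time = 0.0
    seeing_veg = events[0][1] == 'nonveg'  # state before the first event
    prev = t_start
    for t, kind in events:
        if seeing_veg:
            veg_time += t - prev
        prev = t
        seeing_veg = kind == 'veg'
    if seeing_veg:
        veg_time += t_end - prev
    return veg_time / total

# Vegetation in view from t=2 to t=5 and from t=7 to the end (t=10).
vegetation_coverage(0, 10, [(2, 'veg'), (5, 'nonveg'), (7, 'veg')])
```

Each of the enumerated claim cases corresponds to one combination of the pre-first-event state and the post-last-event state handled above.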
According to the evaluation data acquisition method provided by the invention, the evaluation data acquisition model is obtained based on the following modes:
performing regression analysis on the first sample data and the first sample tag to obtain an evaluation data acquisition model for describing the corresponding relation between the first sample data and the first sample tag;
alternatively, the evaluation data acquisition model is obtained based on the following:
and training a convolutional neural network model based on the first sample data and the first sample label to obtain the evaluation data acquisition model.
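As a minimal stand-in for the regression-analysis variant, a linear model uniformity = a*pupil_amp + b*light_amp + c can be fitted to sample data. The sketch below solves it exactly through three hypothetical sample points via Cramer's rule rather than full least squares; the coefficients and sample values are invented for illustration:

```python
def det3(m):
    # Determinant of a 3x3 matrix, expanded along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_plane(samples, labels):
    """Fit uniformity = a*pupil_amp + b*light_amp + c exactly through three
    (pupil_amp, light_amp) sample points via Cramer's rule."""
    A = [[p, l, 1.0] for p, l in samples]
    d = det3(A)
    coeffs = []
    for col in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][col] = labels[i]
        coeffs.append(det3(M) / d)
    return coeffs  # [a, b, c]

# Hypothetical samples: larger amplitudes imply a less uniform area.
samples = [(0.2, 50.0), (0.5, 120.0), (0.9, 300.0)]
labels = [0.85, 0.63, 0.25]           # uniformity values in [0, 1]
a, b, c = fit_plane(samples, labels)  # recovers a=-0.5, b=-0.001, c=1.0
```

With more than three samples, an ordinary least-squares fit (or the convolutional neural network alternative the patent mentions) would take the place of this exact solve.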
According to the method for acquiring evaluation data provided by the invention, based on the pupil diameter of the target person at any first target moment and the ambient light brightness in the target area at any first target moment, whether the ground object seen by the target person at any first target moment is vegetation is determined, comprising the following steps:
inputting the pupil diameter of the target person at any first target moment and the ambient light brightness in the target area at that moment into a ground object type determination model, and obtaining the ground object type determination result at that moment output by the ground object type determination model;
determining whether the ground object seen by the target person at any first target moment is vegetation or not based on the ground object type determination result at any first target moment;
wherein the ground object type determination model is obtained based on second sample data; the second sample data includes a pupil diameter of the target person when viewing vegetation at different ambient light levels and a pupil diameter of the target person when viewing non-vegetation at different ambient light levels.
According to the evaluation data acquisition method provided by the invention, the ground object type determination model is obtained based on the following modes:
performing regression analysis on the second sample data to obtain a ground object type determination model for describing the correspondence between different pupil diameters and different ambient light levels of the target person and whether the ground object seen by the target person is vegetation or not;
Alternatively, the ground object type determination model is obtained based on the following manner:
and training a convolutional neural network by taking the pupil diameter of the target person and the ambient light brightness as samples and taking whether the ground object seen by the target person corresponding to the pupil diameter of the target person and the ambient light brightness is vegetation as a sample label to obtain the ground object type determining model.
According to the method for acquiring evaluation data provided by the invention, after the vegetation coverage of the target area is obtained through calculation, the method further comprises the following steps:
and acquiring the greenness of the target area based on the vegetation coverage of the target area, the target pupil data and the target optical data, and taking the greenness of the target area as evaluation data of the target area.
According to the evaluation data acquisition method provided by the invention, acquiring the greenness of the target area based on the vegetation coverage of the target area, the target pupil data and the target optical data, as the evaluation data of the target area, comprises the following steps:
acquiring an average value of pupil diameters of the target personnel in the target process based on the target pupil data, and acquiring an average value of ambient light brightness in the target area in the target process based on the target optical data;
inputting the average value of the pupil diameter of the target person during the target process and the average value of the ambient light brightness in the target area during the target process into a greenness estimation model, and obtaining the original greenness of the target area output by the greenness estimation model;
wherein the greenness estimation model is obtained based on third sample data; the third sample data include pupil diameters of the target person when the target person sees vegetation of different greenness at different ambient light levels.
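As a hedged sketch of this greenness estimation, the two averages can be reduced to a single normalised quantity and mapped through a calibration table by piecewise-linear interpolation. The normalisation and the table values below are assumptions made for illustration, not part of the invention:

```python
def estimate_greenness(mean_pupil_mm, mean_light_lux, calib):
    """Map the two averages to a greenness value via a calibration table.

    calib: (normalised_diameter, greenness) pairs; the product
    normalisation below is an assumption chosen for this sketch.
    """
    x = mean_pupil_mm * mean_light_lux / 1000.0
    pts = sorted(calib)
    if x <= pts[0][0]:
        return pts[0][1]
    if x >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:  # linear interpolation inside the bracket
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical calibration: greener canopies give a larger normalised value.
calibration = [(2.0, 0.1), (3.0, 0.5), (4.0, 0.9)]
```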
The invention also provides an evaluation data acquisition device, which comprises:
the data acquisition module is used for acquiring target pupil data and target optical data when an evaluation data acquisition task corresponding to a target area is triggered, wherein the target pupil data comprise change data of the pupil diameter of a target person during a target process, the target process comprises a process in which the gaze point of the target person's sight moves in the target area along a preset path at a preset speed, and the target optical data comprise change data of the ambient light brightness in the target area during the target process;
the numerical calculation module is used for acquiring a change amplitude value of the pupil diameter of the target person in the target process based on the target pupil data and acquiring a change amplitude value of the ambient light brightness in the target area in the target process based on the target optical data;
The data conversion module is used for inputting the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness into an evaluation data acquisition model, and acquiring the uniformity of the ground object distribution in the target area output by the evaluation data acquisition model as the evaluation data of the target area;
the evaluation data acquisition model is obtained based on first sample data and a first sample label corresponding to the first sample data; the first sample data includes sample pupil data and sample optical data; the sample pupil data comprise a variation amplitude value of the pupil diameter of a sample person in a sample process, and the sample process comprises a process that a fixation point of the sight of the sample person moves in a sample area according to the preset path at the preset speed; the sample optical data includes a magnitude of change in ambient light brightness within the sample region during the sample; the first sample tag includes uniformity of distribution of features within the sample region.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing any of the above-mentioned evaluation data acquisition methods when executing the program.
The invention also provides an evaluation data acquisition system, which comprises: a head-mounted pupil diameter acquisition device, an ambient light brightness sensor, and an electronic device as described above; the head-mounted pupil diameter acquisition device and the ambient light brightness sensor are electrically connected with the electronic device; the ambient light brightness sensor is arranged in the target area;
the head-mounted pupil diameter acquisition device is used for responding to the control of the electronic device, starting to acquire the pupil diameter of a target person and sending the acquired pupil diameter of the target person to the electronic device;
the ambient light brightness sensor is used for, in response to control by the electronic device, starting to collect the ambient light brightness in the target area and sending the collected ambient light brightness to the electronic device;
wherein the target person is a person wearing the head-mounted pupil diameter acquisition device.
According to the evaluation data acquisition system provided by the invention, the system further comprises: a user interaction device; the user interaction device is electrically connected with the electronic device;
the user interaction device is used for receiving a first input for representing triggering of the evaluation data acquisition task and sending the first input to the electronic device, so that the electronic device responds to the first input;
The user interaction device is further used for receiving a second input used for indicating that a target process is completed, and sending the second input to the electronic device, so that the electronic device responds to the second input to control the head-mounted pupil diameter acquisition device to stop acquiring the pupil diameter of the target person;
the electronic device is further configured to trigger the evaluation data acquisition task in response to the first input, control the head-mounted pupil diameter acquisition device to start acquiring the pupil diameter of the target person, and control the ambient light brightness sensor to start acquiring the ambient light brightness in the target area;
the electronic device is further configured to, in response to the second input, control the head-mounted pupil diameter acquisition device to stop acquiring the pupil diameter of the target person and control the ambient light brightness sensor to stop acquiring the ambient light brightness in the target area.
According to the evaluation data acquisition system provided by the invention, the system further comprises: a timing device and a voice broadcasting device; the timing device and the voice broadcasting device are electrically connected with the electronic equipment;
the timing device is used for, in response to control by the electronic device, recording the start time and the end time of a target process and sending the start time and the end time of the target process to the electronic device;
the timing device is also used for responding to the control of the electronic equipment, returning a timing signal to the electronic equipment every preset time length so that the electronic equipment can control the voice broadcasting device to conduct voice broadcasting under the condition that the timing signal is received;
the voice broadcasting device is used for responding to the control of the electronic equipment to conduct voice broadcasting, so that the target personnel can move the gaze point of the sight in the target area according to the voice broadcasting and a preset path and a preset speed.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the evaluation data acquisition methods described above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements any of the evaluation data acquisition methods described above.
In the embodiment of the invention, when the evaluation data acquisition task corresponding to the target area is triggered, the change data of the pupil diameter of the target person while the gaze point of the target person's sight moves in the target area along the preset path at the preset speed are taken as the target pupil data, and the change data of the ambient light brightness in the target area during the same process are taken as the target optical data. Based on the target pupil data and the target optical data, the change amplitude value of the pupil diameter of the target person and the change amplitude value of the ambient light brightness in the target area during this process are acquired. The two change amplitude values are then input into the evaluation data acquisition model, and the uniformity of ground object distribution in the target area output by the model is obtained as the evaluation data of the target area. Because the pupil diameter of the human eye is correlated with the reflectivity of the ground object seen and with the ambient light brightness, evaluation data for judging whether an area is suitable for remote sensing product authenticity verification can be acquired more objectively without using unmanned aerial vehicle technology, and whether the area is suitable for such verification can therefore be judged more objectively. This overcomes the strong subjectivity of judging, without unmanned aerial vehicle technology, whether a certain area is suitable for remote sensing product authenticity verification. It also reduces the equipment cost and time cost of acquiring the evaluation data, improves the efficiency of remote sensing product authenticity verification, reduces its cost, requires only simple operations from technicians, and places low demands on their operating skill.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the evaluation data acquisition method provided by the invention;
FIG. 2 is a first schematic diagram of the distribution of second target times and third target times in a target process in the evaluation data acquisition method provided by the invention;
FIG. 3 is a second schematic diagram of the distribution of second target times and third target times in a target process in the evaluation data acquisition method provided by the invention;
FIG. 4 is a third schematic diagram of the distribution of second target times and third target times in a target process in the evaluation data acquisition method provided by the invention;
FIG. 5 is a fourth schematic diagram of the distribution of second target times and third target times in a target process in the evaluation data acquisition method provided by the invention;
FIG. 6 is a fifth schematic diagram of the distribution of second target times and third target times in a target process in the evaluation data acquisition method provided by the invention;
FIG. 7 is a sixth schematic diagram of the distribution of second target times and third target times in a target process in the evaluation data acquisition method provided by the invention;
fig. 8 is a schematic diagram of the structure of an evaluation data acquisition apparatus provided by the present invention;
fig. 9 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the invention, it should be noted that, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the description of the present application, the terms "first," "second," and the like are used for distinguishing between similar objects and not for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type and not limited to the number of objects, e.g., the first object may be one or more. In addition, in the description of the present application, "and/or" means at least one of the connected objects, and the character "/", generally means a relationship in which the front and rear associated objects are one kind of "or".
It should be noted that, with the development of remote sensing technology, a large number of satellite sensors with earth vegetation detection capability have been launched. Remote sensing products are vegetation-related information and data obtained through remote sensing technology. Common remote sensing products include vegetation indices (Vegetation Indices), vegetation cover (Vegetation Cover), vegetation type (Vegetation Type), vegetation growth monitoring (Vegetation Monitoring), and the like.
Based on satellite sensors with different spatial resolutions, remote sensing products with different spatial resolutions in a certain area can be obtained. The remote sensing product can provide an effective method and basic data support for resource environment monitoring, sustainable development and the like.
The authenticity verification of remote sensing products is an important way of evaluating their quality. It can not only support the early-stage principle research and design of satellite payloads, but also provide a basis for the quality evaluation, analysis and control of remote sensing data.
In the authenticity verification of a remote sensing product, ground measurements that characterize surface vegetation can serve as relative true values. The degree of agreement between the remote sensing product to be verified and these relative true values is evaluated by an independent method, and the uncertainty of that agreement is analyzed, yielding the authenticity verification result of the remote sensing product.
Therefore, determining a suitable area in which to collect ground measurements (i.e., a suitable area in which to carry out the authenticity verification of the remote sensing product) is of great significance for improving the accuracy of that verification.
In general, whether an area is suitable for collecting ground measurements may be determined based on evaluation data such as the uniformity of ground object distribution, vegetation coverage, and vegetation greenness index of the area. In the related art, unmanned aerial vehicle technology may be used to collect image data of the area, from which the evaluation data of the area is then obtained.
However, on the one hand, because an unmanned aerial vehicle is bulky and expensive, technicians cannot carry one as a matter of course during field investigations. If a technician considers a certain area suitable for the authenticity verification of a remote sensing product but is not carrying an unmanned aerial vehicle during the field investigation, the evaluation data of the area is difficult to obtain, and it is therefore difficult to determine whether the area is suitable for such verification.
On the other hand, operating an unmanned aerial vehicle to collect image data is cumbersome for the technician, and the collection itself is time-consuming. Moreover, controlling an unmanned aerial vehicle places high demands on the operator's skill; if the technician's skill is insufficient, the quality of the collected image data may be low, and the unmanned aerial vehicle may even be damaged, causing economic loss.
Human-computer interaction based on eye movement technology accords with human psychological and physiological habits. With the development of artificial intelligence and machine learning, computers can effectively capture a user's intention; human-computer interaction on this basis greatly saves cost and improves work efficiency. Human information processing depends to a great extent on vision: about 80%-90% of information from the outside world is obtained through the eyes, and human visual perception can be captured by tracking the gaze point of the eyes. Human-computer interaction research based on eye movement technology is considered the most effective means in visual information processing research.
In this regard, the invention provides an evaluation data acquisition method applied to remote sensing product authenticity verification scenarios. Based on the fact that the pupil diameter of the human eye changes as the observed target changes, the method can objectively obtain evaluation data for judging whether an area is suitable for authenticity verification without using unmanned aerial vehicle technology, can reduce the equipment cost and time cost required for acquiring the evaluation data, can improve the efficiency of authenticity verification, and can reduce its cost.
Fig. 1 is a schematic flow chart of the evaluation data acquisition method provided by the invention, which is described below with reference to Fig. 1. As shown in Fig. 1, the method includes: Step 101, when an evaluation data acquisition task corresponding to a target area is triggered, acquiring target pupil data and target optical data, wherein the target pupil data comprises change data of the pupil diameter of a target person during a target process, the target process comprises the process in which the gaze point of the target person's line of sight moves within the target area along a preset path at a preset speed, and the target optical data comprises change data of the ambient light brightness within the target area during the target process.
It should be noted that, the execution subject of the embodiment of the present invention is an evaluation data acquisition device.
Specifically, the application scenario of the evaluation data acquisition method provided by the invention may include: in the process of field investigation by a target person, if the target person considers that the target area is suitable for carrying out the authenticity verification of the remote sensing product, the evaluation data of the target area needs to be acquired for determining whether the target area is suitable for carrying out the scene of the authenticity verification of the remote sensing product.
Accordingly, the evaluation data acquisition device in the embodiment of the invention can be a mobile terminal used by a target person. The mobile terminal in the embodiment of the present invention may be a terminal that has a communication function and may be used in mobile, for example: smart phones, tablet computers or notebook computers.
It should be noted that, in the embodiment of the present invention, the target area may be determined by a target person based on an actual situation. The target area is not particularly limited in the embodiment of the present invention.
After the target person determines the target area, the evaluation data acquisition task corresponding to the target area can be triggered through an input operation; that is, in the embodiment of the invention, the evaluation data acquisition task corresponding to the target area can be triggered based on an input by the target person. The input may be a touch input to the mobile terminal used by the target person, which may include, but is not limited to, a click input, a slide input, a press input, and the like. The input may also be an input via a physical key of the mobile terminal, or a voice input by the target person.
It will be appreciated that the pupil diameter of the human eye is related to the reflectivity of the features seen by the human eye and the ambient light level in which the human eye is located.
Ambient light brightness refers to the intensity of light in the surrounding environment. Under low ambient light, the pupil diameter automatically increases to admit more light into the eye and improve visual clarity; under high ambient light, the pupil automatically constricts to limit the amount of light entering the eye.
The reflectivity of a ground object refers to the degree to which it reflects incident light. Different ground objects, such as bodies of water, vegetation, bare soil, buildings, snow and ice, have different reflectivities. In general, under the same illumination, a ground object with higher reflectivity reflects more light, while one with lower reflectivity absorbs more light. Accordingly, under the same illumination, if the human eye sees a ground object with higher reflectivity, the pupil diameter automatically decreases to limit the amount of light entering the eye; if it sees a ground object with lower reflectivity, the pupil diameter automatically increases to admit more light and improve visual clarity.
Therefore, based on the correlation between the pupil diameter of the human eye, the reflectivity of the ground object seen, and the brightness of the environment, the embodiment of the invention converts the change data of pupil diameter and the change data of ambient light brightness into quantized data, namely the uniformity of ground object distribution. This overcomes the strong subjectivity in the related art when a technician judges, without unmanned aerial vehicle technology, whether a certain area is suitable for the authenticity verification of a remote sensing product.
After triggering the evaluation data acquisition task corresponding to the target area through input operation, the gaze point of the sight of the target person moves in the target area at a preset speed according to a preset path.
Wherein the preset path and the preset speed may be determined based on a priori knowledge and/or actual conditions. The preset path is not particularly limited in the embodiment of the present invention.
It can be appreciated that the target person knows the preset path and the preset speed before triggering the evaluation data acquisition task corresponding to the target area.
Optionally, in the embodiment of the present invention, the target area may be a square area, and the preset path in the embodiment of the present invention may be a serpentine path, where a start point and an end point of the preset path are two opposite angles of the target area respectively.
Optionally, in the embodiment of the present invention, the target area may be a square area, and the preset path in the embodiment of the present invention may also pass through two diagonal lines of the target area respectively.
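The serpentine-path option above can be illustrated with a short sketch. The following Python snippet is purely illustrative; the side length, row count, and sampling density are hypothetical parameters not specified in the source. It generates waypoints of a serpentine path whose start and end points are opposite corners of a square target area:

```python
import numpy as np

def serpentine_path(side_m, rows, points_per_row):
    """Generate (x, y) waypoints of a serpentine gaze path over a square
    area, starting at one corner and ending at the opposite corner."""
    waypoints = []
    for i, y in enumerate(np.linspace(0.0, side_m, rows)):
        xs = np.linspace(0.0, side_m, points_per_row)
        if i % 2 == 1:            # reverse direction on alternate rows
            xs = xs[::-1]
        waypoints.extend((x, y) for x in xs)
    return np.array(waypoints)

path = serpentine_path(side_m=100.0, rows=5, points_per_row=11)
# path[0] is one corner (0, 0); path[-1] is the opposite corner (100, 100)
```

With an odd number of rows, the final row runs in the forward direction, so the path ends at the corner diagonally opposite its start, matching the requirement that the start and end points be two opposite corners of the target area.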
When the evaluation data acquisition task corresponding to the target area is triggered, a head-mounted pupil diameter acquisition device worn by the target person may be used to acquire, as the target pupil data, the change data of the target person's pupil diameter during the target process, and an ambient light brightness sensor may be used to acquire, as the target optical data, the change data of the ambient light brightness within the target area during the same process.
The head-mounted pupil diameter acquisition device in the embodiment of the invention is a device which can be worn on the head of a user and is used for measuring and recording pupil diameter changes of the user in real time. The head-mounted pupil diameter acquisition device can comprise a head-mounted device, a pupil tracking sensor, a communication module, a power supply and other components. The pupil tracking sensor, the communication module and the power supply are arranged on the head-mounted device;
The headset fixes the pupil tracking sensor, the communication module, and the power supply to the user's head, ensuring that the device is firmly attached so that the pupil diameter can be measured accurately. The headset typically employs an adjustable headband or eyeglass frame;
pupil tracking sensors may be used to detect and record pupil size changes in real time. The pupil tracking sensor can accurately track the position and the diameter of the pupil by utilizing infrared rays or a camera shooting technology;
the communication module can be used for realizing communication with other electronic equipment, so that the change data of the pupil diameter acquired by the pupil tracking sensor can be sent to the other electronic equipment;
the power supply may be used to power the pupil tracking sensor and the communication module to ensure proper operation of the headset pupil diameter acquisition device. The power source may be a rechargeable battery or a replaceable battery.
The ambient light intensity sensor in the embodiment of the invention is a sensor for measuring the intensity of ambient light. The ambient light level sensor can sense the ambient light level and convert the ambient light level into an electrical signal or a digital signal for other electronic devices or systems;
Ambient light brightness sensors typically use a photosensitive element (e.g., a photoresistor, photodiode, or photosensitive capacitor). When ambient light irradiates the element, its resistance, current, or capacitance changes, and the ambient light brightness can be obtained by measuring that change.
It should be noted that, the ambient light sensor in the embodiment of the present invention may be disposed on the headset.
It should be noted that, in the embodiment of the present invention, a process in which the gaze point of the sight of the target person moves in the target area at a preset speed according to the preset path is referred to as a target process. The target pupil data may include the pupil diameter of the target person at each time in the target process. The target optical data may include the ambient light level within the target area at each time during the target process.
It should be noted that, in the embodiment of the present invention, any two adjacent moments in the target process are separated by a preset interval.
Step 102, acquiring a change amplitude value of pupil diameter of a target person in a target process based on target pupil data, and acquiring a change amplitude value of ambient light brightness in a target area in the target process based on target optical data.
Specifically, after the target pupil data are obtained, the maximum and minimum pupil diameters of the target person during the target process may be calculated numerically, yielding the variation amplitude value of the pupil diameter during the target process.
Likewise, after the target optical data are obtained, the maximum and minimum ambient light brightness within the target area during the target process may be calculated numerically, yielding the variation amplitude value of the ambient light brightness during the target process.
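Step 102 reduces each time series to a single amplitude value (maximum minus minimum). A minimal sketch in Python, using fabricated sample values for illustration:

```python
import numpy as np

def change_amplitude(series):
    """Variation amplitude over the target process: maximum minus minimum."""
    return float(np.max(series) - np.min(series))

# Hypothetical samples taken at the preset interval during the target process
pupil_mm = np.array([3.1, 3.4, 2.8, 3.9, 3.0])        # pupil diameter, mm
ambient_lux = np.array([520.0, 540.0, 515.0, 560.0])  # ambient light, lux

pupil_amplitude = change_amplitude(pupil_mm)      # 3.9 - 2.8 = 1.1
light_amplitude = change_amplitude(ambient_lux)   # 560 - 515 = 45.0
```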
Step 103, inputting the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness into an evaluation data acquisition model, and acquiring the uniformity of the ground object distribution in a target area output by the evaluation data acquisition model as the evaluation data of the target area;
the evaluation data acquisition model is obtained based on first sample data and a first sample label corresponding to the first sample data; the first sample data includes sample pupil data and sample optical data; the sample pupil data comprises a variation amplitude value of the pupil diameter of a sample person in a sample process, and the sample process comprises a process that a fixation point of the sight of the sample person moves in a sample area at a preset speed according to a preset path; the sample optical data comprises a variation amplitude value of the brightness of the environment in the sample area in the sample process; the first sample tag includes uniformity of distribution of features within the sample region.
It should be noted that in the embodiment of the present invention, the number of sample personnel may be plural. And, the sample person may include a target person.
In the embodiment of the invention, the sample area can comprise a plurality of ground object types, and the number of the sample areas can also be a plurality of.
It is understood that the greater the number of sample persons and sample areas, the higher the calculation accuracy of the evaluation data acquisition model obtained based on the first sample data and the first sample tag corresponding to the first sample data.
Specifically, in the embodiment of the present invention, a head-mounted pupil diameter acquisition device and an ambient light brightness sensor are used to acquire, for any sample person and any sample area, the change data of the sample person's pupil diameter and of the ambient light brightness within the sample area while the gaze point of the sample person's line of sight moves within the sample area along the preset path at the preset speed. The variation amplitude value of the pupil diameter and the variation amplitude value of the ambient light brightness during this sample process are then calculated numerically and taken as one set of corresponding sample pupil data and sample optical data, which constitutes one piece of first sample data. The uniformity of ground object distribution within the sample area may be obtained using unmanned aerial vehicle technology and used as the first sample label of that first sample data. That is, in the embodiment of the present invention, there is a one-to-one correspondence between the sample pupil data and sample optical data in the first sample data and the first sample label.
After the first sample data and the first sample label corresponding to the first sample data are acquired, an evaluation data acquisition model can be constructed by means of data analysis or deep learning technology and the like based on the first sample data and the first sample label.
As an alternative embodiment, the evaluation data acquisition model is obtained based on the following: carrying out regression analysis on the first sample data and the first sample label to obtain an evaluation data acquisition model for describing the corresponding relation between the first sample data and the first sample label; alternatively, the evaluation data acquisition model is obtained based on the following manner: training the convolutional neural network model based on the first sample data and the first sample label to obtain an evaluation data acquisition model.
Optionally, after the first sample data and the first sample label corresponding to the first sample data are obtained, the sample pupil data and the sample optical data in the first sample data may be used as independent variables, the first sample label corresponding to the first sample data is used as a dependent variable, and regression analysis is performed on the first sample data and the first sample label corresponding to the first sample data.
By performing regression analysis on the first sample data and the first sample label corresponding to the first sample data, an evaluation data acquisition model for describing the correspondence between the sample pupil data and the sample optical data in the first sample data and the first sample label corresponding to the first sample data can be obtained.
It will be appreciated that the above-described evaluation data acquisition model is a mathematical model.
Optionally, after the first sample data and the first sample label corresponding to the first sample data are acquired, the first sample data may be further taken as a sample, the first sample label corresponding to the first sample data is taken as a label, and the convolutional neural network model is trained to obtain the evaluation data acquisition model.
According to the embodiment of the invention, the evaluation data acquisition model can be acquired more accurately and more efficiently by means of data regression or model training based on the first sample data and the first sample label corresponding to the first sample data, and the calculation accuracy of the evaluation data acquisition model can be improved.
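As a sketch of the regression alternative, the snippet below fits a linear mapping from the two amplitude values to uniformity by least squares. The training pairs and the linear form are fabricated for illustration; the source specifies neither the regression form nor any actual sample values:

```python
import numpy as np

# Fabricated first-sample data: rows are (pupil-diameter amplitude in mm,
# ambient-light amplitude in lux); labels are ground object distribution
# uniformity values (here normalized to [0, 1]) obtained by UAV survey.
X = np.array([[0.2, 30.0], [0.5, 35.0], [1.1, 40.0], [1.6, 50.0], [2.0, 60.0]])
y = np.array([0.95, 0.85, 0.60, 0.40, 0.25])

# Fit uniformity ~ w0 + w1 * pupil_amp + w2 * light_amp by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def evaluation_model(pupil_amp, light_amp):
    """Predict the uniformity of ground object distribution in a target area."""
    return float(coef[0] + coef[1] * pupil_amp + coef[2] * light_amp)

uniformity = evaluation_model(1.1, 45.0)  # prediction for the target area
```

In this fabricated data set, larger amplitude values correspond to less uniform ground object distribution, so the fitted model predicts lower uniformity for areas with stronger pupil and light variation.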
After the variation amplitude value of the target person's pupil diameter and the variation amplitude value of the ambient light brightness within the target area during the target process are obtained, both values can be input into the evaluation data acquisition model.
The evaluation data acquisition model can acquire and output the uniformity of the ground object distribution in the target area based on the change amplitude value of the pupil diameter of the target person in the target process and the change amplitude value of the ambient light brightness in the target area in the target process.
After the uniformity of the feature distribution in the target area output by the evaluation data acquisition model is acquired, the uniformity of the feature distribution in the target area can be determined as the evaluation data of the target area.
After the evaluation data of the target area is obtained, whether the target area is suitable for carrying out the authenticity verification of the remote sensing product can be determined based on the evaluation data of the target area.
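The final suitability decision can be as simple as thresholding the uniformity value. The cut-off below is hypothetical; the source does not specify a decision rule:

```python
# Hypothetical decision rule: the 0.7 cut-off is illustrative only and is
# not specified in the source.
UNIFORMITY_THRESHOLD = 0.7

def suitable_for_verification(uniformity):
    """Deem a target area suitable for remote sensing product authenticity
    verification when its ground object distribution is sufficiently uniform."""
    return uniformity >= UNIFORMITY_THRESHOLD

print(suitable_for_verification(0.85))  # True: highly uniform target area
print(suitable_for_verification(0.30))  # False: heterogeneous target area
```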
In the embodiment of the invention, when the evaluation data acquisition task corresponding to the target area is triggered, the change data of the target person's pupil diameter during the target process is acquired as target pupil data, and the change data of the ambient light brightness within the target area during the target process is acquired as target optical data. The variation amplitude value of the pupil diameter and the variation amplitude value of the ambient light brightness are then obtained from these data and input into the evaluation data acquisition model, and the uniformity of ground object distribution output by the model is taken as the evaluation data of the target area. Based on the correlation between the pupil diameter of the human eye, the reflectivity of the ground object seen, and the brightness of the environment, evaluation data for judging whether an area is suitable for the authenticity verification of a remote sensing product can thus be obtained more objectively without unmanned aerial vehicle technology. This overcomes the strong subjectivity of such judgments in the related art, reduces the equipment cost and time cost required for acquiring the evaluation data, improves the efficiency of authenticity verification, and reduces its cost; moreover, the operations required of technicians are simple and demand little operational skill.
As an alternative embodiment, after acquiring the target pupil data and the target optical data, the method further comprises: for any time in the target process, when it is determined based on the target pupil data that the rate of change between the target person's pupil diameter at that time and the pupil diameter at the immediately preceding time exceeds a change rate threshold, that time is determined as a first target time.
Specifically, for any time in the target process, if the rate of change between the target person's pupil diameter at that time and at the immediately preceding time exceeds the change rate threshold, it indicates that the ground object seen by the target person has changed between the two times, and that time may be determined as a first target time.
It will be appreciated that the change in ambient light brightness within the target area is not significant during the target process, since the time taken to complete it is relatively short, typically only a few minutes. Therefore, in the embodiment of the invention, the influence of ambient light changes on the target person's pupil diameter is ignored when determining the first target time.
It should be noted that, the change rate threshold in the embodiment of the present invention may be determined based on a priori knowledge and/or actual conditions. The specific value of the change rate threshold is not limited in the embodiment of the present invention.
After the target pupil data are obtained, the rate of change between the target person's pupil diameter at any time in the target process and at the immediately preceding time may be calculated numerically, and whether it exceeds the change rate threshold may then be determined by numerical comparison.
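The detection of first target times can be sketched as follows. The pupil samples and the 10% threshold are fabricated for illustration; the source leaves the change rate threshold to prior knowledge and actual conditions:

```python
import numpy as np

def first_target_times(pupil_mm, rate_threshold):
    """Return indices of samples whose relative pupil-diameter change from
    the immediately preceding sample exceeds the change rate threshold."""
    rates = np.abs(np.diff(pupil_mm)) / pupil_mm[:-1]
    return [i + 1 for i, r in enumerate(rates) if r > rate_threshold]

pupil_mm = np.array([3.0, 3.02, 3.5, 3.52, 2.9])
times = first_target_times(pupil_mm, rate_threshold=0.1)  # [2, 4]
```

Here the jumps at indices 2 and 4 (about 16% and 18% relative change) exceed the threshold and mark moments at which the observed ground object likely changed.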
It will be appreciated that one or more first target moments may be included in the target process.
For any first target time, when it is determined, based on the target person's pupil diameter at that time and the ambient light brightness within the target area at that time, that the ground object seen by the target person is vegetation, that first target time is determined as a second target time; when it is determined on the same basis that the ground object seen is not vegetation, that first target time is determined as a third target time.
Specifically, for any first target time in the target process, whether the ground object seen by the target person at the first target time is vegetation may be determined through a data analysis or a deep learning technology based on the pupil diameter of the target person at the first target time and the ambient light brightness in the target area at the first target time.
As an optional embodiment, determining whether the ground object seen by the target person at any first target time is vegetation based on the pupil diameter of the target person at any first target time and the ambient light intensity in the target area at any first target time includes: inputting the pupil diameter of the target person at any first target moment and the ambient light intensity in the target area at any first target moment into a ground object type determining model, and obtaining a ground object type determining result at any first target moment output by the ground object type determining model;
determining whether the ground object seen by the target person at any first target moment is vegetation or not based on the ground object type determination result at any first target moment;
wherein the ground object type determination model is obtained based on the second sample data; the second sample data includes pupil diameters of the target person looking at vegetation at different ambient light levels and pupil diameters of the target person looking at non-vegetation at different ambient light levels.
It should be noted that, since the uniformity of ground object distribution is related to the variation amplitude value of the pupil diameter and the variation amplitude value of the ambient light brightness, the variation amplitude values collected from a sample person can still be used to obtain the uniformity of ground object distribution within the target area even if the sample person differs from the target person.
However, owing to individual physiological differences, different people's pupil diameters may differ when viewing the same ground object. Therefore, in the embodiment of the invention, the ground object type determination model is constructed from the target person's own pupil diameters when viewing vegetation and when viewing non-vegetation under different ambient light brightness.
Specifically, in the embodiment of the invention, the pupil diameter of the target person when seeing vegetation in different environmental light brightness and the pupil diameter of the target person when seeing non-vegetation in different environmental light brightness can be obtained by using the head-mounted pupil diameter acquisition device and the environmental light brightness sensor to serve as second sample data.
After the second sample data is acquired, a model for determining the type of the ground object can be constructed through a data analysis or a deep learning technology based on the second sample data.
As an alternative embodiment, the ground object type determination model is obtained based on the following: performing regression analysis on the second sample data to obtain a ground object type determination model describing the correspondence between the target person's pupil diameter, the ambient light brightness, and whether the ground object seen by the target person is vegetation;
alternatively, the ground object type determination model is obtained based on the following: taking the target person's pupil diameter and the ambient light brightness as samples, taking whether the corresponding ground object seen by the target person is vegetation as the sample label, and training the convolutional neural network model to obtain the ground object type determination model.
It should be noted that, after the second sample data is obtained, a one-to-one correspondence relationship between the pupil diameter of the target person, the ambient light level, and whether the ground object seen by the target person is vegetation may be obtained based on the second sample data.
Optionally, based on the one-to-one correspondence between the target person's pupil diameter, the ambient light brightness, and whether the ground object seen is vegetation, regression analysis may be performed on the second sample data with the ambient light brightness and the pupil diameter as independent variables and whether the target person sees vegetation or non-vegetation as the dependent variable.
And carrying out regression analysis on the second sample data to obtain a ground object type determination model for describing the correspondence between different pupil diameters and different environmental light brightness of the target personnel and whether the ground object seen by the target personnel is vegetation.
It will be appreciated that the above-described ground object type determination model is a mathematical model.
Optionally, based on a one-to-one correspondence between the pupil diameter of the target person, the ambient light level, and whether the ground object seen by the target person is vegetation, the pupil diameter of the target person and the ambient light level may be taken as samples, whether the ground object seen by the target person corresponding to the pupil diameter of the target person and the ambient light level is vegetation is taken as a sample label, and the convolutional neural network model is trained to obtain the ground object type determination model.
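The two construction routes above can be sketched concretely. Below is a minimal, self-contained stand-in: because the input is only two scalars (pupil diameter and ambient light level), a plain logistic regression trained by gradient descent is used here in place of the convolutional neural network described in the embodiment; the sample values, and the assumption that a wider pupil corresponds to a vegetated (darker) ground object, are invented purely for the demonstration.

```python
import math

def train_feature_type_model(samples, labels, lr=0.5, epochs=2000):
    """Fit a tiny logistic-regression stand-in for the ground object type
    determination model: (pupil_diameter, ambient_light) -> vegetation or not."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            z = w[0] * x1 + w[1] * x2 + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of the log-loss w.r.t. z
            w[0] -= lr * g * x1
            w[1] -= lr * g * x2
            b -= lr * g

    def model(pupil_diameter, ambient_light):
        z = w[0] * pupil_diameter + w[1] * ambient_light + b
        return 1.0 / (1.0 + math.exp(-z)) >= 0.5   # True -> vegetation
    return model

# Invented "second sample data": under equal ambient light, a wider pupil is
# assumed here to mean the target person is looking at vegetation.
samples = [(3.0, 0.8), (3.2, 0.8), (5.0, 0.8), (5.2, 0.8)]
labels = [0, 0, 1, 1]
model = train_feature_type_model(samples, labels)
```

In a real implementation the regression or network would be fitted with a library such as scikit-learn or PyTorch; the hand-rolled loop above only illustrates the sample/label structure the embodiment describes.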
According to the embodiment of the invention, based on the second sample data, the ground object type determining model can be more accurately and more efficiently obtained in a data regression or model training mode, and the calculation accuracy of the ground object type determining model can be improved.
For any first target time in the target process, the pupil diameter of the target person at the first target time and the ambient light brightness in the target area at the first target time are input into the ground object type determination model.
The ground object type determination model may determine and output, based on the pupil diameter of the target person at the first target time and the ambient light brightness in the target area at the first target time, a ground object type determination result for the first target time indicating whether the ground object seen by the target person at the first target time is vegetation.
After the ground object type determination result of the first target time output by the ground object type determination model is obtained, whether the ground object seen by the target person at the first target time is vegetation can be determined based on the ground object type determination result of the first target time.
If the ground object seen by the target person at the first target time is determined to be vegetation based on the ground object type determination result at the first target time, the first target time may be determined to be a second target time; if it is determined that the ground object seen by the target person at the first target time is not vegetation based on the ground object type determination result at the first target time, the first target time may be determined as a third target time.
The vegetation coverage of the target area is then calculated, as the evaluation data of the target area, according to the distribution of the second target times and the third target times in the target process. In the case that both the first first target time and the last first target time in the target process are second target times, the vegetation coverage is calculated based on the total duration of the target process, the duration between any second target time and the next third target time after it, and the duration between the last second target time and the termination time of the target process. In the case that the first first target time is a second target time and the last first target time is a third target time, the vegetation coverage is calculated based on the total duration of the target process and the duration between any second target time and the next third target time after it. In the case that only one second target time is included in the target process, the vegetation coverage is calculated based on the total duration of the target process and the duration between the second target time and the termination time of the target process. In the case that both the first first target time and the last first target time are third target times, the vegetation coverage is calculated based on the total duration of the target process, the duration between the starting time of the target process and the first third target time, and the duration between any second target time and the next third target time after it. In the case that the first first target time is a third target time and the last first target time is a second target time, the vegetation coverage is calculated based on the total duration of the target process, the duration between the starting time of the target process and the first third target time, the duration between any second target time and the next third target time after it, and the duration between the last second target time and the termination time of the target process. In the case that only one third target time is included in the target process, the vegetation coverage is calculated based on the total duration of the target process and the duration between the starting time of the target process and the third target time.
Fig. 2 is the first schematic diagram of the distribution of the second target times and the third target times in the target process in the evaluation data acquisition method provided by the invention. As shown in fig. 2, the first first target time and the last first target time in the target process are both second target times.
As shown in fig. 2, in the duration between any second target time in the target process and the next third target time after it, the ground objects seen by the target person are vegetation; and in the duration between the last second target time in the target process and the termination time of the target process, the ground objects seen by the target person are also vegetation.
Therefore, in the case that the first first target time and the last first target time in the target process are both second target times, the sum of the durations between each second target time and the next third target time after it and the duration between the last second target time and the termination time of the target process can be calculated numerically as an intermediate result, and the quotient of the intermediate result and the total duration of the target process can then be calculated as the vegetation coverage of the target area.
Fig. 3 is the second schematic diagram of the distribution of the second target times and the third target times in the target process in the evaluation data acquisition method provided by the invention. As shown in fig. 3, the first first target time in the target process is a second target time, and the last first target time in the target process is a third target time.
As shown in fig. 3, in the duration between any second target time in the target process and the next third target time after it, the ground objects seen by the target person are all vegetation.
Therefore, in the case that the first first target time in the target process is a second target time and the last first target time is a third target time, the sum of the durations between each second target time and the next third target time after it can be calculated numerically as an intermediate result, and the quotient of the intermediate result and the total duration of the target process can then be calculated as the vegetation coverage of the target area.
Fig. 4 is a third schematic distribution diagram of the second target time and the third target time in the target process in the evaluation data acquisition method provided by the invention. As shown in fig. 4, the target process includes only one second target time.
As shown in fig. 4, the ground objects seen by the target person are vegetation during the period between the second target time in the target process and the termination time of the target process.
Therefore, in the case where only one second target time is included in the target process, the quotient of the time length between the second target time and the termination time of the target process and the total time length of the target process can be calculated as the vegetation coverage of the target area by numerical calculation.
Fig. 5 is the fourth schematic diagram of the distribution of the second target times and the third target times in the target process in the evaluation data acquisition method provided by the invention. As shown in fig. 5, the first first target time and the last first target time in the target process are both third target times.
As shown in fig. 5, in the duration between the starting time of the target process and the first third target time, the ground objects seen by the target person are vegetation; and in the duration between any second target time in the target process and the next third target time after it, the ground objects seen by the target person are also vegetation.
Therefore, in the case that the first first target time and the last first target time in the target process are both third target times, the sum of the duration between the starting time of the target process and the first third target time and the durations between each second target time and the next third target time after it can be calculated numerically as an intermediate result, and the quotient of the intermediate result and the total duration of the target process can then be calculated as the vegetation coverage of the target area.
Fig. 6 is the fifth schematic diagram of the distribution of the second target times and the third target times in the target process in the evaluation data acquisition method provided by the invention. As shown in fig. 6, the first first target time in the target process is a third target time, and the last first target time in the target process is a second target time.
As shown in fig. 6, in the duration between the starting time of the target process and the first third target time, the ground objects seen by the target person are all vegetation; in the duration between any second target time in the target process and the next third target time after it, the ground objects seen by the target person are vegetation; and in the duration between the last second target time in the target process and the termination time of the target process, the ground objects seen by the target person are also vegetation.
Therefore, in the case that the first first target time in the target process is a third target time and the last first target time is a second target time, the sum of the duration between the starting time of the target process and the first third target time, the durations between each second target time and the next third target time after it, and the duration between the last second target time and the termination time of the target process can be calculated numerically as an intermediate result, and the quotient of the intermediate result and the total duration of the target process can then be calculated as the vegetation coverage of the target area.
Fig. 7 is the sixth schematic diagram of the distribution of the second target times and the third target times in the target process in the evaluation data acquisition method provided by the invention. As shown in fig. 7, only one third target time is included in the target process.
As shown in fig. 7, in the duration between the starting time of the target process and the third target time, the ground objects seen by the target person are vegetation.
Therefore, in the case that only one third target time is included in the target process, the quotient of the duration between the starting time of the target process and the third target time and the total duration of the target process can be calculated numerically as the vegetation coverage of the target area.
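The six cases of figs. 2-7 collapse into one interval rule: vegetation is counted from each second target time until the next classified time, from the process start until a leading third target time, and from a trailing second target time until the process end. The sketch below implements that reading; the pair-list representation and the function name are illustrative, not the patent's own notation.

```python
def vegetation_coverage(start_time, end_time, classified_times):
    """classified_times: chronologically sorted (t, is_vegetation) pairs, where
    is_vegetation=True marks a second target time and False a third target time.
    Returns vegetation coverage as a fraction of the total process duration."""
    if not classified_times:
        raise ValueError("no classified first target times")
    total = end_time - start_time
    veg = 0.0
    # Leading segment: vegetation up to a leading third target time (figs. 5-7).
    if not classified_times[0][1]:
        veg += classified_times[0][0] - start_time
    # Each second target time contributes until the next classified time.
    for (t, is_veg), (t_next, _) in zip(classified_times, classified_times[1:]):
        if is_veg:
            veg += t_next - t
    # Trailing segment: vegetation from a trailing second target time (figs. 2, 4, 6).
    if classified_times[-1][1]:
        veg += end_time - classified_times[-1][0]
    return veg / total

# Fig. 2-style distribution: the process starts and ends on second target times.
coverage = vegetation_coverage(0.0, 10.0, [(1.0, True), (4.0, False), (6.0, True)])
```

For the fig. 2-style example, vegetation is counted from 1.0 to 4.0 and from 6.0 to 10.0, giving 7 of 10 time units.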
After the vegetation coverage of the target area is calculated, the vegetation coverage of the target area can be used as evaluation data of the target area.
According to the method and the device, the vegetation coverage of the target area can be calculated more accurately, more efficiently and more objectively in a numerical calculation mode based on at least two of the starting time, the ending time, the second target time and the third target time of the target process, and the vegetation coverage can be used as evaluation data of the target area.
As an alternative embodiment, after calculating the vegetation coverage of the target area, the method further comprises: and acquiring the greenness of the target area based on the vegetation coverage of the target area, the target pupil data and the target optical data, and taking the greenness of the target area as evaluation data of the target area.
Specifically, after the vegetation coverage of the target area is obtained, the greenness of the target area may be obtained through a numerical calculation or a deep learning technique based on the vegetation coverage of the target area, the target pupil data, and the target optical data.
As an alternative embodiment, acquiring the greenness of the target area as the evaluation data of the target area based on the vegetation coverage of the target area, the target pupil data, and the target optical data, includes: acquiring an average value of pupil diameters of target personnel in a target process based on target pupil data, and acquiring an average value of ambient light brightness in a target area in the target process based on target optical data;
inputting the average value of the pupil diameters of the target person in the target process and the average value of the ambient light brightness in the target area in the target process into a green degree estimation model to obtain the original green degree of the target area output by the green degree estimation model;
Wherein the green degree estimation model is obtained based on the third sample data; the third sample data includes pupil diameters of the target person looking at vegetation of different greenness at different ambient light levels.
Specifically, in the embodiment of the invention, the pupil diameter of a target person when seeing vegetation with different greenness under different environmental light brightness can be obtained by using the head-mounted pupil diameter acquisition device and the environmental light brightness sensor.
After the third sample data is acquired, a green degree estimation model may be constructed through data analysis or deep learning techniques based on the third sample data.
It should be noted that, after the third sample data is obtained, a one-to-one correspondence relationship between the pupil diameter of the target person, the ambient light level, and the greenness of the vegetation seen by the target person may be obtained based on the third sample data.
Optionally, based on the one-to-one correspondence between the pupil diameter of the target person, the ambient light level, and the greenness of the vegetation seen by the target person, regression analysis may be performed on the third sample data with the ambient light level and the pupil diameter of the target person as independent variables and the greenness of the vegetation seen by the target person as the dependent variable.
By performing regression analysis on the third sample data, a green degree estimation model for describing the correspondence between different pupil diameters and different ambient light levels of the target person and the green degree of vegetation seen by the target person can be obtained.
It will be appreciated that the above-described green degree estimation model is a mathematical model.
Optionally, based on the one-to-one correspondence between the pupil diameter of the target person, the ambient light level, and the greenness of the vegetation seen by the target person, the pupil diameter of the target person and the ambient light level may be taken as a sample, the greenness of the vegetation seen by the target person at that pupil diameter and ambient light level may be taken as the sample label, and a convolutional neural network model may be trained to obtain the green degree estimation model.
The embodiment of the invention can acquire the green degree estimation model more accurately and more efficiently based on the third sample data in a data regression or model training mode, and can improve the calculation accuracy of the green degree estimation model.
In the embodiment of the invention, the average value of the pupil diameters of the target person at all times in the target process can be calculated numerically to obtain the average value of the pupil diameters of the target person in the target process, and the average value of the ambient light brightness in the target area at all times in the target process can be calculated to obtain the average value of the ambient light brightness in the target area in the target process.
After the average value of the pupil diameters of the target personnel in the target process and the average value of the ambient light brightness in the target area in the target process are obtained, the average value of the pupil diameters of the target personnel in the target process and the average value of the ambient light brightness in the target area in the target process can be input into a green degree estimation model, and then the original green degree of the target area output by the green degree estimation model can be obtained.
And correcting the original green degree of the target area based on the vegetation coverage of the target area to obtain the green degree of the target area.
Specifically, after the original greenness of the target area output by the greenness estimation model is obtained, the original greenness of the target area can be corrected in a numerical calculation mode based on the vegetation coverage of the target area, so as to obtain the greenness of the target area.
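The text does not spell out the correction formula. As one hypothetical reading, the sketch below scales the original greenness by the vegetation coverage, so that an area that is only partly vegetated receives a proportionally lower greenness; the scaling rule and the `demo_model` linear estimator are both assumptions standing in for the trained green degree estimation model.

```python
def greenness_of_target_area(pupil_diameters, light_levels, coverage, greenness_model):
    """Average the per-time observations over the target process, query the
    greenness estimation model for the original greenness, then correct it.
    The coverage-scaling correction is an assumption, not the patent's formula."""
    mean_pupil = sum(pupil_diameters) / len(pupil_diameters)
    mean_light = sum(light_levels) / len(light_levels)
    raw_greenness = greenness_model(mean_pupil, mean_light)
    return raw_greenness * coverage   # assumed correction by vegetation coverage

# Hypothetical linear estimator standing in for the trained model.
demo_model = lambda pupil, light: 0.1 * pupil + 0.2 * light
g = greenness_of_target_area([4.0, 6.0], [0.5, 1.5], 0.7, demo_model)
```

Here the means are 5.0 and 1.0, the stand-in model returns 0.7, and scaling by a coverage of 0.7 yields 0.49.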
The method and the device can calculate the greenness of the target area more accurately, more efficiently and objectively based on the greenness estimation model, and serve as evaluation data of the target area.
As an alternative embodiment, in the case that the evaluation data acquisition task corresponding to the target area has been triggered, the method further includes: and performing voice broadcasting so that a target person can move the gaze point of the sight in the target area according to the voice broadcasting and the preset path at the preset speed.
Specifically, the voice broadcast in the embodiment of the invention may be a timed second-by-second countdown broadcast, or a voice prompt broadcast at each preset period, so that the target person can move the gaze point of the line of sight in the target area at the preset speed according to the preset path, following the voice broadcast.
Fig. 8 is a schematic diagram of the structure of the evaluation data acquisition apparatus provided by the present invention. The evaluation data acquisition apparatus provided by the present invention will be described below with reference to fig. 8, and the evaluation data acquisition apparatus described below and the evaluation data acquisition method provided by the present invention described above may be referred to correspondingly to each other. As shown in fig. 8, the apparatus includes a data acquisition module 801, a numerical calculation module 802, and a data conversion module 803.
A data acquisition module 801, configured to acquire target pupil data and target optical data in a case where an evaluation data acquisition task corresponding to a target area has been triggered, where the target pupil data includes change data of a pupil diameter of a target person in a target process, the target process includes a process in which a gaze point of a line of sight of the target person moves in the target area at a preset speed according to a preset path, and the target optical data includes change data of an environmental light brightness in the target area in the target process;
A numerical calculation module 802, configured to obtain, based on the target pupil data, a value of a variation amplitude of a pupil diameter of the target person in the target process, and obtain, based on the target optical data, a value of a variation amplitude of an ambient light brightness in the target area in the target process;
the data conversion module 803 is configured to input the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness into an evaluation data acquisition model, and acquire uniformity of ground object distribution in the target area output by the evaluation data acquisition model as evaluation data of the target area;
the evaluation data acquisition model is obtained based on first sample data and a first sample label corresponding to the first sample data; the first sample data includes sample pupil data and sample optical data; the sample pupil data comprise a variation amplitude value of the pupil diameter of a sample person in a sample process, and the sample process comprises a process that a fixation point of the sight of the sample person moves in a sample area according to the preset path at the preset speed; the sample optical data includes a magnitude of change in ambient light brightness within the sample region during the sample; the first sample tag includes uniformity of distribution of features within the sample region.
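The claims do not define how the "change amplitude value" is derived from the change data; one natural reading, assumed here purely for illustration, is the max-minus-min range of the observed series over the process.

```python
def change_amplitude(series):
    """Assumed definition of the 'change amplitude value': the range
    (maximum minus minimum) of the observed values over the process."""
    return max(series) - min(series)

pupil_amplitude = change_amplitude([3.1, 3.8, 2.9, 3.5])   # pupil diameters (mm)
light_amplitude = change_amplitude([0.62, 0.75, 0.58])     # ambient light levels
# These two scalars would then be fed to the evaluation data acquisition
# model to obtain the uniformity of ground object distribution.
```

Under this reading, a flat series yields an amplitude of zero, matching the intuition that a uniform area produces little pupil or light variation.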
Specifically, the data acquisition module 801, the numerical calculation module 802, and the data conversion module 803 are electrically connected.
In the evaluation data acquisition device of the embodiment of the invention, in the case that the evaluation data acquisition task corresponding to the target area has been triggered, the change data of the pupil diameter of the target person while the gaze point of the line of sight of the target person moves in the target area at the preset speed according to the preset path is acquired as target pupil data, and the change data of the ambient light brightness in the target area during that process is acquired as target optical data. The change amplitude value of the pupil diameter of the target person and the change amplitude value of the ambient light brightness in the target area during the process are then obtained based on the target pupil data and the target optical data. Further, the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness are input into an evaluation data acquisition model, and the uniformity of the ground object distribution in the target area output by the evaluation data acquisition model is obtained as the evaluation data of the target area. Based on the correlation between the pupil diameter of the human eye, the reflectivity of the ground object seen by the human eye, and the ambient light brightness where the human eye is located, the evaluation data for judging whether an area is suitable for remote sensing product authenticity verification can thus be acquired more objectively without using unmanned aerial vehicle technology. This overcomes the strong subjectivity of judging, without unmanned aerial vehicle technology, whether a certain area is suitable for remote sensing product authenticity verification, reduces the equipment cost and the time cost required for acquiring the evaluation data, improves the efficiency of remote sensing product authenticity verification, reduces the cost investment of the verification, simplifies the operations required of technicians, and lowers the technical skill requirements on them.
Fig. 9 illustrates a physical schematic diagram of an electronic device. As shown in fig. 9, the electronic device may include: a processor 910, a communication interface (Communications Interface) 920, a memory 930, and a communication bus 940, wherein the processor 910, the communication interface 920, and the memory 930 communicate with each other via the communication bus 940. The processor 910 may call logic instructions in the memory 930 to perform an evaluation data acquisition method comprising: in the case that an evaluation data acquisition task corresponding to a target area has been triggered, acquiring target pupil data and target optical data, wherein the target pupil data includes change data of the pupil diameter of a target person in a target process, the target process includes a process in which the gaze point of the line of sight of the target person moves in the target area at a preset speed according to a preset path, and the target optical data includes change data of the ambient light brightness in the target area in the target process; acquiring a change amplitude value of the pupil diameter of the target person in the target process based on the target pupil data, and acquiring a change amplitude value of the ambient light brightness in the target area in the target process based on the target optical data; and inputting the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness into an evaluation data acquisition model, and acquiring the uniformity of the ground object distribution in the target area output by the evaluation data acquisition model as the evaluation data of the target area. The evaluation data acquisition model is obtained based on first sample data and a first sample label corresponding to the first sample data; the first sample data includes sample pupil data and sample optical data; the sample pupil data includes a change amplitude value of the pupil diameter of a sample person in a sample process, and the sample process includes a process in which the gaze point of the line of sight of the sample person moves in a sample area at the preset speed according to the preset path; the sample optical data includes a change amplitude value of the ambient light brightness in the sample area in the sample process; and the first sample label includes the uniformity of the ground object distribution in the sample area.
Further, the logic instructions in the memory 930 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Based on the content of the above embodiments, an evaluation data acquisition system includes: a head-mounted pupil diameter acquisition device, an ambient light level sensor, and an electronic device as described above; the head-mounted pupil diameter acquisition device and the ambient light brightness sensor are electrically connected with the electronic device; the ambient light brightness sensor is arranged in the target area;
The head-mounted pupil diameter acquisition device is used for responding to the control of the electronic device, starting to acquire the pupil diameter of the target person and sending the acquired pupil diameter of the target person to the electronic device;
the environment light brightness sensor is used for responding to the control of the electronic equipment, starting to collect the environment light brightness in the target area and sending the collected environment light brightness in the target area to the electronic equipment;
the target person is a person wearing the head-mounted pupil diameter acquisition device.
As an alternative embodiment, the evaluation data acquisition system further comprises: a user interaction device; the user interaction device is electrically connected with the electronic device;
the user interaction device is configured to receive a first input for indicating triggering of the evaluation data acquisition task and to send the first input to the electronic device, so that the electronic device responds to the first input;
the user interaction device is further used for receiving a second input used for indicating that the target process is completed, and sending the second input to the electronic device, so that the electronic device responds to the second input to control the head-mounted pupil diameter acquisition device to stop acquiring the pupil diameter of the target person;
the electronic device is further configured to trigger the evaluation data acquisition task in response to the first input, and to control the head-mounted pupil diameter acquisition device to start acquiring the pupil diameter of the target person and control the ambient light brightness sensor to start acquiring the ambient light brightness within the target area;
The electronic device is further configured to, in response to the second input, control the head-mounted pupil diameter acquisition device to stop acquiring the pupil diameter of the target person and control the ambient light brightness sensor to stop acquiring the ambient light brightness within the target area.
As an alternative embodiment, the evaluation data acquisition system further comprises: a timing device and a voice broadcasting device; the timing device and the voice broadcasting device are electrically connected with the electronic equipment;
the timing device is used for responding to the control of the electronic equipment, recording the starting time and the ending time of the target process and sending the starting time and the ending time of the target process to the electronic equipment,
the timing device is also used for responding to the control of the electronic equipment, returning a timing signal to the electronic equipment at intervals of preset time length, and controlling the voice broadcasting device to conduct voice broadcasting under the condition that the timing signal is received by the electronic equipment;
the voice broadcasting device is used for responding to the control of the electronic equipment to conduct voice broadcasting, so that a target person can move the fixation point of the sight in the target area according to the voice broadcasting and the preset path at the preset speed.
It should be noted that the evaluation data acquisition system provided by the present invention includes the head-mounted pupil diameter acquisition device, the ambient light brightness sensor, and the electronic device described above, and that the evaluation data acquisition method provided by the present invention can be implemented on the basis of these devices.
For the specific structure of the evaluation data acquisition system, the interaction between its devices, and the specific steps for implementing the evaluation data acquisition method provided by the present invention, reference may be made to the content of the above embodiments; details are not repeated here.
In the evaluation data acquisition system of the embodiment of the invention, in the case that the evaluation data acquisition task corresponding to the target area has been triggered, the change data of the pupil diameter of the target person during the process in which the gaze point of the target person's sight moves within the target area along the preset path at the preset speed is acquired as target pupil data, and the change data of the ambient light brightness within the target area during the same process is acquired as target optical data. Based on the target pupil data and the target optical data, the change amplitude value of the pupil diameter of the target person during that process and the change amplitude value of the ambient light brightness within the target area are then obtained. Further, the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness are input into an evaluation data acquisition model, and the uniformity of ground object distribution within the target area output by the evaluation data acquisition model is obtained as the evaluation data of the target area. In this way, based on the correlation between the pupil diameter of the human eye, the reflectivity of the ground objects seen by the human eye, and the ambient light brightness where the human eye is located, evaluation data for judging whether an area is suitable for remote sensing product authenticity verification can be acquired more objectively without using unmanned aerial vehicle technology. This overcomes the defect of strong subjectivity in judging, without unmanned aerial vehicle technology, whether a given area is suitable for remote sensing product authenticity verification; it reduces the equipment cost and time cost required for acquiring the evaluation data, improves the efficiency of remote sensing product authenticity verification, reduces its cost investment, simplifies the operations required of technicians, and lowers the technical skill requirements on technicians.
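The pipeline summarized above — amplitude extraction from the pupil and ambient-light time series, then a learned mapping to ground-object-distribution uniformity — can be sketched as follows. This is an illustrative reading under stated assumptions, not the patented model: the "change amplitude value" is taken here as the maximum minus the minimum of the series, the evaluation data acquisition model is stood in for by a plain least-squares fit (the regression variant mentioned for the model's training), and all function names are invented.

```python
import numpy as np

def change_amplitude(series) -> float:
    """Change amplitude value of a time series, read here as max - min."""
    a = np.asarray(series, dtype=float)
    return float(a.max() - a.min())

def fit_uniformity_model(sample_features, sample_labels):
    """Least-squares linear model mapping (pupil amplitude, light amplitude)
    to ground-object distribution uniformity; a stand-in for training the
    evaluation data acquisition model on first sample data and labels."""
    X = np.asarray(sample_features, dtype=float)
    y = np.asarray(sample_labels, dtype=float)
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef  # [w_pupil, w_light, bias]

def predict_uniformity(coef, pupil_amp: float, light_amp: float) -> float:
    """Evaluate the fitted model for one target area."""
    return float(coef[0] * pupil_amp + coef[1] * light_amp + coef[2])
```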
In another aspect, the present invention also provides a computer program product, the computer program product comprising a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of performing the evaluation data acquisition method provided by the methods above, the method comprising: in the case that an evaluation data acquisition task corresponding to a target area has been triggered, acquiring target pupil data and target optical data, wherein the target pupil data comprises change data of the pupil diameter of a target person in a target process, the target process comprises a process in which the gaze point of the sight of the target person moves within the target area along a preset path at a preset speed, and the target optical data comprises change data of the ambient light brightness within the target area in the target process; acquiring a change amplitude value of the pupil diameter of the target person in the target process based on the target pupil data, and acquiring a change amplitude value of the ambient light brightness within the target area in the target process based on the target optical data; and inputting the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness into an evaluation data acquisition model, and acquiring the uniformity of ground object distribution within the target area output by the evaluation data acquisition model as the evaluation data of the target area; wherein the evaluation data acquisition model is obtained based on first sample data and a first sample label corresponding to the first sample data; the first sample data includes sample pupil data and sample optical data; the sample pupil data comprises a change amplitude value of the pupil diameter of a sample person in a sample process, and the sample process comprises a process in which the gaze point of the sight of the sample person moves within a sample area along the preset path at the preset speed; the sample optical data comprises a change amplitude value of the ambient light brightness within the sample area in the sample process; and the first sample label includes the uniformity of ground object distribution within the sample area.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the evaluation data acquisition method provided by the methods above, the method comprising: in the case that an evaluation data acquisition task corresponding to a target area has been triggered, acquiring target pupil data and target optical data, wherein the target pupil data comprises change data of the pupil diameter of a target person in a target process, the target process comprises a process in which the gaze point of the sight of the target person moves within the target area along a preset path at a preset speed, and the target optical data comprises change data of the ambient light brightness within the target area in the target process; acquiring a change amplitude value of the pupil diameter of the target person in the target process based on the target pupil data, and acquiring a change amplitude value of the ambient light brightness within the target area in the target process based on the target optical data; and inputting the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness into an evaluation data acquisition model, and acquiring the uniformity of ground object distribution within the target area output by the evaluation data acquisition model as the evaluation data of the target area; wherein the evaluation data acquisition model is obtained based on first sample data and a first sample label corresponding to the first sample data; the first sample data includes sample pupil data and sample optical data; the sample pupil data comprises a change amplitude value of the pupil diameter of a sample person in a sample process, and the sample process comprises a process in which the gaze point of the sight of the sample person moves within a sample area along the preset path at the preset speed; the sample optical data comprises a change amplitude value of the ambient light brightness within the sample area in the sample process; and the first sample label includes the uniformity of ground object distribution within the sample area.
The apparatus embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the part of the foregoing technical solutions that in essence contributes to the prior art may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and that such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (14)

1. An evaluation data acquisition method, characterized by comprising:
under the condition that an evaluation data acquisition task corresponding to a target area is triggered, acquiring target pupil data and target optical data, wherein the target pupil data comprise change data of pupil diameters of target personnel in a target process, the target process comprises a process that a gaze point of a sight of the target personnel moves in the target area according to a preset path at a preset speed, and the target optical data comprise change data of ambient light brightness in the target area in the target process;
acquiring a change amplitude value of the pupil diameter of the target person in the target process based on the target pupil data, and acquiring a change amplitude value of the ambient light brightness in the target area in the target process based on the target optical data;
Inputting the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness into an evaluation data acquisition model, and acquiring the uniformity of ground object distribution in the target area output by the evaluation data acquisition model as evaluation data of the target area;
the evaluation data acquisition model is obtained based on first sample data and a first sample label corresponding to the first sample data; the first sample data includes sample pupil data and sample optical data; the sample pupil data comprises a change amplitude value of the pupil diameter of a sample person in a sample process, and the sample process comprises a process in which the gaze point of the sight of the sample person moves within a sample area along the preset path at the preset speed; the sample optical data comprises a change amplitude value of the ambient light brightness within the sample area in the sample process; and the first sample label includes the uniformity of ground object distribution within the sample area.
2. The evaluation data acquisition method according to claim 1, wherein after the acquisition of the target pupil data and the target optical data, the method further comprises:
For any moment in the target process, determining any moment as a first target moment when the change rate between the pupil diameter of the target person at the any moment and the pupil diameter of the target person at the moment before the any moment exceeds a change rate threshold value based on the target pupil data;
for any first target moment, determining any first target moment as a second target moment when the ground object seen by the target person at any first target moment is vegetation based on the pupil diameter of the target person at any first target moment and the ambient light brightness in the target area at any first target moment, and determining any first target moment as a third target moment when the ground object seen by the target person at any first target moment is not vegetation based on the pupil diameter of the target person at any first target moment and the ambient light brightness in the target area at any first target moment;
in the case that the first target time in the target process is the second target time, determining the vegetation coverage of the target area as the evaluation data of the target area based on the total time length of the target process and the time length between any second target time in the target process and the next third target time of any second target time and/or the time length between the last second target time in the target process and the termination time of the target process,
under the condition that the first target time and the last first target time in the target process are both second target times, calculating the vegetation coverage of the target area based on the total time length of the target process, the time length between any second target time in the target process and the next third target time of any second target time, and the time length between the last second target time in the target process and the termination time of the target process, as the evaluation data of the target area,
in the case where the first target time in the target process is the second target time and the last first target time in the target process is the third target time, calculating the vegetation coverage of the target area based on the total time length of the target process and the time length between any second target time in the target process and the next third target time of any second target time, as the evaluation data of the target area,
under the condition that only one second target time is included in the target process, calculating the vegetation coverage of the target area based on the total time length of the target process and the time length between the second target time and the ending time of the target process, and taking the vegetation coverage as evaluation data of the target area,
In the case where the first target time and the last first target time in the target process are both third target times, determining vegetation coverage of the target area as evaluation data of the target area based on a total time length of the target process, a time length between the first third target time in the target process and a start time of the target process, and a time length between any second target time in the target process and a next third target time of any second target time,
in the case where the first target time in the target process is the third target time and the last first target time is the second target time, determining the vegetation coverage of the target area as the evaluation data of the target area based on the total time length of the target process, the time length between the first third target time in the target process and the start time of the target process, the time length between any second target time in the target process and the next third target time of any second target time, and the time length between the last second target time in the target process and the termination time of the target process,
And under the condition that the target process only comprises a third target time, calculating to obtain vegetation coverage of the target area as evaluation data of the target area based on the total duration of the target process and the duration between the third target time and the starting time of the target process.
3. The evaluation data acquisition method according to claim 1, wherein the evaluation data acquisition model is obtained based on:
performing regression analysis on the first sample data and the first sample tag to obtain an evaluation data acquisition model for describing the corresponding relation between the first sample data and the first sample tag;
alternatively, the evaluation data acquisition model is obtained based on the following:
and training a convolutional neural network model based on the first sample data and the first sample label to obtain the evaluation data acquisition model.
4. The evaluation data acquisition method according to claim 2, wherein determining whether or not the ground object seen by the target person at any one of the first target times is vegetation based on the pupil diameter of the target person at any one of the first target times and the ambient light level in the target area at any one of the first target times, comprises:
Inputting the pupil diameter of the target person at any first target moment and the ambient light brightness in the target area at any first target moment into a ground object type determination model, and obtaining a ground object type determination result at any first target moment output by the ground object type determination model;
determining whether the ground object seen by the target person at any first target moment is vegetation or not based on the ground object type determination result at any first target moment;
wherein the ground object type determination model is obtained based on second sample data; the second sample data includes a pupil diameter of the target person when viewing vegetation at different ambient light levels and a pupil diameter of the target person when viewing non-vegetation at different ambient light levels.
5. The evaluation data acquisition method according to claim 4, wherein the ground object type determination model is obtained based on the following:
performing regression analysis on the second sample data to obtain a ground object type determination model for describing the correspondence between different pupil diameters and different ambient light levels of the target person and whether the ground object seen by the target person is vegetation or not;
Alternatively, the ground object type determination model is obtained based on the following manner:
and training a convolutional neural network by taking the pupil diameter of the target person and the ambient light brightness as samples and taking whether the ground object seen by the target person corresponding to the pupil diameter of the target person and the ambient light brightness is vegetation as a sample label to obtain the ground object type determining model.
6. The method of claim 2, wherein after the calculating the vegetation coverage of the target area, the method further comprises:
and acquiring the greenness of the target area based on the vegetation coverage of the target area, the target pupil data and the target optical data, and taking the greenness of the target area as evaluation data of the target area.
7. The evaluation data acquisition method according to claim 6, wherein the acquiring the greenness of the target area based on the vegetation coverage of the target area, the target pupil data, and the target optical data as the evaluation data of the target area includes:
acquiring an average value of pupil diameters of the target personnel in the target process based on the target pupil data, and acquiring an average value of ambient light brightness in the target area in the target process based on the target optical data;
Inputting the average value of the pupil diameter of the target person in the target process and the average value of the ambient light brightness in the target area in the target process into a greenness estimation model, and obtaining the original greenness of the target area output by the greenness estimation model;
wherein the greenness estimation model is obtained based on third sample data; the third sample data includes pupil diameters of the target person when the target person sees vegetation of different greenness at different ambient light brightnesses.
8. The evaluation data acquisition method according to any one of claims 1 to 7, characterized in that in the case where an evaluation data acquisition task corresponding to a target area has been triggered, the method further comprises: and performing voice broadcasting so that the target personnel can move the gaze point of the sight in the target area according to the voice broadcasting and a preset path and at a preset speed.
9. An evaluation data acquisition apparatus, comprising:
a data acquisition module, used for acquiring target pupil data and target optical data under the condition that an evaluation data acquisition task corresponding to a target area has been triggered, wherein the target pupil data comprises change data of the pupil diameter of a target person in a target process, the target process comprises a process in which the gaze point of the sight of the target person moves within the target area along a preset path at a preset speed, and the target optical data comprises change data of the ambient light brightness within the target area in the target process;
The numerical calculation module is used for acquiring a change amplitude value of the pupil diameter of the target person in the target process based on the target pupil data and acquiring a change amplitude value of the ambient light brightness in the target area in the target process based on the target optical data;
the data conversion module is used for inputting the change amplitude value of the pupil diameter and the change amplitude value of the ambient light brightness into an evaluation data acquisition model, and acquiring the uniformity of the ground object distribution in the target area output by the evaluation data acquisition model as the evaluation data of the target area;
the evaluation data acquisition model is obtained based on first sample data and a first sample label corresponding to the first sample data; the first sample data includes sample pupil data and sample optical data; the sample pupil data comprises a change amplitude value of the pupil diameter of a sample person in a sample process, and the sample process comprises a process in which the gaze point of the sight of the sample person moves within a sample area along the preset path at the preset speed; the sample optical data comprises a change amplitude value of the ambient light brightness within the sample area in the sample process; and the first sample label includes the uniformity of ground object distribution within the sample area.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the evaluation data acquisition method according to any one of claims 1 to 8 when executing the program.
11. An evaluation data acquisition system, comprising: a head-mounted pupil diameter acquisition device, an ambient light brightness sensor, and the electronic device as claimed in claim 10; the head-mounted pupil diameter acquisition device and the ambient light brightness sensor are electrically connected with the electronic device; the ambient light brightness sensor is arranged in the target area;
the head-mounted pupil diameter acquisition device is used for responding to the control of the electronic device, starting to acquire the pupil diameter of a target person and sending the acquired pupil diameter of the target person to the electronic device;
the ambient light brightness sensor is used for responding to the control of the electronic device, starting to collect the ambient light brightness in the target area, and sending the collected ambient light brightness in the target area to the electronic device;
wherein the target person is a person wearing the head-mounted pupil diameter acquisition device.
12. The evaluation data acquisition system according to claim 11, characterized by further comprising: a user interaction device; the user interaction device is electrically connected with the electronic device;
the user interaction device is used for receiving a first input for representing triggering of the evaluation data acquisition task, and sending the first input to the electronic device, so that the electronic device triggers the evaluation data acquisition task in response to the first input,
the user interaction device is further used for receiving a second input used for indicating that a target process is completed, and sending the second input to the electronic device, so that the electronic device responds to the second input to control the head-mounted pupil diameter acquisition device to stop acquiring the pupil diameter of the target person;
the electronic device is further configured to trigger the evaluation data acquisition task in response to the first input, control the head-mounted pupil diameter acquisition device to start acquiring the pupil diameter of the target person, and control the ambient light brightness sensor to start collecting the ambient light brightness in the target area,
the electronic device is further configured to, in response to the second input, control the head-mounted pupil diameter acquisition device to stop acquiring the pupil diameter of the target person and control the ambient light brightness sensor to stop collecting the ambient light brightness in the target area.
13. The evaluation data acquisition system according to claim 11 or 12, characterized by further comprising: a timing device and a voice broadcasting device; the timing device and the voice broadcasting device are electrically connected with the electronic equipment;
the timing device is used for responding to the control of the electronic equipment, recording the starting time and the ending time of a target process, and sending the starting time and the ending time of the target process to the electronic equipment,
the timing device is also used for responding to the control of the electronic equipment, returning a timing signal to the electronic equipment every preset time length so that the electronic equipment can control the voice broadcasting device to conduct voice broadcasting under the condition that the timing signal is received;
the voice broadcasting device is used for responding to the control of the electronic equipment to conduct voice broadcasting, so that the target personnel can move the gaze point of the sight in the target area according to the voice broadcasting and a preset path and a preset speed.
14. A non-transitory computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the evaluation data acquisition method according to any one of claims 1 to 8.
CN202311509109.3A 2023-11-14 2023-11-14 Evaluation data acquisition method, device, system, electronic equipment and storage medium Active CN117237786B (en)

Publications (2)

Publication Number Publication Date
CN117237786A 2023-12-15
CN117237786B 2024-01-30
