CN107333025B - Image data processing method and device, electronic equipment and storage medium

Info

Publication number
CN107333025B
Authority
CN
China
Prior art keywords
image data
user
point abscissa
image
preset
Prior art date
Legal status
Active
Application number
CN201710531661.0A
Other languages
Chinese (zh)
Other versions
CN107333025A (en)
Inventor
张启峰
Current Assignee
Beijing Jupiter Technology Co ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201710531661.0A
Publication of CN107333025A
Application granted
Publication of CN107333025B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/144: Movement detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses an image data processing method and apparatus, an electronic device, and a storage medium, the method comprising: periodically acquiring a plurality of first user images corresponding to a target user within a preset first acquisition duration, and extracting the first part image data in each first user image; recording the position information of the first part image data in each first user image in a display interface; if the target user is determined to be in a preset stable state according to the position information of the first part image data in the display interface, identifying the motion track of the target user within a preset identification duration; and searching a preset mapping relation table for the image operation instruction corresponding to the motion track, and performing image processing on the current user image in the display interface according to that instruction. With the invention, the filter function applied to the current user image can be switched automatically, avoiding cumbersome manual operation and improving the display effect of the image data.

Description

Image data processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer application programs, and in particular, to an image data processing method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of mobile communication technology, more and more users choose to install applications with beautification functions in their electronic devices (for example, beauty-camera applications). Such applications often provide multiple camera filter functions, but during shooting a default filter function is usually used to process the image data; if the filter function applied to the current image data needs to be adjusted, the user usually has to switch it manually through a manual control in the terminal interface.
For example, when a user uses the electronic device for live video streaming or self-portrait shooting, the currently acquired image data may undergo preliminary image processing; that is, the electronic device may finely adjust the color saturation of the image data so that, after part of the natural light is filtered out, the user can immediately see the processing result in a preview interface. However, owing to the influence of the shooting scene, image data processed by such fine adjustment often fails to present a color saturation that satisfies the user, because users in different shooting scenes need different color saturations. If the color saturation of the current image data needs to be adjusted, a filter must be selected through a manual control in the preview interface: the user first selects the manual control in the current preview interface and then selects the corresponding filter function on that control to switch the color saturation of the current image data. The operation is therefore cumbersome and, moreover, the manual control occupies a certain display area of the preview interface, which degrades the display effect of the image data.
Disclosure of Invention
Embodiments of the present invention provide an image data processing method and apparatus, an electronic device, and a storage medium, which avoid tedious manual operations and improve the display effect of image data.
A first aspect of an embodiment of the present invention provides an image data processing method, including:
the method comprises the steps that a plurality of first user images corresponding to a target user are collected regularly within a preset first collection duration, and first position image data in each first user image are extracted;
recording position information of first position image data in each first user image in a display interface;
if the target user is determined to be in a preset stable state according to the position information of each first position image data in the display interface, identifying the motion track of the target user within a preset identification duration;
and searching an image operation instruction corresponding to the motion track in a preset mapping relation table, and carrying out image processing on the current user image in the display interface according to the image operation instruction.
Optionally, before the identifying the motion track of the target user within the preset identification duration if the target user is determined to be in the preset stable state according to the position information of each first part image data in the display interface, the method further includes:
performing difference analysis on the position information of every two adjacent first part image data according to the position information of the first part image data in each first user image in the display interface to obtain difference analysis results respectively corresponding to every two adjacent first part image data; the two adjacent first position image data refer to two adjacent first position image data in acquisition time;
and if all the difference analysis results meet the preset image stabilization condition, determining that the target user is in a preset stable state.
Wherein, the performing difference analysis on the position information of every two adjacent first portion image data according to the position information of the first portion image data in each first user image in the display interface to obtain difference analysis results respectively corresponding to every two adjacent first portion image data includes:
selecting two adjacent first part image data from the first part image data in each first user image as two target image data;
acquiring first position information and second position information of the two target image data in the display interface respectively; the first position information and the second position information each include a center position coordinate and a first part horizontal-axis distance;
calculating a center position distance between the center position coordinate of the first position information and the center position coordinate of the second position information, and calculating a first difference ratio between the center position distance and the first part horizontal-axis distance in the first position information;
calculating an absolute value of the difference between the first part horizontal-axis distance in the first position information and the first part horizontal-axis distance in the second position information, and calculating a second difference ratio between the absolute value of the difference and the first part horizontal-axis distance in the first position information;
determining the first difference ratio and the second difference ratio as difference analysis results corresponding to the two target image data;
when every two adjacent first region image data in the first region image data are selected as two target image data, obtaining difference analysis results respectively corresponding to every two adjacent first region image data.
Optionally, before determining that the target user is in a preset stable state if each difference analysis result meets a preset image stability condition, the method further includes:
judging whether the first difference ratios in the difference analysis results are all smaller than or equal to a preset first ratio threshold value or not;
if the first difference ratios are smaller than or equal to the first ratio threshold, judging whether second difference ratios in the difference analysis results are smaller than or equal to a preset second ratio threshold;
and if the second difference ratios are smaller than or equal to the second ratio threshold, determining that the difference analysis results meet the preset image stabilization condition.
If the target user is determined to be in a preset stable state according to the position information of each first position image data in the display interface, identifying the motion track of the target user within a preset identification duration, including:
when the target user is in a preset stable state, collecting a plurality of second user images corresponding to the target user within a preset identification duration, and extracting first part image data and second part image data in each second user image;
calculating a first central point abscissa of the first part image data in each second user image in the display interface and a second central point abscissa of the second part image data in each second user image in the display interface; in a second user image, the first center point abscissa is smaller than the second center point abscissa;
and identifying the motion track of the target user according to the first central point abscissa and the second central point abscissa in each second user image.
The preset identification time length comprises a plurality of second acquisition time lengths;
the identifying the motion track of the target user according to the first central point abscissa and the second central point abscissa in each second user image includes:
acquiring a first central point abscissa and a second central point abscissa in the first user image when the target user is in the stable state, and taking the first central point abscissa and the second central point abscissa in the first user image when the target user is in the stable state as a first reference point abscissa and a second reference point abscissa respectively;
selecting a minimum second central point abscissa and a maximum first central point abscissa from the first central point abscissa and the second central point abscissa of each second user image acquired within the first second acquisition duration;
if the minimum second central point abscissa is less than or equal to the first reference point abscissa and the maximum first central point abscissa is greater than or equal to the second reference point abscissa, accumulating the shaking head movement times of the target user once, and detecting the shaking head movement times of the target user in the next second acquisition time length so as to detect the total shaking head movement times of the target user in the preset identification time length;
and if the total shaking motion times of the target user reaches a preset time threshold value within the preset identification duration, determining that the motion trail of the target user is shaking motion.
Optionally, after selecting the smallest second center point abscissa and the largest first center point abscissa from the first center point abscissa and the second center point abscissa of each second user image acquired in the first second acquisition duration, the method further includes:
if the minimum second central point abscissa is less than or equal to the first reference point abscissa and the maximum first central point abscissa is equal to the first reference point abscissa, determining that the motion trajectory of the target user rotates towards a first direction;
and if the minimum second central point abscissa is equal to the second reference point abscissa and the maximum first central point abscissa is greater than or equal to the second reference point abscissa, determining that the motion trail of the target user rotates towards a second direction.
A second aspect of an embodiment of the present invention provides an image data processing apparatus, including:
the acquisition and extraction module is used for regularly acquiring a plurality of first user images corresponding to a target user within a preset first acquisition time length and extracting first position image data in each first user image;
the position recording module is used for recording the position information of the first position image data in each first user image in the display interface;
the track recognition module is used for recognizing the motion track of the target user within a preset recognition duration if the target user is determined to be in a preset stable state according to the position information of the first position image data in the display interface;
and the image processing module is used for searching an image operation instruction corresponding to the motion track in a preset mapping relation table and carrying out image processing on the current user image in the display interface according to the image operation instruction.
Optionally, the apparatus further comprises:
the difference analysis module is used for carrying out difference analysis on the position information of every two adjacent first position image data according to the position information of the first position image data in each first user image in the display interface to obtain difference analysis results corresponding to every two adjacent first position image data; the two adjacent first position image data refer to two adjacent first position image data in acquisition time;
and the state determining module is used for determining that the target user is in a preset stable state if each difference analysis result meets a preset image stability condition.
Wherein the variance analysis module comprises:
a target data selection unit configured to select two adjacent first region image data among the first region image data in each of the first user images as two target image data;
the position information acquisition unit is used for acquiring first position information and second position information of the two target image data in the display interface respectively; the first position information and the second position information each include a center position coordinate and a first part horizontal-axis distance;
a first ratio calculation unit, configured to calculate a center position distance between the center position coordinate of the first position information and the center position coordinate of the second position information, and calculate a first difference ratio between the center position distance and the first part horizontal-axis distance in the first position information;
a second ratio calculation unit, configured to calculate an absolute value of the difference between the first part horizontal-axis distance in the first position information and the first part horizontal-axis distance in the second position information, and calculate a second difference ratio between the absolute value of the difference and the first part horizontal-axis distance in the first position information;
an analysis result determining unit, configured to determine the first difference ratio and the second difference ratio as difference analysis results corresponding to the two target image data;
an analysis result acquisition unit configured to obtain difference analysis results corresponding to each of two adjacent first region image data when each of the two adjacent first region image data is selected as two target image data.
Optionally, the apparatus further comprises:
the first judgment module is used for judging whether the first difference ratios in the difference analysis results are all smaller than or equal to a preset first ratio threshold value;
the second judging module is used for judging whether the second difference ratios in the difference analysis results are all smaller than or equal to a preset second ratio threshold value or not if the first difference ratios are all smaller than or equal to the first ratio threshold value;
and the condition satisfying module is used for determining that each difference analysis result satisfies a preset image stabilization condition if each second difference ratio is smaller than or equal to the second ratio threshold.
Wherein the trajectory recognition module comprises:
the image data extraction unit is used for collecting a plurality of second user images corresponding to the target user within a preset identification duration when the target user is in a preset stable state, and extracting first part image data and second part image data in each second user image;
the horizontal coordinate calculation unit is used for calculating a first central point horizontal coordinate of the first part image data in each second user image in the display interface and a second central point horizontal coordinate of the second part image data in each second user image in the display interface; in a second user image, the first center point abscissa is smaller than the second center point abscissa;
and the motion track identification unit is used for identifying the motion track of the target user according to the first central point abscissa and the second central point abscissa in each second user image.
The preset identification time length comprises a plurality of second acquisition time lengths; the motion trajectory recognition unit includes:
a reference point coordinate obtaining subunit, configured to obtain a first central point abscissa and a second central point abscissa in the first user image when the target user is in the stable state, and take the first central point abscissa and the second central point abscissa in the first user image when the target user is in the stable state as a first reference point abscissa and a second reference point abscissa, respectively;
the abscissa selecting subunit is configured to select a minimum second center point abscissa and a maximum first center point abscissa from among first center point abscissas and second center point abscissas of each second user image acquired within the first second acquisition duration;
a motion frequency accumulating subunit, configured to accumulate the head shaking motion frequency of the target user for one time if the minimum second center point abscissa is less than or equal to the first reference point abscissa and the maximum first center point abscissa is greater than or equal to the second reference point abscissa, and detect the head shaking motion frequency of the target user in a next second acquisition duration, so as to detect the total head shaking motion frequency of the target user in the preset identification duration;
and the first motion determining subunit is used for determining that the motion track of the target user is shaking head motion if the total shaking head motion frequency of the target user reaches a preset frequency threshold value within the preset identification duration.
Optionally, the motion trajectory identification unit further includes:
a second motion determining subunit, configured to determine that the motion trajectory of the target user rotates in the first direction if the minimum second center point abscissa is less than or equal to the first reference point abscissa and the maximum first center point abscissa is equal to the first reference point abscissa;
and the third motion determination subunit is configured to determine that the motion trajectory of the target user rotates in a second direction if the minimum second center point abscissa is equal to the second reference point abscissa and the maximum first center point abscissa is greater than or equal to the second reference point abscissa.
A third aspect of an embodiment of the present invention provides an electronic device, including: a processor and a memory, the processor being connected to the memory, wherein the memory is configured to store program code for enabling the electronic device to perform the method of the first aspect of the embodiments of the present invention, and the processor is configured to perform the method of the first aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention provides a computer storage medium storing a computer program, the computer program including program instructions which, when executed by a processor, perform the method of the first aspect of the embodiments of the present invention.
A fifth aspect of the embodiments of the present invention provides a computer program product, where instructions of the computer program product, when executed by a processor, perform the method of the first aspect of the embodiments of the present invention.
As can be seen from the above, in the embodiment of the present invention, a plurality of first user images corresponding to a target user are regularly acquired within a preset first acquisition duration, and first position image data in each first user image is extracted; recording position information of first position image data in each first user image in a display interface; if the target user is determined to be in a preset stable state according to the position information of each first position image data in the display interface, identifying the motion track of the target user within a preset identification duration; and searching an image operation instruction corresponding to the motion track in a preset mapping relation table, and carrying out image processing on the current user image in the display interface according to the image operation instruction. By adopting the method and the device, the motion track of the target user in the preset identification duration can be identified when the target user is in a stable state, the image operation instruction corresponding to the motion track is searched in the mapping relation table, and then the filter function of the current user image in the display interface can be switched according to the image operation instruction, so that complicated manual operation is avoided, and when the current user image is processed, an additional manual control is not required to be displayed, so that the display area of the image data is increased, and the display effect of the image data is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an image data processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a method for calculating coordinates of a center position according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of another calculation of center position coordinates provided by an embodiment of the present invention;
fig. 4 is a schematic diagram of a method for identifying a motion trajectory according to an embodiment of the present invention;
FIG. 5 is a flow chart illustrating another image data processing method according to an embodiment of the present invention;
FIGS. 6a and 6b are schematic diagrams of position information of two target image data in a display interface according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image data processing apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of another image data processing apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a difference analysis module according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a trajectory recognition module according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a motion trajectory identification unit according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The terms "including" and "having," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The image data processing method according to the embodiment of the present invention is executed by a computer program running on a computer system based on the Von Neumann architecture. The computer program may be integrated into an application or run as a separate tool application. The computer system may be a terminal device such as a personal computer, tablet computer, notebook computer, or smartphone.
The following are detailed below.
Referring to fig. 1, a flowchart of an image data processing method according to an embodiment of the present invention is shown, where as shown in fig. 1, the image data processing method at least includes:
step S101, regularly collecting a plurality of first user images corresponding to a target user within a preset first collecting time length, and extracting first position image data in each first user image;
specifically, the electronic device may periodically acquire, within a preset first acquisition duration, a plurality of first user images corresponding to a target user based on a preset acquisition time interval, where the first user images may be images including a facial contour region of the target user acquired within the first acquisition duration, and then the electronic device may further extract first position image data in each first user image; the acquisition time interval is less than the first acquisition duration;
the electronic equipment comprises a user terminal with a camera shooting function, such as a personal computer, a tablet computer, a notebook computer, an intelligent television and an intelligent mobile phone;
the acquisition time interval is a time interval for acquiring each first user image at regular time, for example, the first user image is acquired every time interval 1s, and the time interval 1s at this time is the acquisition time interval.
The first acquisition duration is a duration in which the electronic device continuously acquires a plurality of first user images, for example, 3 first user images are acquired in the first acquisition duration (within 5 seconds).
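For illustration only, the following is a minimal Python sketch of such timed acquisition, assuming a hypothetical capture_frame() callable that returns one camera frame; the 1 s interval and 5 s duration mirror the examples above, and a real implementation would read these preset values from configuration.

    import time

    def collect_first_user_images(capture_frame, interval_s=1.0, duration_s=5.0):
        # Periodically capture first user images for the preset first
        # acquisition duration, one frame per acquisition time interval.
        frames = []
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            frames.append(capture_frame())  # hypothetical camera call
            time.sleep(interval_s)
        return frames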
Wherein the first part image data may be left-eye image data or right-eye image data; optionally, the first part image data may also be left-eyebrow image data or right-eyebrow image data;
in view of this, to better understand the present disclosure, in the embodiment of the present disclosure, only the collected first position image data is taken as the left eye image data, so as to further perform steps S102 to S103, and further recognize the motion trajectory of the target user within the preset recognition duration when the target user is in the preset stable state, so as to continue to perform the subsequent step S104, so as to automatically switch the filter function of the current user image, and adjust the color saturation of the current user image.
Step S102, recording the position information of the first part image data in each first user image in a display interface;
wherein the position information may include a center position coordinate and a first part horizontal-axis distance;
the central position coordinate is a central position coordinate of a region occupied by the first part in the first user image in the display interface, and the central position coordinate mainly comprises a central abscissa (X) value and a central ordinate (Y) value.
Further, please refer to fig. 2, which is a schematic diagram of a method for calculating the center position coordinate according to an embodiment of the present invention. As shown in fig. 2, the electronic device may acquire a first user image corresponding to the target user every 1 second, and may further extract the first part image data from the first user image shown in fig. 2; the first part image data is the left-eye image data shown in fig. 2. The specific process by which the electronic device calculates the center position coordinate of the area occupied by the left eye is as follows. In the first user image shown in fig. 2, points A, B, C and D form the eye range of the area occupied by the left-eye image data of the target user in the display interface, where the coordinate of point A in the display interface is denoted (PAx, PAy), the coordinate of point B is denoted (PBx, PBy), the coordinate of point C is denoted (PCx, PCy), and the coordinate of point D is denoted (PDx, PDy). The electronic device can therefore calculate the center position coordinate of the area occupied by the left eye of the target user in the display interface from the coordinates of points A and B. Thus, the center abscissa LX and the center ordinate LY of the left eye of the target user can be expressed as follows:
LX=(PAx+PBx)/2 (1.1);
LY=(PAy+PBy)/2 (1.2);
in addition, the electronic equipment can also calculate the distance of the horizontal axis of the first part of the left eye of the target user according to the coordinates corresponding to the point A and the point B respectively. Thus, in the first user image shown in fig. 2, the horizontal axis distance of the first portion of the left eye of the target user can be expressed as:
L=sqrt((PBx-PAx)*(PBx-PAx)+(PBy-PAy)*(PBy-PAy)) (1.3)。
in view of this, the position information of the left-eye image data in the display interface in the other first user image data acquired by the electronic device in the first acquisition duration may refer to the above-mentioned publications (1.1) - (1.3) to perform similar calculation, so as to obtain the position information of the left-eye image data in each first user image in the first acquisition duration in the display interface, and therefore, the description will not be repeated here.
Further, please refer to fig. 3, which is a schematic diagram of another method for calculating the center position coordinate according to an embodiment of the present invention. As shown in fig. 3, the electronic device may acquire a first user image corresponding to the target user every 1 second, and may further extract the first part image data from the first user image shown in fig. 3; the first part image data is the right-eye image data shown in fig. 3. In this case, points E, F, G and H in fig. 3 form the eye range of the area occupied by the right-eye image data of the target user in the display interface. For the specific process of calculating the center position coordinate of the area occupied by the right-eye image data in the display interface, refer to the description of the calculation of the center position coordinate of the left-eye image data. Thus, the center abscissa RX and the center ordinate RY of the right eye of the target user can be expressed as follows:
RX=(PEx+PFx)/2 (1.4)
RY=(PEy+PFy)/2 (1.5)
In addition, the electronic device can also calculate the first part horizontal-axis distance of the right eye of the target user from the coordinates of points E and F. Thus, in the first user image shown in fig. 3, the first part horizontal-axis distance R of the right eye of the target user can be expressed as:
R=sqrt((PFx-PEx)*(PFx-PEx)+(PFy-PEy)*(PFy-PEy)) (1.6)
in view of this, the position information of the right-eye image data in the display interface in the other first user image data acquired by the electronic device within the preset first acquisition duration may be calculated similarly with reference to the above formulas (1.4) - (1.6) to obtain the position information of the right-eye image data in each first user image within the first acquisition duration in the display interface, and therefore, the description will not be repeated here.
Step S103, if the target user is determined to be in a preset stable state according to the position information of the first position image data in the display interface, identifying the motion track of the target user within a preset identification duration;
Specifically, the electronic device may perform difference analysis on the position information of every two adjacent first part image data according to the position information, recorded within the first acquisition duration, of each first part image data in the display interface, so as to obtain the difference analysis results corresponding to every two adjacent first part image data. If every difference analysis result meets the preset image stabilization condition, it is determined that the target user is in the preset stable state. The electronic device may then acquire a plurality of second user images corresponding to the target user within the preset identification duration, extract the first part image data and the second part image data in each second user image, calculate the first center point abscissa of the first part image data in each second user image in the display interface and the second center point abscissa of the second part image data in each second user image in the display interface, and identify the motion track of the target user according to the first center point abscissa and the second center point abscissa in each second user image.
The two adjacent first position image data refer to two adjacent first position image data in acquisition time;
Performing difference analysis on the position information of every two adjacent first part image data means that the electronic device compares the two pieces of position information of any two first part image data whose acquisition times are adjacent, so as to judge whether the target user is in a stable state within the first acquisition duration. That is, the electronic device obtains a difference analysis result by comparing the difference between two pieces of position information with adjacent acquisition times, determines whether the difference analysis result meets the preset image stabilization condition, and determines that the target user is in the stable state when the image stabilization condition is met. The electronic device can then further recognize the motion track of the target user within the preset identification duration.
When the first portion image data in the second user image is left-eye image data, the second portion image data is right-eye image data; optionally, when the first portion image data in the second user image is right-eye image data, the second portion image data is left-eye image data.
In a second user image, the abscissa of the first central point is smaller than the abscissa of the second central point; therefore, when the first portion image data in the second user image is left eye image data and the second portion image data is right eye image data, the abscissa of the first center point of the left eye is smaller than the abscissa of the second center point of the right eye. Correspondingly, when the first portion image data in the second user image is right-eye image data and the second portion image data is left-eye image data, the abscissa of the first center point of the right eye is smaller than the abscissa of the second center point of the left eye.
The motion trail can be one of a shaking motion, a rotation in a first direction and a rotation in a second direction, each motion trail corresponds to different image operation instructions, and the filter function of the current user image can be correspondingly switched according to each image operation instruction;
when the first position image data is left-eye image data, the rotation in the first direction is left rotation, and the rotation in the second direction is right rotation;
optionally, when the first position image data is right-eye image data, the rotation in the first direction is rightward rotation, and the rotation in the second direction is leftward rotation.
And step S104, searching an image operation instruction corresponding to the motion track in a preset mapping relation table, and performing image processing on the current user image in the display interface according to the image operation instruction.
Specifically, after identifying the motion track of the target user, the electronic device may obtain a preset mapping relation table, where the mapping relation table includes a plurality of motion tracks and each motion track corresponds to a different image operation instruction. The electronic device may then search the mapping relation table for the image operation instruction corresponding to the motion track, and perform image processing on the current user image in the display interface according to that image operation instruction.
For example, suppose the first part image data is left-eye image data and the second part image data is right-eye image data. In the mapping relation table, when the motion track is a rotation in the first direction, the image operation instruction may be to switch the current filter to the next filter according to a preset sequence. In this case, no control interfaces occupying the current image display area need to appear in the display interface; the filter switching operation is triggered solely by the image operation instruction. During the filter switching, the rotation in the first direction is a leftward rotation; the filter switching operation is thus similar to the target user sliding a finger leftward across a set of filter control interfaces in the display interface to switch the current filter to the next filter.
For another example, suppose again that the first part image data is left-eye image data and the second part image data is right-eye image data. In the mapping relation table, when the motion track is a rotation in the second direction, the image operation instruction may be to switch the current filter to the previous filter according to the preset sequence. Again, no control interfaces occupying the current image display area need to appear in the display interface, and the filter switching operation is triggered solely by the image operation instruction. During the filter switching, the rotation in the second direction is a rightward rotation; the filter switching operation is thus similar to the target user sliding a finger rightward across the filter control interfaces in the display interface to switch the current filter to the previous filter.
Optionally, in the mapping relation table, the motion track may also be a head-shaking motion, which may be used to indicate that the target user is dissatisfied with the current filter function and to trigger an operation of switching the current filter.
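The patent does not specify how the mapping relation table is stored; purely as an illustrative sketch, it could be a simple dictionary from trajectory identifiers to operation names, with every identifier and operation name below invented for the example.

    # Hypothetical trajectory identifiers and operation names.
    MAPPING_TABLE = {
        "rotate_first_direction": "next_filter",       # e.g. leftward rotation
        "rotate_second_direction": "previous_filter",  # e.g. rightward rotation
        "head_shake": "switch_filter",                 # user dissatisfied with current filter
    }

    def lookup_image_operation(trajectory):
        # Search the preset mapping relation table for the image
        # operation instruction corresponding to the motion track.
        return MAPPING_TABLE.get(trajectory)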
Therefore, through the direct one-to-one correspondence between each motion track and each image operation instruction, when the corresponding image operation instruction is obtained, the display interface can be controlled to respond to it, so that the filter function of the current user image is switched intelligently and cumbersome manual operation is avoided. Furthermore, when the target user holds the electronic device with one hand, the current image can be processed without controlling an image processing interface with the other hand, which improves the convenience of image processing and the display effect of the image data.
further, please refer to fig. 4, which is a schematic diagram of a method for identifying a motion trajectory according to an embodiment of the present invention. As shown in fig. 4, the specific process of identifying the motion trajectory includes the following steps S201 to S206, and the steps S201 to S206 are a specific embodiment of the step S103 in the embodiment corresponding to fig. 1:
step S201, acquiring a first central point abscissa and a second central point abscissa in the first user image when the target user is in the stable state, and taking the first central point abscissa and the second central point abscissa in the first user image when the target user is in the stable state as a first reference point abscissa and a second reference point abscissa respectively;
specifically, the electronic device may obtain a current first user image in the stable state when the target user is in the stable state, calculate a first central point abscissa and a second central point abscissa in the first user image in the stable state, cache the first central point abscissa and the second central point abscissa in the first user image in the stable state, and then obtain the cached first central point abscissa and second central point abscissa from a local database, and use the first central point abscissa and the second central point abscissa as a first reference point abscissa and a second reference point abscissa, respectively.
The first reference point abscissa and the second reference point abscissa may be used to describe, for the same user image, the initial positions of the first part image data and the second part image data when the target user is in the stable state. At this time, the electronic device may store the first part image data and the second part image data together as eye reference image data; that is, the eye reference image data is the left-eye image data and the right-eye image data captured in the stable state. When the electronic device then detects in real time that the center position coordinates of the areas occupied by the eyes of the target user have shifted, it triggers the countdown of the preset identification duration so as to perform step S202 within that duration, thereby improving the recognition rate of the motion track of the target user.
Wherein the preset identification duration may include a plurality of second acquisition durations. For example, within a preset time period of 5 seconds, the motion trajectory of the target user is identified every 2 seconds, and at this time, the time interval of 2 seconds is a second acquisition time period.
Step S202, selecting the minimum second central point abscissa and the maximum first central point abscissa from the first central point abscissa and the second central point abscissa of each second user image acquired within the first second acquisition duration;
the first and second acquisition time periods include a plurality of acquisition time intervals to continuously acquire second user images of a plurality of target users, and then the first and second center point abscissas of each second user image can be obtained, so that the electronic device can select a minimum second center point abscissa and a maximum first center point abscissa from the first and second center point abscissas of each second user image. Subsequently, the electronic device may further perform steps S203-S204 according to coordinate relationships between the minimum second center point abscissa and the maximum first center point abscissa and the first reference point abscissa and the second reference point abscissa, respectively. Optionally, after the electronic device has performed step S202, step S205 or step S206 may be further performed.
For example, suppose that within the preset identification duration (6 s) there are two second acquisition durations (i.e., one second acquisition duration every 3 seconds), and that within each second acquisition duration there are three acquisition time intervals (i.e., one acquisition every 1 s). Table 1 lists the first center point abscissa (X1) and the second center point abscissa (X2) corresponding to each of the 6 second user images within the preset identification duration. For ease of understanding, the first part image data is taken to be the left-eye image data and the second part image data the right-eye image data, so that the first center point abscissa (X1) is the center point abscissa of the left-eye image data and the second center point abscissa (X2) is the center point abscissa of the right-eye image data.
(Table 1 is provided as an image in the original publication; it tabulates the first center point abscissa (X1) and the second center point abscissa (X2) of each of the 6 second user images.)
TABLE 1
As shown in table 1, the second user images are acquired every 1 second within the first second acquisition duration (3 seconds), so that 3 second user images can be acquired within 3 seconds, and therefore, the first central point abscissa and the second central point abscissa corresponding to the 3 second user images shown in table 1 can be acquired. At this time, the electronic device may select a minimum second center point abscissa (e.g., Rx3) and a maximum first center point abscissa (e.g., Lx1) among the first center point abscissa and the second center point abscissa of the respective second user images. At this time, the electronic device may respectively perform subsequent corresponding steps according to the acquired relationship between the minimum second center point abscissa and the maximum first center point abscissa and the first reference point abscissa and the second reference point abscissa, for example, may further perform step S205.
The abscissa of the first reference point of the left-eye image data obtained by the electronic device from the local database when the target user is in the steady state may be recorded as Lxc, and the abscissa of the second reference point of the right-eye image data may be recorded as Rxc.
Step S203, if the abscissa of the minimum second center point is less than or equal to the abscissa of the first reference point and the abscissa of the maximum first center point is greater than or equal to the abscissa of the second reference point, accumulating the shaking head movement times of the target user for one time, and detecting the shaking head movement times of the target user in the next second acquisition time length so as to detect the total shaking head movement times of the target user in the preset identification time length;
for example, still taking as an example the first center point abscissa and the second center point abscissa corresponding to each second user image distribution obtained in table 1 above, if at this time, the minimum second center point abscissa in the first second acquisition time period is Rx1(Rx1 ═ 2cm), the maximum first center point abscissa Lx3(Lx3 ═ 4cm), and the first reference point abscissa (Lxc ═ 2.5cm) and the second reference point abscissa (Rxc ═ 3.5cm) of the target user in the steady state, the electronic device may determine that the minimum second center point abscissa (Rx1 ═ 2cm) is smaller than the first reference point abscissa (Lxc ═ 2.5cm), and the maximum first center point abscissa (Lx3 ═ 4cm) is larger than the second reference point abscissa (Rxc ═ 3.5 cm). The electronic device may accumulate the number of shaking motions of the target user once, that is, in the first and second acquisition periods, the electronic device may detect that the head of the target user touches the abscissa of the first reference point once and then touches the abscissa of the second reference point once during the offset motion of the head of the target user, so that the number of shaking motions of the target user may be accumulated for 1 time. Subsequently, the electronic device may further detect the number of shaking head movements of the target user in a next second acquisition duration, so as to detect the total number of shaking head movements of the target user in the preset recognition duration, and further perform step S204 when the total number of shaking head movements reaches a preset number threshold (e.g., twice).
Step S204, if the total shaking motion frequency of the target user reaches a preset frequency threshold value within the preset identification duration, determining that the motion track of the target user is shaking motion.
Optionally, in step S205, if the minimum second center point abscissa is less than or equal to the first reference point abscissa, and the maximum first center point abscissa is equal to the first reference point abscissa, it is determined that the motion trajectory of the target user rotates in the first direction;
for example, still taking as an example the first center point abscissa and the second center point abscissa corresponding to each second user image distribution obtained in table 1 above, if at this time, the minimum second center point abscissa in the first second acquisition time period is Rx1(Rx1 ═ 2cm), the maximum first center point abscissa Lx3(Lx3 ═ 2.5cm) and the first reference point abscissa (Lxc ═ 2.5cm) of the target user in the steady state, the second reference point abscissa (Rxc ═ 3.5cm), then the electronic device may determine that the minimum second center point abscissa (Rx1 ═ 2cm) is smaller than the first reference point abscissa (Lxc ═ 2.5cm), and the maximum first center point abscissa (Lx3 ═ 2.5cm) is equal to the first abscissa (Lxc ═ 2.5cm), and may determine that the target user's movement trajectory is a left-turn reference point;
optionally, when the first region image data is right-eye image data and the second region image data is left-eye image data, the motion trajectory of the target user may be further determined to be a rightward turning motion according to the coordinate relationship between the minimum second central point abscissa and the maximum first central point abscissa and the first reference point abscissa.
Optionally, in step S206, if the minimum second central point abscissa is equal to the second reference point abscissa and the maximum first central point abscissa is greater than or equal to the second reference point abscissa, it is determined that the motion trajectory of the target user rotates in a second direction.
For example, still taking the first center point abscissas and second center point abscissas obtained for each second user image in table 1 above: suppose the minimum second center point abscissa in the first second acquisition duration is Rx1 (Rx1 = 3.5 cm), the maximum first center point abscissa is Lx3 (Lx3 = 4 cm), and the reference abscissas of the target user in the stable state are Lxc = 2.5 cm and Rxc = 3.5 cm. The electronic device may then determine that the minimum second center point abscissa (Rx1 = 3.5 cm) is equal to the second reference point abscissa (Rxc = 3.5 cm) and that the maximum first center point abscissa (Lx3 = 4 cm) is greater than the second reference point abscissa (Rxc = 3.5 cm), and may therefore determine that the motion track of the target user is a rightward rotation, i.e., a rotation in the second direction.
Optionally, when the first region image data is right-eye image data and the second region image data is left-eye image data, the motion trajectory of the target user may be further determined to be a leftward rotation motion according to the coordinate relationship between the minimum second central point abscissa and the maximum first central point abscissa and the first reference point abscissa.
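Combining the conditions of steps S203, S205 and S206, a schematic classification routine for one second acquisition duration might look as follows; the function and label names are hypothetical, and the final check reuses the worked head-shake example above.

    def classify_trajectory(min_second_x, max_first_x, first_ref_x, second_ref_x):
        # min_second_x / max_first_x: extreme center point abscissas observed
        # in this second acquisition duration; first_ref_x / second_ref_x:
        # reference abscissas cached when the target user was stable.
        if min_second_x <= first_ref_x and max_first_x >= second_ref_x:
            return "shake"             # crossed both references: one head shake
        if min_second_x <= first_ref_x and max_first_x == first_ref_x:
            return "first_direction"   # e.g. leftward rotation (step S205)
        if min_second_x == second_ref_x and max_first_x >= second_ref_x:
            return "second_direction"  # e.g. rightward rotation (step S206)
        return None

    # Worked example from the text: Rx1 = 2 cm, Lx3 = 4 cm, Lxc = 2.5 cm, Rxc = 3.5 cm
    assert classify_trajectory(2.0, 4.0, 2.5, 3.5) == "shake"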
Optionally, within the preset identification duration, the electronic device may further obtain a plurality of successive center points that describe the offset of the head of the target user (for example, the first center point corresponding to the first center point abscissa of the first part image data and the second center point corresponding to the second center point abscissa of the second part image data, where the first center point abscissa is smaller than the second center point abscissa), and obtain the target distances of these successive center points in the display interface relative to the coordinate origin of the display interface (that is, the target distances include the first center point abscissa and the second center point abscissa). If, among the successive center points, a center point whose minimum second center point abscissa is less than or equal to the first reference point abscissa and a center point whose maximum first center point abscissa is greater than or equal to the second reference point abscissa are detected first, and the two center points can be detected again after a preset time interval, the motion track of the target user can be determined to be a head-shaking motion. Of course, from the successive center points obtained, it may instead be determined within the preset duration that the motion track of the target user is a rotation in the first direction or a rotation in the second direction.
The method comprises the steps that a plurality of first user images corresponding to a target user are collected at regular time within a preset first collection duration, and first position image data in each first user image are extracted; recording position information of first position image data in each first user image in a display interface; if the target user is determined to be in a preset stable state according to the position information of each first position image data in the display interface, identifying the motion track of the target user within a preset identification duration; and searching an image operation instruction corresponding to the motion track in a preset mapping relation table, and carrying out image processing on the current user image in the display interface according to the image operation instruction. By adopting the method and the device, the motion track of the target user in the preset identification duration can be identified when the target user is in a stable state, the image operation instruction corresponding to the motion track is searched in the mapping relation table, and then the filter function of the current user image in the display interface can be switched according to the image operation instruction, so that complicated manual operation is avoided, and when the current user image is processed, an additional manual control is not required to be displayed, so that the display area of the image data is increased, and the display effect of the image data is improved.
Further, please refer to fig. 5, which is a flowchart illustrating another image data processing method according to an embodiment of the present invention. As shown in fig. 5, the image data processing method at least includes:
step S301, regularly acquiring a plurality of first user images corresponding to a target user within a preset first acquisition time length, and extracting first position image data in each first user image;
step S302, recording the position information of the first part image data in each first user image in a display interface;
for specific implementation manners of steps S301 to S302, reference may be made to the description of steps S101 to S102 in the embodiment corresponding to fig. 1, and details will not be further described here.
Step S303, according to the position information of the first part image data in each first user image in the display interface, performing difference analysis on the position information of every two adjacent first part image data to obtain difference analysis results respectively corresponding to every two adjacent first part image data;
specifically, the electronic device may select two adjacent first position image data from the first position image data in each first user image as two target image data, and acquire the first position information and the second position information of the two target image data in the display interface respectively. The electronic device may then calculate the center position distance between the center position coordinate of the first position information and the center position coordinate of the second position information, and calculate a first difference ratio between that center position distance and the first position horizontal axis distance in the first position information. It may further calculate the absolute value of the difference between the first position horizontal axis distance in the first position information and the first position horizontal axis distance in the second position information, and calculate a second difference ratio between that absolute value and the first position horizontal axis distance in the first position information. The first difference ratio and the second difference ratio are determined as the difference analysis result corresponding to the two target image data; when every two adjacent first position image data in the first position image data are selected as the two target image data, the difference analysis results respectively corresponding to every two adjacent first position image data are obtained.
The two adjacent first position image data refer to two adjacent first position image data in acquisition time;
the first position information and the second position information comprise a center position coordinate and a first position transverse axis distance;
for example, within a first acquisition duration (5 seconds), a first user image corresponding to a target user is acquired every 1 second; the first user image is an image including the facial contour region of the target user. The electronic device may thus obtain 5 first user images corresponding to the target user within the 5 seconds (the user image 100A, the user image 100B, the user image 100C, the user image 100D, and the user image 100E), extract the first position image data (for example, left-eye image data) from the 5 first user images, so that the 5 first position image data are the left-eye image data 200A, the left-eye image data 200B, the left-eye image data 200C, the left-eye image data 200D, and the left-eye image data 200E respectively, and record the position information of the 5 left-eye image data in the same display interface.
The first position image data in the user image 100A is left-eye image data 200A, the first position image data in the user image 100B is left-eye image data 200B, the first position image data in the user image 100C is left-eye image data 200C, the first position image data in the user image 100D is left-eye image data 200D, and the first position image data in the user image 100E is left-eye image data 200E. Further, please refer to table 2, which is a statistical table of position information of each first portion image data recorded by the electronic device in a display interface within a preset first acquisition duration;
First user image | First position image data | Center position coordinate | First position horizontal axis distance
User image 100A  | Left eye image data 200A  | (LX1, LY1)                 | L1
User image 100B  | Left eye image data 200B  | (LX2, LY2)                 | L2
User image 100C  | Left eye image data 200C  | (LX3, LY3)                 | L3
User image 100D  | Left eye image data 200D  | (LX4, LY4)                 | L4
User image 100E  | Left eye image data 200E  | (LX5, LY5)                 | L5

TABLE 2
As shown in table 2, the left-eye image data 200A and 200B are two adjacent first position image data, as are the left-eye image data 200B and 200C, the left-eye image data 200C and 200D, and the left-eye image data 200D and 200E; the electronic device can therefore obtain four sets of two target image data among the 5 first position image data. For example, taking the left-eye image data 200A and the left-eye image data 200B as the two target image data, the electronic device may acquire the first position information (the center position coordinate (LX1, LY1) and the first position horizontal axis distance L1) and the second position information (the center position coordinate (LX2, LY2) and the first position horizontal axis distance L2) of the two target image data in the display interface, respectively.
Further, please refer to fig. 6a and fig. 6b, which are schematic diagrams illustrating position information of two target image data in a display interface according to an embodiment of the present invention. Fig. 6a is a schematic diagram of the first user image 100A acquired at the first acquisition interval (the first second), and fig. 6b is a schematic diagram of the first user image 100B acquired at the second acquisition interval (the second second). The formulas for calculating the center position coordinate of the left eye of the target user in the display interface and the first position horizontal axis distance may refer to formula (1.1), formula (1.2) and formula (1.3) in the embodiment corresponding to fig. 1.
Therefore, the electronic device may further perform difference analysis on the position information at the two adjacent acquisition intervals; that is, the electronic device may calculate the center position distance M between the two adjacent acquisition intervals based on the center position coordinates (LX1, LY1) of the left-eye image data 200A in the display interface and the center position coordinates (LX2, LY2) of the left-eye image data 200B in the display interface, where the center position distance M may be expressed as:
M = sqrt((LX2-LX1)*(LX2-LX1) + (LY2-LY1)*(LY2-LY1)) (1.9);
subsequently, the electronic device may further calculate a first difference ratio (M/L1) between the center position distance M and a first location transverse axis distance L1 in the first location information.
Meanwhile, the electronic device may further calculate the absolute value of the difference between the first position horizontal axis distance L1 of the left-eye image data 200A in the display interface and the first position horizontal axis distance L2 of the left-eye image data 200B in the display interface; that is, the electronic device may express the absolute value N of the difference between the first position horizontal axis distances at the two adjacent acquisition intervals by formula (1.10):
N = sqrt((L1-L2)*(L1-L2)) (1.10);
then, the electronic device may further calculate a second difference ratio (N/L1) between the absolute value N of the difference and the first position horizontal axis distance L1 in the first position information, and determine the calculated first difference ratio (M/L1) and second difference ratio (N/L1) as the difference analysis results corresponding to the two target image data.
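As an illustration, the difference analysis of formulas (1.9) and (1.10) can be sketched as follows in Python (a minimal sketch; the tuple layout and the names are assumptions, not part of the patent):

```python
from math import hypot

def difference_analysis(p1, p2):
    """Difference analysis for two adjacent first position image data.

    p1, p2 -- (center_x, center_y, horizontal_axis_distance) tuples, e.g.
              (LX1, LY1, L1) and (LX2, LY2, L2) from table 2.
    Returns the first and second difference ratios (M/L1, N/L1).
    """
    (x1, y1, l1), (x2, y2, l2) = p1, p2
    m = hypot(x2 - x1, y2 - y1)  # center position distance M, formula (1.9)
    n = abs(l1 - l2)             # absolute axis-distance difference N, formula (1.10)
    return m / l1, n / l1
```

Applying this to each of the four adjacent pairs in table 2 yields the four difference analysis results used in steps S304 and S305 below.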
The first difference ratio is used to determine whether the center position coordinate of the target user has shifted; for example, when the head of the target user shifts to the left, the center position coordinate of the left eye of the target user shifts relative to the center position coordinate calculated at the previous acquisition interval.
The second difference ratio is used to judge the distance between the target user and the display interface, because the first position horizontal axis distance of the first position image data varies with the area that the first position image data occupies in the display interface at different distances. For example, when the target user is close to the display interface (e.g., 10cm), a larger first user image is obtained and only the head area of the target user is displayed in the display interface, as shown in fig. 6a; when the target user is farther from the display interface (e.g., 20cm), both the head area and the body area of the target user are displayed in the display interface as the first user image, so that the head area occupies a relatively smaller area of the display interface; see the first user image 100B shown in fig. 6b.
Similarly, for a specific implementation manner of obtaining the difference analysis results corresponding to the remaining three sets of two target image data, reference may be made to the description of obtaining the corresponding difference analysis results (the first difference ratio (M/L1) and the second difference ratio (N/L1)) when the left eye image data 200A and the left eye image data 200B are taken as two target image data, and details will not be further described here.
Step S304, judging whether the first difference ratios in the difference analysis results are all smaller than or equal to a preset first ratio threshold;
specifically, when the electronic device finishes performing step S304, it may further determine whether to perform step S305 or repeatedly perform step S301 according to a magnitude relationship between a first difference ratio in each difference analysis result and the first ratio threshold.
The first ratio threshold may be, for example, 5/100 = 0.05.
Step S305, if each first difference ratio is less than or equal to the first ratio threshold, determining whether a second difference ratio in each difference analysis result is less than or equal to a preset second ratio threshold;
specifically, after the electronic device performs step S305, if each second difference ratio is smaller than or equal to the second ratio threshold, the electronic device may further perform steps S306 to S308; optionally, if at least one of the second difference ratios is greater than the second ratio threshold, the electronic device may further perform step S309. The second ratio threshold may be, for example, 1/100 = 0.01.
Step S306, if the second difference ratios are less than or equal to the second ratio threshold, determining that the difference analysis results all satisfy a preset image stabilization condition.
The image stabilization condition describes that the center position coordinate of the target user and the first position horizontal axis distance both remain within a small offset range, i.e., that almost no offset occurs.
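A minimal sketch of the checks in steps S304 to S306, assuming the example thresholds above (names and structure are hypothetical, not part of the patent):

```python
FIRST_RATIO_THRESHOLD = 0.05   # 5/100, step S304
SECOND_RATIO_THRESHOLD = 0.01  # 1/100, step S305

def satisfies_image_stabilization(results):
    """results -- (first_ratio, second_ratio) pairs, one per two adjacent
    first position image data within the first acquisition duration."""
    if any(r1 > FIRST_RATIO_THRESHOLD for r1, _ in results):
        return False  # re-acquire first user images (back to step S301)
    # Steps S306/S309: all second ratios must also stay under the threshold.
    return all(r2 <= SECOND_RATIO_THRESHOLD for _, r2 in results)
```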
Step S307, determining that the target user is in a preset stable state, and identifying the motion track of the target user within a preset identification duration;
for a specific process of identifying the motion trajectory, reference may be made to the description of step S201 to step S206 in the embodiment corresponding to fig. 1, and details will not be further described here.
And step S308, searching an image operation instruction corresponding to the motion track in a preset mapping relation table, and performing image processing on the current user image in the display interface according to the image operation instruction.
The specific implementation manner of step S308 may refer to the description of step S104 in the embodiment corresponding to fig. 1, and will not be described again.
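The lookup in step S308 amounts to a table query; the following minimal sketch illustrates it (the trajectory keys and instruction names are illustrative assumptions, not values defined by the patent):

```python
# Hypothetical preset mapping relation table: motion trajectory -> instruction.
MAPPING_RELATION_TABLE = {
    "shake": "restore_default_filter",
    "turn_first_direction": "previous_filter",
    "turn_second_direction": "next_filter",
}

def lookup_image_operation(trajectory):
    """Return the image operation instruction mapped to the motion trajectory,
    or None if the mapping relation table has no entry for it."""
    return MAPPING_RELATION_TABLE.get(trajectory)
```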
Step S309, if at least one difference ratio in the second difference ratios is greater than the second ratio threshold, determining that each difference analysis result does not satisfy a preset image stabilization condition.
Further, please refer to fig. 7, which is a schematic structural diagram of an image data processing apparatus according to an embodiment of the present invention. As shown in fig. 7, the image data processing apparatus 1 can be applied to the electronic device in the embodiment corresponding to fig. 1, and the image data processing apparatus 1 at least includes: the system comprises an acquisition and extraction module 10, a position recording module 20, a track recognition module 30 and an image processing module 40;
the acquisition and extraction module 10 is configured to acquire a plurality of first user images corresponding to a target user at regular time within a preset first acquisition duration, and extract first position image data in each first user image;
the position recording module 20 is configured to record position information of the first position image data in each first user image in the display interface;
the trajectory recognition module 30 is configured to, if it is determined that the target user is in a preset stable state according to the position information of each first position image data in the display interface, recognize a motion trajectory of the target user within a preset recognition duration;
the image processing module 40 is configured to search an image operation instruction corresponding to the motion trajectory in a preset mapping relation table, and perform image processing on the current user image in the display interface according to the image operation instruction.
For specific implementation manners of the acquisition and extraction module 10, the position recording module 20, the trajectory identification module 30, and the image processing module 40, reference may be made to the description of step S101 to step S104 in the embodiment corresponding to fig. 1, and details will not be further described here.
Further, please refer to fig. 8, which is a schematic structural diagram of another image data processing apparatus according to an embodiment of the present invention. As shown in fig. 8, the image data processing apparatus 1 may be applied to the electronic device in the embodiment corresponding to fig. 1, and the image data processing apparatus 1 may include the acquisition and extraction module 10, the position recording module 20, the track identification module 30, and the image processing module 40 in the embodiment corresponding to fig. 7; further, the image data processing apparatus 1 may further include: a difference analysis module 50, a first determination module 60, a second determination module 70, a condition satisfaction module 80, and a state determination module 90;
the difference analysis module 50 is configured to perform difference analysis on the position information of every two adjacent first position image data according to the position information of the first position image data in each first user image in the display interface, so as to obtain difference analysis results corresponding to every two adjacent first position image data; the two adjacent first position image data refer to two adjacent first position image data in acquisition time;
the first determining module 60 is configured to determine whether the first difference ratios in the difference analysis results are all smaller than or equal to a preset first ratio threshold;
the second determining module 70 is configured to determine whether the second difference ratios in the difference analysis results are all less than or equal to a preset second ratio threshold if the first difference ratios are all less than or equal to the first ratio threshold;
the condition satisfying module 80 is configured to determine that each difference analysis result satisfies a preset image stabilization condition if each second difference ratio is smaller than or equal to the second ratio threshold.
The state determining module 90 is configured to determine that the target user is in a preset stable state if each difference analysis result meets a preset image stability condition.
For specific implementation manners of the difference analysis module 50, the first determination module 60, the second determination module 70, the condition satisfaction module 80, and the state determination module 90, reference may be made to the description of step S303 to step S307 in the embodiment corresponding to fig. 5, which will not be further described here.
Further, please refer to fig. 9, which is a schematic structural diagram of a difference analysis module according to an embodiment of the present invention. As shown in fig. 9, the variance analysis module 50 may include: a target data selecting unit 501, a position information acquiring unit 502, a first ratio calculating unit 503, a second ratio calculating unit 504, an analysis result determining unit 505, and an analysis result acquiring unit 506;
the target data selecting unit 501 is configured to select two adjacent first region image data from the first region image data in each first user image as two target image data;
the position information acquiring unit 502 is configured to acquire first position information and second position information of the two target image data in the display interface respectively; the first position information and the second position information comprise a center position coordinate and a first part transverse axis distance;
the first ratio calculating unit 503 is configured to calculate a center position distance between the center position coordinate of the first position information and the center position coordinate of the second position information, and calculate a first difference ratio between the center position distance and a first position horizontal axis distance in the first position information;
the second ratio calculation unit 504 is configured to calculate an absolute value of a difference between a first location horizontal axis distance in the first position information and a first location horizontal axis distance in the second position information, and calculate a second difference ratio between the absolute value of the difference and the first location horizontal axis distance in the first position information;
the analysis result determining unit 505 is configured to determine the first difference ratio and the second difference ratio as the difference analysis results corresponding to the two target image data;
the analysis result obtaining unit 506 is configured to obtain difference analysis results respectively corresponding to each two adjacent first region image data when each two adjacent first region image data in each first region image data are selected as two target image data.
The specific implementation manners of the target data selecting unit 501, the position information obtaining unit 502, the first ratio calculating unit 503, the second ratio calculating unit 504, the analysis result determining unit 505, and the analysis result obtaining unit 506 may refer to the description of step S303 in the embodiment corresponding to fig. 5, and will not be described again.
Further, please refer to fig. 10, which is a schematic structural diagram of a trajectory recognition module according to an embodiment of the present invention. As shown in fig. 10, the trajectory recognition module 30 may include: an image data extraction unit 301, an abscissa calculation unit 302, a motion trajectory identification unit 303;
the image data extracting unit 301 is configured to, when the target user is in a preset stable state and within a preset identification duration, acquire a plurality of second user images corresponding to the target user, and extract first portion image data and second portion image data in each second user image;
the abscissa calculating unit 302 is configured to calculate a first center abscissa of the first portion image data in each second user image in the display interface and a second center abscissa of the second portion image data in each second user image in the display interface; in a second user image, the first center point abscissa is smaller than the second center point abscissa;
the motion trajectory identification unit 303 is configured to identify the motion trajectory of the target user according to a first central point abscissa and a second central point abscissa in each second user image.
For specific implementation manners of the image data extracting unit 301, the abscissa calculating unit 302, and the motion trajectory identifying unit 303, reference may be made to the description of step S103 in the embodiment corresponding to fig. 1, and details will not be further described here.
Further, please refer to fig. 11, which is a schematic structural diagram of a motion trajectory identification unit according to an embodiment of the present invention. As shown in fig. 11, the motion trajectory identification unit 303 may include: a reference point coordinate obtaining sub-unit 3031, an abscissa selecting sub-unit 3032, a movement number accumulating sub-unit 3033, a first movement determining sub-unit 3034, a second movement determining sub-unit 3035 and a third movement determining sub-unit 3036;
the reference point coordinate obtaining subunit 3031 is configured to obtain a first central point abscissa and a second central point abscissa in the first user image when the target user is in the stable state, and take the first central point abscissa and the second central point abscissa in the first user image when the target user is in the stable state as the first reference point abscissa and the second reference point abscissa, respectively;
the abscissa selecting subunit 3032 is configured to select the smallest second center point abscissa and the largest first center point abscissa among the first center point abscissa and the second center point abscissa of each second user image acquired in the first second acquisition duration;
the movement number accumulating subunit 3033 is configured to accumulate one shaking-head movement of the target user if the minimum second center point abscissa is less than or equal to the first reference point abscissa and the maximum first center point abscissa is greater than or equal to the second reference point abscissa, and to detect the number of shaking-head movements of the target user in the next second acquisition duration, so as to detect the total number of shaking-head movements of the target user within the preset identification duration;
the first motion determining subunit 3034 is configured to determine that the motion trajectory of the target user is a shaking motion if it is detected that the total number of shaking-head movements of the target user reaches a preset times threshold within the preset identification duration;
the second motion determining subunit 3035 is configured to determine that the motion trajectory of the target user rotates in the first direction if the minimum second center point abscissa is less than or equal to the first reference point abscissa and the maximum first center point abscissa is equal to the first reference point abscissa;
the third motion determining subunit 3036 is configured to determine that the motion trajectory of the target user is rotating towards a second direction if the minimum second center point abscissa is equal to the second reference point abscissa and the maximum first center point abscissa is greater than or equal to the second reference point abscissa.
The specific implementation manners of the reference point coordinate obtaining subunit 3031, the abscissa selecting subunit 3032, the movement number accumulating subunit 3033, the first movement determining subunit 3034, the second movement determining subunit 3035, and the third movement determining subunit 3036 may refer to the descriptions of steps S201 to S206 in the embodiment corresponding to fig. 4, and the description thereof will not be repeated here.
Further, please refer to fig. 12, which is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 12, the electronic device 1000 may be the electronic device in the embodiment corresponding to fig. 1, and the electronic device 1000 may include: a processor 1001 and a memory 1005; further, the electronic device 1000 may further include: at least one network interface 1004, a user interface 1003, and at least one communication bus 1002. The communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a display screen (Display) and a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 1005 may optionally be at least one storage device located remotely from the processor 1001. As shown in fig. 12, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the electronic device 1000 shown in fig. 12, the user interface 1003 is mainly used to provide an input interface for the user and to acquire the data entered by the user; and the processor 1001 may be used to invoke the device control application stored in the memory 1005 to implement:
the method comprises the steps that a plurality of first user images corresponding to a target user are collected regularly within a preset first collection duration, and first position image data in each first user image are extracted;
recording position information of first position image data in each first user image in a display interface;
if the target user is determined to be in a preset stable state according to the position information of each first position image data in the display interface, identifying the motion track of the target user within a preset identification duration;
and searching an image operation instruction corresponding to the motion track in a preset mapping relation table, and carrying out image processing on the current user image in the display interface according to the image operation instruction.
In an embodiment, if it is determined that the target user is in a preset stable state according to the position information of each first position image data in the display interface, the total number of movements corresponding to each first position image data is counted within a preset counting time period; before doing so, the processor 1001 further performs the following steps:
performing difference analysis on the position information of every two adjacent first part image data according to the position information of the first part image data in each first user image in the display interface to obtain difference analysis results respectively corresponding to every two adjacent first part image data; the two adjacent first position image data refer to two adjacent first position image data in acquisition time;
and if all the difference analysis results meet the preset image stabilization condition, determining that the target user is in a preset stable state.
In an embodiment, when the processor 1001 performs the difference analysis on the position information of every two adjacent first portion image data according to the position information of the first portion image data in each first user image in the display interface to obtain a difference analysis result corresponding to each two adjacent first portion image data, specifically performs the following steps:
selecting two adjacent first part image data from the first part image data in each first user image as two target image data;
acquiring first position information and second position information of the two target image data in the display interface respectively; the first position information and the second position information comprise a center position coordinate and a first part transverse axis distance;
calculating a center position distance between the center position coordinate of the first position information and the center position coordinate of the second position information, and calculating a first difference ratio between the center position distance and a first position transverse axis distance in the first position information;
calculating an absolute value of a difference between a first location cross-axis distance in the first location information and a first location cross-axis distance in the second location information, and calculating a second difference ratio between the absolute value of the difference and the first location cross-axis distance in the first location information;
determining the first difference ratio and the second difference ratio as difference analysis results corresponding to the two target image data;
when every two adjacent first region image data in the first region image data are selected as two target image data, obtaining difference analysis results respectively corresponding to every two adjacent first region image data.
In an embodiment, before the processor 1001 determines that the target user is in a preset stable state if each difference analysis result satisfies a preset image stabilization condition, the following steps are further performed:
judging whether the first difference ratios in the difference analysis results are all smaller than or equal to a preset first ratio threshold value or not;
if the first difference ratios are smaller than or equal to the first ratio threshold, judging whether second difference ratios in the difference analysis results are smaller than or equal to a preset second ratio threshold;
and if the second difference ratios are smaller than or equal to the second ratio threshold, determining that the difference analysis results meet the preset image stabilization condition.
In an embodiment, when the processor 1001 determines that the target user is in a preset stable state according to the position information of each first position image data in the display interface, and identifies the motion trajectory of the target user within a preset identification duration, the following steps are specifically performed:
when the target user is in a preset stable state, collecting a plurality of second user images corresponding to the target user within a preset identification duration, and extracting first part image data and second part image data in each second user image;
calculating a first central point abscissa of the first part image data in each second user image in the display interface and a second central point abscissa of the second part image data in each second user image in the display interface; in a second user image, the first center point abscissa is smaller than the second center point abscissa;
and identifying the motion track of the target user according to the first central point abscissa and the second central point abscissa in each second user image.
In an embodiment, the preset identification duration includes a plurality of second acquisition durations, and when the processor 1001 identifies the motion trajectory of the target user according to the first central point abscissa and the second central point abscissa in each second user image, the following steps are specifically performed:
acquiring a first central point abscissa and a second central point abscissa in the first user image when the target user is in the stable state, and taking the first central point abscissa and the second central point abscissa in the first user image when the target user is in the stable state as a first reference point abscissa and a second reference point abscissa respectively;
selecting a minimum second central point abscissa and a maximum first central point abscissa from the first central point abscissa and the second central point abscissa of each second user image acquired within the first second acquisition duration;
if the minimum second central point abscissa is less than or equal to the first reference point abscissa and the maximum first central point abscissa is greater than or equal to the second reference point abscissa, accumulating the shaking head movement times of the target user once, and detecting the shaking head movement times of the target user in the next second acquisition time length so as to detect the total shaking head movement times of the target user in the preset identification time length;
and if the total shaking motion times of the target user reaches a preset time threshold value within the preset identification duration, determining that the motion trail of the target user is shaking motion.
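For the shaking-motion decision just described, a minimal Python sketch that accumulates shake counts over the second acquisition durations of the identification duration (the window layout and names are assumptions, not part of the patent):

```python
def is_shaking_motion(windows, lxc, rxc, times_threshold):
    """Decide whether the motion trail is a shaking motion.

    windows -- one (first_xs, second_xs) pair of center point abscissa lists
               per second acquisition duration within the identification duration
    lxc, rxc -- first and second reference point abscissas from the stable state
    """
    total = 0
    for first_xs, second_xs in windows:
        # One shaking-head movement per window in which both conditions hold.
        if min(second_xs) <= lxc and max(first_xs) >= rxc:
            total += 1
    return total >= times_threshold  # True -> trajectory is a shaking motion
```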
In one embodiment, after selecting the minimum second center point abscissa and the maximum first center point abscissa among the first center point abscissas and the second center point abscissas of each second user image acquired in the first second acquisition duration, the processor 1001 further performs the following steps:
if the minimum second central point abscissa is less than or equal to the first reference point abscissa and the maximum first central point abscissa is equal to the first reference point abscissa, determining that the motion trajectory of the target user rotates towards a first direction;
and if the minimum second central point abscissa is equal to the second reference point abscissa and the maximum first central point abscissa is greater than or equal to the second reference point abscissa, determining that the motion trail of the target user rotates towards a second direction.
Further, here, it is to be noted that: an embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores the aforementioned computer program executed by the image data processing apparatus 1, and the computer program includes program instructions, and when the processor executes the program instructions, the description of the image data processing method in the embodiment corresponding to fig. 1 or fig. 5 can be executed, so that details are not repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiments of the computer storage medium to which the present invention relates, reference is made to the description of the method embodiments of the present invention.
Further, here, it is to be noted that: an embodiment of the present invention further provides a computer program product, and when an instruction in the computer program product is executed by a processor, the description of the image data processing method in the embodiment corresponding to fig. 1 or fig. 5 can be performed, and details will therefore not be repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiment of the computer program product to which the present invention relates, reference is made to the description of the method embodiments of the present invention.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the above-described apparatuses and units, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. An image data processing method characterized by comprising:
the method comprises the steps that a plurality of first user images corresponding to a target user are collected regularly within a preset first collection duration, and first position image data in each first user image are extracted;
recording position information of first position image data in each first user image in a display interface;
if the target user is determined to be in a preset stable state according to the position information of each first position image data in the display interface, identifying the motion track of the target user within a preset identification duration; the motion trajectory comprises a shaking motion; the shaking motion is identified from a first reference point abscissa, a second reference point abscissa, a minimum second central point abscissa and a maximum first central point abscissa; the first reference point abscissa refers to position information of the first position image data in the first user image in the stable state; the second reference point abscissa refers to position information of the second position image data in the first user image in the stable state; the maximum first central point abscissa refers to the maximum value of the first central point abscissas of the first region image data in the plurality of second user images; the minimum second central point abscissa refers to the minimum value of the second central point abscissas of the second region image data in the plurality of second user images; the plurality of second user images are acquired within the preset identification duration;
and searching an image operation instruction corresponding to the motion track in a preset mapping relation table, and carrying out image processing on the current user image in the display interface according to the image operation instruction.
2. The method according to claim 1, wherein, if it is determined that the target user is in a preset stable state according to the position information of each first portion image data in the display interface, before counting the total number of movements corresponding to each first portion image data within a preset counting time period, the method further comprises:
performing difference analysis on the position information of every two adjacent first part image data according to the position information of the first part image data in each first user image in the display interface to obtain difference analysis results respectively corresponding to every two adjacent first part image data; the two adjacent first position image data refer to two adjacent first position image data in acquisition time;
and if all the difference analysis results meet the preset image stabilization condition, determining that the target user is in a preset stable state.
3. The method according to claim 2, wherein performing difference analysis on the position information of every two adjacent first portion image data according to the position information of the first portion image data in each first user image in the display interface to obtain difference analysis results respectively corresponding to every two adjacent first portion image data comprises:
selecting two adjacent first part image data from the first part image data in each first user image as two target image data;
acquiring first position information and second position information of the two target image data in the display interface respectively; the first position information and the second position information comprise a center position coordinate and a first part transverse axis distance;
calculating a center position distance between the center position coordinate of the first position information and the center position coordinate of the second position information, and calculating a first difference ratio between the center position distance and a first position transverse axis distance in the first position information;
calculating an absolute value of a difference between a first location cross-axis distance in the first location information and a first location cross-axis distance in the second location information, and calculating a second difference ratio between the absolute value of the difference and the first location cross-axis distance in the first location information;
determining the first difference ratio and the second difference ratio as difference analysis results corresponding to the two target image data;
when every two adjacent first region image data in the first region image data are selected as two target image data, obtaining difference analysis results respectively corresponding to every two adjacent first region image data.
4. The method according to claim 2, wherein before determining that the target user is in a preset stable state if each difference analysis result satisfies a preset image stabilization condition, the method further comprises:
judging whether the first difference ratios in the difference analysis results are all smaller than or equal to a preset first ratio threshold value or not;
if the first difference ratios are smaller than or equal to the first ratio threshold, judging whether second difference ratios in the difference analysis results are smaller than or equal to a preset second ratio threshold;
and if the second difference ratios are smaller than or equal to the second ratio threshold, determining that the difference analysis results meet the preset image stabilization condition.
5. The method according to claim 1, wherein if it is determined that the target user is in a preset stable state according to the position information of each first position image data in the display interface, identifying the motion trajectory of the target user within a preset identification duration includes:
when the target user is in a preset stable state, collecting a plurality of second user images corresponding to the target user within a preset identification duration, and extracting first part image data and second part image data in each second user image;
calculating a first central point abscissa of the first part image data in each second user image in the display interface and a second central point abscissa of the second part image data in each second user image in the display interface; in a second user image, the first center point abscissa is smaller than the second center point abscissa;
and identifying the motion track of the target user according to the first central point abscissa and the second central point abscissa in each second user image.
6. The method of claim 5, wherein the preset identification duration comprises a plurality of second acquisition durations;
the identifying the motion track of the target user according to the first central point abscissa and the second central point abscissa in each second user image includes:
acquiring a first central point abscissa and a second central point abscissa in the first user image when the target user is in the stable state, and taking the first central point abscissa and the second central point abscissa in the first user image when the target user is in the stable state as a first reference point abscissa and a second reference point abscissa respectively;
selecting a minimum second central point abscissa and a maximum first central point abscissa from the first central point abscissa and the second central point abscissa of each second user image acquired within the first second acquisition duration;
if the minimum second central point abscissa is less than or equal to the first reference point abscissa and the maximum first central point abscissa is greater than or equal to the second reference point abscissa, accumulating the shaking head movement times of the target user once, and detecting the shaking head movement times of the target user in the next second acquisition time length so as to detect the total shaking head movement times of the target user in the preset identification time length;
and if the total shaking motion times of the target user reaches a preset time threshold value within the preset identification duration, determining that the motion trail of the target user is shaking motion.
7. The method of claim 6, further comprising, after selecting a minimum second center point abscissa and a maximum first center point abscissa, among the first center point abscissas and the second center point abscissas of the second user images acquired during the first second acquisition duration:
if the minimum second central point abscissa is less than or equal to the first reference point abscissa and the maximum first central point abscissa is equal to the first reference point abscissa, determining that the motion trajectory of the target user rotates towards a first direction;
and if the minimum second central point abscissa is equal to the second reference point abscissa and the maximum first central point abscissa is greater than or equal to the second reference point abscissa, determining that the motion trail of the target user rotates towards a second direction.
8. An image data processing apparatus characterized by comprising:
the acquisition and extraction module is used for regularly acquiring a plurality of first user images corresponding to a target user within a preset first acquisition time length and extracting first position image data in each first user image;
the position recording module is used for recording the position information of the first position image data in each first user image in the display interface;
the track recognition module is used for recognizing the motion track of the target user within a preset recognition duration if the target user is determined to be in a preset stable state according to the position information of the first position image data in the display interface; the motion trajectory comprises a shaking motion; the shaking motion is identified from a first reference point abscissa, a second reference point abscissa, a minimum second central point abscissa and a maximum first central point abscissa; the first reference point abscissa refers to position information of the first position image data in the first user image in the stable state; the second reference point abscissa refers to position information of the second position image data in the first user image in the stable state; the maximum first central point abscissa refers to the maximum value of the first central point abscissas of the first region image data in the plurality of second user images; the minimum second central point abscissa refers to the minimum value of the second central point abscissas of the second region image data in the plurality of second user images; the plurality of second user images are acquired within the preset recognition duration;
and the image processing module is used for searching an image operation instruction corresponding to the motion track in a preset mapping relation table and carrying out image processing on the current user image in the display interface according to the image operation instruction.
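Functionally, the image processing module is a table lookup from a recognized trajectory to an operation applied to the current frame. A minimal sketch, assuming a Python dictionary stands in for the preset mapping relation table; the trajectory keys and filter stubs are invented placeholders, not the patent's actual instruction set.

```python
from typing import Callable, Dict

def switch_filter(frame):
    """Placeholder: switch the filter applied to the current user image."""
    return frame

def rotate_view(frame):
    """Placeholder: adjust the displayed view."""
    return frame

# Hypothetical preset mapping relation table: motion trajectory -> operation.
OPERATION_TABLE: Dict[str, Callable] = {
    "shake": switch_filter,
    "turn_first_direction": rotate_view,
}

def process_current_image(trajectory, frame):
    op = OPERATION_TABLE.get(trajectory)  # look up the image operation instruction
    return op(frame) if op else frame     # unknown trajectory: leave frame unchanged
```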
9. The apparatus of claim 8, further comprising:
a difference analysis module, used for performing difference analysis on the position information of every two adjacent first position image data, according to the position information of the first position image data in each first user image in the display interface, to obtain a difference analysis result corresponding to every two adjacent first position image data; two adjacent first position image data refer to two first position image data that are adjacent in acquisition time;
and a state determination module, used for determining that the target user is in the preset stable state if each difference analysis result meets a preset image stabilization condition.
10. The apparatus of claim 9, wherein the difference analysis module comprises:
a target data selection unit, configured to select two adjacent first position image data, from among the first position image data in the first user images, as two target image data;
a position information acquisition unit, used for acquiring first position information and second position information of the two target image data in the display interface, respectively; the first position information and the second position information each comprise a center position coordinate and a first part horizontal-axis distance;
a first ratio calculation unit, configured to calculate a center position distance between the center position coordinate of the first position information and the center position coordinate of the second position information, and to calculate a first difference ratio between the center position distance and the first part horizontal-axis distance in the first position information;
a second ratio calculation unit, configured to calculate the absolute value of the difference between the first part horizontal-axis distance in the first position information and the first part horizontal-axis distance in the second position information, and to calculate a second difference ratio between that absolute value and the first part horizontal-axis distance in the first position information;
an analysis result determination unit, configured to determine the first difference ratio and the second difference ratio as the difference analysis result corresponding to the two target image data;
and an analysis result acquisition unit, configured to obtain the difference analysis result corresponding to every two adjacent first position image data once each pair of adjacent first position image data has been selected as the two target image data.
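The two ratios of claim 10 normalize inter-frame movement and size change by the detected part's horizontal extent, which makes the stability test roughly independent of how close the user stands to the camera. A sketch under the assumption that each position information is stored as a center coordinate plus that horizontal-axis distance:

```python
import math

def difference_ratios(pos_a, pos_b):
    """pos_a, pos_b: ((cx, cy), width) for two first position image data that
    are adjacent in acquisition time; width is the first part horizontal-axis
    distance and is assumed to be non-zero."""
    (cx_a, cy_a), w_a = pos_a
    (cx_b, cy_b), w_b = pos_b
    center_distance = math.hypot(cx_b - cx_a, cy_b - cy_a)
    first_ratio = center_distance / w_a   # movement relative to part size
    second_ratio = abs(w_a - w_b) / w_a   # size change relative to part size
    return first_ratio, second_ratio
```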
11. The apparatus of claim 9, further comprising:
a first judgment module, used for judging whether the first difference ratios in the difference analysis results are all smaller than or equal to a preset first ratio threshold;
a second judgment module, used for judging, if the first difference ratios are all smaller than or equal to the first ratio threshold, whether the second difference ratios in the difference analysis results are all smaller than or equal to a preset second ratio threshold;
and a condition satisfaction module, used for determining that each difference analysis result meets the preset image stabilization condition if each second difference ratio is smaller than or equal to the second ratio threshold.
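Claim 11 then simply thresholds every ratio pair. A sketch reusing difference_ratios from the previous example; the threshold values here are invented for illustration and are not the patent's preset values.

```python
def is_stable(positions, first_threshold=0.05, second_threshold=0.1):
    """positions: the per-image first position information, ordered by
    acquisition time; returns True if the preset image stabilization
    condition is met for every adjacent pair."""
    results = [difference_ratios(a, b) for a, b in zip(positions, positions[1:])]
    if not all(r1 <= first_threshold for r1, _ in results):
        return False  # some frame moved too far relative to the part size
    return all(r2 <= second_threshold for _, r2 in results)
```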
12. The apparatus of claim 8, wherein the trajectory recognition module comprises:
an image data extraction unit, used for acquiring a plurality of second user images corresponding to the target user within the preset identification duration when the target user is in the preset stable state, and extracting the first part image data and the second part image data from each second user image;
a horizontal coordinate calculation unit, used for calculating a first center point abscissa of the first part image data in each second user image in the display interface and a second center point abscissa of the second part image data in each second user image in the display interface; in each second user image, the first center point abscissa is smaller than the second center point abscissa;
and a motion trajectory recognition unit, used for recognizing the motion trajectory of the target user according to the first center point abscissa and the second center point abscissa in each second user image.
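The center point abscissa of claim 12 is just the horizontal midpoint of each part's detected region. A sketch assuming detections arrive as (x, y, w, h) bounding boxes, with the smaller abscissa reported first as the claim requires:

```python
def center_abscissas(first_box, second_box):
    """first_box, second_box: (x, y, w, h) bounding boxes of the first and
    second part image data in one second user image."""
    cx_a = first_box[0] + first_box[2] / 2.0   # horizontal midpoint of part 1
    cx_b = second_box[0] + second_box[2] / 2.0  # horizontal midpoint of part 2
    # Per claim 12, the first center point abscissa is the smaller of the two.
    return min(cx_a, cx_b), max(cx_a, cx_b)
```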
13. The apparatus of claim 12, wherein the preset identification duration comprises a plurality of second acquisition durations;
the motion trajectory recognition unit includes:
a reference point coordinate acquisition subunit, configured to acquire the first center point abscissa and the second center point abscissa in the first user image when the target user is in the stable state, and to take them as the first reference point abscissa and the second reference point abscissa, respectively;
an abscissa selection subunit, configured to select the minimum second center point abscissa and the maximum first center point abscissa from the first center point abscissas and the second center point abscissas of the second user images acquired within the first of the second acquisition durations;
a movement count accumulation subunit, configured to increment the target user's head-shaking movement count by one if the minimum second center point abscissa is less than or equal to the first reference point abscissa and the maximum first center point abscissa is greater than or equal to the second reference point abscissa, and to continue detection in the next second acquisition duration, so as to obtain the target user's total head-shaking movement count within the preset identification duration;
and a first motion determination subunit, configured to determine that the motion trajectory of the target user is a head-shaking motion if the target user's total head-shaking movement count reaches a preset count threshold within the preset identification duration.
14. The apparatus of claim 13, wherein the motion trajectory recognition unit further comprises:
a second motion determination subunit, configured to determine that the motion trajectory of the target user is a rotation towards a first direction if the minimum second center point abscissa is less than or equal to the first reference point abscissa and the maximum first center point abscissa is equal to the first reference point abscissa;
and a third motion determination subunit, configured to determine that the motion trajectory of the target user is a rotation towards a second direction if the minimum second center point abscissa is equal to the second reference point abscissa and the maximum first center point abscissa is greater than or equal to the second reference point abscissa.
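Claims 13 and 14 repeat the per-window classification across every second acquisition duration and compare the accumulated shake count with the count threshold. A sketch reusing classify_window from the earlier example; reporting a rotation as soon as one window shows it is an assumption about behavior the claims leave open.

```python
def recognize_trajectory(windows, ref_first_x, ref_second_x, count_threshold=2):
    """windows: one (first_xs, second_xs) pair per second acquisition duration
    within the preset identification duration; count_threshold stands in for
    the preset count threshold."""
    shakes = 0
    for first_xs, second_xs in windows:
        label = classify_window(first_xs, second_xs, ref_first_x, ref_second_x)
        if label == "shake":
            shakes += 1    # accumulate one head-shaking movement
        elif label != "none":
            return label   # assumed: rotations are reported immediately
    return "shake" if shakes >= count_threshold else "none"
```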
15. An electronic device, comprising a processor and a memory, the processor being coupled to the memory, wherein the memory is configured to store program code, and the processor is configured to invoke the program code to perform the method of any one of claims 1-7.
16. A computer storage medium, characterized in that the computer storage medium stores a computer program, the computer program comprising program instructions which, when executed by a processor, perform the method according to any one of claims 1-7.
CN201710531661.0A 2017-06-30 2017-06-30 Image data processing method and device, electronic equipment and storage medium Active CN107333025B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710531661.0A CN107333025B (en) 2017-06-30 2017-06-30 Image data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710531661.0A CN107333025B (en) 2017-06-30 2017-06-30 Image data processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107333025A CN107333025A (en) 2017-11-07
CN107333025B 2020-04-03

Family

ID=60198372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710531661.0A Active CN107333025B (en) 2017-06-30 2017-06-30 Image data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107333025B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113760915B (en) * 2021-09-07 2024-07-19 百果园技术(新加坡)有限公司 Data processing method, device, equipment and medium
CN115185381A (en) * 2022-09-15 2022-10-14 北京航天奥祥通风科技股份有限公司 Method and device for controlling terminal based on motion trail of head

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140061266A (en) * 2012-11-11 2014-05-21 삼성전자주식회사 Apparartus and method for video object tracking using multi-path trajectory analysis

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279253A (en) * 2013-05-23 2013-09-04 广东欧珀移动通信有限公司 Method and terminal device for theme setting
CN106101529A (en) * 2016-06-07 2016-11-09 广东欧珀移动通信有限公司 A kind of camera control method and mobile terminal

Also Published As

Publication number Publication date
CN107333025A (en) 2017-11-07

Similar Documents

Publication Publication Date Title
CN106170978B (en) Depth map generation device, method and non-transitory computer-readable medium
US11671712B2 (en) Apparatus and methods for image encoding using spatially weighted encoding quality parameters
CN110175514B (en) Face brushing payment prompting method, device and equipment
CN106709932B (en) Face position tracking method and device and electronic equipment
US9179071B2 (en) Electronic device and image selection method thereof
US11070728B2 (en) Methods and systems of multi-camera with multi-mode monitoring
KR101739245B1 (en) Selection and tracking of objects for display partitioning and clustering of video frames
US10062010B2 (en) System for building a map and subsequent localization
US11503205B2 (en) Photographing method and device, and related electronic apparatus
US11074451B2 (en) Environment-based application presentation
CN110533694B (en) Image processing method, device, terminal and storage medium
CN103353935A (en) 3D dynamic gesture identification method for intelligent home system
CN111757175A (en) Video processing method and device
US20150172634A1 (en) Dynamic POV Composite 3D Video System
CN103295028A (en) Gesture operation control method, gesture operation control device and intelligent display terminal
TWI489326B (en) Operating area determination method and system
CN110345610B (en) Control method and device of air conditioner and air conditioning equipment
CN107333025B (en) Image data processing method and device, electronic equipment and storage medium
CN105247567A (en) Image refocusing
CN111629242B (en) Image rendering method, device, system, equipment and storage medium
EP4090000A1 (en) Method and device for image processing, electronic device, and storage medium
CN108140124B (en) Prompt message determination method and device and electronic equipment
CN110858409A (en) Animation generation method and device
JP6305856B2 (en) Image processing apparatus, image processing method, and program
CN107197161B (en) Image data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201127

Address after: Room 115, area C, 1 / F, building 8, yard 1, yaojiayuan South Road, Chaoyang District, Beijing

Patentee after: Beijing LEMI Technology Co.,Ltd.

Address before: 2nd Floor, No. 33 Xiaoying East Road, Haidian District, Beijing 100085

Patentee before: BEIJING KINGSOFT INTERNET SECURITY SOFTWARE Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230825

Address after: 100000 3870A, 3rd Floor, Building 4, No. 49 Badachu Road, Shijingshan District, Beijing

Patentee after: Beijing Jupiter Technology Co.,Ltd.

Address before: Room 115, area C, 1 / F, building 8, yard 1, yaojiayuan South Road, Chaoyang District, Beijing

Patentee before: Beijing LEMI Technology Co.,Ltd.
