CN117668298B - Artificial intelligence method and system for application data analysis - Google Patents

Artificial intelligence method and system for application data analysis

Publication number: CN117668298B
Application number: CN202311725556.2A
Authority: CN (China)
Prior art keywords: behavior, target, image, image frame, video
Legal status: Active (application granted)
Other languages: Chinese (zh)
Other versions: CN117668298A (application publication)
Inventors: 孙振华, 邢嘉, 赵咪, 马琳
Original and current assignee: Qingdao Vocational And Technical College Of Hotel Management
Application filed by Qingdao Vocational And Technical College Of Hotel Management
Priority to CN202311725556.2A

Classifications

    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Technical field (landscape): Image Analysis
Abstract

The invention relates to the technical field of big data, and in particular discloses an artificial intelligence method and system for application data analysis. The method comprises the following steps: converting a video to be analyzed into an image frame set; reading an evaluation rule according to the video type and determining the behavior in each image frame according to the evaluation rule, to obtain a behavior sequence corresponding to the image frame set; receiving a target behavior uploaded by a user in real time, traversing the behavior sequence according to the target behavior, and determining the target moment; and selecting a subset from the image frame set centered on the target moment, then recording and displaying the time period. The system comprises a video conversion module, a behavior determination module, a target moment determination module and a subset selection module. The method and system not only enable a quick query of the target moment but also retrieve the target time period containing it, saving the user's time, effectively improving the user experience, and facilitating the promotion of television dramas and short videos.

Description

Artificial intelligence method and system for application data analysis
Technical Field
The invention relates to the technical field of big data, and in particular to an artificial intelligence method and system for application data analysis.
Background
Data is a representational form and carrier of information; it can be symbols, text, numbers, voice, images, videos, pictures, increasingly abundant geographic position information, and the like. With the rapid development of the Internet, data is generated faster and at a larger scale, and big data means are urgently needed to analyze and process it so that effective target information can be extracted quickly.
At present, television dramas and short videos are usually stored and played back in chronological order. When a user needs to find a target segment to watch but cannot determine the time period in which it appears, the user has to watch a long stretch of video or drag the progress bar to search for it. This consumes considerable time, degrades the user experience, and is not conducive to the promotion of television dramas and short videos.
Disclosure of Invention
The invention aims to provide an artificial intelligence method and system for application data analysis, so as to solve the problems raised in the background art above.
In order to achieve the above purpose, the present invention provides the following technical solutions:
An artificial intelligence method for application data analysis, comprising the following steps:
receiving a video to be analyzed containing a video type, and converting the video to be analyzed into an image frame set;
reading an evaluation rule according to the video type, and determining behaviors in each image frame according to the evaluation rule, to obtain a behavior sequence corresponding to the image frame set;
receiving target behaviors uploaded by a user in real time, traversing the behavior sequence according to the target behaviors, and determining target moments;
selecting a subset from the image frame set by taking the target moment as a center, and recording and displaying the time period;
wherein the data structure of the behavior is text.
As a further technical solution of the present invention, the step of receiving a video to be analyzed containing a video type and converting the video to be analyzed into an image frame set comprises:
receiving a video to be analyzed containing a video type; the video type is used for representing video classification information of the video to be analyzed;
extracting the image track of the video to be analyzed, and sorting by time to obtain an image group;
sequentially calculating the similarity of adjacent images, and screening the image group according to the similarity to obtain an image frame set;
wherein the similarity of any adjacent elements in the image frame set is smaller than a preset threshold value.
As a further technical solution of the present invention, the step of reading an evaluation rule according to a video type, determining behaviors in each image frame according to the evaluation rule, and obtaining a behavior sequence corresponding to an image frame set includes:
reading the video type, and acquiring a behavior table corresponding to the video type by big data technology; the behavior table comprises behavior items and evaluation rule items; the evaluation rule is used for representing the spatial position relation of each contour in the image;
sequentially reading image frames from the image frame set, comparing and identifying the image frames according to the behavior table, and determining a matching behavior; the matching behavior comprises a null behavior;
and counting the matching behaviors according to the time sequence of the image frames to obtain a behavior sequence.
As a further technical solution of the present invention, the step of receiving, in real time, the target behavior uploaded by the user, traversing the behavior sequence according to the target behavior, and determining the target time includes:
receiving target behaviors uploaded by a user in real time, traversing the behavior sequence according to the target behaviors, and calculating the matching degree;
when the matching degree reaches a preset matching threshold value, marking a behavior in a behavior sequence;
the corresponding time of the behavior of the marker is queried as the target time.
As a further technical solution of the present invention, the step of receiving, in real time, the target behavior uploaded by the user, traversing the behavior sequence according to the target behavior, and determining the target time further includes:
when the target moment is empty, opening an image receiving port;
acquiring a reference image uploaded by the user through the image receiving port, and querying the image frames corresponding to the moment points of the behavior sequence;
inputting the reference image and all the image frames into a trained image comparison model, and outputting a comparison result;
and determining the target moment according to the comparison result.
As a still further technical solution of the present invention, the step of selecting a subset from the image frame set with the target moment as a center and recording the time period comprises:
reading an image frame at a target moment, and determining a background contour of the image frame based on a background recognition model;
acquiring image parameters of a background contour; the image parameters include contrast, saturation, color temperature, and hue;
determining a parameter range according to the image parameters and a preset fluctuation proportion;
selecting image frames within a preset time span centered on the target moment, and limiting their range according to the parameter range to obtain a subset;
the corresponding time of the image frames in the subset is recorded, and the time period is calculated and displayed.
It is another object of the present invention to provide an artificial intelligence system for application data analysis, the system comprising:
the video conversion module, used for receiving the video to be analyzed containing the video type and converting the video to be analyzed into an image frame set;
the behavior determination module, used for reading the evaluation rule according to the video type and determining the behaviors in each image frame according to the evaluation rule, to obtain a behavior sequence corresponding to the image frame set;
the target moment determination module, used for receiving target behaviors uploaded by the user in real time, traversing the behavior sequence according to the target behaviors, and determining the target moment;
the subset selection module, used for selecting a subset from the image frame set by taking the target moment as a center, and recording and displaying the time period;
wherein the data structure of the behavior is text.
As a further technical solution of the present invention, the behavior determining module includes:
the behavior table acquisition unit, used for reading the video type and acquiring a behavior table corresponding to the video type by big data technology; the behavior table comprises behavior items and evaluation rule items; the evaluation rule is used for representing the spatial position relation of each contour in the image;
the comparison and identification unit, used for sequentially reading the image frames from the image frame set, comparing and identifying the image frames according to the behavior table, and determining the matching behavior; the matching behavior comprises a null behavior;
and the matching behavior statistics unit, used for counting the matching behaviors according to the time sequence of the image frames to obtain a behavior sequence.
As a further technical solution of the present invention, the target time determining module includes:
the matching degree calculation unit, used for receiving target behaviors uploaded by the user in real time, traversing the behavior sequence according to the target behaviors, and calculating the matching degree;
the marking behavior unit, used for marking a behavior in the behavior sequence when the matching degree reaches a preset matching threshold value;
and the query target time unit, used for querying the corresponding time of the marked behavior as the target time.
As a further technical solution of the present invention, the subset selection module includes:
the background contour recognition unit, used for reading the image frame at the target moment and determining the background contour of the image frame based on the background recognition model;
the image parameter acquisition unit, used for acquiring image parameters of the background contour; the image parameters include contrast, saturation, color temperature, and hue;
the parameter range determination unit, used for determining a parameter range according to the image parameters and a preset fluctuation proportion;
the range limiting unit, used for selecting image frames within a preset time span centered on the target moment, and limiting their range according to the parameter range to obtain a subset;
and the record display unit, used for recording the corresponding times of the image frames in the subset, and calculating and displaying the time period.
Compared with the prior art, the invention has the following beneficial effects: the method and system convert the video to be analyzed into an image frame set; compare and identify the image frames against the behavior table to obtain a matched behavior sequence; receive the target behavior uploaded by the user in real time, traverse the behavior sequence according to the target behavior, and calculate the matching degree to obtain the target moment, realizing a quick and accurate query of the target moment; and select a subset from the image frame set centered on the target moment and record the time period. The invention not only enables a quick query of the target moment but also retrieves the target time period containing it, saving the user's time, effectively improving the user experience, and facilitating the promotion of television dramas and short videos.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below; obviously, the drawings described below show only some embodiments of the present invention.
FIG. 1 is a flow diagram of an artificial intelligence method for application data analysis.
FIG. 2 is a flow chart of the steps of converting a video into an image frame set in the artificial intelligence method for application data analysis.
FIG. 3 is a flow chart of the steps of converting the image frame set into a behavior sequence in the artificial intelligence method for application data analysis.
FIG. 4 is a flow chart of the steps of determining the target moment in the artificial intelligence method for application data analysis.
FIG. 5 is a flow chart of the steps of selecting a subset in the artificial intelligence method for application data analysis.
FIG. 6 is a block diagram of an artificial intelligence system for application data analysis.
FIG. 7 is a block diagram of the behavior determination module in the artificial intelligence system for application data analysis.
Detailed Description
In order to make the technical problems to be solved, the technical solutions, and the beneficial effects clearer, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, an embodiment of the present invention provides an artificial intelligence method for application data analysis, comprising the following steps:
Step S100, receiving a video to be analyzed containing a video type, and converting the video to be analyzed into an image frame set. If the video to be analyzed is a television drama or a movie, the video types include romance, martial arts, horror, and the like; if the video to be analyzed is a short video, the video types include food videos, couples' daily-life videos, and the like.
Step S200, reading an evaluation rule according to the video type, and determining behaviors in each image frame according to the evaluation rule to obtain a behavior sequence corresponding to the image frame set;
step S300, receiving target behaviors uploaded by a user in real time, traversing the behavior sequence according to the target behaviors, and determining target moments;
step S400, selecting a subset from the image frame sets by taking the target moment as the center, and recording and displaying the time period;
The data structure of the behavior is text, such as "kissing" and "hugging" in romance films.
In the embodiment of the invention, an episode or a segment of a television drama, a movie, or a short video can be identified and analyzed. A video to be analyzed containing a video type is received and converted into an image frame set. If the video to be analyzed is a television drama or a movie, the video types include romance, martial arts, horror, and the like; if the video to be analyzed is a short video, the video types include food videos, couples' daily-life videos, and the like; each video type corresponds to an evaluation rule. The evaluation rule is read according to the video type, and the behavior in each image frame is determined according to the evaluation rule; the data structure of the behavior is text, for example, the behaviors in romance films include kissing, hugging, and the like. A behavior sequence corresponding to the image frame set is thus obtained. The target behavior uploaded by the user is received in real time, the behavior sequence is traversed according to the target behavior, and the target moment is determined. A subset is selected from the image frame set centered on the target moment, and the time period is recorded and displayed, so that the target segment video containing the target moment can be obtained.
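The four steps S100–S400 described above can be sketched end to end. The following is an illustration only, not the patent's implementation: `behavior_of_frame` stands in for the evaluation-rule matching of step S200, and the function names, the exact-match test, and the fixed time span are all assumptions.

```python
from typing import Callable, List, Optional, Tuple

def analyze_video(frames: list, frame_times: List[float],
                  behavior_of_frame: Callable, target_behavior: str,
                  span: float = 2.0) -> Optional[Tuple[float, float]]:
    # S200: determine the behavior (a text label) in each image frame.
    sequence = [behavior_of_frame(f) for f in frames]
    # S300: traverse the behavior sequence to find the target moment.
    target_time = next((t for b, t in zip(sequence, frame_times)
                        if b == target_behavior), None)
    if target_time is None:
        return None
    # S400: take the frames around the target moment and report their period.
    times = [t for t in frame_times if abs(t - target_time) <= span / 2]
    return (min(times), max(times))
```

With four one-second frames and a target behavior found at t=2, a span of 2.0 seconds yields the period (1, 3).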
As shown in fig. 2, as a preferred embodiment of the present invention, the step of receiving a video to be analyzed including a video type, and converting the video to be analyzed into a set of image frames includes:
step S101, receiving a video to be analyzed containing a video type; the video type is used for representing video classification information of the video to be analyzed;
Step S102, extracting an image track of a video to be analyzed, and sorting according to time to obtain an image group;
Step S103, sequentially calculating the similarity of adjacent images, and carrying out data screening on the image group according to the similarity to obtain an image frame set;
The similarity of any adjacent elements in the image frame set is smaller than a preset threshold value.
In the embodiment of the invention, a video to be analyzed containing a video type is received; the video type is used for representing the video classification information of the video to be analyzed. The audio track of the video to be analyzed is removed and only the image track is extracted, and the images are sorted by time to obtain an image group. The similarity of adjacent images is then calculated in sequence, and the image group is screened according to the similarity to obtain an image frame set, which can greatly reduce the amount of subsequent analysis; the similarity of any adjacent elements in the image frame set is smaller than a preset threshold value.
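The patent does not specify the similarity measure used in step S103. The sketch below uses mean absolute pixel difference as a stand-in, keeping a frame only when its similarity to the last kept frame falls below the preset threshold; the function and parameter names are hypothetical.

```python
from typing import List
import numpy as np

def filter_frames(frames: List[np.ndarray], threshold: float = 0.95) -> List[np.ndarray]:
    """Screen an image group so adjacent kept frames are sufficiently different."""
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        # Normalized similarity in [0, 1]: 1.0 means pixel-identical frames.
        diff = np.abs(frame.astype(np.float64) - kept[-1].astype(np.float64))
        similarity = 1.0 - diff.mean() / 255.0
        if similarity < threshold:  # keep only sufficiently different frames
            kept.append(frame)
    return kept
```

A duplicate black frame is dropped, while a white frame following a black one survives the screening.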
As shown in fig. 3, as a preferred embodiment of the present invention, the step of reading an evaluation rule according to a video type, determining behaviors in each image frame according to the evaluation rule, and obtaining a behavior sequence corresponding to a set of image frames includes:
Step S201, reading video types, and acquiring a behavior table corresponding to the video types according to a big data technology; the behavior table comprises behavior items and evaluation rule items; the evaluation rule is used for representing the spatial position relation of each contour in the image;
step S202, sequentially reading image frames from an image frame set, comparing and identifying the image frames according to a behavior table, and determining a matching behavior; the matching behavior comprises a null behavior;
step S203, statistics of matching behaviors is carried out according to the time sequence of the image frames, and a behavior sequence is obtained.
In the embodiment of the invention, the video type is read; each video type corresponds to a behavior table, and the behavior table corresponding to the video type is obtained by big data technology. The behavior table comprises behavior items and evaluation rule items; the evaluation rule is used for representing the spatial position relation of each contour in the image. Image frames are read sequentially from the image frame set and compared and identified according to the behavior table: the behavior in each image frame is identified and compared with each behavior item in the behavior table, and when the similarity reaches a set threshold value, a matching behavior is determined. The matching behavior comprises a null behavior, which indicates that no corresponding behavior item was matched. The matching behaviors are counted according to the time sequence of the image frames to obtain a behavior sequence, which is a set of behavior item texts.
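The shape of the behavior table and its evaluation rules is not fixed by the patent beyond "spatial position relations of contours". As a minimal sketch, each rule can be modeled as a predicate over per-frame contour features, with `None` playing the role of the null behavior; all names here are assumptions.

```python
from typing import Callable, Dict, List, Optional

# Hypothetical behavior table: behavior text -> evaluation rule, where the rule
# is a predicate on extracted contour features of one image frame.
BehaviorTable = Dict[str, Callable[[dict], bool]]

def build_behavior_sequence(frame_features: List[dict],
                            table: BehaviorTable) -> List[Optional[str]]:
    """Match each frame's features against the behavior table, in time order."""
    sequence: List[Optional[str]] = []
    for features in frame_features:
        # First behavior whose evaluation rule matches; None is the null behavior.
        matched = next((behavior for behavior, rule in table.items()
                        if rule(features)), None)
        sequence.append(matched)
    return sequence
```

A frame whose contours overlap might match a "hugging" rule, while a frame matching no rule yields the null behavior.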
As shown in fig. 4, as a preferred embodiment of the present invention, the step of receiving, in real time, the target behavior uploaded by the user, traversing the behavior sequence according to the target behavior, and determining the target time includes:
Step S301, receiving target behaviors uploaded by a user in real time, traversing the behavior sequence according to the target behaviors, and calculating the matching degree;
step S302, marking a behavior in a behavior sequence when the matching degree reaches a preset matching threshold;
in step S303, the corresponding time of the behavior of the marker is queried as the target time.
In the embodiment of the invention, the target behavior uploaded by the user is received in real time and the behavior sequence is traversed according to the target behavior: the target behavior text uploaded by the user is matched against each behavior item text in the behavior sequence and the text matching degree is calculated. When the matching degree reaches a preset matching threshold value, the behavior is marked in the behavior sequence, and the corresponding time of the marked behavior is queried as the target moment.
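The patent does not define how the text matching degree is computed. A minimal sketch using Python's `difflib.SequenceMatcher.ratio()` as a stand-in metric, with hypothetical names throughout:

```python
from difflib import SequenceMatcher
from typing import List, Optional

def find_target_time(sequence: List[Optional[str]], frame_times: List[float],
                     target_behavior: str,
                     match_threshold: float = 0.8) -> Optional[float]:
    """Traverse the behavior sequence and return the time of the first entry
    whose text matching degree reaches the threshold."""
    for behavior, t in zip(sequence, frame_times):
        if behavior is None:
            continue  # null behavior: nothing to match against
        # Matching degree in [0, 1]; the real metric is not given in the patent.
        degree = SequenceMatcher(None, target_behavior, behavior).ratio()
        if degree >= match_threshold:
            return t
    return None  # empty target moment triggers the image-comparison fallback
```

An unmatched target behavior returns `None`, which corresponds to the "target moment is empty" case handled in steps S304–S307.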
As a preferred embodiment of the present invention, the step of receiving, in real time, the target behavior uploaded by the user, traversing the behavior sequence according to the target behavior, and determining the target time further includes:
step S304, when the target moment is empty, opening the image receiving port;
Step S305, acquiring a reference image uploaded by a user based on an image receiving port, and inquiring an image frame corresponding to a time point of a behavior sequence;
Step S306, inputting the reference image and all the image frames into a trained image comparison model, and outputting a comparison result;
step S307, determining the target moment according to the comparison result.
In the embodiment of the invention, when the target moment is empty, the target behavior text uploaded by the user has not been successfully matched with any text in the behavior sequence, and the target moment cannot be determined from the uploaded target behavior text. In this case, an image receiving port is opened and the user uploads a reference image of the target moment. The reference image uploaded by the user is acquired through the image receiving port, and the image frames corresponding to the moment points of the behavior sequence are queried; the reference image and all these image frames are input into a trained image comparison model, and a comparison result is output; the target moment is then determined according to the comparison result.
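The patent relies on a trained image comparison model, which is not described further. As a crude stand-in for illustration only, one can rank candidate frames by grayscale-histogram similarity to the reference image; a real system would use a learned embedding instead.

```python
from typing import List, Optional
import numpy as np

def compare_images(reference: np.ndarray, frames: List[np.ndarray],
                   frame_times: List[float]) -> Optional[float]:
    """Return the time of the frame most similar to the reference image."""
    ref_hist, _ = np.histogram(reference, bins=32, range=(0, 256), density=True)
    best_time, best_score = None, -1.0
    for frame, t in zip(frames, frame_times):
        hist, _ = np.histogram(frame, bins=32, range=(0, 256), density=True)
        # Histogram intersection in [0, 1]; larger means more similar.
        score = np.minimum(ref_hist, hist).sum() * (256 / 32)
        if score > best_score:
            best_score, best_time = score, t
    return best_time
```

A reference image identical in brightness distribution to one candidate frame selects that frame's time as the target moment.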
As shown in fig. 5, as a preferred embodiment of the present invention, the step of selecting a subset from the image frame set with the target time as the center, and recording a period includes:
Step S401, reading an image frame at a target moment, and determining a background contour of the image frame based on a background recognition model;
step S402, obtaining image parameters of a background contour; the image parameters include contrast, saturation, color temperature, and hue;
step S403, determining a parameter range according to the image parameters and a preset fluctuation proportion;
Step S404, selecting an image frame with a preset time span by taking a target moment as a center, and performing range limiting on the image frame according to a parameter range to obtain a subset;
In step S405, the corresponding time of the image frame in the subset is recorded, and the period is calculated and displayed.
In the embodiment of the invention, the image frame at the target moment is read, and its background contour is determined based on a background recognition model. The image parameters of the background contour at the target moment are acquired; the image parameters comprise contrast, saturation, color temperature, hue, and the like. A parameter range is determined according to the image parameters and a preset fluctuation proportion. Image frames within a preset time span centered on the target moment are selected and limited according to the parameter range to obtain a subset. The corresponding times of the image frames in the subset are recorded, and the time period is calculated and displayed, so that the target segment video containing the target moment can be obtained.
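Steps S401–S405 can be sketched as filtering frames by both time span and parameter range. The sketch below models only two of the named parameters and interprets the "fluctuation proportion" as a relative band around the target frame's values; the class, names, and defaults are assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FrameParams:
    time: float        # seconds into the video
    contrast: float
    saturation: float  # color temperature and hue would be handled the same way

def select_subset(frames: List[FrameParams], target_time: float,
                  span: float = 10.0, fluctuation: float = 0.2) -> List[FrameParams]:
    """Keep frames near the target moment whose parameters stay in range."""
    target = min(frames, key=lambda f: abs(f.time - target_time))

    def in_range(value: float, center: float) -> bool:
        # Parameter range = target value +/- preset fluctuation proportion.
        return abs(value - center) <= fluctuation * abs(center)

    return [f for f in frames
            if abs(f.time - target_time) <= span / 2
            and in_range(f.contrast, target.contrast)
            and in_range(f.saturation, target.saturation)]

def display_period(subset: List[FrameParams]) -> Tuple[float, float]:
    """The displayed time period spans the earliest and latest kept frames."""
    times = [f.time for f in subset]
    return (min(times), max(times))
```

A frame with a very different contrast is excluded even if it falls inside the time span, so scene changes near the target moment are cut from the displayed period.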
Another object of the present invention, as shown in fig. 6, is to provide an artificial intelligence system for application data analysis, the system comprising:
The video conversion module 100 is configured to receive a video to be analyzed containing a video type, and convert the video to be analyzed into an image frame set;
The behavior determining module 200 is configured to read an evaluation rule according to a video type, determine behaviors in each image frame according to the evaluation rule, and obtain a behavior sequence corresponding to the image frame set;
the target moment determining module 300 is configured to receive, in real time, a target behavior uploaded by a user, traverse the behavior sequence according to the target behavior, and determine a target moment;
a subset selecting module 400, configured to select a subset from the image frame sets with the target time as a center, and record and display a period;
wherein the data structure of the behavior is text.
In the embodiment of the invention, an artificial intelligence system for application data analysis is provided. The video conversion module 100 receives the video to be analyzed containing the video type and converts it into an image frame set; the behavior determination module 200 reads the evaluation rule according to the video type and determines the behaviors in each image frame according to the evaluation rule, obtaining a behavior sequence corresponding to the image frame set; the target moment determination module 300 receives the target behavior uploaded by the user in real time, traverses the behavior sequence according to the target behavior, and determines the target moment; the subset selection module 400 selects a subset from the image frame set centered on the target moment, and records and displays the time period. The user can then acquire the target segment video containing the target moment.
As shown in fig. 7, as a preferred embodiment of the present invention, the behavior determining module 200 includes:
A behavior table obtaining unit 201, configured to read a video type, and obtain a behavior table corresponding to the video type according to a big data technology; the behavior table comprises behavior items and evaluation rule items; the evaluation rule is used for representing the spatial position relation of each contour in the image;
the comparison and identification unit 202 is configured to sequentially read image frames from the image frame set, perform comparison and identification on the image frames according to the behavior table, and determine a matching behavior; the matching behavior comprises a null behavior;
The matching behavior statistics unit 203 is configured to count matching behaviors according to a time sequence of the image frames, and obtain a behavior sequence.
As a preferred embodiment of the present invention, the target time determining module 300 includes:
The matching degree calculating unit 301 is configured to receive, in real time, a target behavior uploaded by a user, traverse the behavior sequence according to the target behavior, and calculate a matching degree;
A marking behavior unit 302, configured to mark a behavior in a behavior sequence when the matching degree reaches a preset matching threshold;
and a query target time unit 303, configured to query the corresponding time of the marked behavior as a target time.
In the embodiment of the present invention, the target moment determination module 300 includes the matching degree calculation unit 301, the marking behavior unit 302, and the query target time unit 303. The matching degree calculation unit 301 receives the target behavior uploaded by the user in real time (the uploaded content is in text format) and calculates the matching degree by traversing the behavior sequence according to the target behavior; when the matching degree reaches a preset matching threshold, the marking behavior unit 302 marks the behavior in the behavior sequence, and the query target time unit 303 queries the corresponding time of the marked behavior as the target moment.
As a preferred embodiment of the present invention, the subset selection module 400 includes:
A background contour recognition unit 401, configured to read an image frame at a target time, and determine a background contour of the image frame based on a background recognition model;
an image parameter acquiring unit 402, configured to acquire an image parameter of a background contour; the image parameters include contrast, saturation, color temperature, and hue;
a parameter range determining unit 403, configured to determine a parameter range according to the image parameter and a preset fluctuation ratio;
The range limiting unit 404 is configured to select an image frame with a preset time span with the target moment as a center, and perform range limiting on the image frame according to the parameter range to obtain a subset;
a record display unit 405 for recording the corresponding time of the image frames in the subset, and calculating and displaying the time period.
The functions that can be achieved by the artificial intelligence method for application data analysis described above are all accomplished by a computer device comprising one or more processors and one or more memories, wherein at least one program code is stored in the one or more memories, and the program code is loaded and executed by the one or more processors to achieve the functions of the artificial intelligence method for application data analysis.
The processor fetches instructions from the memory one by one, decodes each instruction, and performs the corresponding operation as required, generating a series of control commands that make all parts of the computer work automatically, continuously and in coordination as an organic whole, thereby realizing program input, data input, computation and result output; the arithmetic and logic operations arising in this process are completed by the arithmetic unit. The memory includes a Read-Only Memory (ROM) for storing the computer program, and a protection device is arranged outside the memory.
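The fetch-decode-execute cycle described above can be illustrated with a toy interpreter. This is purely a teaching sketch; the opcodes and the accumulator model are invented for illustration and are not part of the patent.

```python
def run(program, memory):
    """Minimal fetch-decode-execute loop. Instructions are
    (opcode, operand) pairs; 'acc' plays the role of the arithmetic
    unit's accumulator register.
    """
    acc, pc = 0, 0
    while pc < len(program):
        opcode, operand = program[pc]      # fetch the next instruction
        pc += 1
        if opcode == "LOAD":               # decode, then execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            break
    return memory
```

For instance, loading memory cell 0, adding cell 1 and storing the result in cell 2 mirrors the "input, operation, output" flow the paragraph describes.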
For example, the computer program may be divided into one or more modules that are stored in the memory and executed by the processor to implement the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, and the segments are used to describe the execution of the computer program in the terminal device.
Those skilled in the art will appreciate that the foregoing description of the service device is merely an example and is not limiting; the device may include more or fewer components than described, combine certain components, or use different components, and may, for example, include input/output devices, network access devices, buses, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the terminal device described above and connects the various parts of the entire user terminal through various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required for at least one function (such as an information acquisition template display function or a product information release function), and the data storage area may store data created according to the use of the system (such as product information acquisition templates corresponding to different product types, or the product information to be released by different product providers). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The modules/units integrated in the terminal device, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the modules/units of the systems of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the functions of the respective system embodiments described above. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structural or process transformation made using this specification, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (7)

1. An artificial intelligence method for applying data analysis, comprising the steps of:
receiving a video to be analyzed containing a video type, and converting the video to be analyzed into an image frame set;
Reading an evaluation rule according to the video type, and determining behaviors in each image frame according to the evaluation rule to obtain a behavior sequence corresponding to the image frame set;
receiving target behaviors uploaded by a user in real time, traversing the behavior sequence according to the target behaviors, and determining target moments;
Selecting a subset from the image frame set by taking the target moment as a center, and recording and displaying the time period;
wherein, the data structure of the behavior is text;
The step of selecting a subset from the image frame set by taking the target moment as a center and recording the time period comprises the following steps:
reading an image frame at a target moment, and determining a background contour of the image frame based on a background recognition model;
acquiring image parameters of a background contour; the image parameters include contrast, saturation, color temperature, and hue;
determining a parameter range according to the image parameters and a preset fluctuation proportion;
Selecting an image frame with a preset time span by taking a target moment as a center, and performing range limiting on the image frame according to a parameter range to obtain a subset;
recording corresponding moments of image frames in the subset, and calculating and displaying time periods;
The step of receiving a video to be analyzed containing a video type and converting the video to be analyzed into an image frame set comprises the following steps:
Receiving a video to be analyzed containing a video type; the video type is used for representing video classification information of the video to be analyzed;
Extracting the image track of the video to be analyzed, and sorting by time to obtain an image group;
Sequentially calculating the similarity of adjacent images, and carrying out data screening on the image group according to the similarity to obtain an image frame set;
The similarity of any adjacent elements in the image frame set is smaller than a preset threshold value.
2. The artificial intelligence method for applying data analysis according to claim 1, wherein the step of reading the evaluation rule according to the video type, determining the behavior in each image frame according to the evaluation rule, and obtaining the behavior sequence corresponding to the image frame set comprises:
Reading the video type, and acquiring a behavior table corresponding to the video type according to a big data technology; the behavior table comprises behavior items and evaluation rule items; the evaluation rule is used for representing the spatial position relation of each contour in the image;
sequentially reading image frames from the image frame set, comparing and identifying the image frames according to the behavior table, and determining a matching behavior; the matching behavior comprises a null behavior;
and counting the matching behaviors according to the time sequence of the image frames to obtain a behavior sequence.
3. The artificial intelligence method for applying data analysis according to claim 1, wherein the step of receiving the target behavior uploaded by the user in real time, traversing the behavior sequence according to the target behavior, and determining the target time comprises:
Receiving target behaviors uploaded by a user in real time, traversing the behavior sequence according to the target behaviors, and calculating the matching degree;
when the matching degree reaches a preset matching threshold value, marking a behavior in a behavior sequence;
the corresponding time of the behavior of the marker is queried as the target time.
4. An artificial intelligence method for applying data analysis according to claim 3, wherein said step of receiving in real time a target behavior uploaded by a user, traversing said behavior sequence based on the target behavior, and determining a target time further comprises:
When the target moment is empty, opening the image receiving port;
Acquiring a reference image uploaded by a user based on an image receiving port, and inquiring an image frame corresponding to a moment point of a behavior sequence;
inputting the reference image and all the image frames into a trained image comparison model, and outputting a comparison result;
and determining the target moment according to the comparison result.
5. An artificial intelligence system for applying data analysis, the system comprising:
the video conversion module is used for receiving the video to be analyzed containing the video type and converting the video to be analyzed into an image frame set;
The behavior determining module is used for reading the evaluation rule according to the video type, determining the behaviors in each image frame according to the evaluation rule, and obtaining a behavior sequence corresponding to the image frame set;
The target moment determining module is used for receiving target behaviors uploaded by the user in real time, traversing the behavior sequence according to the target behaviors and determining target moment;
the subset selecting module is used for selecting a subset from the image frame set by taking the target moment as a center, and recording and displaying the time period;
wherein, the data structure of the behavior is text;
The step of selecting a subset from the image frame set by taking the target moment as a center and recording the time period comprises the following steps:
reading an image frame at a target moment, and determining a background contour of the image frame based on a background recognition model;
acquiring image parameters of a background contour; the image parameters include contrast, saturation, color temperature, and hue;
determining a parameter range according to the image parameters and a preset fluctuation proportion;
Selecting an image frame with a preset time span by taking a target moment as a center, and performing range limiting on the image frame according to a parameter range to obtain a subset;
recording corresponding moments of image frames in the subset, and calculating and displaying time periods;
The step of receiving a video to be analyzed containing a video type and converting the video to be analyzed into an image frame set comprises the following steps:
Receiving a video to be analyzed containing a video type; the video type is used for representing video classification information of the video to be analyzed;
Extracting the image track of the video to be analyzed, and sorting by time to obtain an image group;
Sequentially calculating the similarity of adjacent images, and carrying out data screening on the image group according to the similarity to obtain an image frame set;
The similarity of any adjacent elements in the image frame set is smaller than a preset threshold value.
6. The artificial intelligence system for application data analysis of claim 5, wherein the behavior determination module comprises:
The behavior table acquisition unit is used for reading the video type and acquiring a behavior table corresponding to the video type according to the big data technology; the behavior table comprises behavior items and evaluation rule items; the evaluation rule is used for representing the spatial position relation of each contour in the image;
the comparison and identification unit is used for sequentially reading the image frames from the image frame set, comparing and identifying the image frames according to the behavior table and determining the matching behavior; the matching behavior comprises a null behavior;
And the matching behavior statistics unit is used for counting the matching behaviors according to the time sequence of the image frames to obtain a behavior sequence.
7. The artificial intelligence system for applying data analysis according to claim 6, wherein the target time determination module comprises:
The matching degree calculation unit is used for receiving target behaviors uploaded by the user in real time, traversing the behavior sequence according to the target behaviors, and calculating the matching degree;
The marking behavior unit is used for marking behaviors in the behavior sequence when the matching degree reaches a preset matching threshold value;
and the query target time unit is used for querying the corresponding time of the marked behavior as the target time.
CN202311725556.2A 2023-12-15 2023-12-15 Artificial intelligence method and system for application data analysis Active CN117668298B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311725556.2A CN117668298B (en) 2023-12-15 2023-12-15 Artificial intelligence method and system for application data analysis


Publications (2)

Publication Number Publication Date
CN117668298A CN117668298A (en) 2024-03-08
CN117668298B true CN117668298B (en) 2024-05-07

Family

ID=90063995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311725556.2A Active CN117668298B (en) 2023-12-15 2023-12-15 Artificial intelligence method and system for application data analysis

Country Status (1)

Country Link
CN (1) CN117668298B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156702A (en) * 2010-12-17 2011-08-17 南方报业传媒集团 Fast positioning method for video events from rough state to fine state
US9652534B1 (en) * 2014-03-26 2017-05-16 Amazon Technologies, Inc. Video-based search engine
CN109194913A (en) * 2018-08-21 2019-01-11 平安科技(深圳)有限公司 Processing method, device, equipment and the medium of monitor video data
CN109905772A (en) * 2019-03-12 2019-06-18 腾讯科技(深圳)有限公司 Video clip querying method, device, computer equipment and storage medium
CN109977262A (en) * 2019-03-25 2019-07-05 北京旷视科技有限公司 The method, apparatus and processing equipment of candidate segment are obtained from video
CN113542855A (en) * 2021-07-21 2021-10-22 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and readable storage medium
CN114866788A (en) * 2021-02-03 2022-08-05 阿里巴巴集团控股有限公司 Video processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230140369A1 (en) * 2021-10-28 2023-05-04 Adobe Inc. Customizable framework to extract moments of interest


Also Published As

Publication number Publication date
CN117668298A (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN106649316B (en) Video pushing method and device
CN110267119B (en) Video precision and chroma evaluation method and related equipment
CN109886928B (en) Target cell marking method, device, storage medium and terminal equipment
CN112036295B (en) Bill image processing method and device, storage medium and electronic equipment
CN113761253A (en) Video tag determination method, device, equipment and storage medium
CN112507095A (en) Information identification method based on weak supervised learning and related equipment
CN112995690A (en) Live content item identification method and device, electronic equipment and readable storage medium
CN114125389B (en) Wisdom gardens cloud supervisory systems based on big data
CN116052848B (en) Data coding method and system for medical imaging quality control
CN117668298B (en) Artificial intelligence method and system for application data analysis
CN116127105B (en) Data collection method and device for big data platform
CN114491134B (en) Trademark registration success rate analysis method and system
CN115620317A (en) Method and system for verifying authenticity of electronic engineering document
CN115734072A (en) Internet of things centralized monitoring method and device for industrial automation equipment
CN112418215A (en) Video classification identification method and device, storage medium and equipment
CN112434965A (en) Expert label generation method, device and terminal based on word frequency
CN115265620B (en) Acquisition and entry method and device for instrument display data and storage medium
CN115909345B (en) Touch and talk pen information interaction method and system
CN113535951B (en) Method, device, terminal equipment and storage medium for information classification
CN117555428B (en) Artificial intelligent interaction method, system, computer equipment and storage medium thereof
CN117830731B (en) Multidimensional parallel scheduling method
CN113486204A (en) Picture marking method, device, medium and equipment
CN114329157A (en) Related material acquisition method and device and storage medium
CN118276744A (en) Training scene interaction method, system and storage medium based on mobile recording and broadcasting terminal
CN115131808A (en) Accurate data pushing system and method based on problem type literacy of wrong questions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240415

Address after: 266000 No. 599 Jiushui East Road, Licang District, Qingdao City, Shandong Province

Applicant after: QINGDAO VOCATIONAL AND TECHNICAL College OF HOTEL MANAGEMENT

Country or region after: China

Address before: 266000 room 311, Jiaotong Valley maker workshop, 163 Shenzhen road, Laoshan District, Qingdao, Shandong

Applicant before: Qingdao Haichuan Chuangzhi Information Technology Co.,Ltd.

Country or region before: China

Applicant before: QINGDAO VOCATIONAL AND TECHNICAL College OF HOTEL MANAGEMENT

GR01 Patent grant