CN113949881B - Business processing method and system based on smart city data - Google Patents

Business processing method and system based on smart city data

Info

Publication number
CN113949881B
CN113949881B (application CN202111346837.8A)
Authority
CN
China
Prior art keywords
video frame
frame
video
target
user request
Prior art date
Legal status
Active
Application number
CN202111346837.8A
Other languages
Chinese (zh)
Other versions
CN113949881A (en)
Inventor
赵茜茜
许评
杨万广
张承彬
Current Assignee
Shandong Ruihan Network Technology Co ltd
Original Assignee
Shandong Ruihan Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Ruihan Network Technology Co ltd filed Critical Shandong Ruihan Network Technology Co ltd
Priority to CN202111346837.8A
Publication of CN113949881A
Application granted
Publication of CN113949881B
Status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a business processing method and system based on smart city data, relating to the technical field of data processing. In the invention, the multi-frame user request video frames included in a to-be-processed request video are subjected to video frame screening processing to obtain at least one corresponding frame of target user request video frame, and a corresponding target request video is constructed from the at least one frame of target user request video frame. The target request video is then grouped to obtain at least one corresponding video frame group, and the region position information of the target region corresponding to the target request video is determined based on the similarity relationship between the at least one video frame group and preconfigured multi-frame region standard video frames, where the multi-frame region standard video frames are obtained by image acquisition at a plurality of region positions. On this basis, the method can alleviate the problem in the prior art that the processing precision of the position determination business is low.

Description

Business processing method and system based on smart city data
Technical Field
The invention relates to the technical field of data processing, and in particular to a business processing method and system based on smart city data.
Background
An important requirement in the construction and application of smart cities is positioning; however, in the prior art, positioning accuracy may be low in areas with complex environments. To address this, one prior-art scheme has the user send an image of the current position to a background server for recognition, in order to determine the current position. In that scheme, the background server generally selects a single frame of image, matches it against each standard image, and takes the position corresponding to the matched standard image as the user's current position, so the processing precision of the position determination business tends to be low.
Disclosure of Invention
Accordingly, the present invention is directed to a business processing method and system based on smart city data, so as to solve the problem in the prior art that the processing accuracy of the position determination business is low.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical scheme:
A business processing method based on smart city data, applied to a city monitoring server, comprises the following steps:
after obtaining a to-be-processed request video sent by a communicatively connected target user terminal device in response to a target request operation performed by the target user corresponding to that device, performing video frame screening processing on the multi-frame user request video frames included in the to-be-processed request video to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing a corresponding target request video based on the at least one frame of target user request video frame, wherein the multi-frame user request video frames are multiple continuous video frames obtained by image acquisition of a target area;
grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video, wherein each video frame group comprises at least one frame of target user request video frame;
and determining the region position information of the target region corresponding to the target request video based on the similarity relationship between the at least one video frame group and preconfigured multi-frame region standard video frames, wherein the multi-frame region standard video frames are obtained by image acquisition at a plurality of region positions.
In some preferred embodiments, in the above business processing method based on smart city data, the step of grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video includes:
for each frame of target user request video frame except the last frame among the at least one frame of target user request video frame included in the target request video, calculating a pixel difference value between that target user request video frame and the adjacent next-frame target user request video frame;
for each such target user request video frame, comparing the pixel difference value between that frame and the adjacent next-frame target user request video frame against a preconfigured pixel difference value threshold, and when the pixel difference value is greater than or equal to the pixel difference value threshold, determining the position between the two frames as a video segmentation position;
and segmenting the at least one frame of target user request video frame included in the target request video at each determined video segmentation position, to obtain the corresponding at least one video frame group.
In some preferred embodiments, in the smart city data-based service processing method, the step of calculating, for each frame of the target user request video frame except for a last frame of the at least one frame of target user request video frames included in the target request video, a pixel difference value between the target user request video frame and an adjacent subsequent frame of target user request video frame includes:
for each frame of target user request video frame except the last frame among the at least one frame of target user request video frame, calculating the absolute pixel difference of each corresponding pixel point between that frame and the adjacent next-frame target user request video frame;
and for each such frame, summing the absolute pixel differences of the corresponding pixel points between that frame and the adjacent next-frame target user request video frame, and taking the sum as the pixel difference value between the two frames.
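As an illustrative sketch only (not part of the claimed method; the function names, the representation of frames as flat pixel lists, and the threshold value are assumptions), the pixel-difference grouping described above can be expressed as a sum of absolute differences over corresponding pixel points, with a segmentation wherever the difference reaches the threshold:

```python
# Illustrative sketch of the pixel-difference grouping; frames are assumed
# to be flat lists of pixel values, and all names here are hypothetical.

def pixel_difference(frame_a, frame_b):
    """Sum of absolute differences (SAD) over corresponding pixel points."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b))

def group_by_difference(frames, threshold):
    """Split the frame sequence wherever the SAD between adjacent frames
    reaches the threshold; each run of frames becomes one video frame group."""
    groups, current = [], [frames[0]]
    for prev, nxt in zip(frames, frames[1:]):
        if pixel_difference(prev, nxt) >= threshold:
            groups.append(current)  # video segmentation position between prev and nxt
            current = []
        current.append(nxt)
    groups.append(current)
    return groups
```

A group boundary thus appears only where the scene content changes sharply, so each group tends to cover one stretch of similar frames.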
In some preferred embodiments, in the above business processing method based on smart city data, the step of grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video includes:
for every two frames of target user request video frames among the at least one frame of target user request video frame included in the target request video, calculating the similarity between the two frames to obtain the video frame similarity between them;
and clustering the at least one frame of target user request video frame included in the target request video based on the video frame similarities between every two frames, to obtain the at least one video frame group corresponding to the target request video.
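The clustering-based grouping alternative above can likewise be sketched. The embodiment does not fix a particular clustering algorithm or similarity measure, so the greedy single-pass clustering and the normalised-difference similarity below are purely illustrative assumptions:

```python
# Illustrative sketch only: similarity measure and clustering strategy are
# assumptions, not specified by the embodiment. Frames are flat pixel lists.

def frame_similarity(a, b):
    """Similarity in [0, 1]: 1 minus the normalised mean absolute pixel
    difference (255 assumed as the maximum pixel value)."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / (255.0 * len(a))

def cluster_frames(frames, sim_threshold):
    """Greedy clustering: each frame joins the first cluster whose
    representative (first member) is similar enough, else starts a new one."""
    clusters = []
    for f in frames:
        for c in clusters:
            if frame_similarity(c[0], f) >= sim_threshold:
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters
```

Any standard clustering over the pairwise video frame similarities would serve the same role here.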
In some preferred embodiments, in the above business processing method based on smart city data, the step of determining the region location information of the target region corresponding to the target request video based on a similarity relationship between the at least one video frame group and a preconfigured multi-frame region standard video frame includes:
for each video frame group in the at least one video frame group, calculating the similarity between the video frame group and each frame of region standard video frame among the preconfigured multi-frame region standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determining the largest of the plurality of first similarities as the target first similarity corresponding to the video frame group;
for each video frame group in the at least one video frame group, determining the region standard video frame corresponding to the target first similarity of the video frame group as the target region standard video frame corresponding to the video frame group;
for each video frame group in the at least one video frame group, acquiring the region position information of the region position corresponding to the target region standard video frame corresponding to the video frame group, and acquiring the region position information corresponding to the video frame group;
classifying the at least one video frame group based on whether the corresponding region position information is the same, and obtaining at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the region position information corresponding to any two video frame groups in the same video frame group set is the same, and the region position information corresponding to any two video frame groups in any two different video frame group sets is different;
And determining the regional position information of the target region corresponding to the target request video based on the regional position information corresponding to each video frame group set in the at least one video frame group set.
In some preferred embodiments, in the above business processing method based on smart city data, the step of calculating, for each video frame group in the at least one video frame group, a similarity between the video frame group and each frame region standard video frame in the preconfigured multi-frame region standard video frames, to obtain a plurality of first similarities corresponding to the video frame group, and determining a first similarity having a maximum value from the plurality of first similarities corresponding to the video frame group, as a target first similarity corresponding to the video frame group includes:
for each video frame group in the at least one video frame group and each frame of region standard video frame among the preconfigured multi-frame region standard video frames, calculating the video frame similarity between each target user request video frame in the video frame group and that region standard video frame;
for each video frame group in the at least one video frame group, calculating the average value of the video frame similarities between the video frame group and each frame of region standard video frame, respectively, to obtain the first similarity between the video frame group and each frame of region standard video frame, thereby obtaining the plurality of first similarities corresponding to the video frame group;
and for each video frame group in the at least one video frame group, determining the largest of the plurality of first similarities corresponding to the video frame group as the target first similarity corresponding to the video frame group.
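A minimal sketch of the first-similarity computation described above: the similarity between a video frame group and a region standard video frame is the average of the per-frame similarities, and the standard frame with the largest first similarity is selected. The specific similarity measure and the function names are assumptions for illustration:

```python
# Illustrative sketch only; the similarity measure (1 minus normalised mean
# absolute pixel difference) and all names here are assumptions.

def frame_similarity(a, b):
    """Similarity in [0, 1] between two frames given as flat pixel lists."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / (255.0 * len(a))

def group_to_standard_similarity(group, standard_frame):
    """'First similarity': average of the per-frame similarities between each
    target user request video frame in the group and the standard frame."""
    return sum(frame_similarity(f, standard_frame) for f in group) / len(group)

def best_standard_frame(group, standard_frames):
    """Index of the region standard video frame with the maximum first
    similarity (the 'target first similarity' in the text)."""
    sims = [group_to_standard_similarity(group, s) for s in standard_frames]
    return max(range(len(sims)), key=sims.__getitem__)
```

The selected index identifies the target region standard video frame, whose region position then labels the group.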
In some preferred embodiments, in the above business processing method based on smart city data, the step of determining the area location information of the target area corresponding to the target request video based on the area location information corresponding to each of the at least one video frame group set includes:
counting, for each video frame group set in the at least one video frame group set, the number of video frame groups included in that video frame group set, to obtain the group number corresponding to that video frame group set;
and determining the video frame group set with the maximum group number as the target video frame group set, and determining the region position information corresponding to the target video frame group set as the region position information of the target region corresponding to the target request video.
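The group-counting step above amounts to a majority vote over the region position information assigned to the video frame groups; a minimal sketch (the function name and the string labels are assumptions):

```python
# Illustrative sketch: majority vote over per-group region locations.
from collections import Counter

def vote_region(group_locations):
    """The region position assigned to the most video frame groups is taken
    as the region position of the target region."""
    return Counter(group_locations).most_common(1)[0][0]
```

Voting across groups, rather than matching a single frame as in the prior art, is what lets occasional mismatched groups be outvoted.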
The embodiment of the invention also provides a business processing system based on the smart city data, which is applied to the city monitoring server and comprises:
the video frame screening module is used for, after obtaining a to-be-processed request video sent by a communicatively connected target user terminal device in response to a target request operation performed by the corresponding target user, performing video frame screening processing on the multi-frame user request video frames included in the to-be-processed request video to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing a corresponding target request video based on the at least one frame of target user request video frame, wherein the multi-frame user request video frames are multiple continuous video frames obtained by image acquisition of a target area;
the video frame grouping module is used for grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video, wherein each video frame group comprises at least one frame of target user request video frame;
the area position determining module is used for determining the region position information of the target region corresponding to the target request video based on the similarity relationship between the at least one video frame group and preconfigured multi-frame region standard video frames, wherein the multi-frame region standard video frames are obtained by image acquisition at a plurality of region positions.
In some preferred embodiments, in the above-mentioned smart city data-based service processing system, the video frame grouping module is specifically configured to:
for each frame of target user request video frame except the last frame among the at least one frame of target user request video frame included in the target request video, calculating a pixel difference value between that target user request video frame and the adjacent next-frame target user request video frame;
for each such target user request video frame, comparing the pixel difference value between that frame and the adjacent next-frame target user request video frame against a preconfigured pixel difference value threshold, and when the pixel difference value is greater than or equal to the pixel difference value threshold, determining the position between the two frames as a video segmentation position;
and segmenting the at least one frame of target user request video frame included in the target request video at each determined video segmentation position, to obtain the corresponding at least one video frame group.
In some preferred embodiments, in the above business processing system based on smart city data, the area location determining module is specifically configured to:
for each video frame group in the at least one video frame group, calculating the similarity between the video frame group and each frame of region standard video frame among the preconfigured multi-frame region standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determining the largest of the plurality of first similarities as the target first similarity corresponding to the video frame group;
for each video frame group in the at least one video frame group, determining the region standard video frame corresponding to the target first similarity of the video frame group as the target region standard video frame corresponding to the video frame group;
for each video frame group in the at least one video frame group, acquiring the region position information of the region position corresponding to the target region standard video frame corresponding to the video frame group, and acquiring the region position information corresponding to the video frame group;
classifying the at least one video frame group based on whether the corresponding region position information is the same, and obtaining at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the region position information corresponding to any two video frame groups in the same video frame group set is the same, and the region position information corresponding to any two video frame groups in any two different video frame group sets is different;
And determining the regional position information of the target region corresponding to the target request video based on the regional position information corresponding to each video frame group set in the at least one video frame group set.
According to the business processing method and system based on smart city data provided above, after the multi-frame user request video frames included in a to-be-processed request video are screened to obtain the corresponding target request video, the target user request video frames included in the target request video are first grouped to obtain at least one corresponding video frame group. The region position information of the target region corresponding to the target request video is then determined based on the similarity relationship between the at least one video frame group and the preconfigured multi-frame region standard video frames. Through this video frame grouping mechanism, the accuracy of the determined region position information can be improved to a certain extent, alleviating the problem in the prior art that the processing accuracy of the position determination business is not high.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of a city monitoring server according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps involved in a business processing method based on smart city data according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of each module included in the smart city data-based service processing system according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, an embodiment of the present invention provides a city monitoring server. Wherein the city monitoring server may include a memory and a processor.
In detail, the memory and the processor are electrically connected, directly or indirectly, to enable data transmission and interaction. For example, they may be electrically connected to each other via one or more communication buses or signal lines. The memory may store at least one software functional module (computer program), which may exist in the form of software or firmware. The processor may be configured to execute the executable computer program stored in the memory, so as to implement the business processing method based on smart city data provided by the embodiment of the present invention.
It will be appreciated that, as an alternative implementation, the memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), or a System on Chip (SoC); it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It will be appreciated that the architecture shown in fig. 1 is merely illustrative, and that the city monitoring server may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1, for example, may include a communication unit for information interaction with other devices (e.g., user terminal devices such as cell phones).
With reference to fig. 2, the embodiment of the invention further provides a business processing method based on smart city data, which can be applied to the city monitoring server. The method steps defined by the flow related to the business processing method based on the smart city data can be realized by the city monitoring server.
The specific flow shown in fig. 2 will be described in detail.
Step S100, performing video frame screening processing on multi-frame user request video frames included in a request video to be processed to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing a corresponding target request video based on the at least one frame of target user request video frame.
In the embodiment of the present invention, when executing step S100, the city monitoring server may, after obtaining the to-be-processed request video sent by the target user terminal device in response to the target request operation performed by the corresponding target user, perform video frame screening processing on the multi-frame user request video frames included in the to-be-processed request video to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and construct a corresponding target request video based on the at least one frame of target user request video frame. The multi-frame user request video frames are multiple continuous video frames obtained by image acquisition of a target area.
Step S200, grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video.
In the embodiment of the present invention, when executing the step S200, the city monitoring server may group the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video. Wherein each of said groups of video frames comprises at least one frame of said target user requested video frame.
Step S300, determining the region position information of the corresponding target region based on the similarity relationship between the at least one video frame group and the preconfigured multi-frame region standard video frame.
In the embodiment of the present invention, when executing the step S300, the city monitoring server may determine the region location information of the target region corresponding to the target request video based on a similarity relationship between the at least one video frame group and a preconfigured multi-frame region standard video frame. The multi-frame region standard video frames are obtained based on image acquisition of a plurality of region positions.
Based on the above steps of the business processing method, after the multi-frame user request video frames included in the to-be-processed request video are screened to obtain the corresponding target request video, the target user request video frames included in the target request video are first grouped to obtain at least one corresponding video frame group. The region position information of the target region corresponding to the target request video is then determined based on the similarity relationship between the at least one video frame group and the preconfigured multi-frame region standard video frames. Through this video frame grouping mechanism, the accuracy of the determined region position information can be improved to a certain extent, alleviating the problem in the prior art that the processing accuracy of the position determination business is not high.
It will be appreciated that, as an alternative implementation, the above step S100 may further include the following steps (such as step S110, step S120, and step S130) to obtain the target requested video.
Step S110, a to-be-processed request video sent by target user terminal equipment of communication connection in response to target request operation performed by a target user corresponding to the target user terminal equipment is obtained.
In the embodiment of the present invention, when executing step S110, the city monitoring server may obtain the to-be-processed request video sent by the communicatively connected target user terminal device in response to the target request operation performed by the corresponding target user. The to-be-processed request video includes multiple frames of user request video frames, which are multiple continuous video frames obtained by image acquisition of the target area.
Step S120, carrying out video feature analysis processing on the request video to be processed to obtain target video frame screening feature information corresponding to the request video to be processed.
In the embodiment of the present invention, when executing the step S120, the city monitoring server may perform video feature analysis processing on the request video to be processed, so as to obtain target video frame screening feature information corresponding to the request video to be processed.
Step S130, performing video frame screening processing on the multi-frame user request video frames included in the to-be-processed request video based on the target video frame screening feature information to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing a corresponding target request video based on the at least one frame of target user request video frame.
In the embodiment of the present invention, when executing the step S130, the city monitoring server may perform video frame screening processing on the multi-frame user request video frames included in the to-be-processed request video based on the target video frame screening feature information, to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and construct a corresponding target request video based on the at least one frame of target user request video frame.
Based on the above steps S110 to S130, after obtaining the to-be-processed request video sent by the communicatively connected target user terminal device in response to the target request operation performed by the corresponding target user, the city monitoring server may first perform video feature analysis processing on the to-be-processed request video to obtain the corresponding target video frame screening feature information, and then perform video frame screening processing on the to-be-processed request video based on that feature information, to obtain the corresponding at least one frame of target user request video frame.
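The embodiment does not specify which features the screening in steps S120 and S130 uses. As one hedged illustration only, a screening that drops near-blank frames by pixel variance could look like the following (the variance criterion, threshold, and function names are assumptions, not the patent's method):

```python
# Illustrative sketch only: a variance-based frame screening stands in for
# the unspecified "video feature analysis"; frames are flat pixel lists.

def pixel_variance(frame):
    """Variance of the pixel values of a frame (a crude content measure)."""
    mean = sum(frame) / len(frame)
    return sum((p - mean) ** 2 for p in frame) / len(frame)

def screen_frames(frames, min_variance):
    """Keep only frames with enough pixel variance; near-blank frames are
    dropped. The surviving frames, in order, form the target request video."""
    return [f for f in frames if pixel_variance(f) >= min_variance]
```

Any other screening feature (sharpness, motion, duplication) could be substituted without changing the surrounding flow.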
It will be appreciated that, as an alternative implementation manner, the above step S110 may further include the following steps to obtain the requested video to be processed:
firstly, judging whether to acquire to-be-processed request information sent by target user terminal equipment of communication connection, and when the to-be-processed request information sent by the target user terminal equipment is received, performing verification processing on the to-be-processed request information to obtain a corresponding verification processing result;
secondly, if the verification processing result is that the verification processing fails, corresponding request refusal notification information is generated, and the request refusal notification information is sent to the target user terminal equipment, wherein the target user terminal equipment is used for displaying the request refusal notification information to a target user corresponding to the target user terminal equipment so as to enable the target user to stop performing target request operation;
then, if the verification processing result is that the verification processing is successful, corresponding request success notification information is generated, and the request success notification information is sent to the target user terminal equipment, wherein the target user terminal equipment is used for displaying the request success notification information to a target user corresponding to the target user terminal equipment, so that the target user performs the target request operation;
And finally, acquiring a to-be-processed request video sent by the target user terminal equipment in response to the target request operation performed by the target user, wherein the to-be-processed request video is obtained by performing image acquisition on the target area based on the target user terminal equipment in response to the target request operation, or the to-be-processed request video is a video with image information of the target area, which is obtained by selecting a stored video by the target user terminal equipment in response to the target request operation.
It may be understood that, as an alternative implementation manner, the step of determining whether to obtain the to-be-processed request information sent by the target user terminal device of the communication connection, and when receiving the to-be-processed request information sent by the target user terminal device, performing verification processing on the to-be-processed request information to obtain a corresponding verification processing result may include the following steps:
firstly, judging whether to acquire to-be-processed request information sent by target user terminal equipment of communication connection, and analyzing the to-be-processed request information to obtain target identity information carried in the to-be-processed request information when the to-be-processed request information sent by the target user terminal equipment is received, wherein the target identity information is used for representing the identity of the target user terminal equipment or representing the identity of a target user corresponding to the target user terminal equipment;
Secondly, searching in a pre-built target identity database to determine whether the target identity information is stored in the target identity database;
and then, if the target identity information is stored in the target identity database, determining that the verification processing of the request information to be processed is successful, and generating a verification processing result which is the verification processing success, and if the target identity information is not stored in the target identity database, determining that the verification processing of the request information to be processed is failed, and generating a verification processing result which is the verification processing failure.
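The verification branch above reduces to a membership lookup in the pre-built identity database. A minimal Python sketch, assuming the request information is a dictionary with a hypothetical `identity` field and the database is abstracted as a set of registered identity strings (neither representation is specified by the text):

```python
# Minimal sketch of the verification step: look the carried target identity up
# in a pre-built identity database (modelled here as a set of registered IDs).
def verify_request(request_info: dict, identity_database: set) -> str:
    """Return 'success' when the target identity is stored in the database,
    else 'failure' (which would trigger the request-refusal notification)."""
    target_identity = request_info.get("identity")  # hypothetical field name
    if target_identity in identity_database:
        return "success"
    return "failure"
```

A successful result would lead to the request-success notification being sent; a failed one to the request-refusal notification.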
It will be appreciated that, as an alternative implementation manner, the step S120 may further include the following steps to obtain the target video frame filtering feature information:
firstly, determining the number of multi-frame user request video frames included in the request video to be processed, and obtaining the number of target video frames corresponding to the request video to be processed;
and secondly, obtaining target video frame screening characteristic information corresponding to the request video to be processed based on the target video frame number.
It may be appreciated that, as an alternative implementation manner, the step of obtaining the target video frame screening feature information corresponding to the request video to be processed based on the target video frame number may include the following steps:
Firstly, carrying out first object recognition processing on each frame of user request video frames in the multi-frame user request video frames included in the request video to be processed to obtain the number of first objects in the user request video frames, and carrying out object statistics on the number of first objects in each frame of user request video frames in the multi-frame user request video frames to obtain the number of first objects corresponding to the request video to be processed, wherein the first objects are static objects (such as various buildings, plants and the like);
secondly, aiming at each frame of user request video frames in the multi-frame user request video frames included in the request video to be processed, performing second object recognition processing on the user request video frames to obtain the number of second objects in the user request video frames, and performing object statistics based on the number of second objects in each frame of user request video frames in the multi-frame user request video frames to obtain the number of second objects corresponding to the request video to be processed, wherein the second objects are dynamic objects (such as people, vehicles and the like);
Then, for each frame of user request video frames in the multi-frame user request video frames included in the request video to be processed, counting the sum of the number of first objects and the number of second objects in the user request video frame to obtain the object counting number corresponding to the user request video frame, and calculating an average value based on the object counting number corresponding to each frame of user request video frame in the multi-frame user request video frames to obtain the object number average value corresponding to the request video to be processed;
and finally, carrying out multidimensional feature fusion calculation processing based on the target video frame number, the first object number, the second object number and the object number average value to obtain target video frame screening feature information corresponding to the request video to be processed.
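The object-statistics steps above reduce to sums and an average over per-frame counts. A minimal sketch, assuming the outputs of the first and second object recognition are already available as lists of per-frame counts (the recognizers themselves are outside this sketch):

```python
def frame_statistics(static_counts, dynamic_counts):
    """Compute the per-video statistics used as screening features.

    static_counts / dynamic_counts: hypothetical per-frame object counts,
    standing in for the first (static) and second (dynamic) object
    recognition results.  Returns (target video frame number, first object
    number, second object number, object number average value).
    """
    frame_count = len(static_counts)
    total_static = sum(static_counts)      # first object number
    total_dynamic = sum(dynamic_counts)    # second object number
    # Per-frame object counting number, then its mean over all frames.
    per_frame_sums = [s + d for s, d in zip(static_counts, dynamic_counts)]
    mean_objects = sum(per_frame_sums) / frame_count
    return frame_count, total_static, total_dynamic, mean_objects
```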
It may be appreciated that, as an alternative implementation manner, the step of performing multidimensional feature fusion calculation processing based on the target video frame number, the first object number, the second object number and the object number average value to obtain target video frame screening feature information corresponding to the request video to be processed may include the following steps:
Firstly, determining a first video frame screening characteristic coefficient corresponding to the request video to be processed based on the target video frame number, wherein the target video frame number and the first video frame screening characteristic coefficient have a positive correlation corresponding relation;
secondly, determining a second video frame screening characteristic coefficient corresponding to the request video to be processed based on the first object number, determining a third video frame screening characteristic coefficient corresponding to the request video to be processed based on the second object number, and determining a fourth video frame screening characteristic coefficient corresponding to the request video to be processed based on the object number average value, wherein the first object number and the second video frame screening characteristic coefficient have a negative correlation, the second object number and the third video frame screening characteristic coefficient have a negative correlation, and the object number average value and the fourth video frame screening characteristic coefficient have a negative correlation;
and then, carrying out weighted summation calculation on the first video frame screening characteristic coefficient, the second video frame screening characteristic coefficient, the third video frame screening characteristic coefficient and the fourth video frame screening characteristic coefficient to obtain target video frame screening characteristic information corresponding to the request video to be processed, wherein the weighted coefficient corresponding to the second video frame screening characteristic coefficient is larger than the weighted coefficient corresponding to the fourth video frame screening characteristic coefficient, the weighted coefficient corresponding to the fourth video frame screening characteristic coefficient is larger than the weighted coefficient corresponding to the third video frame screening characteristic coefficient, and the weighted coefficient corresponding to the third video frame screening characteristic coefficient is larger than the weighted coefficient corresponding to the first video frame screening characteristic coefficient (the sum of the weighted coefficient corresponding to the first video frame screening characteristic coefficient, the weighted coefficient corresponding to the second video frame screening characteristic coefficient, the weighted coefficient corresponding to the third video frame screening characteristic coefficient and the weighted coefficient corresponding to the fourth video frame screening characteristic coefficient is 1).
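The weighted fusion can be sketched as follows. The concrete coefficient mappings (`c1`–`c4`) and the default weights are illustrative assumptions: the text only fixes the correlation directions (positive for the frame count, negative for the three object statistics) and the weight ordering w2 > w4 > w3 > w1 with the weights summing to 1.

```python
def fuse_screening_features(n_frames, n_static, n_dynamic, mean_objects,
                            weights=(0.1, 0.4, 0.2, 0.3)):
    """Weighted fusion of the four video frame screening characteristic
    coefficients.  weights = (w1, w2, w3, w4) satisfies w2 > w4 > w3 > w1
    and sums to 1, as required; the exact values here are assumptions."""
    c1 = n_frames / (n_frames + 1.0)    # grows with frame count (positive)
    c2 = 1.0 / (1.0 + n_static)         # shrinks with first object number
    c3 = 1.0 / (1.0 + n_dynamic)        # shrinks with second object number
    c4 = 1.0 / (1.0 + mean_objects)     # shrinks with object number average
    w1, w2, w3, w4 = weights
    return w1 * c1 + w2 * c2 + w3 * c3 + w4 * c4
```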
It will be appreciated that, as an alternative implementation, the above step S130 may further include the following steps to obtain the target request video:
firstly, determining, based on a characteristic value corresponding to the target video frame screening characteristic information, a target screening proportion coefficient having a positive correlation with the characteristic value, wherein the target screening proportion coefficient is used for representing the maximum proportion value of the screened video frames when video frame screening processing is performed;
and then, carrying out video frame screening processing on the multi-frame user request video frames included in the to-be-processed request video based on the target screening proportionality coefficient to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing a corresponding target request video based on the at least one frame of target user request video frame.
It may be appreciated that, as an alternative implementation manner, the step of performing video frame filtering processing on the multi-frame user request video frames included in the request video to be processed based on the target filtering scaling factor to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing to obtain a corresponding target request video based on the at least one frame of target user request video frame may include the following steps:
Firstly, acquiring a preset target video frame set, wherein the target video frame set comprises at least one frame of reference video frame with static objects, each frame of reference video frame is provided with one static object, and when the target video frame set comprises a plurality of frames of reference video frames, the static objects of any two frames of reference video frames are different;
secondly, determining a distribution density value of a static object in each user request video frame in the multi-frame user request video frames included in the request video to be processed, and determining a target density interval corresponding to the distribution density value in a plurality of preset density intervals, wherein the distribution density value is used for representing the distribution density (such as the number of the static objects divided by the scene area) of the static object in the corresponding user request video frame;
then, for each frame of user request video frames in the multi-frame user request video frames included in the request video to be processed, if the target density interval corresponding to the user request video frame belongs to a first density interval, determining a first video frame similarity threshold configured in advance for the first density interval as a target video frame similarity threshold corresponding to the user request video frame;
Then, for each frame of user request video frames in the multi-frame user request video frames included in the request video to be processed, if the target density interval corresponding to the user request video frame belongs to a second density interval, determining a second video frame similarity threshold configured in advance for the second density interval as a target video frame similarity threshold corresponding to the user request video frame, wherein a density value corresponding to the second density interval is larger than a density value corresponding to the first density interval, and the second video frame similarity threshold is smaller than the first video frame similarity threshold;
further, for each frame of user request video frames in the multi-frame user request video frames included in the request video to be processed, calculating the video frame similarity between the user request video frame and each frame reference video frame in the target video frame set, determining the maximum video frame similarity corresponding to the user request video frame, and determining a relative size relationship between the maximum video frame similarity corresponding to the user request video frame and the corresponding target video frame similarity threshold;
still further, for each frame of user request video frames in the multi-frame user request video frames included in the request video to be processed, if the maximum video frame similarity corresponding to the user request video frame is greater than or equal to the corresponding target video frame similarity threshold, determining the user request video frame as a target user request video frame, and if the maximum video frame similarity corresponding to the user request video frame is less than the corresponding target video frame similarity threshold, determining the user request video frame as a user request video frame to be screened;
And finally, carrying out de-duplication screening on the determined user request video frames to be screened based on the target screening proportion coefficient to obtain at least one frame of target user request video frame, and constructing a corresponding target request video based on the obtained target user request video frame.
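The threshold selection and the proportion-capped screening above can be sketched as below. The density-interval boundary, the two thresholds, and the choice of which candidate frames to discard are illustrative assumptions; the text only requires that the denser interval get the smaller threshold and that at most the target proportion of frames be screened out.

```python
def screen_frames(max_similarities, densities, drop_ratio,
                  boundary=0.5, thr_low=0.9, thr_high=0.8):
    """Return the indices of the retained (target user request) video frames.

    max_similarities[i]: frame i's maximum similarity against the reference
    video frame set; densities[i]: its static-object distribution density;
    drop_ratio: the target screening proportion coefficient (maximum share
    of frames that may be screened out).  All numeric defaults are assumed.
    """
    n = len(max_similarities)
    kept, to_screen = [], []
    for i in range(n):
        # Denser frames fall in the second density interval, which is
        # configured with the smaller similarity threshold.
        threshold = thr_low if densities[i] < boundary else thr_high
        (kept if max_similarities[i] >= threshold else to_screen).append(i)
    # De-duplication cap: discard at most drop_ratio of all frames; which
    # candidates are dropped first is an arbitrary choice in this sketch.
    max_drop = int(drop_ratio * n)
    kept.extend(to_screen[max_drop:])
    return sorted(kept)
```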
It will be appreciated that, as an alternative implementation, the above step S200 may further include the following steps to obtain the at least one video frame group:
firstly, calculating pixel difference values between a target user request video frame and an adjacent subsequent frame target user request video frame aiming at each frame of target user request video frame except for the last frame in at least one frame of target user request video frame included in the target request video;
secondly, determining a relative magnitude relation between a pixel difference value between the target user request video frame and an adjacent subsequent frame target user request video frame and a pre-configured pixel difference value threshold value for each frame of target user request video frame except for the last frame in the at least one frame of target user request video frame included in the target request video, and determining a position between the target user request video frame and the adjacent subsequent frame target user request video frame as a video segmentation position when the pixel difference value is greater than or equal to the pixel difference value threshold value;
Then, dividing the at least one frame of target user request video frame included in the target request video based on each determined video dividing position to obtain at least one corresponding video frame group (for example, based on one video dividing position, two corresponding video frame groups can be obtained).
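The segmentation into video frame groups at the determined video division positions can be sketched as follows, assuming the frames and their pairwise pixel differences are available as ordered lists (the representation is an assumption):

```python
def split_into_groups(frames, pixel_diffs, diff_threshold):
    """Split the ordered frame list into video frame groups, cutting at every
    position where the pixel difference to the adjacent subsequent frame
    reaches the pre-configured threshold.

    pixel_diffs[i] is the pixel difference value between frames[i] and
    frames[i + 1]; frames is assumed non-empty.
    """
    groups, current = [], [frames[0]]
    for i, diff in enumerate(pixel_diffs):
        if diff >= diff_threshold:   # video division position found
            groups.append(current)
            current = []
        current.append(frames[i + 1])
    groups.append(current)
    return groups
```

One division position yields two groups, as in the parenthetical example above.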
It may be appreciated that, as an alternative implementation manner, the step of calculating the pixel difference value between the target user request video frame and the adjacent subsequent frame target user request video frame may include the following steps:
firstly, for each frame of target user request video frame except for the last frame in the at least one frame of target user request video frame included in the target request video, calculating the pixel absolute difference (namely, the absolute value of the difference between the pixel values) of each pair of corresponding pixel points between the target user request video frame and the adjacent subsequent frame target user request video frame;
And secondly, calculating a sum value of pixel absolute differences of corresponding pixel points between the target user request video frame and the adjacent subsequent frame target user request video frame aiming at each frame of target user request video frame except for the last frame in the at least one frame of target user request video frame included in the target request video, and taking the sum value as a pixel difference value between the target user request video frame and the adjacent subsequent frame target user request video frame.
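The pixel difference value described in these two steps is a sum of absolute differences (SAD). A minimal sketch over frames represented as 2-D lists of grayscale pixel values (the pixel representation is an assumption):

```python
def pixel_difference(frame_a, frame_b):
    """Sum of absolute pixel differences between two equally sized frames,
    each given as a 2-D list of pixel values."""
    return sum(abs(a - b)
               for row_a, row_b in zip(frame_a, frame_b)
               for a, b in zip(row_a, row_b))
```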
It will be appreciated that, as an alternative implementation, the above step S200 may further include the following steps to obtain the at least one video frame group:
firstly, for every two frames of target user request video frames in the at least one frame of target user request video frame included in the target request video, calculating the similarity between the two target user request video frames to obtain the video frame similarity between the two target user request video frames;
and secondly, carrying out clustering processing (which can be an existing arbitrary clustering algorithm) on the at least one frame of target user request video frame included in the target request video based on the video frame similarity between every two frames of target user request video frames in the at least one frame of target user request video frame included in the target request video, so as to obtain at least one video frame group corresponding to the target request video.
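Since the text permits any existing clustering algorithm, one simple illustrative choice is single-link grouping by a similarity threshold, implemented with union-find; the threshold and the callable similarity interface are assumptions:

```python
def cluster_frames(n_frames, pairwise_similarity, threshold):
    """Group frame indices whose pairwise similarity reaches the threshold.

    pairwise_similarity(i, j) returns the video frame similarity between
    frames i and j (called with i < j).  Returns sorted groups of indices.
    """
    parent = list(range(n_frames))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in range(n_frames):
        for j in range(i + 1, n_frames):
            if pairwise_similarity(i, j) >= threshold:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(n_frames):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```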
It will be appreciated that, as an alternative implementation manner, the above step S300 may further include the following steps to determine the area location information of the target area:
firstly, calculating the similarity between each video frame group and each frame region standard video frame in the preconfigured multi-frame region standard video frames aiming at each video frame group in at least one video frame group to obtain a plurality of first similarities corresponding to the video frame group, and determining the first similarity with the maximum value from the plurality of first similarities corresponding to the video frame group as a target first similarity corresponding to the video frame group;
secondly, determining a frame of the region standard video frame corresponding to the first similarity of the target corresponding to each video frame group in the at least one video frame group as a target region standard video frame corresponding to the video frame group;
then, for each video frame group in the at least one video frame group, acquiring area position information (which can be obtained by pre-configuration) of an area position corresponding to the target area standard video frame corresponding to the video frame group, and acquiring the area position information corresponding to the video frame group;
Then, classifying the at least one video frame group based on whether the corresponding region position information is the same, and obtaining at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the region position information corresponding to any two video frame groups in the same video frame group set is the same, and the region position information corresponding to any two video frame groups in any two different video frame group sets is different;
and finally, determining the region position information of the target region corresponding to the target request video based on the region position information corresponding to each video frame group set in the at least one video frame group set.
It may be appreciated that, as an alternative implementation manner, the step of calculating, for each video frame group in the at least one video frame group, a similarity between the video frame group and each frame region standard video frame in the preconfigured multi-frame region standard video frame to obtain a plurality of first similarities corresponding to the video frame group, and determining a first similarity having a maximum value from the plurality of first similarities corresponding to the video frame group as the target first similarity corresponding to the video frame group may include the following steps:
Firstly, for each video frame group in the at least one video frame group, calculating the video frame similarity between each target user request video frame in the video frame group and each frame region standard video frame in the preconfigured multi-frame region standard video frames;
secondly, for each video frame group in the at least one video frame group, respectively calculating the average value of the video frame similarities between the video frame group and each frame region standard video frame, to obtain the first similarity between the video frame group and each frame region standard video frame, so as to obtain a plurality of first similarities corresponding to the video frame group;
then, for each video frame group in the at least one video frame group, determining a first similarity with the maximum value from the plurality of first similarities corresponding to the video frame group, and taking the first similarity as a target first similarity corresponding to the video frame group.
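The averaging-then-maximum computation for one video frame group can be sketched as below; the nested-list layout of the similarity values is an assumption:

```python
def target_first_similarity(group_frame_sims):
    """group_frame_sims[k][f] is the video frame similarity between frame f
    of the group and the k-th region standard video frame.  The first
    similarity for standard frame k is the mean over the group's frames; the
    target first similarity is the maximum of these, returned together with
    the index of the matching (target) region standard video frame."""
    first_sims = [sum(sims) / len(sims) for sims in group_frame_sims]
    best_index = max(range(len(first_sims)), key=first_sims.__getitem__)
    return first_sims[best_index], best_index
```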
It may be appreciated that, as an alternative implementation manner, the step of determining, based on the region position information corresponding to each of the at least one video frame group set, the region position information of the target region corresponding to the target request video may include the following steps:
Firstly, for each video frame group set in the at least one video frame group set, counting the number of video frame groups included in the video frame group set to obtain the group number corresponding to the video frame group set;
and secondly, determining the video frame group set corresponding to the maximum group number as a target video frame group set, and determining the region position information corresponding to the target video frame group set as the region position information of the target region corresponding to the target request video.
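The group-count selection reduces to a majority vote over the per-group region locations; a minimal sketch with hypothetical label values (the text does not specify a tie-breaking rule, so this sketch keeps Python's first-maximum behaviour):

```python
def locate_target_region(group_locations):
    """group_locations maps each video frame group id to its region position
    label; the label shared by the most groups is returned as the region
    position information of the target region."""
    counts = {}
    for location in group_locations.values():
        counts[location] = counts.get(location, 0) + 1
    return max(counts, key=counts.get)
```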
With reference to fig. 3, the embodiment of the invention further provides a business processing system based on smart city data, which can be applied to the city monitoring server. Wherein, the business processing system based on the smart city data can comprise:
the video frame screening module is used for carrying out video frame screening processing on multi-frame user request video frames included in the to-be-processed request video after obtaining to-be-processed request video sent by target user terminal equipment of communication connection in response to target request operation carried out by target user corresponding to the target user terminal equipment, obtaining at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing to obtain a corresponding target request video based on the at least one frame of target user request video frame, wherein the multi-frame user request video frame is a multi-frame continuous video frame obtained based on image acquisition of a target area;
The video frame grouping module is used for grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video, wherein each video frame group comprises at least one frame of target user request video frame;
the area position determining module is used for determining area position information of a target area corresponding to the target request video based on a similarity relation between the at least one video frame group and a preset multi-frame area standard video frame, wherein the multi-frame area standard video frame is obtained by image acquisition on a plurality of area positions.
It will be appreciated that, as an alternative implementation, the video frame grouping module may be specifically configured to: calculating a pixel difference value between the target user request video frame and the adjacent target user request video frame of the next frame aiming at each frame of target user request video frame except the last frame in the at least one frame of target user request video frame; determining a relative magnitude relation between a pixel difference value between the target user request video frame and an adjacent subsequent frame target user request video frame and a pre-configured pixel difference value threshold for each frame of target user request video frame except for the last frame of the at least one frame target user request video frame included in the target request video, and determining a position between the target user request video frame and the adjacent subsequent frame target user request video frame as a video segmentation position when the pixel difference value is greater than or equal to the pixel difference value threshold; and dividing the at least one frame of target user request video frame included in the target request video based on each determined video dividing position to obtain at least one corresponding video frame group.
It will be appreciated that, as an alternative implementation, the area location determining module may be specifically configured to: for each video frame group in the at least one video frame group, calculating the similarity between the video frame group and each frame region standard video frame in the preconfigured multi-frame region standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determining the first similarity with the maximum value from the plurality of first similarities corresponding to the video frame group as a target first similarity corresponding to the video frame group; for each video frame group in the at least one video frame group, determining one frame of the region standard video frame corresponding to the corresponding target first similarity corresponding to the video frame group as a target region standard video frame corresponding to the video frame group; for each video frame group in the at least one video frame group, acquiring the region position information of the region position corresponding to the target region standard video frame corresponding to the video frame group, and acquiring the region position information corresponding to the video frame group; classifying the at least one video frame group based on whether the corresponding region position information is the same, and obtaining at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the region position information corresponding to any two video frame groups in the same video frame group set is the same, and the region position information corresponding to any two video frame groups in any two different video frame group sets is different; and determining the regional position information of the target region corresponding to the target request video based on 
the regional position information corresponding to each video frame group set in the at least one video frame group set.
In summary, according to the method and system for processing services based on smart city data provided by the invention, after the multi-frame user request video frames included in the request video to be processed are subjected to video frame screening processing to obtain the target request video corresponding to the request video to be processed, the target user request video frames included in the target request video can be firstly grouped to obtain at least one corresponding video frame group, and then the area position information of the target area corresponding to the target request video is determined based on the similarity relationship between the obtained at least one video frame group and the pre-configured multi-frame area standard video frames, so that the accuracy of the determined area position information can be improved to a certain extent through the configuration of a video frame grouping mechanism, and the problem that the processing accuracy of the position determination service is not high in the prior art is solved.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. The business processing method based on the smart city data is characterized by being applied to a city monitoring server, and comprises the following steps:
after obtaining a to-be-processed request video sent by target user terminal equipment of communication connection in response to target request operation performed by a target user corresponding to the target user terminal equipment, performing video frame screening processing on multi-frame user request video frames included in the to-be-processed request video to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing a corresponding target request video based on the at least one frame of target user request video frame, wherein the multi-frame user request video frame is a multi-frame continuous video frame obtained based on image acquisition of a target area;
grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video, wherein each video frame group comprises at least one frame of target user request video frame;
determining the region position information of a target region corresponding to the target request video based on the similarity relationship between the at least one video frame group and preconfigured multi-frame region standard video frames, wherein the multi-frame region standard video frames are obtained by performing image acquisition at a plurality of region positions respectively;
wherein the step of determining the region position information of the target region corresponding to the target request video based on the similarity relationship between the at least one video frame group and the preconfigured multi-frame region standard video frames comprises:
for each video frame group in the at least one video frame group, calculating the similarity between the video frame group and each frame region standard video frame in the preconfigured multi-frame region standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determining the first similarity with the maximum value from the plurality of first similarities corresponding to the video frame group as a target first similarity corresponding to the video frame group;
for each video frame group in the at least one video frame group, determining one frame of the region standard video frame corresponding to the target first similarity corresponding to the video frame group as a target region standard video frame corresponding to the video frame group;
for each video frame group in the at least one video frame group, acquiring the region position information of the region position corresponding to the target region standard video frame corresponding to the video frame group, as the region position information corresponding to the video frame group;
classifying the at least one video frame group based on whether the corresponding region position information is the same, to obtain at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the region position information corresponding to any two video frame groups in the same video frame group set is the same, and the region position information corresponding to any two video frame groups in any two different video frame group sets is different;
and determining the regional position information of the target region corresponding to the target request video based on the regional position information corresponding to each video frame group set in the at least one video frame group set.
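As an illustrative, non-limiting sketch of the matching-and-voting chain recited in claim 1 (the helper name `group_similarity` and the list-based data representation are assumptions, not part of the claim), the best-matching region standard frame can be selected per video frame group and the final region position decided by majority vote:

```python
from collections import Counter
from typing import Callable, Sequence

def determine_region_position(
    frame_groups: Sequence[Sequence[float]],
    standard_frames: Sequence[float],
    positions: Sequence[str],
    group_similarity: Callable[[Sequence[float], float], float],
) -> str:
    """For each video frame group, pick the region standard frame with the
    highest (target first) similarity, record that frame's region position,
    and return the position supported by the most groups."""
    votes: Counter = Counter()
    for group in frame_groups:
        sims = [group_similarity(group, std) for std in standard_frames]
        best = max(range(len(standard_frames)), key=sims.__getitem__)
        votes[positions[best]] += 1
    return votes.most_common(1)[0][0]
```

Frames are reduced to scalars here purely for brevity; any frame representation with a suitable similarity measure fits the same scheme.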
2. The smart city data-based business processing method of claim 1, wherein said step of grouping said at least one frame of target user request video frames included in said target request video to obtain at least one video frame group corresponding to said target request video comprises:
for each frame of target user request video frame, except the last frame, in the at least one frame of target user request video frame included in the target request video, calculating a pixel difference value between the target user request video frame and the adjacent next-frame target user request video frame;
for each frame of target user request video frame, except the last frame, in the at least one frame of target user request video frame included in the target request video, determining the relative magnitude relationship between the pixel difference value between the target user request video frame and the adjacent next-frame target user request video frame and a preconfigured pixel difference value threshold, and, when the pixel difference value is greater than or equal to the pixel difference value threshold, determining the position between the target user request video frame and the adjacent next-frame target user request video frame as a video segmentation position;
and dividing the at least one frame of target user request video frame included in the target request video based on each determined video dividing position to obtain at least one corresponding video frame group.
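A minimal sketch of the threshold-based segmentation recited in claim 2, assuming a caller-supplied `pixel_diff` function (frames shown as plain numbers for brevity):

```python
def split_into_groups(frames, pixel_diff, threshold):
    """Scan adjacent frame pairs; whenever the pixel difference value
    reaches the preconfigured threshold, the position between the two
    frames becomes a video segmentation position and a new group starts."""
    groups = [[frames[0]]]
    for prev, cur in zip(frames, frames[1:]):
        if pixel_diff(prev, cur) >= threshold:
            groups.append([cur])  # segmentation position between prev and cur
        else:
            groups[-1].append(cur)
    return groups
```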
3. The smart city data-based business processing method of claim 2, wherein the step of calculating, for each frame of target user request video frame except the last frame in the at least one frame of target user request video frame included in the target request video, a pixel difference value between the target user request video frame and the adjacent next-frame target user request video frame comprises:
for each frame of target user request video frame, except the last frame, in the at least one frame of target user request video frame, calculating the absolute pixel difference of each corresponding pixel point between the target user request video frame and the adjacent next-frame target user request video frame;
and for each frame of target user request video frame, except the last frame, in the at least one frame of target user request video frame, calculating the sum of the absolute pixel differences of the corresponding pixel points between the target user request video frame and the adjacent next-frame target user request video frame, and taking the sum as the pixel difference value between the target user request video frame and the adjacent next-frame target user request video frame.
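Claim 3 describes what is commonly called the sum of absolute differences (SAD). A sketch over 2-D intensity grids (the list-of-lists frame representation is an assumption):

```python
def pixel_difference(frame_a, frame_b):
    """Sum of the absolute pixel differences of corresponding pixel
    points between two same-sized frames (sum of absolute differences)."""
    return sum(
        abs(pa - pb)
        for row_a, row_b in zip(frame_a, frame_b)
        for pa, pb in zip(row_a, row_b)
    )
```

This value is then compared against the preconfigured pixel difference value threshold to locate segmentation positions.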
4. The smart city data-based business processing method of claim 1, wherein said step of grouping said at least one frame of target user request video frames included in said target request video to obtain at least one video frame group corresponding to said target request video comprises:
for every two frames of target user request video frames in the at least one frame of target user request video frame included in the target request video, calculating the similarity between the two frames of target user request video frames to obtain the video frame similarity between the two frames of target user request video frames;
and clustering the at least one frame of target user request video frame included in the target request video based on the video frame similarity between every two frames of target user request video frames, to obtain at least one video frame group corresponding to the target request video.
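Claim 4 leaves the clustering algorithm open; one simple possibility, chosen here purely for illustration (a greedy single-pass scheme with a hypothetical `frame_similarity` function, not the patent's prescribed method), is:

```python
def cluster_frames(frames, frame_similarity, threshold):
    """Each frame joins the first existing cluster whose representative
    (first member) is similar enough; otherwise it starts a new cluster."""
    clusters = []
    for frame in frames:
        for cluster in clusters:
            if frame_similarity(cluster[0], frame) >= threshold:
                cluster.append(frame)
                break
        else:
            clusters.append([frame])
    return clusters
```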
5. The smart city data-based business processing method of claim 1, wherein the step of calculating, for each of the at least one video frame group, a similarity between the video frame group and each of the preconfigured multi-frame region standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determining a first similarity having a maximum value among the plurality of first similarities corresponding to the video frame group as the target first similarity corresponding to the video frame group, comprises:
for each video frame group in the at least one video frame group and each frame of region standard video frame in the preconfigured multi-frame region standard video frames, calculating the video frame similarity between each frame of target user request video frame in the video frame group and the region standard video frame;
for each video frame group in the at least one video frame group, calculating the average value of the video frame similarities between the video frame group and each frame of region standard video frame respectively, to obtain the first similarity between the video frame group and each frame of region standard video frame, thereby obtaining a plurality of first similarities corresponding to the video frame group;
and for each video frame group in the at least one video frame group, determining the first similarity with the maximum value among the plurality of first similarities corresponding to the video frame group as the target first similarity corresponding to the video frame group.
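The averaging step of claim 5 can be sketched as follows (the frame representation and the `frame_similarity` helper are assumptions):

```python
def best_standard_frame(group, standard_frames, frame_similarity):
    """Compute the first similarity between a video frame group and each
    region standard frame as the mean of the frame-level similarities,
    then return the index of the standard frame with the maximum value."""
    first_sims = [
        sum(frame_similarity(f, std) for f in group) / len(group)
        for std in standard_frames
    ]
    return max(range(len(standard_frames)), key=first_sims.__getitem__)
```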
6. The smart city data-based business processing method of claim 1, wherein the step of determining the region position information of the target region corresponding to the target request video based on the region position information corresponding to each of the at least one video frame group set comprises:
counting the number of video frame groups included in each video frame group set in the at least one video frame group set, to obtain the group number corresponding to each video frame group set;
and determining the video frame group set corresponding to the maximum group number as a target video frame group set, and determining the region position information corresponding to the target video frame group set as the region position information of the target region corresponding to the target request video.
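Claim 6 reduces to a counting step; a sketch, assuming the classification result is held as a mapping from region position to its video frame groups:

```python
def pick_region_position(group_sets):
    """Return the region position whose video frame group set contains
    the largest number of groups (the target video frame group set)."""
    return max(group_sets, key=lambda pos: len(group_sets[pos]))
```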
7. A business processing system based on smart city data, applied to a city monitoring server, the system comprising:
a video frame screening module, configured to, after obtaining a to-be-processed request video sent by a communicatively connected target user terminal device in response to a target request operation performed by a target user corresponding to the target user terminal device, perform video frame screening processing on the multi-frame user request video frames included in the to-be-processed request video to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and construct a corresponding target request video based on the at least one frame of target user request video frame, wherein the multi-frame user request video frames are multi-frame continuous video frames obtained by performing image acquisition on a target area;
a video frame grouping module, configured to group the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video, wherein each video frame group comprises at least one frame of target user request video frame;
a region position determining module, configured to determine the region position information of a target region corresponding to the target request video based on the similarity relationship between the at least one video frame group and preconfigured multi-frame region standard video frames, wherein the multi-frame region standard video frames are obtained by performing image acquisition at a plurality of region positions respectively;
wherein the region position determining module is specifically configured to:
for each video frame group in the at least one video frame group, calculating the similarity between the video frame group and each frame region standard video frame in the preconfigured multi-frame region standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determining the first similarity with the maximum value from the plurality of first similarities corresponding to the video frame group as a target first similarity corresponding to the video frame group;
for each video frame group in the at least one video frame group, determining one frame of the region standard video frame corresponding to the target first similarity corresponding to the video frame group as a target region standard video frame corresponding to the video frame group;
for each video frame group in the at least one video frame group, acquiring the region position information of the region position corresponding to the target region standard video frame corresponding to the video frame group, as the region position information corresponding to the video frame group;
classifying the at least one video frame group based on whether the corresponding region position information is the same, to obtain at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the region position information corresponding to any two video frame groups in the same video frame group set is the same, and the region position information corresponding to any two video frame groups in any two different video frame group sets is different;
and determining the regional position information of the target region corresponding to the target request video based on the regional position information corresponding to each video frame group set in the at least one video frame group set.
8. The smart city data-based business processing system of claim 7, wherein the video frame grouping module is specifically configured to:
for each frame of target user request video frame, except the last frame, in the at least one frame of target user request video frame included in the target request video, calculating a pixel difference value between the target user request video frame and the adjacent next-frame target user request video frame;
for each frame of target user request video frame, except the last frame, in the at least one frame of target user request video frame included in the target request video, determining the relative magnitude relationship between the pixel difference value between the target user request video frame and the adjacent next-frame target user request video frame and a preconfigured pixel difference value threshold, and, when the pixel difference value is greater than or equal to the pixel difference value threshold, determining the position between the target user request video frame and the adjacent next-frame target user request video frame as a video segmentation position;
and dividing the at least one frame of target user request video frame included in the target request video based on each determined video dividing position to obtain at least one corresponding video frame group.
CN202111346837.8A 2021-11-15 2021-11-15 Business processing method and system based on smart city data Active CN113949881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111346837.8A CN113949881B (en) 2021-11-15 2021-11-15 Business processing method and system based on smart city data

Publications (2)

Publication Number Publication Date
CN113949881A (en) 2022-01-18
CN113949881B (en) 2023-10-03

Family

ID=79338191


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115424353B (en) * 2022-09-07 2023-05-05 杭银消费金融股份有限公司 Service user characteristic identification method and system based on AI model
CN117098295A (en) * 2023-09-08 2023-11-21 天津佳安节能科技有限公司 Urban road illumination control method and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011146930A (en) * 2010-01-14 2011-07-28 Sony Corp Information processing apparatus, information processing method, and program
WO2018130016A1 (en) * 2017-01-10 2018-07-19 哈尔滨工业大学深圳研究生院 Parking detection method and device based on monitoring video
CN108537157A (en) * 2018-03-30 2018-09-14 特斯联(北京)科技有限公司 A kind of video scene judgment method and device based on artificial intelligence classification realization
CN111479115A (en) * 2020-04-14 2020-07-31 腾讯科技(深圳)有限公司 Video image processing method and device and computer readable storage medium
CN111488487A (en) * 2020-03-20 2020-08-04 西南交通大学烟台新一代信息技术研究院 Advertisement detection method and detection system for all-media data
JP2020149641A (en) * 2019-03-15 2020-09-17 オムロン株式会社 Object tracking device and object tracking method
CN112954393A (en) * 2021-01-21 2021-06-11 北京博雅慧视智能技术研究院有限公司 Target tracking method, system, storage medium and terminal based on video coding
CN113259213A (en) * 2021-06-28 2021-08-13 广州市威士丹利智能科技有限公司 Intelligent home information monitoring method based on edge computing intelligent gateway
CN113628073A (en) * 2021-07-23 2021-11-09 续斐 Property management method and system for intelligent cell




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230906

Address after: No.1 Lanhai Road, hi tech Zone, Yantai City, Shandong Province

Applicant after: Shandong Ruihan Network Technology Co.,Ltd.

Address before: 650101 block B, building 9, Dingyi business center, No. 99, Keyuan Road, high tech Zone, Wuhua District, Kunming, Yunnan Province

Applicant before: Zhao Qianqian

GR01 Patent grant