CN113949881A - Service processing method and system based on smart city data - Google Patents


Info

Publication number
CN113949881A
CN113949881A (application CN202111346837.8A)
Authority
CN
China
Prior art keywords
video frame
video
target
frame
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111346837.8A
Other languages
Chinese (zh)
Other versions
CN113949881B (en)
Inventor
赵茜茜
许评
杨万广
张承彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Ruihan Network Technology Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111346837.8A priority Critical patent/CN113949881B/en
Publication of CN113949881A publication Critical patent/CN113949881A/en
Application granted granted Critical
Publication of CN113949881B publication Critical patent/CN113949881B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements

Abstract

The invention provides a service processing method and system based on smart city data, and relates to the technical field of data processing. In the invention, the multiple frames of user request video frames included in a to-be-processed request video are subjected to video frame screening processing to obtain at least one corresponding frame of target user request video frame, and a corresponding target request video is constructed based on the at least one frame of target user request video frame; the target request video is grouped to obtain at least one corresponding video frame group; and the area position information of a target area corresponding to the target request video is determined based on the similarity relationship between the at least one video frame group and pre-configured multi-frame area standard video frames, the multi-frame area standard video frames being obtained by performing image acquisition at a plurality of area positions respectively. On this basis, the problem of low processing precision of the position determination service in the prior art can be alleviated.

Description

Service processing method and system based on smart city data
Technical Field
The invention relates to the technical field of data processing, in particular to a service processing method and system based on smart city data.
Background
In the construction and application of smart cities, positioning is an important requirement, but in the prior art positioning accuracy may be low for areas with complex environments. To address this, the prior art provides technical solutions in which, for example, a user sends an image of the current location to a background server for identification so as to determine the current position. In such solutions, the background server generally matches a single selected frame of image against each standard image and takes the position corresponding to the matched standard image as the user's current position, so the problem of low processing accuracy of the position determination service easily arises.
Disclosure of Invention
In view of the above, the present invention provides a service processing method and system based on smart city data, so as to solve the problem of low processing accuracy of location determination service in the prior art.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
a business processing method based on smart city data is applied to a city monitoring server, and comprises the following steps:
after obtaining a to-be-processed request video sent by a communicatively connected target user terminal device in response to a target request operation performed by the target user corresponding to the target user terminal device, performing video frame screening processing on the multiple frames of user request video frames included in the to-be-processed request video to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing a corresponding target request video based on the at least one frame of target user request video frame, wherein the multiple frames of user request video frames are multiple continuous video frames obtained by performing image acquisition on a target area;
grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video, wherein each video frame group includes at least one frame of target user request video frame;
and determining the area position information of a target area corresponding to the target request video based on the similarity relation between the at least one video frame group and a preset multi-frame area standard video frame, wherein the multi-frame area standard video frame is obtained by respectively carrying out image acquisition on a plurality of area positions.
In some preferred embodiments, in the service processing method based on smart city data, the step of grouping the at least one frame of target user request video frames included in the target request video to obtain at least one video frame group corresponding to the target request video includes:
for each frame of target user request video frame, other than the last frame, in the at least one frame of target user request video frame included in the target request video, calculating a pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame;
for each frame of target user request video frame, other than the last frame, in the at least one frame of target user request video frame included in the target request video, determining the relative size relationship between the pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame and a pre-configured pixel difference value threshold, and when the pixel difference value is greater than or equal to the pixel difference value threshold, determining the position between the target user request video frame and the adjacent next frame of target user request video frame as a video segmentation position;
and segmenting the at least one frame of target user request video frame included in the target request video based on each determined video segmentation position to obtain at least one corresponding video frame group.
In some preferred embodiments, in the service processing method based on smart city data, the step of calculating a pixel difference value between the target user request video frame and an adjacent next target user request video frame for each frame of target user request video frames other than a last frame of the at least one frame of target user request video frames included in the target request video comprises:
for each frame of target user request video frame, other than the last frame, in the at least one frame of target user request video frame included in the target request video, calculating the pixel absolute differences of corresponding pixel points between the target user request video frame and the adjacent next frame of target user request video frame;
and for each such frame of target user request video frame, calculating the sum of the pixel absolute differences of corresponding pixel points between the target user request video frame and the adjacent next frame of target user request video frame, and taking the sum as the pixel difference value between the two frames.
In some preferred embodiments, in the service processing method based on smart city data, the step of grouping the at least one frame of target user request video frames included in the target request video to obtain at least one video frame group corresponding to the target request video includes:
for every two frames of target user request video frames in the at least one frame of target user request video frame included in the target request video, calculating the similarity between the two frames to obtain the video frame similarity between the two frames of target user request video frames;
based on the video frame similarity between every two frames of target user request video frames in the at least one frame of target user request video frames included in the target request video, clustering the at least one frame of target user request video frames included in the target request video to obtain at least one video frame group corresponding to the target request video.
In some preferred embodiments, in the service processing method based on smart city data, the step of determining the area location information of the target area corresponding to the target request video based on the similarity relationship between the at least one video frame group and the pre-configured multi-frame area standard video frame includes:
for each video frame group in the at least one video frame group, calculating the similarity between the video frame group and each frame region standard video frame in a multi-frame region standard video frame configured in advance to obtain a plurality of first similarities corresponding to the video frame group, and determining the first similarity with the maximum value in the plurality of first similarities corresponding to the video frame group as a target first similarity corresponding to the video frame group;
for each video frame group in the at least one video frame group, determining a frame of the regional standard video frame corresponding to the target first similarity corresponding to the video frame group as a target regional standard video frame corresponding to the video frame group;
for each video frame group in the at least one video frame group, acquiring the area position information of the area position corresponding to the target area standard video frame corresponding to the video frame group to obtain the area position information corresponding to the video frame group;
classifying the at least one video frame group based on whether the corresponding region position information is the same or not to obtain at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the region position information corresponding to any two video frame groups in the same video frame group set is the same, and the region position information corresponding to any two video frame groups in any two different video frame group sets is different;
and determining the area position information of the target area corresponding to the target request video based on the area position information corresponding to each video frame group set in the at least one video frame group set.
In some preferred embodiments, in the service processing method based on smart city data, the step of calculating, for each video frame group in the at least one video frame group, the similarity between the video frame group and each frame of regional standard video frame in the pre-configured multi-frame regional standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determining the first similarity with the maximum value among the plurality of first similarities corresponding to the video frame group as the target first similarity corresponding to the video frame group includes:
for each video frame group in the at least one video frame group and each frame of regional standard video frame in the pre-configured multi-frame regional standard video frames, calculating the video frame similarity between each frame of target user request video frame in the video frame group and the regional standard video frame;
for each video frame group in the at least one video frame group, respectively calculating the average value of the video frame similarities between the video frame group and each frame of regional standard video frame, and respectively taking each average value as the first similarity between the video frame group and the corresponding frame of regional standard video frame, so as to obtain a plurality of first similarities corresponding to the video frame group;
and for each video frame group in the at least one video frame group, determining the first similarity with the maximum value in the plurality of first similarities corresponding to the video frame group as the target first similarity corresponding to the video frame group.
In some preferred embodiments, in the service processing method based on smart city data, the step of determining, based on the area position information corresponding to each video frame group set in the at least one video frame group set, the area position information of the target area corresponding to the target request video includes:
for each video frame group set in the at least one video frame group set, counting the number of the video frame groups included in the video frame group set to obtain the number of groups corresponding to the video frame group set;
and determining the video frame group set whose corresponding group number has the maximum value as a target video frame group set, and determining the area position information corresponding to the target video frame group set as the area position information of the target area corresponding to the target request video.
An embodiment of the invention further provides a service processing system based on smart city data, applied to a city monitoring server, the system comprising:
the video frame screening module is used for, after a to-be-processed request video sent by a communicatively connected target user terminal device in response to a target request operation performed by the corresponding target user is obtained, performing video frame screening processing on the multiple frames of user request video frames included in the to-be-processed request video to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing a corresponding target request video based on the at least one frame of target user request video frame, wherein the multiple frames of user request video frames are multiple continuous video frames obtained by performing image acquisition on a target area;
a video frame grouping module, configured to group the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video, where each video frame group includes at least one frame of the target user request video frame;
and the area position determining module is used for determining the area position information of the target area corresponding to the target request video based on the similarity relation between the at least one video frame group and a preset multi-frame area standard video frame, wherein the multi-frame area standard video frame is obtained by respectively carrying out image acquisition on a plurality of area positions.
In some preferred embodiments, in the service processing system based on smart city data, the video frame grouping module is specifically configured to:
for each frame of target user request video frame, other than the last frame, in the at least one frame of target user request video frame included in the target request video, calculating a pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame;
for each frame of target user request video frame, other than the last frame, in the at least one frame of target user request video frame included in the target request video, determining the relative size relationship between the pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame and a pre-configured pixel difference value threshold, and when the pixel difference value is greater than or equal to the pixel difference value threshold, determining the position between the target user request video frame and the adjacent next frame of target user request video frame as a video segmentation position;
and segmenting the at least one frame of target user request video frame included in the target request video based on each determined video segmentation position to obtain at least one corresponding video frame group.
In some preferred embodiments, in the service processing system based on smart city data, the region location determining module is specifically configured to:
for each video frame group in the at least one video frame group, calculating the similarity between the video frame group and each frame region standard video frame in a multi-frame region standard video frame configured in advance to obtain a plurality of first similarities corresponding to the video frame group, and determining the first similarity with the maximum value in the plurality of first similarities corresponding to the video frame group as a target first similarity corresponding to the video frame group;
for each video frame group in the at least one video frame group, determining a frame of the regional standard video frame corresponding to the target first similarity corresponding to the video frame group as a target regional standard video frame corresponding to the video frame group;
for each video frame group in the at least one video frame group, acquiring the area position information of the area position corresponding to the target area standard video frame corresponding to the video frame group to obtain the area position information corresponding to the video frame group;
classifying the at least one video frame group based on whether the corresponding region position information is the same or not to obtain at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the region position information corresponding to any two video frame groups in the same video frame group set is the same, and the region position information corresponding to any two video frame groups in any two different video frame group sets is different;
and determining the area position information of the target area corresponding to the target request video based on the area position information corresponding to each video frame group set in the at least one video frame group set.
According to the service processing method and system based on smart city data, after the multiple frames of user request video frames included in the to-be-processed request video are subjected to video frame screening processing to obtain the target request video corresponding to the to-be-processed request video, the target user request video frames included in the target request video can be grouped to obtain at least one corresponding video frame group. The area position information of the target area corresponding to the target request video is then determined based on the similarity relationship between the obtained at least one video frame group and the pre-configured multi-frame area standard video frames. Because position determination relies on multiple video frame groups rather than a single matched image, the processing precision of the position determination service can be improved.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of a city monitoring server according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps included in a smart city data-based service processing method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating modules included in a smart city data-based business processing system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a city monitoring server. Wherein the city monitoring server may include a memory and a processor.
In detail, the memory and the processor are electrically connected directly or indirectly to realize data transmission or interaction. For example, they may be electrically connected to each other via one or more communication buses or signal lines. The memory can have stored therein at least one software function (computer program) which can be present in the form of software or firmware. The processor may be configured to execute the executable computer program stored in the memory, so as to implement the service processing method based on smart city data provided by the embodiment of the present invention.
It is understood that, as an alternative implementation, the Memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The Processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It is understood that, as an alternative implementation manner, the structure shown in fig. 1 is only an illustration, and the city monitoring server may further include more or fewer components than those shown in fig. 1, or have a different configuration from that shown in fig. 1, for example, may include a communication unit for information interaction with other devices (e.g., user terminal devices such as mobile phones).
With reference to fig. 2, an embodiment of the present invention further provides a service processing method based on smart city data, which can be applied to the city monitoring server. The method steps defined by the relevant flow of the smart city data-based service processing method can be implemented by the city monitoring server.
The specific process shown in FIG. 2 will be described in detail below.
Step S100, performing video frame screening processing on multiple frames of user request video frames included in a request video to be processed to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining a corresponding target request video based on the at least one frame of target user request video frame.
In this embodiment of the present invention, when the city monitoring server executes the step S100, after obtaining a to-be-processed request video sent by a target user terminal device in communication connection in response to a target request operation performed by a target user corresponding to the target user terminal device, the city monitoring server may perform video frame screening processing on a multi-frame user request video frame included in the to-be-processed request video, obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and construct and obtain a corresponding target request video based on the at least one frame of target user request video frame. The multi-frame user request video frame is a multi-frame continuous video frame obtained by carrying out image acquisition on a target area.
Step S200, grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video.
In this embodiment of the present invention, when executing the step S200, the city monitoring server may group the at least one frame of target user request video frames included in the target request video to obtain at least one video frame group corresponding to the target request video. Wherein each of the video frame groups includes at least one frame of the target user requested video frame.
Step S300, based on the similarity between the at least one video frame group and the preset multi-frame area standard video frame, determining the area position information of the corresponding target area.
In this embodiment of the present invention, when the city monitoring server executes the step S300, the city monitoring server may determine the area location information of the target area corresponding to the target request video based on the similarity relationship between the at least one video frame group and the pre-configured multi-frame area standard video frame. The multi-frame area standard video frame is obtained by respectively carrying out image acquisition on a plurality of area positions.
Based on the steps included in the service processing method, after the multi-frame user request video frames included in the request video to be processed are subjected to video frame screening processing to obtain the target request video corresponding to the request video to be processed, the target user request video frames included in the target request video may be grouped to obtain at least one corresponding video frame group, and then the region position information of the target region corresponding to the target request video is determined based on the similarity relationship between the obtained at least one video frame group and the pre-configured multi-frame region standard video frames.
It is understood that, as an alternative implementation manner, the step S100 may further include the following steps (such as step S110, step S120, and step S130) to obtain the target requested video.
Step S110, obtaining a to-be-processed request video sent by a target user terminal device in communication connection responding to a target request operation performed by a target user corresponding to the target user terminal device.
In this embodiment of the present invention, when executing step S110, the city monitoring server may obtain a to-be-processed request video sent by a target user terminal device in communication connection in response to a target request operation performed by a target user corresponding to the target user terminal device. The to-be-processed request video comprises a plurality of frames of user request video frames, and the plurality of frames of user request video frames are continuous video frames obtained based on image acquisition of a target area.
Step S120, performing video feature analysis processing on the request video to be processed to obtain target video frame screening feature information corresponding to the request video to be processed.
In this embodiment of the present invention, when the city monitoring server executes the step S120, the city monitoring server may perform video feature analysis processing on the to-be-processed request video to obtain target video frame screening feature information corresponding to the to-be-processed request video.
Step S130, performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target video frame screening feature information to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining a corresponding target request video based on the at least one frame of target user request video frame.
In this embodiment of the present invention, when the city monitoring server executes the step S130, the city monitoring server may perform video frame screening processing on the multiple frames of user request video frames included in the to-be-processed request video based on the target video frame screening feature information, obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and construct and obtain a corresponding target request video based on the at least one frame of target user request video frame.
Based on the above steps S110, S120, and S130, after the to-be-processed request video sent by the communicatively connected target user terminal device in response to the target request operation performed by the corresponding target user is obtained, video feature analysis processing may be performed on the to-be-processed request video to obtain corresponding target video frame screening feature information, and then video frame screening processing may be performed on the to-be-processed request video based on the target video frame screening feature information to obtain the corresponding at least one frame of target user request video frame.
It is to be understood that, as an alternative implementation manner, the step S110 may further include the following steps to obtain the pending request video:
firstly, determining whether to-be-processed request information sent by a communicatively connected target user terminal device has been received, and when the to-be-processed request information sent by the target user terminal device is received, performing verification processing on the to-be-processed request information to obtain a corresponding verification processing result;
secondly, if the verification processing result indicates that the verification failed, generating corresponding request rejection notification information and sending the request rejection notification information to the target user terminal device, wherein the target user terminal device is used for displaying the request rejection notification information to the target user corresponding to the target user terminal device, so that the target user abandons the target request operation;
then, if the verification processing result indicates that the verification succeeded, generating corresponding request success notification information and sending the request success notification information to the target user terminal device, wherein the target user terminal device is used for displaying the request success notification information to the target user corresponding to the target user terminal device, so that the target user performs the target request operation;
and finally, obtaining the to-be-processed request video sent by the target user terminal device in response to the target request operation performed by the target user, wherein the to-be-processed request video is obtained by the target user terminal device performing image acquisition on the target area in response to the target request operation, or the to-be-processed request video is a stored video with image information of the target area selected by the target user terminal device in response to the target request operation.
It can be understood that, as an alternative implementation manner, the step of determining whether to-be-processed request information sent by a communicatively connected target user terminal device has been received and, when the to-be-processed request information is received, performing verification processing on it to obtain a corresponding verification processing result may include the following steps:
firstly, determining whether to-be-processed request information sent by a communicatively connected target user terminal device has been received, and when the to-be-processed request information sent by the target user terminal device is received, parsing the to-be-processed request information to obtain the target identity information carried in it, wherein the target identity information is used for representing the identity of the target user terminal device or the identity of the target user corresponding to the target user terminal device;
secondly, searching in a target identity database which is constructed in advance to determine whether the target identity database stores the target identity information;
then, if the target identity information is stored in the target identity database, determining that the verification of the to-be-processed request information succeeded and generating a verification processing result indicating success; and if the target identity information is not stored in the target identity database, determining that the verification of the to-be-processed request information failed and generating a verification processing result indicating failure.
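By way of non-limiting example, the verification flow above can be sketched as follows in Python, assuming the target identity database can be modeled as an in-memory set of identity strings; all identifiers and values here are illustrative and not part of the disclosure.

```python
# Minimal sketch of the request verification step; the target identity database
# is modeled as an in-memory set of identity strings (an assumption).
IDENTITY_DB = {"device-001", "device-002", "user-42"}

def verify_request(request_info: dict) -> dict:
    """Parse the to-be-processed request information and check its identity."""
    identity = request_info.get("identity")  # device identity or user identity
    if identity in IDENTITY_DB:
        # verification succeeded: a request success notification is generated
        return {"verified": True, "notice": "request success notification"}
    # verification failed: a request rejection notification is generated
    return {"verified": False, "notice": "request rejection notification"}

print(verify_request({"identity": "device-001"}))   # verified: True
print(verify_request({"identity": "device-999"}))   # verified: False
```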
It is to be understood that, as an alternative implementation manner, the step S120 may further include the following steps to obtain the target video frame screening feature information:
firstly, determining the number of the multi-frame user request video frames included in the request video to be processed to obtain the number of target video frames corresponding to the request video to be processed;
and secondly, obtaining target video frame screening characteristic information corresponding to the to-be-processed request video based on the number of the target video frames.
It is to be understood that, as an alternative implementation manner, the step of obtaining the target video frame screening feature information corresponding to the to-be-processed request video based on the number of target video frames may include the following steps:
firstly, for each frame of user request video frame in the multiple frames of user request video frames included in the request video to be processed, performing first object identification processing on the user request video frame to obtain the number of first objects in that frame, and performing object statistics on the numbers of first objects over the multiple frames of user request video frames to obtain the first object number corresponding to the request video to be processed, wherein the first objects are static objects (such as various buildings, plants and the like);
secondly, for each frame of user request video frame in the multiple frames of user request video frames included in the request video to be processed, performing second object identification processing on the user request video frame to obtain the number of second objects in that frame, and performing object statistics on the numbers of second objects over the multiple frames of user request video frames to obtain the second object number corresponding to the request video to be processed, wherein the second objects are dynamic objects (such as people, vehicles and the like);
then, for each frame of user request video frame, counting the sum of the number of first objects and the number of second objects in that frame to obtain the object statistical number corresponding to that frame, and performing mean value calculation on the object statistical numbers over the multiple frames of user request video frames to obtain the object number mean value corresponding to the request video to be processed;
and finally, performing multi-dimensional feature fusion calculation processing based on the number of the target video frames, the number of the first objects, the number of the second objects and the mean value of the number of the objects to obtain the screening feature information of the target video frames corresponding to the to-be-processed request video.
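By way of non-limiting example, the statistics above can be sketched as follows, assuming the first and second object identification results are already available as per-frame label lists (the disclosure does not fix a recognition model); the label sets are illustrative.

```python
# Per-frame object statistics; each frame is represented here only by the
# labels of its detected objects (an assumption for illustration).
STATIC_LABELS = {"building", "plant"}    # "first objects": static objects
DYNAMIC_LABELS = {"person", "vehicle"}   # "second objects": dynamic objects

def video_statistics(frames_labels):
    """frames_labels: list of label lists, one per user request video frame."""
    first_counts = [sum(l in STATIC_LABELS for l in labels) for labels in frames_labels]
    second_counts = [sum(l in DYNAMIC_LABELS for l in labels) for labels in frames_labels]
    num_frames = len(frames_labels)                   # target video frame count
    first_total = sum(first_counts)                   # first object number
    second_total = sum(second_counts)                 # second object number
    per_frame_totals = [f + s for f, s in zip(first_counts, second_counts)]
    object_mean = sum(per_frame_totals) / num_frames  # object number mean value
    return num_frames, first_total, second_total, object_mean

print(video_statistics([["building", "person"], ["plant", "vehicle", "person"]]))
# (2, 2, 3, 2.5)
```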
It is to be understood that, as an alternative implementation manner, the step of performing multidimensional feature fusion calculation processing based on the number of target video frames, the number of first objects, the number of second objects, and the average value of the number of objects to obtain the target video frame screening feature information corresponding to the to-be-processed request video may include the following steps:
firstly, determining a first video frame screening characteristic coefficient corresponding to the request video to be processed based on the number of the target video frames, wherein the number of the target video frames and the first video frame screening characteristic coefficient have a positive correlation mutual corresponding relation;
secondly, determining a second video frame screening feature coefficient corresponding to the to-be-processed request video based on the first object number, determining a third video frame screening feature coefficient corresponding to the to-be-processed request video based on the second object number, and determining a fourth video frame screening feature coefficient corresponding to the to-be-processed request video based on the object number average value, wherein the first object number and the second video frame screening feature coefficient have a negative correlation mutual correspondence, the second object number and the third video frame screening feature coefficient have a negative correlation mutual correspondence, and the object number average value and the fourth video frame screening feature coefficient have a negative correlation mutual correspondence;
then, performing weighted summation calculation on the first video frame screening feature coefficient, the second video frame screening feature coefficient, the third video frame screening feature coefficient and the fourth video frame screening feature coefficient to obtain the target video frame screening feature information corresponding to the to-be-processed request video, wherein the weighting coefficient corresponding to the second video frame screening feature coefficient is greater than the weighting coefficient corresponding to the fourth video frame screening feature coefficient, the weighting coefficient corresponding to the fourth video frame screening feature coefficient is greater than the weighting coefficient corresponding to the third video frame screening feature coefficient, the weighting coefficient corresponding to the third video frame screening feature coefficient is greater than the weighting coefficient corresponding to the first video frame screening feature coefficient, and the sum of the four weighting coefficients is 1.
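By way of non-limiting example, the fusion can be sketched as follows. The disclosure fixes only the monotonicity of the four coefficients and the weight ordering (second > fourth > third > first, summing to 1); the concrete coefficient mappings and weight values below are assumptions.

```python
import math

# Assumed weights satisfying w2 > w4 > w3 > w1 and w1 + w2 + w3 + w4 = 1.
W1, W2, W3, W4 = 0.1, 0.4, 0.2, 0.3

def screening_feature(num_frames, first_total, second_total, object_mean):
    c1 = 1.0 - math.exp(-num_frames / 100.0)  # increases with the frame count
    c2 = 1.0 / (1.0 + first_total)            # decreases as first objects grow
    c3 = 1.0 / (1.0 + second_total)           # decreases as second objects grow
    c4 = 1.0 / (1.0 + object_mean)            # decreases as the object mean grows
    return W1 * c1 + W2 * c2 + W3 * c3 + W4 * c4

print(screening_feature(200, 2, 3, 2.5))
```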
It is understood that, as an alternative implementation manner, the step S130 may further include the following steps to obtain the target requested video:
firstly, determining a target screening proportionality coefficient positively correlated with the feature value corresponding to the target video frame screening feature information, wherein the target screening proportionality coefficient is used for representing the maximum proportion of video frames that can be screened out during the video frame screening processing;
then, based on the target screening proportionality coefficient, performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining a corresponding target request video based on the at least one frame of target user request video frame.
It can be understood that, as an alternative implementation manner, the step of performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target screening scaling factor to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining a corresponding target request video based on the at least one frame of target user request video frame may include the following steps:
firstly, acquiring a preset target video frame set, wherein the target video frame set comprises at least one reference video frame with a static object, each reference video frame is provided with one static object, and when the target video frame set comprises a plurality of reference video frames, the static objects of any two reference video frames are different;
secondly, for each user request video frame in the multiple frames of user request video frames included in the request video to be processed, determining a distribution density value of a static object in the user request video frame, and determining a target density interval corresponding to the distribution density value in a plurality of pre-configured density intervals, wherein the distribution density value is used for representing the distribution density of the static object in the corresponding user request video frame (for example, dividing the number of static objects by the area of a scene);
then, for each frame of user request video frames in the multiple frames of user request video frames included in the request video to be processed, if the target density interval corresponding to the user request video frame belongs to a first density interval, determining a first video frame similarity threshold value configured in advance for the first density interval as a target video frame similarity threshold value corresponding to the user request video frame;
then, for each frame of user request video frames in the multiple frames of user request video frames included in the request video to be processed, if the target density interval corresponding to the user request video frame belongs to a second density interval, determining a second video frame similarity threshold configured in advance for the second density interval as a target video frame similarity threshold corresponding to the user request video frame, where a density value corresponding to the second density interval is greater than a density value corresponding to the first density interval, and the second video frame similarity threshold is less than the first video frame similarity threshold;
further, for each frame of user request video frame in the multiple frames of user request video frames included in the request video to be processed, calculating the video frame similarity between the user request video frame and each frame of reference video frame in the target video frame set, determining the maximum video frame similarity corresponding to the user request video frame, and determining the relative size relationship between the maximum video frame similarity corresponding to the user request video frame and the corresponding target video frame similarity threshold;
further, for each frame of user request video frame in the multiple frames of user request video frames included in the request video to be processed, if the maximum video frame similarity corresponding to the user request video frame is greater than or equal to the corresponding target video frame similarity threshold, determining the user request video frame as a target user request video frame, and if the maximum video frame similarity corresponding to the user request video frame is less than the corresponding target video frame similarity threshold, determining the user request video frame as a user request video frame to be screened;
and finally, performing duplicate removal screening on the determined user request video frames to be screened based on the target screening proportion coefficient to obtain at least one frame of target user request video frame, and constructing and obtaining a corresponding target request video based on the obtained target user request video frame.
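By way of non-limiting example, the screening flow above can be sketched as follows, with the density intervals reduced to a single boundary and a simple correlation-based frame similarity; the thresholds, the boundary, and the similarity measure are all assumptions.

```python
import numpy as np

FIRST_THRESHOLD = 0.8    # similarity threshold for the first density interval
SECOND_THRESHOLD = 0.6   # lower threshold for the denser second interval
DENSITY_BOUNDARY = 0.5   # assumed boundary between the two density intervals

def frame_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Illustrative video frame similarity: normalized correlation in [-1, 1]."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def screen_frames(frames, densities, reference_frames, keep_ratio):
    """densities: per-frame distribution density of static objects;
    keep_ratio plays the role of the target screening proportionality coefficient."""
    kept, to_screen = [], []
    for frame, density in zip(frames, densities):
        threshold = FIRST_THRESHOLD if density < DENSITY_BOUNDARY else SECOND_THRESHOLD
        best = max(frame_similarity(frame, ref) for ref in reference_frames)
        (kept if best >= threshold else to_screen).append(frame)
    kept += to_screen[: int(len(to_screen) * keep_ratio)]  # proportional de-duplication
    return kept                                            # target user request video frames

rng = np.random.default_rng(0)
frames = [rng.random((4, 4)) for _ in range(5)]
print(len(screen_frames(frames, [0.3, 0.7, 0.2, 0.9, 0.4], frames[:1], 0.5)))
```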
It is to be understood that, as an alternative implementation, the step S200 may further include the following steps to obtain the at least one video frame group:
firstly, for each frame of target user request video frame, other than the last frame, in the at least one frame of target user request video frame included in the target request video, calculating the pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame;
secondly, for each frame of target user request video frame, other than the last frame, in the at least one frame of target user request video frame included in the target request video, determining the relative size relationship between the pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame and a pre-configured pixel difference value threshold, and when the pixel difference value is greater than or equal to the pixel difference value threshold, determining the position between the target user request video frame and the adjacent next frame of target user request video frame as a video segmentation position;
then, the at least one frame of target user requested video frame included in the target requested video is segmented based on each determined video segmentation position to obtain at least one corresponding video frame group (e.g., two corresponding video frame groups may be obtained based on one video segmentation position).
It can be understood that, as an alternative implementation manner, the step of calculating the pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame may include the following steps:
firstly, for each frame of target user request video frame, other than the last frame, in the at least one frame of target user request video frame included in the target request video, calculating the pixel absolute differences (the absolute values of the differences between the pixel values of corresponding pixel points) of corresponding pixel points between the target user request video frame and the adjacent next frame of target user request video frame;
secondly, calculating the sum of the pixel absolute differences of corresponding pixel points between the target user request video frame and the adjacent next frame of target user request video frame, and taking the sum as the pixel difference value between the two frames.
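By way of non-limiting example, the pixel difference value (a sum of absolute differences) and the resulting segmentation into video frame groups can be sketched as follows; the threshold value is an assumption.

```python
import numpy as np

PIXEL_DIFF_THRESHOLD = 1000.0  # assumed pre-configured pixel difference value threshold

def pixel_difference(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute differences of corresponding pixel points."""
    return float(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def split_into_groups(frames):
    """Cut the frame sequence at every video segmentation position."""
    if not frames:
        return []
    groups, current = [], [frames[0]]
    for prev, nxt in zip(frames, frames[1:]):
        if pixel_difference(prev, nxt) >= PIXEL_DIFF_THRESHOLD:
            groups.append(current)   # a video segmentation position was found
            current = []
        current.append(nxt)
    groups.append(current)
    return groups

frames = [np.zeros((2, 2), np.uint8), np.zeros((2, 2), np.uint8),
          np.full((2, 2), 255, np.uint8)]
print([len(g) for g in split_into_groups(frames)])  # [2, 1]
```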
It is to be understood that, as an alternative implementation, the step S200 may further include the following steps to obtain the at least one video frame group:
firstly, calculating the similarity between every two frames of target user request video frames in the at least one frame of target user request video frames included in the target request video to obtain the video frame similarity between the two frames of target user request video frames;
secondly, based on the video frame similarity between every two frames of target user request video frames in the at least one frame of target user request video frames included in the target request video, clustering (which may be any existing clustering algorithm) is performed on the at least one frame of target user request video frames included in the target request video, so as to obtain at least one video frame group corresponding to the target request video.
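By way of non-limiting example, the similarity-based grouping can be sketched with a simple single-link clustering pass (the disclosure allows any existing clustering algorithm); the similarity measure and threshold are assumptions.

```python
import numpy as np

SIM_THRESHOLD = 0.9  # assumed video frame similarity threshold for joining a group

def pair_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Illustrative similarity in (0, 1]: higher when the frames differ less."""
    return 1.0 / (1.0 + np.abs(a.astype(float) - b.astype(float)).mean())

def cluster_frames(frames):
    groups = []
    for frame in frames:
        for group in groups:         # single-link: join the first close group
            if any(pair_similarity(frame, g) >= SIM_THRESHOLD for g in group):
                group.append(frame)
                break
        else:
            groups.append([frame])   # start a new video frame group
    return groups

a, b, c = np.zeros((2, 2)), np.zeros((2, 2)), np.full((2, 2), 200.0)
print([len(g) for g in cluster_frames([a, b, c])])  # [2, 1]
```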
It is to be understood that, as an alternative implementation manner, the step S300 may further include the following steps to determine the area location information of the target area:
firstly, for each video frame group in at least one video frame group, calculating the similarity between the video frame group and each frame regional standard video frame in a multi-frame regional standard video frame configured in advance to obtain a plurality of first similarities corresponding to the video frame group, and determining the first similarity with the maximum value in the plurality of first similarities corresponding to the video frame group as a target first similarity corresponding to the video frame group;
secondly, determining a frame of the regional standard video frame corresponding to the target first similarity corresponding to the video frame group as a target regional standard video frame corresponding to the video frame group for each of the at least one video frame group;
then, for each video frame group in the at least one video frame group, acquiring region position information (which can be obtained by pre-configuration) of a region position corresponding to the target region standard video frame corresponding to the video frame group to obtain region position information corresponding to the video frame group;
then, classifying the at least one video frame group based on whether the corresponding region position information is the same or not to obtain at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the region position information corresponding to any two video frame groups in the same video frame group set is the same, and the region position information corresponding to any two video frame groups in any two different video frame group sets is different;
and finally, determining the area position information of the target area corresponding to the target request video based on the area position information corresponding to each video frame group set in the at least one video frame group set.
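By way of non-limiting example, the matching and classification above can be sketched as follows: each group's first similarity to a regional standard video frame is the mean of the per-frame similarities, each group adopts the region position of its best-matching standard frame, and groups are then classified by position. The similarity measure is an illustrative assumption.

```python
import numpy as np
from collections import defaultdict

def frame_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Illustrative per-frame similarity: higher when the frames differ less."""
    return 1.0 / (1.0 + np.abs(a.astype(float) - b.astype(float)).mean())

def group_region_positions(groups, standard_frames, positions):
    """positions[i] is the pre-configured region position of standard_frames[i]."""
    sets = defaultdict(list)                  # region position -> video frame group set
    for group in groups:
        firsts = [float(np.mean([frame_sim(f, std) for f in group]))
                  for std in standard_frames]  # one first similarity per standard frame
        best = int(np.argmax(firsts))          # index of the target first similarity
        sets[positions[best]].append(group)    # classify by region position information
    return sets

g1, g2 = [np.zeros((2, 2))], [np.full((2, 2), 9.0)]
stds = [np.zeros((2, 2)), np.full((2, 2), 9.0)]
sets = group_region_positions([g1, g2], stds, ["area A", "area B"])
print({k: len(v) for k, v in sets.items()})    # {'area A': 1, 'area B': 1}
```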
It is to be understood that, as an alternative implementation manner, the step of calculating, for each of the at least one video frame group, a similarity between the video frame group and each of the preconfigured multi-frame area standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determining, as a target first similarity corresponding to the video frame group, a first similarity having a maximum value among the plurality of first similarities corresponding to the video frame group may include the following steps:
firstly, for each video frame group in the at least one video frame group and each frame of regional standard video frame in the pre-configured multi-frame regional standard video frames, calculating the video frame similarity between each frame of target user request video frame in the video frame group and the regional standard video frame;
secondly, for each video frame group in the at least one video frame group, respectively calculating the average value of the video frame similarities between the video frame group and each frame of regional standard video frame, and respectively taking each average value as the first similarity between the video frame group and the corresponding frame of regional standard video frame, so as to obtain a plurality of first similarities corresponding to the video frame group;
then, for each of the at least one video frame group, the first similarity with the maximum value is determined from the plurality of first similarities corresponding to the video frame group, and the first similarity is used as the target first similarity corresponding to the video frame group.
It is to be understood that, as an alternative implementation manner, the step of determining the area position information of the target area corresponding to the target request video based on the area position information corresponding to each of the at least one video frame group set may include the following steps:
firstly, for each video frame group set in the at least one video frame group set, counting the number of video frame groups included in the video frame group set to obtain the group number corresponding to the video frame group set;
secondly, determining the video frame group set whose corresponding group number has the maximum value as a target video frame group set, and determining the area position information corresponding to the target video frame group set as the area position information of the target area corresponding to the target request video.
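By way of non-limiting example, this final determination reduces to a majority vote over the video frame group sets produced by the matching sketch above (region position mapped to its list of groups):

```python
def target_area_position(sets: dict) -> str:
    """Return the region position of the video frame group set with the most groups."""
    return max(sets.items(), key=lambda kv: len(kv[1]))[0]

print(target_area_position({"area A": [["g1"], ["g2"]], "area B": [["g3"]]}))  # area A
```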
With reference to fig. 3, an embodiment of the present invention further provides a service processing system based on smart city data, which can be applied to the city monitoring server. The smart city data-based service processing system may include:
the video frame screening module is used for, after a to-be-processed request video sent by a communicatively connected target user terminal device in response to a target request operation performed by the corresponding target user is obtained, performing video frame screening processing on the multiple frames of user request video frames included in the to-be-processed request video to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing a corresponding target request video based on the at least one frame of target user request video frame, wherein the multiple frames of user request video frames are multiple continuous video frames obtained by performing image acquisition on a target area;
a video frame grouping module, configured to group the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video, where each video frame group includes at least one frame of the target user request video frame;
and the area position determining module is used for determining the area position information of the target area corresponding to the target request video based on the similarity relation between the at least one video frame group and a preset multi-frame area standard video frame, wherein the multi-frame area standard video frame is obtained by respectively carrying out image acquisition on a plurality of area positions.
It is to be understood that, as an alternative implementation, the video frame grouping module may be specifically configured to: for each frame of target user request video frame except the last frame in the at least one frame of target user request video frame included in the target request video, calculate a pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame; for each such target user request video frame, determine the relative size relationship between that pixel difference value and a preconfigured pixel difference value threshold, and when the pixel difference value is greater than or equal to the pixel difference value threshold, determine the position between the target user request video frame and the adjacent next frame of target user request video frame as a video segmentation position; and segment the at least one frame of target user request video frame included in the target request video based on each determined video segmentation position to obtain the corresponding at least one video frame group.
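A minimal sketch of this grouping logic, assuming the frames are equally sized numpy arrays and using the sum-of-absolute-differences pixel difference value described in claim 3 (the function and argument names are illustrative):

```python
import numpy as np

def split_into_groups(frames, diff_threshold):
    # frames: non-empty, ordered list of equally sized numpy arrays (the
    # target user request video frames). The pixel difference value is the
    # sum of absolute differences of corresponding pixel points (claim 3);
    # a video segmentation position is placed between two adjacent frames
    # whenever it reaches the preconfigured threshold.
    groups, current = [], [frames[0]]
    for prev, nxt in zip(frames, frames[1:]):
        diff = np.abs(prev.astype(np.int64) - nxt.astype(np.int64)).sum()
        if diff >= diff_threshold:
            groups.append(current)  # segmentation position between prev and nxt
            current = [nxt]
        else:
            current.append(nxt)
    groups.append(current)
    return groups
```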
It is to be understood that, as an alternative implementation, the region location determining module may be specifically configured to: for each video frame group in the at least one video frame group, calculate the similarity between the video frame group and each frame of regional standard video frame in the preconfigured multi-frame regional standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determine the first similarity with the maximum value among them as the target first similarity corresponding to the video frame group; for each video frame group, determine the frame of regional standard video frame corresponding to the target first similarity as the target regional standard video frame corresponding to the video frame group; for each video frame group, acquire the area position information of the area position corresponding to the target regional standard video frame to obtain the area position information corresponding to the video frame group; classify the at least one video frame group according to whether the corresponding area position information is the same, so as to obtain at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set comprises at least one video frame group, the area position information corresponding to any two video frame groups in the same video frame group set is the same, and the area position information corresponding to any two video frame groups in two different video frame group sets is different; and determine the area position information of the target area corresponding to the target request video based on the area position information corresponding to each video frame group set in the at least one video frame group set.
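Taken together, the module maps each video frame group to the area position of its best-matching regional standard video frame and then classifies the groups by that position. A minimal end-to-end sketch, reusing target_first_similarity() from the earlier sketch and assuming frame_locations[i] holds the area position information recorded when regional standard frame i was captured (an illustrative data layout):

```python
from collections import defaultdict

def region_location_for_video(groups, standard_frames, frame_locations):
    # frame_locations[i] holds the area position information recorded when
    # regional standard frame i was captured (an assumed data layout).
    # Reuses target_first_similarity() from the earlier sketch.
    sets_by_location = defaultdict(list)  # video frame group sets, keyed by location
    for group in groups:
        best_idx, _sim = target_first_similarity(group, standard_frames)
        sets_by_location[frame_locations[best_idx]].append(group)
    # The video frame group set containing the most groups determines the
    # area position information of the target area.
    return max(sets_by_location.items(), key=lambda kv: len(kv[1]))[0]
```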
In summary, according to the service processing method and system based on smart city data provided by the present invention, after the multiple frames of user request video frames included in the to-be-processed request video are subjected to video frame screening processing to obtain the corresponding target request video, the target user request video frames included in the target request video are grouped to obtain at least one corresponding video frame group, and the area position information of the target area corresponding to the target request video is then determined based on the similarity relation between the at least one video frame group and the preconfigured multi-frame regional standard video frames.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A service processing method based on smart city data is applied to a city monitoring server and comprises the following steps:
after a to-be-processed request video sent by a target user terminal device in communication connection responding to a target request operation performed by a target user corresponding to the target user terminal device is obtained, performing video frame screening processing on a plurality of frames of user request video frames included in the to-be-processed request video to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame, wherein the plurality of frames of user request video frames are a plurality of continuous video frames obtained based on image acquisition of a target area;
grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video, wherein each video frame group includes at least one frame of target user request video frame;
and determining the area position information of a target area corresponding to the target request video based on the similarity relation between the at least one video frame group and a preset multi-frame area standard video frame, wherein the multi-frame area standard video frame is obtained by respectively carrying out image acquisition on a plurality of area positions.
2. The smart city data-based service processing method of claim 1, wherein the step of grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video comprises:
for each frame of target user request video frame except the last frame in the at least one frame of target user request video frame included in the target request video, calculating a pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame;
for each frame of target user request video frame except the last frame in the at least one frame of target user request video frame included in the target request video, determining the relative size relationship between the pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame and a preconfigured pixel difference value threshold, and when the pixel difference value is greater than or equal to the pixel difference value threshold, determining the position between the target user request video frame and the adjacent next frame of target user request video frame as a video segmentation position;
and segmenting the at least one frame of target user request video frame included in the target request video based on each determined video segmentation position to obtain at least one corresponding video frame group.
3. The smart city data-based service processing method of claim 2, wherein the step of calculating the pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame comprises:
for each frame of target user request video frame except the last frame in the at least one frame of target user request video frame included in the target request video, calculating the pixel absolute difference of each pair of corresponding pixel points between the target user request video frame and the adjacent next frame of target user request video frame;
and for each frame of target user request video frame except the last frame in the at least one frame of target user request video frame included in the target request video, calculating the sum of the pixel absolute differences of the corresponding pixel points between the target user request video frame and the adjacent next frame of target user request video frame, and taking the sum as the pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame.
4. The smart city data-based service processing method of claim 1, wherein the step of grouping the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video comprises:
for every two frames of target user request video frames in the at least one frame of target user request video frame included in the target request video, calculating the similarity between the two frames to obtain the video frame similarity between the two frames of target user request video frames;
and clustering the at least one frame of target user request video frame included in the target request video based on the video frame similarity between every two frames of target user request video frames, so as to obtain at least one video frame group corresponding to the target request video (a sketch of such clustering is given after the claims).
5. The smart city data-based service processing method according to any one of claims 1 to 4, wherein the step of determining the area position information of the target area corresponding to the target request video based on the similarity relation between the at least one video frame group and the preconfigured multi-frame regional standard video frames comprises:
for each video frame group in the at least one video frame group, calculating the similarity between the video frame group and each frame of regional standard video frame in the preconfigured multi-frame regional standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determining the first similarity with the maximum value among the plurality of first similarities corresponding to the video frame group as the target first similarity corresponding to the video frame group;
for each video frame group in the at least one video frame group, determining the frame of regional standard video frame corresponding to the target first similarity corresponding to the video frame group as the target regional standard video frame corresponding to the video frame group;
for each video frame group in the at least one video frame group, acquiring the area position information of the area position corresponding to the target regional standard video frame corresponding to the video frame group, to obtain the area position information corresponding to the video frame group;
classifying the at least one video frame group based on whether the corresponding area position information is the same, to obtain at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the area position information corresponding to any two video frame groups in the same video frame group set is the same, and the area position information corresponding to any two video frame groups in any two different video frame group sets is different;
and determining the area position information of the target area corresponding to the target request video based on the area position information corresponding to each video frame group set in the at least one video frame group set.
6. The smart city data-based service processing method as claimed in claim 5, wherein the step of calculating, for each video frame group in the at least one video frame group, the similarity between the video frame group and each frame of regional standard video frame in the preconfigured multi-frame regional standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determining the first similarity with the maximum value among the plurality of first similarities corresponding to the video frame group as the target first similarity corresponding to the video frame group comprises:
for each video frame group in the at least one video frame group and each frame of regional standard video frame in the preconfigured multi-frame regional standard video frames, calculating the video frame similarity between each frame of target user request video frame in the video frame group and the regional standard video frame;
for each video frame group in the at least one video frame group, respectively calculating the average value of the video frame similarities between the video frame group and each frame of regional standard video frame, and respectively taking each average value as the first similarity between the video frame group and the corresponding regional standard video frame, so as to obtain the plurality of first similarities corresponding to the video frame group;
and for each video frame group in the at least one video frame group, determining the first similarity with the maximum value among the plurality of first similarities corresponding to the video frame group as the target first similarity corresponding to the video frame group.
7. The smart city data-based service processing method of claim 5, wherein the step of determining the area position information of the target area corresponding to the target request video based on the area position information corresponding to each video frame group set in the at least one video frame group set comprises:
for each video frame group set in the at least one video frame group set, counting the number of video frame groups included in the video frame group set to obtain the group number corresponding to the video frame group set;
and determining the video frame group set corresponding to the group number with the maximum value as the target video frame group set, and determining the area position information corresponding to the target video frame group set as the area position information of the target area corresponding to the target request video.
8. A service processing system based on smart city data is applied to a city monitoring server and comprises:
the video frame screening module is used for, after a to-be-processed request video is acquired, performing video frame screening processing on the multiple frames of user request video frames included in the to-be-processed request video to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing the corresponding target request video based on the at least one frame of target user request video frame, wherein the to-be-processed request video is sent by a communicatively connected target user terminal device in response to a target request operation performed by the target user corresponding to the target user terminal device, and the multiple frames of user request video frames are multiple frames of continuous video frames obtained based on image acquisition of a target area;
a video frame grouping module, configured to group the at least one frame of target user request video frame included in the target request video to obtain at least one video frame group corresponding to the target request video, where each video frame group includes at least one frame of the target user request video frame;
and the area position determining module is used for determining the area position information of the target area corresponding to the target request video based on the similarity relation between the at least one video frame group and a preset multi-frame area standard video frame, wherein the multi-frame area standard video frame is obtained by respectively carrying out image acquisition on a plurality of area positions.
9. The smart city data-based service processing system of claim 8, wherein the video frame grouping module is specifically configured to:
for each frame of target user request video frame except the last frame in the at least one frame of target user request video frame included in the target request video, calculate a pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame;
for each frame of target user request video frame except the last frame in the at least one frame of target user request video frame included in the target request video, determine the relative size relationship between the pixel difference value between the target user request video frame and the adjacent next frame of target user request video frame and a preconfigured pixel difference value threshold, and when the pixel difference value is greater than or equal to the pixel difference value threshold, determine the position between the target user request video frame and the adjacent next frame of target user request video frame as a video segmentation position;
and segment the at least one frame of target user request video frame included in the target request video based on each determined video segmentation position to obtain at least one corresponding video frame group.
10. The smart city data-based service processing system of claim 8, wherein the region location determining module is specifically configured to:
for each video frame group in the at least one video frame group, calculate the similarity between the video frame group and each frame of regional standard video frame in the preconfigured multi-frame regional standard video frames to obtain a plurality of first similarities corresponding to the video frame group, and determine the first similarity with the maximum value among the plurality of first similarities corresponding to the video frame group as the target first similarity corresponding to the video frame group;
for each video frame group in the at least one video frame group, determine the frame of regional standard video frame corresponding to the target first similarity corresponding to the video frame group as the target regional standard video frame corresponding to the video frame group;
for each video frame group in the at least one video frame group, acquire the area position information of the area position corresponding to the target regional standard video frame corresponding to the video frame group, to obtain the area position information corresponding to the video frame group;
classify the at least one video frame group based on whether the corresponding area position information is the same, to obtain at least one video frame group set corresponding to the at least one video frame group, wherein each video frame group set in the at least one video frame group set comprises at least one video frame group, the area position information corresponding to any two video frame groups in the same video frame group set is the same, and the area position information corresponding to any two video frame groups in any two different video frame group sets is different;
and determine the area position information of the target area corresponding to the target request video based on the area position information corresponding to each video frame group set in the at least one video frame group set.
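As referenced in claim 4, the clustering-based grouping can be sketched as follows. Claim 4 does not fix a particular clustering algorithm, so the greedy single-pass scheme below, which reuses frame_similarity() from the earlier sketch, is only one illustrative choice under an assumed similarity threshold:

```python
def cluster_frames(frames, sim_threshold):
    # Greedy single-pass clustering: a frame joins the first existing
    # cluster whose representative frame it resembles closely enough,
    # otherwise it starts a new cluster. Claim 4 only requires clustering
    # by pairwise video frame similarity; this particular algorithm and the
    # similarity threshold are illustrative assumptions. Reuses
    # frame_similarity() from the earlier sketch.
    clusters = []
    for frame in frames:
        for cluster in clusters:
            if frame_similarity(frame, cluster[0]) >= sim_threshold:
                cluster.append(frame)
                break
        else:
            clusters.append([frame])
    return clusters
```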
CN202111346837.8A 2021-11-15 2021-11-15 Business processing method and system based on smart city data Active CN113949881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111346837.8A CN113949881B (en) 2021-11-15 2021-11-15 Business processing method and system based on smart city data

Publications (2)

Publication Number Publication Date
CN113949881A true CN113949881A (en) 2022-01-18
CN113949881B CN113949881B (en) 2023-10-03

Family

ID=79338191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111346837.8A Active CN113949881B (en) 2021-11-15 2021-11-15 Business processing method and system based on smart city data

Country Status (1)

Country Link
CN (1) CN113949881B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115424353A (en) * 2022-09-07 2022-12-02 杭银消费金融股份有限公司 AI model-based service user feature identification method and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011146930A (en) * 2010-01-14 2011-07-28 Sony Corp Information processing apparatus, information processing method, and program
WO2018130016A1 (en) * 2017-01-10 2018-07-19 哈尔滨工业大学深圳研究生院 Parking detection method and device based on monitoring video
CN108537157A (en) * 2018-03-30 2018-09-14 特斯联(北京)科技有限公司 A kind of video scene judgment method and device based on artificial intelligence classification realization
CN111479115A (en) * 2020-04-14 2020-07-31 腾讯科技(深圳)有限公司 Video image processing method and device and computer readable storage medium
CN111488487A (en) * 2020-03-20 2020-08-04 西南交通大学烟台新一代信息技术研究院 Advertisement detection method and detection system for all-media data
JP2020149641A (en) * 2019-03-15 2020-09-17 オムロン株式会社 Object tracking device and object tracking method
CN112954393A (en) * 2021-01-21 2021-06-11 北京博雅慧视智能技术研究院有限公司 Target tracking method, system, storage medium and terminal based on video coding
CN113259213A (en) * 2021-06-28 2021-08-13 广州市威士丹利智能科技有限公司 Intelligent home information monitoring method based on edge computing intelligent gateway
CN113628073A (en) * 2021-07-23 2021-11-09 续斐 Property management method and system for intelligent cell

Also Published As

Publication number Publication date
CN113949881B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN114140713A (en) Image recognition system and image recognition method
CN110837582A (en) Data association method and device, electronic equipment and computer-readable storage medium
CN114581856B (en) Agricultural unit motion state identification method and system based on Beidou system and cloud platform
CN114140712A (en) Automatic image recognition and distribution system and method
CN114925348B (en) Security verification method and system based on fingerprint identification
CN115188485A (en) User demand analysis method and system based on intelligent medical big data
CN113868471A (en) Data matching method and system based on monitoring equipment relationship
CN113949881B (en) Business processing method and system based on smart city data
CN116821777B (en) Novel basic mapping data integration method and system
CN114139016A (en) Data processing method and system for intelligent cell
CN114697618A (en) Building control method and system based on mobile terminal
CN115620243B (en) Pollution source monitoring method and system based on artificial intelligence and cloud platform
CN115100541B (en) Satellite remote sensing data processing method, system and cloud platform
CN115065842B (en) Panoramic video streaming interaction method and system based on virtual reality
CN115375886A (en) Data acquisition method and system based on cloud computing service
CN115049792B (en) High-precision map construction processing method and system
CN115457467A (en) Building quality hidden danger positioning method and system based on data mining
CN115424193A (en) Training image information processing method and system
CN115484044A (en) Data state monitoring method and system
CN115330140A (en) Building risk prediction method based on data mining and prediction system thereof
CN114095734A (en) User data compression method and system based on data processing
CN114140714A (en) Data processing method and system for smart city
CN114189535A (en) Service request method and system based on smart city data
CN115082709B (en) Remote sensing big data processing method, system and cloud platform
CN114997343B (en) Fault reason tracing method and system based on air purification detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20230906
Address after: No.1 Lanhai Road, hi tech Zone, Yantai City, Shandong Province
Applicant after: Shandong Ruihan Network Technology Co.,Ltd.
Address before: 650101 block B, building 9, Dingyi business center, No. 99, Keyuan Road, high tech Zone, Wuhua District, Kunming, Yunnan Province
Applicant before: Zhao Qianqian
GR01 Patent grant