CN113011576A - Method and device for identifying case type - Google Patents

Method and device for identifying case type

Info

Publication number
CN113011576A
CN113011576A
Authority
CN
China
Prior art keywords
case
model
result
multimedia data
qualitative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911330198.9A
Other languages
Chinese (zh)
Inventor
Yang Chao (杨超)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Haisi Technology Co ltd
Original Assignee
Shanghai Haisi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Haisi Technology Co., Ltd.
Priority to CN201911330198.9A
Publication of CN113011576A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Development Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Educational Administration (AREA)
  • Biophysics (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method and a device for identifying case types, which enable intelligent analysis and judgment of case types and provide qualitative analysis results for the work of law enforcement officers. The method includes the following steps: acquiring multimedia data of a first case; and inputting the multimedia data of the first case into a neural network model group to obtain a qualitative analysis result of the type of the first case, where the neural network model group includes a field condition detection model and a case property recognition model.

Description

Method and device for identifying case type
Technical Field
The present application relates to the field of computer vision, and more particularly, to a method and apparatus for identifying a case type.
Background
Front-line police officers differ in work experience, education level, and social experience. When facing serious and complicated cases, officers often cannot accurately determine the nature of a case or give a preliminary handling opinion after the initial response and evidence collection; the common practice is to consult experienced officers or the public security legal department. When coercive measures are taken during the early handling of a case, problems such as inaccurate qualification and misapplication of the law frequently occur. For example, the crime of picking quarrels and provoking trouble is commonly called a "pocket crime" because it covers a variety of illegal and criminal behaviors such as randomly beating others, intentionally damaging property, and forced buying and selling, and cases of this kind are frequently misqualified.
Disclosure of Invention
The application provides a method and a device for identifying case types, which enable intelligent analysis and judgment of case types and provide guidance for the work of law enforcement officers.
In a first aspect, a method for identifying case types is provided, which includes: acquiring multimedia data of a first case; and inputting the multimedia data of the first case into a neural network model group to obtain a qualitative analysis result of the type of the first case, wherein the neural network model group comprises a field condition detection model and a case property identification model.
With reference to the first aspect, in some implementations of the first aspect, the qualitative analysis result of the type of the first case includes a plurality of crime name types and a probability corresponding to each of the crime name types; the plurality of crime name types include a first crime name type, the first crime name type corresponds to a first probability value, and the first probability value indicates the probability of qualifying the first case as the first crime name type.
According to the method for identifying a case type provided in this application, a qualitative analysis result of the case is output based on the multimedia data of the case and the intelligent analysis of the neural network model group, giving law enforcement officers a qualitative guidance suggestion for the case and mitigating problems such as misqualification caused by officers' insufficient legal knowledge and case-handling experience. Intelligent analysis by the neural network model group can improve the accuracy of case qualification, relieve legal experts of frequent error correction, and reduce unfair enforcement caused by human intervention.
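The form of this output, a probability for each candidate crime name, can be sketched as follows. The crime names, raw scores, and softmax normalization are illustrative assumptions; the description does not fix the label set or the final layer of the model.

```python
import math

# Hypothetical crime-name label set (not specified in the description).
CRIME_NAMES = ["intentional injury", "picking quarrels", "intentional property damage"]

def softmax(logits):
    """Normalize raw model scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative raw scores from the final classification layer.
logits = [2.0, 0.5, -1.0]
probs = softmax(logits)

# The qualitative analysis result: crime name types paired with probability
# values, sorted so the most likely qualification comes first.
qualitative_result = sorted(zip(CRIME_NAMES, probs), key=lambda np: -np[1])
```

Here the first entry of `qualitative_result` plays the role of the "first crime name type" with its "first probability value".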
With reference to the first aspect, in some implementations of the first aspect, the output further includes directions and suggestions for evidence collection for cases suspected of being illegal or criminal. Illustratively, when the type of the first case is the crime of intentional injury, the corresponding provision, Article 234 of the Criminal Law, and the evidence-taking guidance suggestion corresponding to intentional injury are also output.
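The mapping from a qualified crime name to the statute and evidence-taking suggestions can be sketched as a simple lookup. Only the Article 234 reference for intentional injury comes from the description; the evidence items listed are hypothetical placeholders.

```python
# Hypothetical guidance table. The statute string for intentional injury is
# taken from the description; the evidence lists are illustrative only.
EVIDENCE_GUIDANCE = {
    "intentional injury": {
        "statute": "Criminal Law, Article 234",
        "evidence": ["forensic injury appraisal", "scene video",
                     "witness statements"],
    },
}

def output_guidance(crime_name):
    """Return a statute reference plus evidence-taking suggestions, if known."""
    entry = EVIDENCE_GUIDANCE.get(crime_name)
    if entry is None:
        return None
    return entry["statute"] + ": collect " + ", ".join(entry["evidence"])
```

In a real system the table would cover every crime name the model can output; unknown names here simply return `None`.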
With reference to the first aspect, in some implementations of the first aspect, inputting the multimedia data of the first case into a neural network model set to obtain a result of qualitative analysis of the type of the first case includes: detecting the multimedia data of the first case by using a field condition detection model to obtain a detection result, wherein the detection result comprises at least one of the following items: a person detection result, a behavior detection result, a scene detection result, or an event detection result.
With reference to the first aspect, in some implementations of the first aspect, inputting the multimedia data of the first case into a neural network model set to obtain a result of qualitative analysis of the type of the first case includes: recognizing the multimedia data and the detection result of the first case by using the case property recognition model to obtain a recognition result, wherein the recognition result comprises at least one of the following items: a character information recognition result, a character relationship recognition result, a damage degree recognition result, or an influence recognition result.
With reference to the first aspect, in some implementations of the first aspect, the neural network model group further includes a case qualitative model, and inputting the multimedia data of the first case into the neural network model group to obtain a qualitative analysis result of the type of the first case includes: performing qualitative analysis on the first case according to the recognition result by using the case qualitative model to obtain the qualitative analysis result of the type of the first case.
With reference to the first aspect, in some implementations of the first aspect, the case qualitative model includes a case cause qualitative model and a case property qualitative model, and performing qualitative analysis on the first case according to the recognition result by using the case qualitative model to obtain the qualitative analysis result of the first case includes: determining the cause of the first case according to the recognition result by using the case cause qualitative model to obtain cause data of the first case; and classifying the first case according to the image recognition result and the cause data of the first case by using the case property qualitative model to obtain the qualitative analysis result of the first case.
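The two-stage case qualitative model described above, a cause sub-model feeding a property sub-model, can be sketched as follows. Both sub-models are stubs; the function names and the rule inside `qualify_property` are purely illustrative.

```python
# Stubbed sketch of the case qualitative model: the case cause qualitative
# model runs first, and its output is passed to the case property qualitative
# model together with the recognition result.

def qualify_cause(recognition):
    """Case cause qualitative model: infer what triggered the case (stub)."""
    return {"cause": "dispute escalated to violence"}

def qualify_property(recognition, cause_data):
    """Case property qualitative model: crime names with probabilities (stub)."""
    if recognition.get("damage_degree") == "minor injury":
        return [("intentional injury", 0.8), ("picking quarrels", 0.2)]
    return [("no crime", 1.0)]

def run_case_qualitative_model(recognition):
    cause_data = qualify_cause(recognition)           # first stage
    return qualify_property(recognition, cause_data)  # second stage

result = run_case_qualitative_model(
    {"damage_degree": "minor injury", "impact": "public disturbance"})
```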
In the case type identification method provided in this application, the neural network model group may include three neural network models. Because the scenes involved in the multimedia data of a case are generally complex, processing the multimedia data in stages with three neural network models overcomes the limitation that a single neural network model can only perform a specific, isolated function.
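The segmented data flow through the three models might look like the following structural sketch. Each model is replaced by a stub that returns the result categories named in the description; the stub contents are assumptions.

```python
# Structural sketch of the three-model segmented pipeline:
# detection -> recognition -> qualitative analysis.

def detect_field_conditions(media):
    """Field condition detection model: person/behavior/scene/event results."""
    return {"persons": ["suspect", "victim"], "behaviors": ["striking"],
            "scene": "street", "events": ["altercation"]}

def recognize_case_properties(media, detection):
    """Case property recognition model: identities, relations, damage, impact."""
    return {"person_info": detection["persons"], "relationship": "strangers",
            "damage_degree": "minor injury", "impact": "public disturbance"}

def qualify_case(recognition):
    """Case qualitative model: qualitative analysis result (dummy values)."""
    return [("intentional injury", 0.8), ("picking quarrels", 0.2)]

def identify_case_type(media):
    detection = detect_field_conditions(media)
    recognition = recognize_case_properties(media, detection)
    return qualify_case(recognition)

analysis = identify_case_type(b"raw video bytes")
```

The point of the staging is that each model consumes the previous model's structured output rather than a single model mapping raw video straight to a crime name.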
With reference to the first aspect, in certain implementations of the first aspect, the method is performed by a server, and the method further comprises: and sending the qualitative analysis result of the type of the first case to the terminal equipment.
With reference to the first aspect, in certain implementations of the first aspect, the method is performed by a server, and acquiring multimedia data of a first case includes: the multimedia data of the first case is received from the terminal device or the monitoring device.
The method for identifying the case type provided in this application can be executed by a server. When the method is executed by the server, the server can receive the multimedia data of the case from the terminal device or the monitoring device, and then send the processed result to the terminal device.
With reference to the first aspect, in certain implementations of the first aspect, the method is performed by a terminal device, and acquiring multimedia data of a first case includes: shooting multimedia data of a first case; or receiving the multimedia data of the first case from a server or a monitoring device.
The method for identifying the case type provided in this application can also be executed by a terminal device. When the method is executed by the terminal device, the terminal device can acquire the multimedia data of the case by shooting it directly, or receive it from a server or a monitoring device, and then provide the processed result directly to law enforcement officers.
With reference to the first aspect, in certain implementations of the first aspect, the multimedia data of the first case includes at least one of data captured by surveillance of the case scene or data captured by a law enforcement recorder.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: inputting multimedia data of a training case into an original neural network model group, and training and adjusting the original neural network model group; and when the similarity between the output result of the adjusted neural network model group and the type of the training case meets a preset condition, taking the adjusted neural network model group as the trained neural network model group.
The application also provides a training method for the neural network model group; the trained neural network models can be stored and used for later case type recognition. Training the neural network model group includes separately training the field condition detection model, the case property recognition model, and the case qualitative model. Labeling of the training case's multimedia data by a professional labeling team may further include labeling the detection results and the recognition results of the multimedia data. The multimedia data of the training case is input into an original field condition detection model to obtain an output detection result, and the original field condition detection model is adjusted until the similarity between the output detection result and the labeled detection result meets a preset condition. The multimedia data and the detection result of the training case are then input into the case property recognition model, where the detection result may be the output of the trained field condition detection model or the detection result labeled by the professional labeling team; the case property recognition model is adjusted until the similarity between the output recognition result and the labeled recognition result meets a preset condition. Finally, the recognition result is input into the case qualitative model, where the recognition result may be the output of the trained case property recognition model or the recognition result labeled by the professional labeling team; the case qualitative model is adjusted until the similarity between the output case type result and the labeled case type meets the preset condition.
Since the case qualitative model further includes a case cause qualitative model and a case property qualitative model, training the case qualitative model further includes: inputting the recognition result into the case cause qualitative model and adjusting it until the similarity between the output case cause data and the labeled case cause meets the preset condition; and then inputting the cause data of the case and the influence recognition result into the case property qualitative model and adjusting it until the similarity between the output case type result and the labeled case type meets the preset condition.
In addition, new training data can be obtained periodically (that is, the training database is updated), and a new neural network model group is then trained on the combined data, so that the neural network model group is updated periodically.
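The "adjust until the similarity meets a preset condition" loop described above can be sketched with a toy one-parameter model standing in for a neural network. The similarity measure, threshold, and update rule are all assumptions made for illustration.

```python
# Toy model: learns a single additive bias. Its adjust() step is a stand-in
# for backpropagation in a real neural network.
class ToyModel:
    def __init__(self):
        self.bias = 0.0

    def predict(self, x):
        return x + self.bias

    def adjust(self, examples):
        # Nudge the bias toward the mean residual between labels and outputs.
        residuals = [y - self.predict(x) for x, y in examples]
        self.bias += 0.5 * sum(residuals) / len(residuals)

def similarity(pred, label):
    # Similarity in (0, 1]: equals 1 when the prediction matches the label.
    return 1.0 / (1.0 + abs(pred - label))

def train_until_condition(model, examples, threshold=0.95, max_rounds=100):
    """Adjust the model until mean output/label similarity meets the condition."""
    for rounds in range(max_rounds):
        mean_sim = sum(similarity(model.predict(x), y)
                       for x, y in examples) / len(examples)
        if mean_sim >= threshold:  # preset condition met: keep this model
            return rounds
        model.adjust(examples)
    return max_rounds

model = ToyModel()
rounds_used = train_until_condition(model, [(0.0, 1.0), (1.0, 2.0)])
```

Periodic retraining would simply re-run `train_until_condition` whenever the training database gains new labeled cases.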
In a second aspect, there is provided an apparatus for identifying a case type, comprising: the acquiring unit is used for acquiring the multimedia data of the first case; and the processing unit is used for inputting the multimedia data of the first case into the neural network model group to obtain a qualitative analysis result of the type of the first case, and the neural network model group comprises a field condition detection model and a case property identification model.
With reference to the second aspect, in some implementations of the second aspect, the qualitative analysis result of the type of the first case includes a plurality of crime name types and a plurality of probability values, where each crime name type corresponds to one probability value, the plurality of crime name types includes a first crime name type, the first crime name type corresponds to a first probability value, and the first probability value is used to indicate a probability of qualifying the first case as the first crime name type.
With reference to the second aspect, in some implementations of the second aspect, the processing unit is specifically configured to: detecting the multimedia data of the first case by using a field condition detection model to obtain a detection result, wherein the detection result comprises at least one of the following items: a person detection result, a behavior detection result, a scene detection result, or an event detection result.
With reference to the second aspect, in some implementations of the second aspect, inputting the multimedia data of the first case into a neural network model set to obtain a result of qualitative analysis of the type of the first case includes: recognizing the multimedia data and the detection result of the first case by using the case property recognition model to obtain a recognition result, wherein the recognition result comprises at least one of the following items: a character information recognition result, a character relationship recognition result, a damage degree recognition result, or an influence recognition result.
With reference to the second aspect, in some implementations of the second aspect, the neural network model set further includes a case qualitative model, and the inputting the multimedia data of the first case into the neural network model set to obtain a result of qualitative analysis on the type of the first case includes: and carrying out qualitative analysis on the first case by using the case qualitative model according to the recognition result to obtain a qualitative analysis result of the type of the first case.
With reference to the second aspect, in some implementations of the second aspect, the case qualitative model includes a case cause qualitative model and a case property qualitative model, and classifying the first case according to the recognition result by using the case qualitative model to obtain a result of qualitative analysis on the first case includes: determining the cause of the first case according to the recognition result by using a case cause qualitative model to obtain cause data of the first case; and classifying the first case according to the image identification result and the cause data of the first case by using the case property qualitative model so as to obtain a qualitative analysis result of the first case.
With reference to the second aspect, in some implementations of the second aspect, the apparatus is configured at the server side, and the apparatus is further configured to send the qualitative analysis result of the type of the first case to the terminal device.
With reference to the second aspect, in some implementations of the second aspect, the apparatus is configured at a server side, and acquires the multimedia data of the first case, including: the multimedia data of the first case is received from the terminal device or the monitoring device.
With reference to the second aspect, in some implementations of the second aspect, the apparatus is configured at a terminal device, and acquiring the multimedia data of the first case includes: shooting multimedia data of a first case; or receiving the multimedia data of the first case from a server or a monitoring device.
With reference to the second aspect, in certain implementations of the second aspect, the multimedia data of the first case includes at least one of data captured by surveillance of the case scene or data captured by a law enforcement recorder.
With reference to the second aspect, in certain implementations of the second aspect, the apparatus is further configured to: input multimedia data of a training case into an original neural network model group, and train and adjust the original neural network model group; and when the similarity between the output result of the adjusted neural network model group and the type of the training case meets a preset condition, take the adjusted neural network model group as the neural network model group.
Training the neural network model group includes separately training the field condition detection model, the case property recognition model, and the case qualitative model. Labeling of the training case's multimedia data by a professional labeling team may further include labeling the detection results and the recognition results of the multimedia data. The multimedia data of the training case is input into an original field condition detection model to obtain an output detection result, and the original field condition detection model is adjusted until the similarity between the output detection result and the labeled detection result meets a preset condition. The multimedia data and the detection result of the training case are then input into the case property recognition model, where the detection result may be the output of the trained field condition detection model or the detection result labeled by the professional labeling team; the case property recognition model is adjusted until the similarity between the output recognition result and the labeled recognition result meets a preset condition. Finally, the recognition result is input into the case qualitative model, where the recognition result may be the output of the trained case property recognition model or the recognition result labeled by the professional labeling team; the case qualitative model is adjusted until the similarity between the output case type result and the labeled case type meets the preset condition.
Since the case qualitative model further includes a case cause qualitative model and a case property qualitative model, training the case qualitative model further includes: inputting the recognition result into the case cause qualitative model and adjusting it until the similarity between the output case cause data and the labeled case cause meets the preset condition; and then inputting the cause data of the case and the influence recognition result into the case property qualitative model and adjusting it until the similarity between the output case type result and the labeled case type meets the preset condition.
In a third aspect, an apparatus for identifying a case type is provided, which includes a transmission interface configured to receive or transmit data, and a processor configured to perform the method of the first aspect and any possible implementation manner of the first aspect.
In a fourth aspect, a terminal device is provided, which includes an obtaining unit configured to obtain multimedia data of a first case; and the processing unit is used for inputting the multimedia data of the first case into the neural network model group to obtain a qualitative analysis result of the type of the first case, and the neural network model group comprises a field condition detection model and a case property identification model.
In one possible implementation, each of the plurality of criminal name types corresponds to a probability value, and the plurality of criminal name types includes a first criminal name type, and the first criminal name type corresponds to a first probability value, and the first probability value is used for indicating a probability of qualifying the first case as the first criminal name type.
In a possible implementation manner, the processing unit is specifically configured to: detecting the multimedia data of the first case by using the field condition detection model to obtain a detection result, wherein the detection result comprises at least one of the following items: a person detection result, a behavior detection result, a scene detection result, or an event detection result.
In a possible implementation manner, the processing unit is specifically configured to: recognizing the multimedia data of the first case and the detection result by using the case property recognition model to obtain a recognition result, wherein the recognition result comprises at least one of the following items: a character information recognition result, a character relationship recognition result, a damage degree recognition result, or an influence recognition result.
In a possible implementation manner, the neural network model set further includes a case qualitative model, and the processing unit is specifically configured to: and classifying the first case according to the recognition result by using the case qualitative model to obtain a qualitative analysis result of the type of the first case.
In a possible implementation manner, the recognition result includes the influence recognition result, the case qualitative model includes a case cause qualitative model and a case property qualitative model, and the processing unit is specifically configured to: determine the cause of the first case according to the recognition result by using the case cause qualitative model to obtain cause data of the first case; and classify the first case according to the influence recognition result and the cause data of the first case by using the case property qualitative model to obtain a qualitative analysis result of the first case.
In a possible implementation manner, the terminal further includes an imaging unit, and the acquiring unit is specifically configured to: acquiring multimedia data of the first case shot by the imaging unit; or, the obtaining unit is specifically configured to: the multimedia data of the first case is received from a server or a monitoring device.
In a possible implementation manner, the terminal further includes a sending unit, and before the obtaining unit receives the multimedia data of the first case from the server or the monitoring device, the sending unit is configured to send multimedia data request information to the server or the monitoring device.
In one possible implementation, the multimedia data of the first case includes at least one of data captured by monitoring of the scene of the case or data captured by a law enforcement recorder.
In a fifth aspect, there is provided a server apparatus (computer apparatus) including an acquisition unit configured to acquire multimedia data of a first case; and the processing unit is used for inputting the multimedia data of the first case into the neural network model group to obtain a qualitative analysis result of the type of the first case, and the neural network model group comprises a field condition detection model and a case property identification model.
In one possible implementation, each of the plurality of criminal name types corresponds to a probability value, and the plurality of criminal name types includes a first criminal name type, and the first criminal name type corresponds to a first probability value, and the first probability value is used for indicating a probability of qualifying the first case as the first criminal name type.
In a possible implementation manner, the processing unit is specifically configured to: detecting the multimedia data of the first case by using the field condition detection model to obtain a detection result, wherein the detection result comprises at least one of the following items: a person detection result, a behavior detection result, a scene detection result, or an event detection result.
In a possible implementation manner, the processing unit is specifically configured to: recognizing the multimedia data of the first case and the detection result by using the case property recognition model to obtain a recognition result, wherein the recognition result comprises at least one of the following items: a character information recognition result, a character relationship recognition result, a damage degree recognition result, or an influence recognition result.
In a possible implementation manner, the neural network model set further includes a case qualitative model, and the processing unit is specifically configured to: and classifying the first case according to the recognition result by using the case qualitative model to obtain a qualitative analysis result of the type of the first case.
In a possible implementation manner, the recognition result includes the influence recognition result, the case qualitative model includes a case cause qualitative model and a case property qualitative model, and the processing unit is specifically configured to: determine the cause of the first case according to the recognition result by using the case cause qualitative model to obtain cause data of the first case; and classify the first case according to the influence recognition result and the cause data of the first case by using the case property qualitative model to obtain a qualitative analysis result of the first case.
In a possible implementation manner, the server device further includes a sending unit, configured to send a result of qualitative analysis of the type of the first case to a terminal device.
In a possible implementation manner, the obtaining unit is specifically configured to: and receiving the multimedia data of the first case from the terminal equipment or the monitoring equipment.
In one possible implementation, the multimedia data of the first case includes at least one of data captured by monitoring of the scene of the case or data captured by a law enforcement recorder.
In a sixth aspect, a terminal device is provided, which includes a memory and a processor, where the memory stores a program, and when the program is executed, the processor is configured to perform the method of the first aspect and any possible implementation manner of the first aspect.
In a seventh aspect, a server device (computer device) is provided, which includes a memory and a processor, wherein the memory stores a program, and when the program is executed, the processor is configured to perform the method of the first aspect and any possible implementation manner of the first aspect.
In an eighth aspect, a computer-readable storage medium is provided that stores program code for execution by a device, the program code comprising instructions for performing the method of the first aspect and any one of the possible implementations of the first aspect.
In a ninth aspect, a chip is provided, where the chip includes a processor and a data interface, and the processor reads instructions stored in a memory through the data interface to perform the method of the first aspect and any possible implementation manner of the first aspect.
Drawings
Fig. 1 is a schematic structural diagram of a system architecture provided in an embodiment of the present application.
Fig. 2 is a schematic flow chart of a method for identifying a case type according to an embodiment of the present application.
FIG. 3 is a block diagram of case type identification using a neural network model set provided by an embodiment of the present application.
FIG. 4 is a flow chart of providing case-type guidance advice to law enforcement personnel during a law enforcement procedure utilizing the method of an embodiment of the present application.
Fig. 5 is a schematic flowchart of a training method of a neural network model set provided in an embodiment of the present application.
Fig. 6 is a schematic block diagram of a training apparatus for a neural network model set provided in an embodiment of the present application.
Fig. 7 is a schematic hardware structure diagram of a training apparatus for a neural network model set according to an embodiment of the present application.
Fig. 8 is a schematic block diagram of a neural network model set provided by an embodiment of the present application.
Detailed Description
The system architecture of the embodiment of the present application is described in detail below with reference to fig. 1.
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present application. As shown in FIG. 1, the system architecture 100 includes an execution device 110, a training device 120, a database 130, a client device 140, a data storage system 150, and a data collection system 160.
In addition, the execution device 110 includes a calculation module 111, an I/O interface 112, a preprocessing module 113, and a preprocessing module 114. The calculation module 111 may include the target model/rule 101; the preprocessing module 113 and the preprocessing module 114 are optional.
The data acquisition device 160 is used to acquire training data. For the training method of the neural network model set according to the embodiment of the present application, the training data may include multimedia data of a training case and annotation data of the training case. After the training data is collected, data collection device 160 stores the training data in database 130, and training device 120 trains target model/rule 101 based on the training data maintained in database 130.
The following describes how the training device 120 obtains the target model/rule 101 based on the training data: the training device 120 performs feature extraction on the input multimedia data of the training case to obtain a corresponding feature vector, and repeats this feature extraction until the function value of the loss function meets a preset requirement (e.g., is less than or equal to a preset threshold), thereby completing the training of the target model/rule 101.
It should be appreciated that the training of the target model/rule 101 described above may be unsupervised training.
The target model/rule 101 can be used to realize the method for identifying the case type of the embodiment of the present application: the multimedia data of a case is input into the target model/rule 101, feature vectors are extracted from the multimedia data, the case type is identified based on the extracted feature vectors, and the qualitative analysis result of the case type is determined. The target model/rule 101 in the embodiment of the present application may specifically be a neural network. It should be noted that, in practical applications, the training data maintained in the database 130 need not all come from the collection of the data collection device 160 and may also be received from other devices. It should also be noted that the training device 120 does not necessarily train the target model/rule 101 based on the training data maintained in the database 130; it may also obtain training data from the cloud or elsewhere for model training.
The target model/rule 101 obtained by training with the training device 120 may be applied to different systems or devices, for example, the execution device 110 shown in fig. 1. The execution device 110 may be a terminal, such as a recorder, a mobile phone terminal, a tablet computer, a laptop computer, an augmented reality (AR)/virtual reality (VR) device, or a vehicle-mounted terminal, or may be a server or a cloud. In fig. 1, the execution device 110 is configured with an input/output (I/O) interface 112 for data interaction with an external device. A user may input data to the I/O interface 112 through the client device 140, and the input data may include multimedia data input by the client device. The client device 140 here may specifically be a recorder.
The preprocessing module 113 and the preprocessing module 114 are used to preprocess the input data received by the I/O interface 112. In this embodiment, the preprocessing module 113 and the preprocessing module 114 may both be absent, or only one preprocessing module may be present. When neither is present, the input data may be processed directly by the calculation module 111.
In the process that the execution device 110 preprocesses the input data or in the process that the calculation module 111 of the execution device 110 executes the calculation or other related processes, the execution device 110 may call the data, the code, and the like in the data storage system 150 for corresponding processes, and may store the data, the instruction, and the like obtained by corresponding processes in the data storage system 150.
Finally, the I/O interface 112 presents the processing result (specifically, the result of the qualitative analysis of the case type), such as the result of the qualitative analysis of the case type obtained by processing the multimedia data of the case by the target model/rule 101, to the client device 140, so as to provide the result to the user.
Specifically, the qualitative analysis result of the case type obtained by case type recognition through the target model/rule 101 in the calculation module 111 may be sent to the client device 140 for display through the I/O interface.
It should be understood that, when the preprocessing module 113 and the preprocessing module 114 are not present in the system architecture 100, the computing module 111 may also transmit the result of the qualitative analysis of the case type obtained by case type identification to the I/O interface, and then the I/O interface sends the processing result to the client device 140 for display.
It should be noted that the training device 120 may be configured for different targets or different tasks (e.g., separate training of the field condition detection model, the case property recognition model, and the case qualitative model) based on different training data. (It should be understood that the training data of a neural network model is a set of paired input training data and output training data; when the result obtained by feeding the input training data into the neural network model approaches the output training data, the neural network model is considered to be successfully trained.) For the field condition detection model, the input training data is the multimedia data of annotated cases, and the output training data is the corresponding detection results annotated by a professional team. For the case property recognition model, the input training data is the detection results output by the trained field condition detection model (or the annotated detection results) together with the multimedia data, and the output training data is the recognition results (including person information, person relationships, injury degree, caused impact, and the like) corresponding to the multimedia data of the annotated cases. The case qualitative model comprises a case cause qualitative model and a case property qualitative model. The input training data of the case cause qualitative model is the recognition results output by the trained case property recognition model or the annotated recognition results, and the output training data is the case causes annotated by the professional team (e.g., intentional injury, provoking trouble under a pretext, and the like). The input training data of the case property qualitative model is the case cause data output by the trained case cause qualitative model together with the annotated recognition results, and the output training data is the qualitative analysis results of the cases (comprising a plurality of crime types and a plurality of probability values annotated by the professional team). In this way, the corresponding target model/rule 101 is generated, which can be used to realize the target or complete the task, thereby providing the required result for the user.
It should be noted that fig. 1 is only a schematic diagram of a system architecture provided by an embodiment of the present application, and the position relationship between the devices, modules, and the like shown in the diagram does not constitute any limitation, for example, in fig. 1, the data storage system 150 is an external memory with respect to the execution device 110, and in other cases, the data storage system 150 may also be disposed in the execution device 110.
As shown in fig. 1, the target model/rule 101, which may be a neural network (model), is trained according to the training device 120. Specifically, the neural network (model) may be a Convolutional Neural Network (CNN), a Deep Convolutional Neural Network (DCNN), or the like.
Fig. 2 shows a schematic flow chart of a method for identifying a case type according to an embodiment of the present application. As shown in fig. 2, the method includes steps S201 to S202.
S201, multimedia data of the first case are obtained.
Optionally, the multimedia data of the first case may include real-time monitoring data of the case site and data collected by a terminal device carried by law enforcement personnel.
In the embodiment of the present application, the method for identifying the case type may be executed by the server, or may be executed by the terminal device, which will be described below.
If the method for identifying the case type is executed by the server, acquiring the multimedia data of the first case may specifically proceed as follows. When a case occurs, the police officer, while en route, first sends a request to the server through the police service platform and inputs the location of the incident scene. The server sends a request to the monitoring device at that location, establishes a connection, and acquires the real-time monitoring data of the monitoring device. The server processes the acquired real-time monitoring data to obtain a preliminary probability result of the case type, and then sends the preliminary probability result to the terminal device. Alternatively, after the police officer arrives at the scene, the officer uses a terminal device, such as a police law-enforcement recorder, to film the scene of the case, so that the data collected by the terminal device is acquired and then transmitted to the server. The server receives and processes the data collected by the terminal device, optionally in combination with the previous preliminary probability result, thereby obtaining the final probability result of the case type. The server then sends the obtained final probability result and qualitative analysis result of the case type to the terminal device.
If the method for identifying the case type is executed by the terminal device, acquiring the multimedia data of the first case may specifically proceed as follows. When a case occurs, the police officer, while en route, first sends a request to the server through the police service platform and inputs the location of the incident scene. The server sends a request to the monitoring device at that location, establishes a connection, acquires the real-time monitoring data of the monitoring device, and sends the acquired real-time monitoring data to the terminal device. Optionally, the terminal device may also obtain real-time monitoring data directly from the monitoring device: the terminal directly establishes a communication connection with the monitoring device near the incident scene, the terminal initiates a data request to the monitoring device, and the monitoring device sends the data to the terminal without relay by the server. The terminal device processes the acquired real-time monitoring data to obtain a preliminary probability result of the case type. Optionally, after the police officer arrives at the scene, the officer films the scene of the case with a terminal device, such as a police law-enforcement recorder, to obtain the data collected by the terminal device, and the terminal device processes the collected data to obtain the final probability result of the case type.
S202, inputting the multimedia data of the first case into a neural network model group to obtain a qualitative analysis result of the type of the first case, wherein the neural network model group comprises a field condition detection model and a case property identification model.
In the embodiment of the application, the method for identifying the case type may be executed by a server or a terminal device, and the server and the terminal device have the same processing procedure for the acquired multimedia data of the first case.
FIG. 3 shows a block diagram of case type identification using a neural network model set in an embodiment of the present application.
In particular, the neural network model set may include a field condition detection model, a case property identification model, and a case qualitative model. After the multimedia data of the first case is input into the neural network model group, the field condition detection model first detects the multimedia data of the first case to obtain a detection result. Optionally, the detection result may include:
person detection, such as the number of persons (whether multiple persons participate), whether a specific object is targeted, whether a weapon is carried, and the like;
behavior detection, such as body movements and speech, to detect whether fighting behavior occurs;
scene detection, such as whether a crowd of onlookers has gathered, whether traffic congestion or paralysis is caused, and the like;
event detection, i.e., detecting which events occur, such as a fight, a commotion, and the like.
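The four detection categories above can be pictured as one structured output. The following is a minimal, hypothetical sketch (the field names and the 0.5 confidence threshold are illustrative assumptions, not taken from the patent) of how per-category model confidences might be summarized into such a detection result:

```python
from dataclasses import dataclass, field

@dataclass
class FieldConditionResult:
    person_count: int            # person detection: number of participants
    weapon_present: bool         # person detection: whether a weapon is carried
    fighting_detected: bool      # behavior detection: body movements / speech
    crowd_gathered: bool         # scene detection: onlookers, traffic impact
    events: list = field(default_factory=list)  # event detection: fight, commotion, ...

def summarize_detection(raw_scores: dict, threshold: float = 0.5) -> FieldConditionResult:
    """Turn per-category model confidences into a single detection result."""
    return FieldConditionResult(
        person_count=raw_scores.get("person_count", 0),
        weapon_present=raw_scores.get("weapon", 0.0) >= threshold,
        fighting_detected=raw_scores.get("fighting", 0.0) >= threshold,
        crowd_gathered=raw_scores.get("crowd", 0.0) >= threshold,
        events=[e for e, s in raw_scores.get("events", {}).items() if s >= threshold],
    )
```

Such a structured result is what the downstream case property identification model would consume together with the raw multimedia data.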
Then, the detection result is input into the case property identification model, and optionally, the multimedia data of the first case can also be input into the case property identification model. And identifying the multimedia data and the detection result of the first case by using the case property identification model to obtain an identification result. Optionally, the recognition result may include:
person information identification, such as person identity and criminal record history;
person relationship identification, such as kinship between the persons and any history of disputes between them;
injury degree identification, such as injury location and injury grade;
caused-impact identification, such as traffic congestion, crowd panic, and disruption of social order.
Finally, the recognition result is input into the case qualitative model; optionally, the multimedia data of the first case can also be input into the case qualitative model. The case qualitative model classifies the multimedia data and the recognition result of the first case, and in the case property classification process the case type can be accurately judged with reference to case causes, case impacts, relevant laws, and the like. Illustratively, the case qualitative model includes a case cause qualitative model and a case property qualitative model.
The case cause qualitative model is used to determine the cause of the case, such as whether it involves intentional injury, whether trouble was provoked under a pretext, or whether a specific object has been repeatedly harassed or reported. The input of the case cause qualitative model is the recognition result of the case property recognition model, and its output is the case cause, such as intentional injury, provoking trouble under a pretext, or repeated harassment of a specific object. The input of the case property qualitative model is the impact-related recognition results together with the output of the case cause qualitative model; its output is a plurality of possible crime types and corresponding probability values, where each probability value indicates the probability that the corresponding crime type is established. The case property qualitative model is trained on multiple sets of training data comprising case causes, case impacts, and the corresponding relevant laws, so that when a case cause and a case impact are input, the model can give possible crime types and corresponding probabilities in combination with the relevant laws.
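As a rough illustration of the two-stage qualitative analysis described above, the following hypothetical sketch replaces the trained cause and property networks with trivial scoring rules, keeping only the structure: a cause stage feeding a property stage that outputs crime types with normalized probabilities. All names and rules here are assumptions for illustration, not the patent's actual models:

```python
import math

def softmax(scores: dict) -> dict:
    """Normalize per-crime-type scores into probabilities."""
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

def qualitative_analysis(recognition_result: dict) -> dict:
    # Stage 1: case cause qualitative model (a trivial rule standing in
    # for the trained network).
    cause = "intentional_injury" if recognition_result.get("injury_level", 0) > 0 else "other"
    # Stage 2: case property qualitative model combines the cause with the
    # impact-related recognition results into per-crime-type scores.
    scores = {
        "intentional_injury_crime": 2.0 if cause == "intentional_injury" else 0.0,
        "provocation_crime": 1.0 if recognition_result.get("public_disorder") else 0.0,
        "no_crime": 0.5,
    }
    return softmax(scores)
```

The output matches the shape the text describes: several possible crime types, each with a probability that it is established.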
For example, the first case may be a crime against social administration order, such as gathering a crowd to cause a disturbance, forced trading, or drug trafficking; a personal-injury crime, such as negligent homicide, intentional injury, or negligent infliction of serious injury; or a property crime, such as theft, robbery, or fraud.
Through the above identification process, the qualitative analysis result of the case type can be output. Optionally, the qualitative analysis result may be an enumeration of the possible crime names and the probability corresponding to each crime name. The output result may also include the applicable statute and evidence-collection guidance for each suspected crime; for example, when the type of the first case is intentional injury, the corresponding provision, Article 234 of the Criminal Law, and the evidence-collection guidance for intentional injury may also be output. For example, Table 1 shows one possible qualitative analysis result of the first case type. Optionally, the one-to-one correspondence between crime names and the corresponding statutes and evidence-collection guidance can be realized through a program; the embodiment of the present application is not specifically limited herein.
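The program-level one-to-one correspondence mentioned above can be as simple as a lookup table keyed by crime name. In the sketch below, only the Article 234 entry is grounded in the text; the guidance wording, the fallback entry, and all identifiers are illustrative assumptions:

```python
# Hypothetical mapping from crime name to applicable statute and
# evidence-collection guidance; only Article 234 is stated in the text.
CASE_LAW_GUIDANCE = {
    "intentional_injury_crime": {
        "statute": "Criminal Law, Article 234",
        "guidance": "Document injuries, secure any weapon, record witness statements.",
    },
    # ... further crime names would be filled in from a legal knowledge base
}

def lookup_guidance(crime_name: str) -> dict:
    """Return the statute and guidance for a crime name, with a safe fallback."""
    return CASE_LAW_GUIDANCE.get(
        crime_name, {"statute": "unknown", "guidance": "consult the legal department"}
    )
```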
TABLE 1
(Table 1 is rendered as an image in the original patent publication and is not reproduced here.)
FIG. 4 shows a flow chart for providing case-type guidance advice to law enforcement personnel during a law enforcement procedure using the method of an embodiment of the present application.
In fig. 4, the method for identifying a case type provided by the embodiment of the present application is executed by a terminal device. While en route, the law enforcement officer first sends the location of the case scene to the server through the police service platform and requests the monitoring data of the scene. The server requests real-time monitoring data from the on-scene monitoring device according to the received location. After receiving the server's request, the monitoring device sends real-time monitoring data to the server, and the server forwards it to the terminal device. The terminal device processes the received real-time monitoring data (the processing method is as shown in fig. 3 and, for brevity, is not repeated here), so that a preliminary qualitative analysis result of the case type is obtained from the real-time monitoring data and provided to the law enforcement officer. After the officer arrives at the case scene, the terminal device is used to film the scene. The terminal device processes the captured data, so that a final qualitative analysis result of the case type is obtained and provided to the officer for reference.
Fig. 5 is a schematic flow chart of a training method for a neural network model set provided in an embodiment of the present application. As shown in fig. 5, the method includes steps S501 to S503.
S501, inputting the multimedia data of the training case into an original neural network model group to obtain an output result.
Specifically, training data is obtained, comprising the multimedia data of training cases and the types of the training cases. The multimedia data of the training cases may come from a large amount of law-enforcement record data in a police database, such as audio and video data collected daily by servers and terminal devices. Experts from a professional annotation team, such as the public security legal department, attach corresponding labels, namely the case types corresponding to the training cases, and the annotated multimedia data is used as the multimedia data of the training cases. The multimedia data of a training case is then input into the original neural network model to obtain a corresponding output result.
S502, adjusting the original neural network model group according to the output result so that the similarity between the output result and the type of the training case meets the preset condition.
For example, the output case type result may be a plurality of crime names and their corresponding probabilities, and the preset condition on the similarity between the output result and the annotated case type may be that the similarity between the crime name with the highest probability and the annotated case type meets the preset condition.
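The preset-condition check just described, matching the highest-probability crime name against the annotated case type, can be sketched as follows (function and key names are assumptions for illustration):

```python
def meets_preset_condition(output_probs: dict, labeled_type: str) -> bool:
    """True when the crime name with the highest probability equals the label."""
    top_name = max(output_probs, key=output_probs.get)
    return top_name == labeled_type
```

In practice a softer similarity criterion (e.g., a loss threshold) could replace the exact top-1 match shown here.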
And S503, storing the adjusted original neural network model group as a trained neural network model group for carrying out type recognition on the later case.
The neural network model group comprises a field condition detection model, a case property recognition model, and a case qualitative model, so the method of the embodiment of the present application further comprises training each of the three models. The annotation of the multimedia data of the training cases by the professional annotation team may further include annotating the detection results and recognition results of the multimedia data. The multimedia data of a training case is input into the original field condition detection model to obtain an output detection result, and the original field condition detection model is adjusted so that the similarity between the output detection result and the annotated detection result meets a preset condition. The multimedia data and the detection result of the training case are then input into the case property recognition model, where the detection result may be the detection result output by the trained field condition detection model or a detection result annotated by the professional team; the case property recognition model is adjusted so that the similarity between the output recognition result and the annotated recognition result meets a preset condition. The recognition result is then input into the case qualitative model, where the recognition result may be the recognition result output by the trained case property recognition model or a recognition result annotated by the professional team; the case qualitative model is adjusted so that the similarity between the output case type result and the annotated case type meets a preset condition.
Since the case qualitative model further comprises a case cause qualitative model and a case property qualitative model, training the case qualitative model further comprises: inputting the recognition result into the case cause qualitative model and adjusting the case cause qualitative model so that the similarity between the output case cause data and the annotated case causes meets a preset condition; then inputting the case cause data and the impact-related recognition results into the case property qualitative model and adjusting the case property qualitative model so that the similarity between the output case type result and the annotated case type meets a preset condition. Finally, the trained field condition detection model, case property recognition model, and case qualitative model are stored for later case type identification.
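The staged training order described above, where each model's training input comes from the previous trained model or from annotations, can be sketched as follows. `train_stage` is a placeholder standing in for an actual optimization loop, and all names are assumptions for illustration:

```python
def train_stage(model: dict, inputs, labels) -> dict:
    # Placeholder for gradient-based fitting of one model against its labels.
    model["trained_on"] = len(inputs)
    return model

def train_model_set(media: list, annotations: dict) -> dict:
    """Train the four models in the dependency order described in the text."""
    detection = train_stage({}, media, annotations["detection"])
    recognition = train_stage({}, list(zip(media, annotations["detection"])),
                              annotations["recognition"])
    cause = train_stage({}, annotations["recognition"], annotations["cause"])
    prop = train_stage({}, list(zip(annotations["cause"],
                                    annotations["recognition"])),
                       annotations["case_type"])
    return {"detection": detection, "recognition": recognition,
            "cause": cause, "property": prop}
```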
The training method of the neural network model group according to the embodiment of the present application further comprises periodically obtaining new training data, that is, updating the training database, and then training a new neural network model group with the new training data, so that the neural network model group is updated periodically.
Fig. 6 is a schematic block diagram of a training apparatus of a neural network model group according to an embodiment of the present application. The training apparatus 600 of the neural network model set shown in fig. 6 includes an obtaining unit 601 and a processing unit 602.
The obtaining unit 601 and the processing unit 602 may be configured to perform a training method of a neural network model set according to an embodiment of the present application.
Specifically, the obtaining unit 601 may be configured to obtain training data, and the processing unit 602 may perform the above steps 501 to 503.
The obtaining unit 601 in the apparatus 600 shown in fig. 6 may be equivalent to the communication interface 703 in the apparatus 700 shown in fig. 7, and the corresponding training data may be obtained through the communication interface 703, or the obtaining unit 601 may also be equivalent to the processor 702, and in this case, the training data may be obtained from the memory 701 through the processor 702, or the training data may be obtained from the outside through the communication interface 703. Additionally, the processing unit 602 in the apparatus 600 may correspond to the processor 702 in the apparatus 700.
Fig. 7 is a hardware configuration diagram of a training apparatus for a neural network model group according to an embodiment of the present application. The training apparatus 700 of the neural network model set shown in fig. 7 (the apparatus 700 may be a computer device) includes a memory 701, a processor 702, a communication interface 703 and a bus 704. The memory 701, the processor 702, and the communication interface 703 are communicatively connected to each other via a bus 704.
The memory 701 may be a Read Only Memory (ROM), a static memory device, a dynamic memory device, or a Random Access Memory (RAM). The memory 701 may store a program, and when the program stored in the memory 701 is executed by the processor 702, the processor 702 is configured to perform the steps of the training method of the neural network model set according to the embodiment of the present application.
The processor 702 may be a general-purpose Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), a Graphics Processing Unit (GPU), or one or more integrated circuits, and is configured to execute related programs to implement the method for training the neural network model set according to the embodiment of the present disclosure.
The processor 702 may also be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the training method of the neural network model set of the present application may be implemented by integrated logic circuits of hardware or instructions in the form of software in the processor 702.
The processor 702 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 701, and the processor 702 reads the information in the memory 701 and, in combination with its hardware, completes the functions to be executed by the units included in the training apparatus for a neural network model group according to the embodiment of the present application, or performs the training method for a neural network model group according to the embodiment of the present application.
The communication interface 703 enables communication between the apparatus 700 and other devices or communication networks using transceiver means such as, but not limited to, transceivers. For example, training data may be obtained through the communication interface 703. For example, the communication Interface 703 may also be a High Definition Multimedia Interface (HDMI), a V-By-One Interface, an Embedded Display Port (eDP), a Mobile Industry Processor Interface (MIPI), a Display Port (DP), or the like, which are generally electrical communication interfaces, but may also be a mechanical Interface or other types of interfaces, which is not limited in this embodiment.
Bus 704 may include a pathway to transfer information between various components of apparatus 700, such as memory 701, processor 702, and communication interface 703.
It should be noted that although the apparatus 700 described above shows only memories, processors, and communication interfaces, in a particular implementation, those skilled in the art will appreciate that the apparatus 700 may also include other components necessary to achieve proper operation. Also, those skilled in the art will appreciate that the apparatus 700 may also include hardware components for performing other additional functions, according to particular needs. Furthermore, those skilled in the art will appreciate that apparatus 700 may also include only those components necessary to implement embodiments of the present application, and need not include all of the components shown in FIG. 7.
Fig. 8 is a schematic block diagram of a neural network model set provided by an embodiment of the present application. As shown in fig. 8, the acquired multimedia data of the case is input into the field condition detection model to obtain the output detection result. The multimedia data of the case and the detection result are input into the case property recognition model to obtain the output recognition result. The recognition result is input into the case qualitative model; since the case qualitative model further comprises a case cause qualitative model and a case property qualitative model, the recognition result is first input into the case cause qualitative model to obtain the output case cause data, and then the case cause data and the impact-related recognition result are input into the case property qualitative model to obtain the output case type result.
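The inference chain of Fig. 8 can be sketched as a simple function composition, with each trained model represented by a placeholder callable (all names are assumptions for illustration):

```python
from typing import Callable, Any

def run_model_set(media: Any,
                  detect: Callable,
                  recognize: Callable,
                  qualify_cause: Callable,
                  qualify_property: Callable) -> Any:
    """Chain the four models in the order shown in Fig. 8."""
    detection = detect(media)                    # field condition detection model
    recognition = recognize(media, detection)    # case property recognition model
    cause = qualify_cause(recognition)           # case cause qualitative model
    return qualify_property(cause, recognition)  # case property qualitative model
```

Keeping the stages as separate callables mirrors the patent's design, where each model can be trained and replaced independently.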
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

1. A method of identifying a case type, comprising:
acquiring multimedia data of a first case;
inputting the multimedia data of the first case into a neural network model set to obtain a qualitative analysis result of the type of the first case, wherein the neural network model set comprises a field condition detection model and a case property identification model.
2. The method of claim 1, wherein the qualitative analysis result of the type of the first case comprises: a plurality of crime types and a plurality of probability values, wherein each crime type corresponds to one probability value, the plurality of crime types comprise a first crime type, the first crime type corresponds to a first probability value, and the first probability value is used for indicating the probability that the first case is qualified as the first crime type.
3. The method according to claim 1 or 2, wherein said inputting multimedia data of said first case into a neural network model set to obtain a result of qualitative analysis of the type of said first case comprises:
detecting the multimedia data of the first case by using the field condition detection model to obtain a detection result, wherein the detection result comprises at least one of the following items: a person detection result, a behavior detection result, a scene detection result, or an event detection result.
4. The method of claim 3, wherein said inputting multimedia data of said first case into a neural network model set to obtain a qualitative analysis of a type of said first case, further comprises:
recognizing the multimedia data of the first case and the detection result by using the case property recognition model to obtain a recognition result, wherein the recognition result comprises at least one of the following: a person information recognition result, a person relationship recognition result, a damage degree recognition result, or a cause and influence recognition result.
5. The method of claim 4, wherein the set of neural network models further includes a case qualitative model,
the inputting the multimedia data of the first case into a neural network model group to obtain a qualitative analysis result of the type of the first case comprises the following steps:
classifying the first case according to the recognition result by using the case qualitative model, so as to obtain a qualitative analysis result of the type of the first case.
6. The method according to claim 5, wherein the recognition result comprises the cause and influence recognition result, and the case qualitative model comprises a case cause qualitative model and a case property qualitative model,
the classifying the first case according to the recognition result by using the case qualitative model to obtain a qualitative analysis result of the first case comprises the following steps:
qualitatively determining the cause of the first case according to the recognition result by using the case cause qualitative model to obtain cause data of the first case;
and classifying the first case according to the cause and influence recognition result and the cause data of the first case by using the case property qualitative model to obtain a qualitative analysis result of the first case.
7. The method according to any one of claims 1 to 6, characterized in that the method is performed by a server, and
the method further comprises the following steps:
sending the qualitative analysis result of the type of the first case to a terminal device.
8. The method according to any one of claims 1 to 7, characterized in that the method is performed by a server, and
the acquiring of the multimedia data of the first case comprises:
receiving the multimedia data of the first case from a terminal device or a monitoring device.
9. The method according to any of claims 1 to 6, characterized in that the method is performed by a terminal device, and
the acquiring of the multimedia data of the first case comprises:
capturing multimedia data of the first case; or
receiving the multimedia data of the first case from a server or a monitoring device.
10. The method according to any one of claims 1 to 9,
the multimedia data of the first case comprises at least one of data captured by surveillance at the case scene or data captured by a law enforcement recorder.
11. An apparatus for identifying a type of case, comprising: the system comprises a transmission interface and a processor, wherein a neural network model group is deployed in the processor and comprises a field condition detection model and a case property identification model;
the transmission interface is used for acquiring multimedia data of a first case;
the processor is used for inputting the multimedia data of the first case into the neural network model group so as to obtain a qualitative analysis result of the type of the first case.
12. The apparatus of claim 11, wherein the qualitative analysis result of the type of the first case comprises: a plurality of crime types and a plurality of probability values, wherein each crime type corresponds to one probability value, the plurality of crime types comprise a first crime type, the first crime type corresponds to a first probability value, and the first probability value is used for indicating the probability that the first case is qualified as the first crime type.
13. The apparatus according to claim 11 or 12, wherein the processor is specifically configured to:
detecting the multimedia data of the first case by using the field condition detection model to obtain a detection result, wherein the detection result comprises at least one of the following items: a person detection result, a behavior detection result, a scene detection result, or an event detection result.
14. The apparatus of claim 13, wherein the processor is further configured to:
recognizing the multimedia data of the first case and the detection result by using the case property recognition model to obtain a recognition result, wherein the recognition result comprises at least one of the following: a person information recognition result, a person relationship recognition result, a damage degree recognition result, or a cause and influence recognition result.
15. The apparatus of claim 14, wherein the set of neural network models further comprises a case qualitative model, and wherein the processor is further configured to:
perform qualitative analysis on the first case according to the recognition result by using the case qualitative model, so as to obtain a qualitative analysis result of the type of the first case.
16. The apparatus of claim 15, wherein the recognition result comprises the cause and influence recognition result, and the case qualitative model comprises a case cause qualitative model and a case property qualitative model,
the processor is specifically configured to:
qualitatively determine the cause of the first case according to the recognition result by using the case cause qualitative model to obtain cause data of the first case;
and classify the first case according to the cause and influence recognition result and the cause data of the first case by using the case property qualitative model to obtain a qualitative analysis result of the first case.
17. The apparatus according to any of claims 11 to 16, wherein the apparatus is configured on a server side, and the apparatus is further configured to:
send the qualitative analysis result of the type of the first case to a terminal device.
18. The apparatus according to any of claims 11 to 17, wherein the apparatus is configured on a server side, and
the acquiring of the multimedia data of the first case comprises:
receiving the multimedia data of the first case from a terminal device or a monitoring device.
19. The apparatus according to any of claims 11 to 16, wherein the apparatus is configured on a terminal device side, and
the acquiring of the multimedia data of the first case comprises:
capturing multimedia data of the first case; or
receiving the multimedia data of the first case from a server or a monitoring device.
20. The apparatus according to any one of claims 11 to 19, wherein the multimedia data of the first case comprises at least one of data captured by surveillance at the case scene or data captured by a law enforcement recorder.
21. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions which, when run on a processor or a computer, cause the method according to any one of claims 1 to 10 to be performed.
22. A computer program product, wherein the method according to any one of claims 1 to 10 is performed when the computer program product runs on a computer or a processor.
CN201911330198.9A 2019-12-20 2019-12-20 Method and device for identifying case type Pending CN113011576A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911330198.9A CN113011576A (en) 2019-12-20 2019-12-20 Method and device for identifying case type

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911330198.9A CN113011576A (en) 2019-12-20 2019-12-20 Method and device for identifying case type

Publications (1)

Publication Number Publication Date
CN113011576A true CN113011576A (en) 2021-06-22

Family

ID=76382174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911330198.9A Pending CN113011576A (en) 2019-12-20 2019-12-20 Method and device for identifying case type

Country Status (1)

Country Link
CN (1) CN113011576A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188206A (en) * 2022-12-06 2023-05-30 北京师范大学 Judicial case decision result prediction method based on decision tree


Similar Documents

Publication Publication Date Title
CN108229297B (en) Face recognition method and device, electronic equipment and computer storage medium
CN109446936A (en) A kind of personal identification method and device for monitoring scene
US10262209B2 (en) Method for analyzing video data
CN107004353B (en) Traffic violation management system and traffic violation management method
CN111368619A (en) Method, device and equipment for detecting suspicious people
CN112464030B (en) Suspicious person determination method and suspicious person determination device
JP2016157165A (en) Person identification system
CN111127508A (en) Target tracking method and device based on video
CN111259682B (en) Method and device for monitoring safety of construction site
CN110717357B (en) Early warning method and device, electronic equipment and storage medium
CN111428572A (en) Information processing method, information processing apparatus, electronic device, and medium
US20210089784A1 (en) System and Method for Processing Video Data from Archive
CN110751116A (en) Target identification method and device
CN113408464A (en) Behavior detection method and device, electronic equipment and storage medium
CN110738077B (en) Foreign matter detection method and device
CN113011576A (en) Method and device for identifying case type
CN109816893B (en) Information transmission method, information transmission device, server, and storage medium
CA3069539C (en) Role-based perception filter
WO2023005662A1 (en) Image processing method and apparatus, electronic device, program product and computer-readable storage medium
CN112818150B (en) Picture content auditing method, device, equipment and medium
CN113837066A (en) Behavior recognition method and device, electronic equipment and computer storage medium
CN114359783A (en) Abnormal event detection method, device and equipment
CN113920544A (en) Safety management system and method for stamping workshop and electronic equipment
CN113947795A (en) Mask wearing detection method, device, equipment and storage medium
CN113762092A (en) Hospital user medical alarm detection method, system, robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 101, No. 2 Hongqiaogang Road, Qingpu District, Shanghai, 201721

Applicant after: Haisi Technology Co.,Ltd.

Address before: Room 101, 318 Shuixiu Road, Jinze Town, Qingpu District, Shanghai, 20121

Applicant before: Shanghai Haisi Technology Co.,Ltd.
