WO2021121206A1 - Method and system for determining responsibility for a service accident - Google Patents

Method and system for determining responsibility for a service accident

Info

Publication number
WO2021121206A1
Authority
WO
WIPO (PCT)
Prior art keywords
service
responsibility
sample
service request
information
Prior art date
Application number
PCT/CN2020/136414
Other languages
English (en)
French (fr)
Inventor
苏红
沙泓州
郄小虎
刘章勋
吴文栋
王震宇
Original Assignee
北京嘀嘀无限科技发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201911329083.8A (published as CN111860927A)
Priority claimed from CN202010036685.0A (published as CN111833137A)
Application filed by 北京嘀嘀无限科技发展有限公司 filed Critical 北京嘀嘀无限科技发展有限公司
Publication of WO2021121206A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 - Commerce
    • G06Q 30/06 - Buying, selling or leasing transactions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/40 - Business processes related to the transportation industry

Definitions

  • This specification relates to the field of computer technology, and in particular to a method and system for determining responsibility for service accidents.
  • One of the embodiments of this specification provides a method for determining responsibility for a service accident. The method includes: obtaining a service request in which a service accident has occurred; extracting the characteristics of the service request; and processing the characteristics based on a judgment model to determine the responsibility determination result of the service accident, the responsibility determination result including at least whether the service provider of the service request is the party responsible for the service accident.
  • One of the embodiments of this specification provides a system. The system includes: at least one database storing instructions for determining the responsibility of a service accident; and at least one processor in communication with the at least one database, wherein, when the instructions are executed, the at least one processor is configured to: obtain a service request in which a service accident has occurred; extract the characteristics of the service request; and process the characteristics based on the judgment model to determine the responsibility determination result of the service accident, the responsibility determination result including at least whether the service provider of the service request is the party responsible for the service accident.
  • One of the embodiments of this specification provides a system for determining the responsibility of a service accident.
  • The system includes: an acquisition module configured to acquire a service request in which a service accident has occurred; an extraction module configured to extract the characteristics of the service request; and a determination module configured to process the characteristics based on the judgment model and determine the responsibility determination result of the service accident, the responsibility determination result including at least whether the service provider of the service request is the party responsible for the service accident.
  • One of the embodiments of this specification provides a computer-readable storage medium that stores computer instructions. After the computer reads the computer instructions in the storage medium, the computer executes the above-mentioned method for determining responsibility for a service accident.
  • One of the embodiments of this specification provides a method for determining the responsibility of a service accident. The method includes: obtaining a service request in which a service accident has occurred; extracting the characteristics of the service request, the characteristics including at least communication information, the communication information being determined based on the communication content between the service requester and the service provider of the service request; and processing the characteristics based on the judgment model to determine the responsibility determination result of the service accident, the responsibility determination result including whether the service provider of the service request is the party responsible for the service accident.
  • One of the embodiments of this specification provides a method for determining responsibility for a service accident. The method includes: obtaining a service request in which a service accident has occurred; extracting the characteristics of the service request; and processing the characteristics based on the judgment model to determine the responsibility determination result of the service accident, the responsibility determination result including: whether the service provider of the service request is the party responsible for the service accident, and, if the service provider is the responsible party, the target responsibility scenario corresponding to the service provider.
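  • As a purely illustrative sketch (not part of the claimed method; all function and class names below are hypothetical), the three steps summarized above (obtaining a service request in which a service accident occurred, extracting its characteristics, and processing them with the judgment model) could be wired together roughly as follows:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResponsibilityResult:
    provider_is_responsible: bool          # whether the service provider is the responsible party
    target_scenario: Optional[str] = None  # responsibility scenario, only when the provider is blamed

def determine_responsibility(service_request, feature_extractor, judgment_model):
    """Hypothetical end-to-end flow mirroring steps 510-530 of the description."""
    # Step 510: only requests in which a service accident occurred are judged
    assert service_request.get("has_service_accident"), "only accident requests are judged"

    # Step 520: extract characteristics (communication information, basic order info, portraits, ...)
    features = feature_extractor(service_request)

    # Step 530: the judgment model decides whether the provider is the responsible party
    is_responsible, scenario = judgment_model.predict(features)
    return ResponsibilityResult(is_responsible, scenario if is_responsible else None)
```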
  • One of the embodiments of this specification provides a computer-readable storage medium that stores computer instructions.
  • When the computer reads the computer instructions in the storage medium, the computer executes the method described in the above technical solutions.
  • FIG. 1 is a schematic diagram of an application scenario of an exemplary online to offline (online to offline, O2O) service system according to some embodiments of this specification;
  • Fig. 2 is a schematic diagram of exemplary hardware and/or software of an exemplary computing device according to some embodiments of the present specification
  • Fig. 3 is a schematic diagram of exemplary hardware and/or software of an exemplary mobile device according to some embodiments of the present specification
  • Fig. 4 is a block diagram of an exemplary processing device according to some embodiments of this specification.
  • FIG. 5 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of this specification
  • Fig. 6 is a flowchart of an exemplary process for extracting features of a service request according to some embodiments of this specification
  • Fig. 7 is a schematic diagram of an exemplary word vector generation model according to some embodiments of the present specification.
  • Fig. 8 is a schematic diagram of an exemplary text vector generation process according to some embodiments of the present specification.
  • FIG. 9 is a schematic diagram of an exemplary process of determining a responsibility judgment result based on a judgment model according to some embodiments of the present specification.
  • FIG. 10 is a schematic diagram of an exemplary process of determining a responsibility judgment result based on a judgment model according to some embodiments of the present specification
  • FIG. 11 is a flowchart of an exemplary process of determining a target responsibility scenario according to some embodiments of this specification.
  • FIG. 12 is a flowchart of an exemplary process of training a judgment model according to some embodiments of this specification.
  • FIG. 13 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of this specification
  • FIG. 14 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of this specification.
  • FIG. 15 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of this specification.
  • FIG. 16 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of this specification
  • FIG. 17 is an exemplary flowchart of a method for establishing a judgment model according to some embodiments of the present specification.
  • Figure 18 is a block diagram of a device for determining responsibility for a service accident according to some embodiments of this specification.
  • Figure 19 is a block diagram of a device for determining responsibility for a service accident according to some embodiments of this specification.
  • FIG. 20 is a flowchart of an exemplary process of training a judgment model according to some embodiments of the present specification
  • FIG. 21 is a flowchart of an exemplary process of training a judgment model according to some embodiments of this specification.
  • FIG. 22 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of this specification.
  • FIG. 23 is a flowchart of an exemplary process of training a judgment model according to some embodiments of this specification.
  • FIG. 24 is a block diagram of an exemplary apparatus for training a judgment model according to some embodiments of this specification.
  • FIG. 25 is a block diagram of an exemplary device for determining responsibility for a service accident according to some embodiments of this specification.
  • FIG. 26 is a block diagram of an exemplary apparatus for determining responsibility for a service accident according to some embodiments of this specification.
  • Fig. 27 is a block diagram of an exemplary apparatus for determining responsibility for a service accident according to some embodiments of the present specification.
  • the system and method of this specification can be applied to any type of on-demand service.
  • the system and method of this specification can be applied to transportation systems in different environments, including land (for example, road or off-road), water (for example, river, lake, or ocean), air, aerospace, etc., or any combination thereof.
  • the means of transportation of the transportation system may include taxis, private cars, hitch rides, buses, trains, high-speed trains, subways, ships, vessels, airplanes, spacecraft, hot air balloons, unmanned vehicles, etc., or any combination thereof.
  • the transportation system may also include any transportation system for management and/or distribution, for example, a system for sending and/or receiving couriers.
  • Applications of the system and method of this specification may include mobile device (for example, smart phone or smart tablet) applications, web pages, browser plug-ins, client terminals, client systems, internal analysis systems, artificial intelligence robots, etc., or any combination thereof.
  • the terms "passenger", "requester", "service requester", and "customer" can be used interchangeably to refer to individuals, entities, or tools that request or order services.
  • the term "driver" can be used to refer to individuals, entities, or tools that provide services or assist in providing services.
  • the terms "service provider" and "provider" in this specification are interchangeable and refer to individuals, entities, or tools that provide services or assist in providing services.
  • the term "user" in this specification is used to refer to individuals, entities, or tools that can request services, order services, provide services, or facilitate the provision of services.
  • the terms “requester” and “service requester terminal” can be used interchangeably, and the terms “provider” and “service provider terminal” can be used interchangeably.
  • the terms "request", "service", "service request", and "order" in this specification can be used to indicate a request initiated by a passenger, requester, service requester, customer, driver, provider, service provider, supplier, etc., or any combination thereof, and can be used interchangeably.
  • the service request can be accepted by any one of the passenger, requester, service requester, customer, driver, provider, service provider, or supplier.
  • the service request is accepted by the driver, provider, service provider, or supplier.
  • Service requests can be billed or free.
  • the positioning technology used in this specification can be based on the Global Positioning System (GPS), Global Navigation Satellite System (GLONASS), Compass Navigation System (COMPASS), Galileo Positioning System, Quasi-Zenith Satellite System (QZSS), Wireless Fidelity (WiFi) positioning technology, etc., or any combination thereof.
  • Fig. 1 is a schematic diagram of an application scenario of an exemplary online to offline (O2O) service system according to some embodiments of the present specification.
  • the O2O service system 100 may be used to determine responsibility for service requests in which service incidents occur.
  • the service request can be any location-based service request.
  • the service request may be related to a transportation service (for example, online taxi service, courier service).
  • the O2O service system 100 may include a server 110, a network 120, a storage device 130, a service requester terminal 140 and a service provider terminal 150.
  • the server 110 may be a single server or a server group.
  • the server group may be centralized or distributed (for example, the server 110 may be a distributed system).
  • the server 110 may be local or remote.
  • the server 110 may access information and/or data stored in the service requester terminal 140, the service provider terminal 150, and/or the storage device 130 via the network 120.
  • the server 110 may be directly connected to the service requester terminal 140, the service provider terminal 150, and/or the storage device 130 to access stored information and/or data.
  • the server 110 may be implemented on a cloud platform.
  • the cloud platform may include private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, internal cloud, multi-layer cloud, etc., or any combination thereof.
  • the server 110 may be implemented on a computing device 200 including one or more components as shown in FIG. 2.
  • the server 110 may include a processing device 112.
  • the processing device 112 may process information and/or data related to the service request to perform one or more functions described in this specification. For example, the processing device 112 may process the characteristics of the service request in which the service accident occurs based on the judgment model, and determine the responsibility judgment result of the service accident.
  • the processing device 112 may include one or more processing engines (e.g., a single-core processing engine or a multi-core processing engine).
  • the processing device 112 may include a central processing unit (CPU), an application specific integrated circuit (ASIC), an application specific instruction set processor (ASIP), a graphics processing unit (GPU), a physical processing unit (PPU), a digital signal processor (DSP), Field programmable gate array (FPGA), programmable logic device (PLD), controller, microcontroller unit, reduced instruction set computer (RISC), microprocessor, etc. or any combination thereof.
  • the processing device 112 may be integrated in the service requester terminal 140 or the service provider terminal 150.
  • the network 120 may facilitate the exchange of information and/or data.
  • one or more components of the O2O service system 100 may send information and/or data via the network 120 to other components of the O2O service system 100.
  • the server 110 may obtain a service request in which a service accident occurs from the storage device 130 via the network 120.
  • the server 110 may obtain a service request in which a service accident occurs from the service requester terminal 140 via the network 120.
  • the network 120 may be a wired network, a wireless network, etc., or any combination thereof.
  • the network 120 may include a cable network, a wired network, an optical fiber network, a telecommunications network, an internal network, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near field communication (NFC) network, etc., or any combination thereof.
  • the network 120 may include one or more network access points.
  • the network 120 may include wired or wireless network access points, such as base stations and/or Internet exchange points 120-1, 120-2, .... Through an access point, one or more components of the O2O service system 100 may be connected to the network 120 to exchange data and/or information.
  • the storage device 130 may store data and/or instructions related to the service request.
  • the storage device 130 may store data acquired from the service provider terminal 150 and/or the service requester terminal 140.
  • the storage device 130 may store data and/or instructions that the server 110 may execute or use to perform the exemplary methods described in this specification.
  • the aforementioned data and/or instructions may include the service request in which the accident occurred, the characteristics of the service request, the communication information corresponding to the service request, and the like.
  • the storage device 130 may include mass memory, removable memory, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof. Exemplary mass storage devices may include magnetic disks, optical disks, solid state disks, and the like.
  • Exemplary removable storage may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tapes, and the like.
  • Exemplary volatile read-write memory may include random access memory (RAM).
  • Exemplary RAM may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), etc.
  • Exemplary ROMs may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory, etc.
  • the storage device 130 may be implemented on a cloud platform.
  • the cloud platform may include private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, internal cloud, multi-layer cloud, etc., or any combination thereof.
  • the storage device 130 may be connected to the network 120 to communicate with one or more components of the O2O service system 100 (for example, the server 110, the service provider terminal 150, and the service requester terminal 140).
  • One or more components of the O2O service system 100 can access data and/or instructions stored in the storage device 130 via the network 120.
  • the storage device 130 may be directly connected to or communicate with one or more components of the O2O service system 100 (for example, the server 110, the service requester terminal 140, the service provider terminal 150).
  • the storage device 130 may be part of the server 110.
  • the service requester terminal 140 may include a mobile device 140-1, a tablet computer 140-2, a laptop computer 140-3, etc., or any combination thereof.
  • the mobile device 140-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, etc., or any combination thereof.
  • the smart home equipment may include smart lighting equipment, smart electrical appliance control devices, smart monitoring equipment, smart TVs, smart cameras, walkie-talkies, etc., or any combination thereof.
  • the wearable device may include smart bracelets, smart footwear, smart glasses, smart helmets, smart watches, smart clothes, smart backpacks, smart accessories, etc., or any combination thereof.
  • smart mobile devices may include smart phones, personal digital assistants (PDAs), gaming devices, navigation devices, point of sale (POS), etc., or any combination thereof.
  • the virtual reality device and/or augmented virtual reality device may include a virtual reality helmet, virtual reality glasses, virtual reality goggles, augmented reality helmets, augmented reality glasses, augmented reality goggles, etc., or any combination thereof.
  • the virtual reality device and/or the augmented reality device may include Google Glass™, Oculus Rift™, HoloLens™, Gear VR™, etc.
  • the service requester terminal 140 may be a device with positioning technology for locating the location of the service requester and/or the service requester terminal 140.
  • the service provider terminal 150 may be a device similar to or the same as the service requester terminal 140.
  • the service provider terminal 150 may include a mobile device 150-1, a tablet computer 150-2, a laptop computer 150-3, etc., or any combination thereof.
  • the service provider terminal 150 may be a device with positioning technology for locating the location of the service provider and/or the service provider terminal 150.
  • the service requester terminal 140 and/or the service provider terminal 150 may communicate with other positioning devices to determine the location of the service requester, the service requester terminal 140, the service provider, and/or the service provider terminal 150 .
  • the service requester terminal 140 and/or the service provider terminal 150 may transmit positioning information to the server 110.
  • the service requester may be a user of the service requester terminal 140.
  • the user of the service requester terminal 140 may be someone other than the service requester.
  • the user A of the service requester terminal 140 may use the service requester terminal 140 to send a service request corresponding to the user B or receive a service confirmation and/or information or instructions from the server 110.
  • the service provider may be a user of the service provider terminal 150.
  • the user of the service provider terminal 150 may be someone other than the service provider.
  • the user C of the service provider terminal 150 may receive a service request for the user D through the service provider terminal 150 and/or receive information or instructions from the server 110.
  • one or more components of the O2O service system 100 may have permission to access the storage device 130.
  • one or more components of the O2O service system 100 can read and/or modify information related to the service requester, service provider, and/or the public when one or more conditions are met.
  • the server 110 may read and/or modify the information of one or more service requesters after the service is completed.
  • the service provider terminal 150 receives a service request from the service requester terminal 140, the service provider terminal 150 can access information related to the service requester, but cannot modify the related information of the service requester.
  • the information exchange of one or more components of the O2O service system 100 can be realized by requesting a service.
  • the object of the service request can be any product.
  • the product may be a tangible product or an intangible product.
  • Tangible products may include food, medicine, commodities, chemical products, electrical appliances, clothing, automobiles, houses, luxury goods, etc., or any combination thereof.
  • Intangible products may include service products, financial products, knowledge products, Internet products, etc., or any combination thereof.
  • Internet products may include personal host products, website products, mobile Internet products, commercial host products, embedded products, etc., or any combination thereof.
  • Mobile Internet products can be used in mobile terminal software, programs, systems, etc., or any combination thereof.
  • the mobile terminal may include a tablet computer, a laptop computer, a mobile phone, a personal digital assistant (PDA), a smart watch, a POS device, a vehicle-mounted computer, a vehicle-mounted TV, a wearable device, etc., or any combination thereof.
  • the product can be any software and/or application used on a computer or mobile phone.
  • the software and/or application program may be related to social networking, shopping, transportation, entertainment, learning, investment, etc. or any combination thereof.
  • transportation-related system software and/or application programs may include travel software and/or application programs, vehicle scheduling software and/or application programs, map software and/or application programs, and the like.
  • vehicles may include horses, carriages, human-powered vehicles (for example, unicycles, bicycles, tricycles, etc.), automobiles (for example, taxis, buses, private cars, etc.), trains, subways, ships, aircraft (for example, airplanes, helicopters, space shuttles, rockets, hot air balloons, etc.), etc., or any combination thereof.
  • the elements of the O2O service system 100 may perform their functions through electrical signals and/or electromagnetic signals.
  • the processor of the service requester terminal 140 may generate an electrical signal encoding the service request. Then, the processor of the service requester terminal 140 may send the electrical signal to an output port. If the service requester terminal 140 communicates with the server 110 via a wired network, the output port may be physically connected to a cable, which further transmits the electrical signal to an input port of the server 110.
  • the output port of the service requester terminal 140 may be one or more antennas, which convert electrical signals into electromagnetic signals.
  • the service provider terminal 150 may process tasks through the operation of logic circuits in its processor, and receive instructions and/or service requests from the server 110 via electrical signals or electromagnetic signals.
  • When the processor processes instructions, issues instructions, and/or performs actions, the instructions and/or actions are carried out through electrical signals.
  • For example, when the processor retrieves or saves data from a storage medium (for example, the storage device 130), it can send an electrical signal to a read/write device of the storage medium, which can read or write structured data in the storage medium.
  • The structured data can be transmitted to the processor in the form of electrical signals through the bus of the electronic device.
  • the electrical signal may refer to one electrical signal, a series of electrical signals, and/or at least two discontinuous electrical signals.
  • Fig. 2 is a schematic diagram of exemplary hardware and/or software of an exemplary computing device according to some embodiments of the present specification.
  • the server 110, the service requester terminal 140, or the service provider terminal 150 may be implemented on the computing device 200.
  • the processing device 112 may be implemented on the computing device 200 and execute the functions of the processing device 112 disclosed in this specification.
  • the computing device 200 may include a bus 210, a processor 220, a read-only memory 230, a random access memory 240, a communication port 250, an input/output 260, and a hard disk 270.
  • the processor 220 can execute calculation instructions (program code) and perform the functions of the O2O service system 100 described in this specification.
  • the calculation instructions may include programs, objects, components, data structures, procedures, modules, functions (the functions refer to the specific functions described in this specification), and the like.
  • the processor 220 may process image or text data obtained from any other components of the O2O service system 100.
  • the processor 220 may include a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application specific integrated circuit (ASIC), an application specific instruction set processor (ASIP), a central processing unit (CPU) , Graphics processing unit (GPU), physical processing unit (PPU), microcontroller unit, digital signal processor (DSP), field programmable gate array (FPGA), advanced RISC machine (ARM), programmable logic device, and Any circuits and processors that perform one or more functions, etc., or any combination thereof.
  • the computing device 200 in FIG. 2 is described with only one processor, but it should be noted that the computing device 200 in this specification may also include multiple processors.
  • the memory of the computing device 200 may store data/information acquired from any other components of the O2O service system 100.
  • exemplary ROMs may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), compact disc ROM (CD-ROM), digital versatile disc ROM, etc.
  • Exemplary RAM may include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), zero-capacitor RAM (Z-RAM), and the like.
  • the input/output 260 may be used to input or output signals, data or information.
  • the input/output 260 may include an input device and an output device.
  • Exemplary input devices may include a keyboard, a mouse, a touch screen, a microphone, etc., or any combination thereof.
  • Exemplary output devices may include display devices, speakers, printers, projectors, etc., or any combination thereof.
  • Exemplary display devices may include liquid crystal displays (LCD), light emitting diode (LED) based displays, flat panel displays, curved displays, television equipment, cathode ray tubes (CRT), etc., or any combination thereof.
  • the communication port 250 can be connected to a network for data communication.
  • the connection can be a wired connection, a wireless connection, or a combination of both.
  • Wired connections can include cables, optical cables, telephone lines, etc., or any combination thereof.
  • the wireless connection may include Bluetooth, Wi-Fi, WiMax, WLAN, ZigBee, mobile networks (for example, 3G, 4G, or 5G, etc.), etc., or any combination thereof.
  • the communication port 250 may be a standardized port, such as RS232, RS485, and so on. In some embodiments, the communication port 250 may be a specially designed port.
  • Fig. 3 is a schematic diagram of exemplary hardware and/or software of an exemplary mobile device according to some embodiments of the present specification.
  • the mobile device 300 may include a communication unit 310, a display unit 320, a graphics processing unit (GPU) 330, a central processing unit (CPU) 340, an input/output unit 350, a memory 360, a storage unit 370, and the like.
  • the mobile device 300 may also include any other suitable components, including but not limited to a system bus or a controller (not shown in the figure).
  • the operating system 361 (for example, iOS, Android, Windows Phone, etc.) and the application program 362 may be loaded from the storage unit 370 into the memory 360 so as to be executed by the CPU 340.
  • the application program 362 may include a browser or an application program for receiving text, image, audio, or other related information from the O2O service system 100.
  • the user interaction of the information flow may be implemented through the input/output unit 350 and provided to the processing device 112 and/or other components of the O2O service system 100 through the network 120.
  • a computing device or a mobile device can be used as a hardware platform for one or more components described in this specification.
  • the hardware components, operating systems, and programming languages of these computers or mobile devices are conventional in nature, and those skilled in the art can adapt these technologies to the system described in this specification after being familiar with these technologies.
  • a computer with user interface elements can be used to implement a personal computer (PC) or other types of workstations or terminal devices, and if properly programmed, the computer can also act as a server.
  • Fig. 4 is a block diagram of an exemplary processing device according to some embodiments of the present specification.
  • the processing device 112 of the system for determining the responsibility of the service accident may include an acquisition module 410, an extraction module 420, a determination module 430, and a return module 440.
  • the obtaining module 410 may be used to obtain a service request.
  • the service request may be a cancelled service request.
  • For more details about the service request, please refer to step 510 and its related description, which will not be repeated here.
  • the obtaining module 410 may also be used to obtain the judgment model (for example, obtain the judgment model through training).
  • the obtaining module 410 may obtain multiple sample service requests, the label information corresponding to each sample service request, and the sample characteristics corresponding to each sample service request, and obtain the trained judgment model based on the multiple sample service requests and their corresponding label information and sample characteristics. Regarding the training of the judgment model, refer to FIG. 12 and its related descriptions, which will not be repeated here.
  • the obtaining module 410 may also update the judgment model.
  • the obtaining module 410 may also be used to obtain appeal information of the service provider, and update the judgment model based on the appeal information and the responsibility judgment result.
  • For more details about the update of the judgment model, please refer to step 530 and its related description, which will not be repeated here.
  • the extraction module 420 may be used to extract features of the service request.
  • Features include: communication information, basic information of the service request, portrait information of the service provider and/or portrait information of the service requester.
  • the extraction module 420 may determine the corresponding text vector feature (eg, target text vector) based on the text information (eg, communication information) in the feature.
  • For more details on the features, please refer to step 520 and its related description; for more details on determining the target text vector, refer to FIG. 6 and its related description.
  • the determination module 430 can be used to determine the result of the responsibility determination.
  • the responsibility determination result includes at least: whether the service provider of the service request is the party responsible for the service accident; if the responsibility determination result shows that the service provider is the responsible party, the responsibility determination result also includes: the target responsibility scenario corresponding to the service provider.
  • the determination module 430 may process the features based on the responsibility model to determine the responsibility determination result of the service accident.
  • the determination module 430 may determine candidate responsibility scenarios, and determine the target responsibility scenarios based on the priority of the candidate responsibility scenarios. For more details about the responsibility determination result, the target responsibility scenario and its priority, please refer to steps 530 and 1120, which will not be repeated here.
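  • As a hedged illustration of the priority-based selection described for the determination module 430, one possible sketch is to keep a priority table over responsibility scenarios and pick the highest-priority candidate; the scenario names and priority values below are invented for the example:

```python
# Hypothetical priority table: a larger number means a higher priority (values are illustrative only)
SCENARIO_PRIORITY = {
    "provider caused a vehicle accident": 3,
    "provider did not arrive at the pickup location in time": 2,
    "provider did not answer the call in time": 1,
}

def select_target_scenario(candidate_scenarios):
    """Return the candidate responsibility scenario with the highest priority, or None."""
    if not candidate_scenarios:
        return None
    return max(candidate_scenarios, key=lambda s: SCENARIO_PRIORITY.get(s, 0))

# Example: two candidate scenarios detected for one cancelled order
print(select_target_scenario([
    "provider did not answer the call in time",
    "provider did not arrive at the pickup location in time",
]))  # -> "provider did not arrive at the pickup location in time"
```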
  • the return module 440 may be used to return the responsibility determination result to the service provider or/and the service requester.
  • system and its modules shown in FIG. 4 can be implemented in various ways.
  • the system and its modules may be implemented by hardware, software, or a combination of software and hardware.
  • the hardware part can be implemented using dedicated logic;
  • the software part can be stored in a memory and executed by an appropriate instruction execution system, such as a microprocessor or dedicated design hardware.
  • the above-mentioned methods and systems can be implemented using computer-executable instructions and/or included in processor control code, for example, provided on a carrier medium such as a disk, CD, or DVD-ROM, on a programmable memory such as a read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier.
  • the system and its modules in this specification can be implemented not only by hardware circuits such as very large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (for example, firmware).
  • the above description of the processing device 112 and its modules of the O2O service system 100 is only for convenience of description, and does not limit this specification to the scope of the embodiments mentioned. It can be understood that for those skilled in the art, after understanding the principle of the system, it is possible to arbitrarily combine various modules, or form a subsystem to connect with other modules without departing from this principle.
  • the acquisition module 410 and the extraction module 420 may be two different modules, or may be combined into the same module. Such deformations are all within the protection scope of this specification.
  • Fig. 5 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of the present specification.
  • the process 500 may be executed by a processing device (for example, the processing device 112 or other processing devices).
  • the process 500 may be stored in a storage device (for example, the storage device 130 or a storage unit of a processing device) in the form of a program or instruction.
  • When the processor 220 or the modules shown in FIG. 4 execute the program or instructions, the process 500 may be implemented.
  • the process 500 may be accomplished with one or more additional operations not described below, and/or without one or more of the operations discussed below.
  • the order of operations shown in FIG. 5 is not restrictive.
  • Step 510: Obtain a service request in which a service accident has occurred.
  • step 510 may be performed by the acquisition module 410.
  • a service request may refer to a request initiated by a passenger, a requester, a service requester, a user, a driver, a provider, a service provider, a supplier, etc., or any combination thereof.
  • the service request may be any location-based service request.
  • the service request may be a request related to transportation services (for example, online taxi service, courier service).
  • the service request may be a real-time request or an appointment request.
  • a real-time request may mean that the requester expects to receive the service at the current moment or within a period from the current moment that is less than a preset threshold.
  • an appointment request may mean that the requester expects to receive the service at a specified time that is more than the preset threshold away from the current moment.
  • the preset threshold may be a system default value, or it may be adjusted according to different situations.
  • the preset threshold may be 3 minutes, 5 minutes, 10 minutes, 20 minutes, 30 minutes, 1 hour, etc.
  • For example, during rush hours, the preset threshold can be set to be relatively small (for example, 10 minutes), while during off-peak hours (for example, 10:00-12:00 in the morning), the preset threshold can be set to be relatively large (for example, 1 hour).
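  • The distinction between a real-time request and an appointment request reduces to comparing the requested service time against the preset threshold; a minimal sketch (with an assumed default threshold of 30 minutes) is:

```python
from datetime import datetime, timedelta

def classify_request(requested_time: datetime, now: datetime,
                     threshold: timedelta = timedelta(minutes=30)) -> str:
    """Classify a service request as real-time or appointment.

    The threshold is a placeholder default; per the description it could be
    tuned smaller during rush hours and larger during off-peak hours.
    """
    return "real-time" if requested_time - now <= threshold else "appointment"

now = datetime(2020, 12, 15, 8, 0)
print(classify_request(datetime(2020, 12, 15, 8, 10), now))   # real-time
print(classify_request(datetime(2020, 12, 15, 10, 0), now))   # appointment
```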
  • the service requester and the service provider may be different.
  • the service requester may be a passenger, and the service provider may be a driver.
  • the corresponding service request may be a taxi request.
  • the service requester may be a user, and the service provider may be a rider, and the corresponding service request may be a takeaway request.
  • the embodiment of this specification takes an "online taxi service" scenario as an example for description.
  • a service incident can refer to an abnormal situation related to a service request.
  • service incidents may include service requests being cancelled, service requests being interrupted, service requests being delayed, and so on.
  • service incidents may be related to negative events. Negative events can include traffic accidents, service providers providing services overtime, service providers providing services incorrectly, service requesters complaining about service providers, etc.
  • service incidents may be caused by multiple reasons.
  • the cancellation of the service request may be caused by the service provider.
  • For example, the service request may be cancelled by the service requester because the service provider did not answer the phone in time or the service provider's vehicle did not arrive at the boarding location in time (possibly accompanied by a complaint from the service requester against the service provider).
  • the cancellation of the service request may be caused by the service requester. For example, the service request may be cancelled because the service requester did not answer the phone in time, did not reach the boarding location in time, temporarily changed the itinerary, etc. (in some cases, accompanied by a malicious complaint).
  • the interruption of the service request may be related to an objective event. For example, on the way from the service provider to the boarding location, the service request is interrupted due to a traffic accident, a breakdown of the service provider's vehicle, or the inability of the service provider to arrive due to health reasons.
  • the delay of the service request may be caused by the service provider or the service requester.
  • For example, the service request may be delayed because the service provider did not accept the service request in time, the service provider was late, the service provider's signal was delayed by the network, or the service requester did not arrive at the pick-up location in time.
  • the service request may be further complained by the service provider or the service requester.
  • the cancelled service request may include the complained service request.
  • the obtaining module 410 may obtain a service request in which a service accident occurs in a variety of ways.
  • the obtaining module 410 may obtain a service request in which a service accident occurs from a storage device (for example, the storage device 130).
  • the obtaining module 410 may obtain the cancelled service request from the service request data stored in the storage device (for example, the database 130).
  • the cancelled service request may be identified according to the mileage of the service request, charging information (for example, the mileage does not reach the target mileage of the service request and the charge is zero), etc.
  • the obtaining module 410 may obtain a service request in which a service accident occurs from the service provider terminal 150 or the service requester terminal 140 via the network 120. For example, the obtaining module 410 may obtain one or more service requests interrupted by the service provider at the current moment from the service provider terminal 150.
  • Step 520: Extract the characteristics of the service request.
  • step 520 may be performed by the extraction module 420.
  • the characteristics of the service request may include at least communication information.
  • the communication information may be determined based on the communication content between the service requester and the service provider of the service request.
  • the communication content may be presented in the form of text, voice, image, video, etc.
  • the communication content may be a text chat record between the service provider and the service requester.
  • the communication content may be voice chat records, call recordings, etc. between the service provider and the service requester.
  • For example, the service provider and the service requester can send voice messages to each other through the in-app chat function; the communication content may also include call recordings in which the service provider and the service requester negotiate issues that may change, such as the pick-up location and pick-up time, etc.
  • the extraction module 420 may process the communication information, and extract the text vector corresponding to the communication information.
  • For more details about the text vector of the communication information, please refer to FIG. 6 and its related description, which will not be repeated here.
  • the extraction module 420 may also preprocess the communication information. For example, the extraction module 420 may convert the communication content in the form of voice into the corresponding communication content in the form of text. For another example, the extraction module 420 may perform noise reduction processing on the communication content in the form of voice (for example, perform noise reduction processing through an acoustic model).
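  • A minimal sketch of the preprocessing described above, in which voice communication content is converted into text before vectorization, might look like the following; the noise-reduction and speech-to-text functions are placeholders for whatever acoustic model and ASR engine a real system would use:

```python
def denoise_audio(audio_bytes):
    """Placeholder for an acoustic-model-based noise reduction step."""
    return audio_bytes

def speech_to_text(audio_bytes):
    """Placeholder for an automatic-speech-recognition step."""
    return "<transcribed speech>"

def preprocess_communication(records):
    """Normalize driver/passenger communication records into one plain-text string."""
    texts = []
    for record in records:
        if record["type"] == "voice":            # call recording or in-app voice message
            audio = denoise_audio(record["payload"])
            texts.append(speech_to_text(audio))
        else:                                     # in-app text chat message
            texts.append(record["payload"])
    return " ".join(texts)

print(preprocess_communication([
    {"type": "text", "payload": "Where should I pick you up?"},
    {"type": "voice", "payload": b"..."},
]))
```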
  • the characteristics of the service request may include basic information of the service request, portrait information of the service provider, portrait information of the service requester, etc., or any combination thereof.
  • the basic information of the service request may include the departure place, destination, boarding location, departure time, mileage, estimated price, the driving distance of the driver (for example, the distance between the driver's location when the service request was accepted and the boarding location), the time when the driver accepted the service request, the location of the driver when the service request was accepted, whether the driver and the passenger were on the way, information related to the service accident (for example, the time interval or mileage between when the driver accepted the service request and when the service request was cancelled, the time when the service request was cancelled, whether the driver arrived at the pick-up location), etc.
  • the basic information of the service request may also include vehicle information.
  • Vehicle information can include vehicle type, vehicle age, total vehicle mileage, vehicle cost, vehicle revenue, vehicle price, license plate number, fuel consumption per kilometer, remaining fuel, trunk size, other equipment information (e.g., emergency medical equipment information, fire extinguishing equipment information, etc.), vehicle location information, etc., or any combination thereof.
  • the portrait information of the service provider refers to labeled information abstracted based on information such as the service provider's social attributes, living habits, service provision behavior, and/or historical information.
  • the historical information of the service provider may include the service score of the service provider, the complaint rate of the service provider, and so on.
  • the historical information of the service provider may include the cancellation rate of service requests within a historical preset time period (for example, within 1 to 3 months), the rate at which the service provider was held responsible in the cancelled service requests, the service provider's service turnover (for example, the total price of all services provided by the service provider), the service volume of the service provider (that is, the number of all services provided by the service provider), etc.
  • the portrait information of the service requester refers to tagged information abstracted based on information such as the service requester's social attributes, living habits, service request behavior, and/or historical information.
  • the historical information of the service requester may include the credit score of the service requester, the credit rating of the service requester, and so on.
  • the historical information of the service requester may include the cancellation rate of the requests initiated by the service requester, the complaint rate of the service requester, the rate at which the service requester was held responsible in the cancelled service requests, the service turnover of the service requester, the service volume of the service requester, and so on.
  • the extraction module 420 may extract the above-mentioned features in a variety of ways. In some embodiments, the extraction module 420 may extract the above-mentioned features from one or more components of the O2O service system 100 (for example, the storage device 130, the service provider terminal 150, the service requester terminal 140, etc.) or from an external source via the network 120 . In some embodiments, the extraction module 420 may extract the features of the service request through a feature extraction algorithm.
  • the feature extraction algorithm may include HOG feature extraction algorithm, LBP feature extraction algorithm, Haar feature extraction algorithm, LoG feature extraction algorithm, Harris corner feature extraction algorithm, SIFT feature extraction algorithm, SURF feature extraction algorithm, etc., or any combination thereof.
  • the characteristics of the service request may be expressed in multiple forms.
  • the characteristics of the service request may be expressed in the form of non-real values (for example, vectors, characters, strings, codes, graphics, etc.).
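  • As a hedged sketch of how these characteristics could be combined before being fed to the judgment model, the text vector of the communication information can be concatenated with numeric features derived from the basic order information and the two portraits; every field name below is illustrative rather than taken from the patent:

```python
import numpy as np

def build_feature_vector(text_vector, basic_info, provider_portrait, requester_portrait):
    """Concatenate the communication text vector with numeric order/portrait features."""
    numeric = np.array([
        basic_info["mileage_km"],
        basic_info["seconds_until_cancel"],
        float(basic_info["driver_arrived"]),
        provider_portrait["historical_cancel_rate"],
        provider_portrait["historical_blame_rate"],
        requester_portrait["historical_complaint_rate"],
    ], dtype=np.float32)
    return np.concatenate([np.asarray(text_vector, dtype=np.float32), numeric])
```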
  • Step 530: Process the characteristics based on the judgment model, and determine the responsibility determination result of the service accident.
  • step 530 may be performed by the determination module 430.
  • the responsibility determination result may include whether the service provider of the service request is the party responsible for the service incident. In some embodiments, if the result of the responsibility determination indicates that the service provider is the party responsible for the service accident, the result of the responsibility determination also includes the target responsibility scenario corresponding to the service provider.
  • the "responsibility scenario” may reflect the cause of the service accident to a certain extent.
  • the responsibility scenario corresponding to the service provider may be "the service provider did not answer the call in time", "the vehicle did not arrive at the destination in time”, and so on.
  • the responsibility scenario corresponding to the service requester may be "the service requester initiates an incorrect service request", "the service requester initiates a malicious complaint", and so on.
  • the responsibility scenario may also reflect the severity of the service accident.
  • different liability scenarios may correspond to different severity levels. For example, the responsibility scenario "service provider caused a vehicle accident” may have a severity corresponding to "very serious”; and the responsibility scenario “service provider did not arrive at the destination in time” may have a corresponding severity of "moderate severity”.
  • the responsibility determination model may be a machine learning model.
  • machine learning models may include, but are not limited to, convolutional neural network models, recurrent neural network models, XGBoost models, decision tree models, GBDT (Gradient Boosting Decision Tree / Gradient Boosted Regression Tree) models, linear regression models, and the like.
  • the judgment model may include one or more classification models for performing classification tasks.
  • the classification model may include, but is not limited to, a k-nearest neighbors (KNN) model, a perceptron model, a naive Bayes model, a decision tree model, a logistic regression model, a support vector machine model, a random forest model, a neural network model, etc., or any combination thereof.
  • the judgment module 430 can input the characteristics of the service request into the judgment model, and the judgment model can output the judgment result of whether the service provider is the party responsible for the service accident.
  • the judgment model may include a sub-model for performing a binary classification task. This sub-model can be used to classify the characteristics of the input service request into two categories and output whether the service provider of the service request is the party responsible for the service accident. For example, the output responsibility determination result is "Yes" or "No". Further, the determination module 430 may also determine the target responsibility scenario corresponding to the service provider based on preset rules. For more details, please refer to FIG. 10 and its related descriptions, which will not be repeated here.
  • the judgment module 430 may input the characteristics of the service request into the judgment model, and the judgment model may simultaneously output the judgment result of whether the service provider is the party responsible for the service accident and the corresponding target liability scenario.
  • the judgment model may include a first sub-model for performing a binary classification task and a second sub-model for performing a multi-class classification task.
  • the first sub-model can be used to classify the characteristics of the input service request into two categories and output whether the service provider of the service request is the party responsible for the service accident.
  • the second sub-model can be used to classify the characteristics of the input service request and output the target responsibility scenario (for example, "the service provider did not arrive at the service request location on time", "the service provider timed out in accepting the service request").
  • the judgment module 430 can input the above-mentioned characteristics of the service request into the judgment model, and the judgment model can output the probability that the service provider is the responsible party for the service accident.
  • the judgment module 430 can further determine, according to the output probability, whether the service provider is the party responsible for the service accident. For example, if the probability value output by the model is greater than the preset probability threshold, the determination module 430 may determine that the service provider is the party responsible for the service accident.
  • the preset probability threshold can be a system default value, or it can be adjusted according to different situations.
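  • As a minimal illustration of the above (a sketch with hypothetical names and values, not taken from this specification), the output probability can be compared with the preset probability threshold as follows:

```python
# Minimal sketch (hypothetical names/values): turn the judgment model's output
# probability into a yes/no responsibility determination result.

def determine_responsibility(probability: float, threshold: float = 0.5) -> str:
    """Return "Yes" if the service provider is judged to be the responsible party.

    `threshold` is the preset probability threshold, which may be a system
    default value or adjusted according to different situations.
    """
    return "Yes" if probability > threshold else "No"

# Example: a model output of 0.73 with the default threshold 0.5 yields "Yes".
print(determine_responsibility(0.73))
```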
  • the training module may train the judgment model based on multiple sets of labeled training samples. Specifically, the training module may input the labeled training samples into the initial judgment model, and iteratively update the parameters of the initial judgment model to determine the final judgment model. In some embodiments, the training module may train the judgment model through various methods (eg, gradient descent method). For more details about training the judgment model, please refer to Figure 12 and its related descriptions, which will not be repeated here.
  • the determination module 430 may also send the responsibility determination result to the service provider.
  • the service provider can send appeal information to the system if it disagrees with the responsibility judgment result.
  • the training module may obtain appeal information and update the judgment model based on the appeal information and the responsibility determination result.
  • the training module can update the label of the training sample, and optimize the existing model based on the continuously expanding training sample, thereby improving the accuracy of the judgment model outputting responsibility judgment results.
  • in some embodiments of this specification, it can be determined based on the judgment model whether the service provider of the service request is the responsible party of the service accident, and the corresponding target liability scenario can be determined. The target liability scenario may also be sent to the service provider, so that the service provider can understand in a timely manner why it is the party responsible for the service accident. On the one hand, this can increase the service provider's recognition of its responsibility; on the other hand, it can also help the service provider to better improve its own services and provide better services.
  • Fig. 6 is a flowchart of an exemplary process for extracting features of a service request according to some embodiments of the present specification.
  • the process 600 may be executed by a processing device (for example, the processing device 112, such as the extraction module 420) or other processing devices.
  • the process 600 may be stored in a storage device (for example, the storage device 130 or a storage unit of a processing device) in the form of a program or instruction.
  • when the processor 220 or the module shown in FIG. 4 executes the program or instruction, the process 600 may be implemented.
  • the process 600 may utilize one or more additional operations not described below, and/or not be completed by one or more operations discussed below.
  • the order of operations shown in FIG. 6 is not restrictive.
  • Step 610 Perform word segmentation processing on the communication information, and determine at least one target word segmentation result.
  • word segmentation processing refers to dividing a continuous text (for example, a sentence) into several word or phrase sequences according to certain rules. For example, assuming that the text is "Beijing Airport Arrival Gate", the result of word segmentation processing on this text may be "Beijing / Airport / Arrival Gate".
  • the target word segmentation result is the result of word segmentation processing on the text of the communication information.
  • the target segmentation result can include independent words, phrases, punctuation marks or other semantic units with definite meaning in the communication information.
  • word segmentation methods for word segmentation processing may include, but are not limited to, dictionary-based word segmentation methods, understanding-based word segmentation methods, statistics-based word segmentation methods (for example, the N-gram model, the hidden Markov model, etc.), and rule-based word segmentation methods (for example, the minimum matching algorithm, the maximum matching algorithm, the reverse maximum matching algorithm, the verbatim matching algorithm, the N-shortest path word segmentation algorithm, etc.).
  • the processing device may perform word segmentation on the communication information to obtain at least one preliminary word segmentation result. Further, the processing device may filter the at least one preliminary word segmentation result based on the attribute characteristics respectively corresponding to the at least one preliminary word segmentation result to determine at least one target word segmentation result.
  • the processing device may determine at least one preliminary word segmentation result based on a preset rule. For example, the processing device may segment the communication information based on the smallest word unit or symbol unit to obtain at least one preliminary word segmentation result.
  • at least one preliminary word segmentation result may include nouns, verbs, punctuation marks, and so on.
  • for example, assuming the communication information is "2019/12/14 12:01 driver: hello, I am about to arrive at the pick-up point", the at least one preliminary word segmentation result can include "2019/12/14 12:01", "driver", "Hello", ",", "I", "Coming soon", "Arrival", "Boarding point".
  • as another example, the at least one preliminary word segmentation result can include "time", "2019/12/14", "location", "somewhere", "person", "A", "B".
  • the attribute feature may refer to information that can reflect the characteristics of a preliminary word segmentation result, such as its type, nature, and actual meaning.
  • the attribute feature may include the communication party corresponding to the preliminary word segmentation result (for example, the service requester, service provider, customer service), the importance of the preliminary word segmentation result, and so on.
  • "importance” can reflect the importance or significance of the preliminary word segmentation result in communication in a specific application scenario. For example, taking the "online taxi” scenario in this manual as an example, suppose the preliminary word segmentation result is "I/Go/Pedestrian Street". Here, "Go” is only a conjunctive verb that expresses an action, so the importance of "Go” is low. In “Me” and "Pedestrian Street”.
  • the processing device may filter the at least one preliminary word segmentation result based on the attribute characteristics. For example, the processing device can remove the less important word segmentation results.
  • for example, assuming the communication information is "2019/12/14 12:01 driver: hello, I am about to arrive at the pick-up point" and the at least one preliminary word segmentation result is "2019/12/14 12:01", "driver", "Hello", ",", "I", "Coming soon", "Arrival", "Boarding point", the processing equipment can remove the preliminary word segmentation results with low importance, such as "Hello" and the punctuation marks, to obtain the at least one target word segmentation result.
  • the processing device may process at least one preliminary word segmentation result based on semantic information to determine at least one target word segmentation result. For example, the processing device may merge one or more preliminary word segmentation results corresponding to the smallest semantic unit according to the largest semantic unit to determine the target word segmentation result.
  • for example, the processing equipment can merge the preliminary word segmentation results corresponding to the smallest semantic units to determine the target word segmentation result "I / Go / Next to West Lake / Zhejiang University".
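  • As a rough illustration of the segmentation and filtering described above, the following Python sketch combines a dictionary-based forward maximum matching segmenter with a simple importance filter; the dictionary, stop-word list, and token handling are assumptions made only for this example:

```python
# Illustrative sketch: forward maximum matching segmentation followed by a
# simple importance filter. The dictionary and stop words are hypothetical.

DICTIONARY = {"I", "am about to", "arrive", "pick-up point", "hello", "driver"}
STOP_WORDS = {"hello", ","}   # low-importance tokens to drop
MAX_WORD_LEN = 3              # longest dictionary entry, counted in tokens

def forward_max_match(tokens):
    """Greedily match the longest dictionary entry starting at each position."""
    result, i = [], 0
    while i < len(tokens):
        for j in range(min(MAX_WORD_LEN, len(tokens) - i), 0, -1):
            candidate = " ".join(tokens[i:i + j])
            if candidate in DICTIONARY or j == 1:
                result.append(candidate)
                i += j
                break
    return result

def target_segments(text):
    preliminary = forward_max_match(text.split())            # preliminary results
    return [w for w in preliminary if w not in STOP_WORDS]   # target results

print(target_segments("hello , I am about to arrive pick-up point"))
# -> ['I', 'am about to', 'arrive', 'pick-up point']
```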
  • Step 620 Convert at least one target word segmentation result into at least one target word segmentation vector.
  • the processing device may convert at least one target word segmentation result into at least one target word segmentation vector through a word segmentation model.
  • the word segmentation model may include, but is not limited to, a word2vec model, an N-gram model, a CBOW model, and the like.
  • the processing device may convert at least one target word segmentation result into at least one target word segmentation vector through an encoding algorithm.
  • the encoding algorithm may include, but is not limited to, one-hot encoding, N-gram algorithm, and so on.
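  • The conversion in step 620 could, for example, be sketched with the gensim library's word2vec implementation as below; the toy corpus, vector size, and other parameters are placeholder choices (gensim 4.x API assumed), not values prescribed by this specification:

```python
# Sketch: convert target word segmentation results into word vectors with a
# word2vec model (gensim 4.x assumed). Corpus and dimensions are placeholders.

from gensim.models import Word2Vec

corpus = [
    ["driver", "arrive", "pick-up point"],
    ["passenger", "wait", "pick-up point"],
]  # each inner list is one segmented communication message

model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1)

target_segments = ["driver", "arrive", "pick-up point"]
target_vectors = [model.wv[w] for w in target_segments]  # one 50-dim vector per segment
print(len(target_vectors), target_vectors[0].shape)      # -> 3 (50,)
```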
  • Step 630 Determine a target text vector corresponding to the communication information based on at least one target word segmentation vector.
  • the processing device may splice at least one target word segmentation vector to determine the target text vector corresponding to the communication information.
  • the processing device may splice the at least one target word segmentation vector into a two-dimensional matrix according to the appearance order of the at least one target word segmentation result in the communication information (where each row represents a target word segmentation vector, and the number of rows represents the number of the at least one target word segmentation result), and then obtain the target text vector corresponding to the communication information through convolution and maximum pooling.
  • the processing device may also determine the target text vector through other models or algorithms.
  • the above-mentioned other models and algorithms may include, but are not limited to, the bag-of-words model, the Word2Vec model, the N-gram model, the BERT model, etc.
  • Fig. 7 is a schematic diagram of an exemplary word vector generation model according to some embodiments of the present specification.
  • the processing device (for example, the processing device 112) can convert at least one target word segmentation result into at least one target word segmentation vector through a word segmentation model.
  • the word2vec model includes an input layer, a hidden layer, and an output layer.
  • the one-hot vector corresponding to the target word segmentation result can be determined. For example, assuming there are 3 target segmentation results, the one-hot vectors corresponding to the 3 target segmentation results can be (1, 0, 0), (0, 1, 0), and (0, 0, 1).
  • a preset threshold can be a system default value, or it can be adjusted according to different situations.
  • after the one-hot vector is multiplied by the hidden-layer weight matrix, the resulting row vector is the target word segmentation vector corresponding to the target word segmentation result Vj.
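  • The lookup described above can be sketched in a few lines of numpy: multiplying the one-hot vector by the hidden-layer weight matrix selects a single row, and that row serves as the target word segmentation vector (all sizes below are illustrative):

```python
# Sketch: the one-hot vector picks out one row of the hidden-layer weight
# matrix, and that row is the word vector. Sizes are illustrative only.

import numpy as np

vocab_size, hidden_size = 3, 4
W = np.arange(vocab_size * hidden_size, dtype=float).reshape(vocab_size, hidden_size)

one_hot_v2 = np.array([0.0, 1.0, 0.0])  # one-hot vector of the second word
word_vector = one_hot_v2 @ W             # equals W[1]
assert np.allclose(word_vector, W[1])
print(word_vector)                       # -> [4. 5. 6. 7.]
```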
  • Fig. 8 is a schematic diagram of an exemplary text vector generation process according to some embodiments of the present specification.
  • at least one target word segmentation vector corresponding to at least one target word segmentation result can be determined based on the word vector generation model.
  • a convolutional neural network may be used to determine the target text vector corresponding to the communication information based on at least one target word segmentation vector.
  • at least one target word segmentation vector may be spliced into a two-dimensional matrix, where each row in the two-dimensional matrix represents a target word segmentation vector, and the number of rows in the two-dimensional matrix represents the number of the at least one target word segmentation vector.
  • the two-dimensional matrix can be further processed through convolution and maximum pooling to obtain the target text vector corresponding to the communication information.
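  • A minimal numpy sketch of this pipeline is given below: word vectors are stacked into a two-dimensional matrix, convolved with a few filters spanning the full embedding width, and max-pooled over time to obtain a text vector; the filter count and window size are assumptions made for illustration only:

```python
# Sketch: stack word vectors -> convolve -> max-pool over time -> text vector.
# Filter count and window size are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
num_words, dim, num_filters, window = 6, 8, 4, 3

matrix = rng.normal(size=(num_words, dim))             # one row per target word vector
filters = rng.normal(size=(num_filters, window, dim))  # convolution kernels

feature_maps = np.array([
    [(matrix[i:i + window] * f).sum() for i in range(num_words - window + 1)]
    for f in filters
])                                                      # shape: (num_filters, num_words - window + 1)

text_vector = feature_maps.max(axis=1)                  # max pooling over time
print(text_vector.shape)                                # -> (4,), the target text vector
```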
  • Fig. 9 is a schematic diagram of an exemplary process of determining a responsibility judgment result based on a judgment model according to some embodiments of the present specification.
  • the process 900 may be executed by the processing device 112 (for example, the determination module 430) or other processing devices.
  • the process 900 may be stored in a storage device (for example, the storage device 130 or a storage unit of a processing device) in the form of a program or instruction.
  • when the processor 220 or the module shown in FIG. 4 executes the program or instruction, the process 900 may be implemented.
  • the process 900 may utilize one or more additional operations not described below, and/or not be completed by one or more operations discussed below.
  • the judgment module 430 can input the characteristics of the service request into the judgment model, and then the judgment model can simultaneously output the judgment result of whether the service provider is the party responsible for the service accident and the corresponding target liability scenario.
  • the determination module 430 can input the feature 901 of the service request into the judgment model 902, and determine whether the service provider is the party responsible for the service accident through the output of the judgment model 902.
  • the responsibility judgment model 902 may include a responsible party judgment sub-model 902-1 and a responsibility scenario judgment sub-model 902-2.
  • the responsible party determination sub-model 902-1 can be used to determine whether the service provider of the service request is the party responsible for the service accident
  • the responsibility scenario determination sub-model 902-2 can be used to determine the responsibility scenario of the service accident.
  • the responsible party determination sub-model 902-1 and/or the responsibility scenario determination sub-model 902-2 may be a classification model.
  • the classification model may include, but is not limited to, a KNN (k-nearest neighbors) model, perceptron model, naive Bayes model, decision tree model, logistic regression model, support vector machine model, random forest model, neural network model, etc., or any combination thereof.
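  • The two-sub-model arrangement could be sketched as follows, using scikit-learn classifiers as stand-ins for the responsible party determination sub-model 902-1 and the responsibility scenario determination sub-model 902-2; the features, labels, and model choices are hypothetical:

```python
# Sketch: one binary classifier for the responsible party, one multi-class
# classifier for the responsibility scenario. All data here is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 5)                    # features of sample service requests
y_responsible = np.random.randint(0, 2, 200)  # 1 = provider responsible, 0 = not
y_scenario = np.random.randint(0, 3, 200)     # 0/1/2 = hypothetical responsibility scenarios

responsible_party_model = LogisticRegression().fit(X, y_responsible)          # sub-model 902-1
scenario_model = RandomForestClassifier(n_estimators=50).fit(X, y_scenario)   # sub-model 902-2

x_new = np.random.rand(1, 5)
print(responsible_party_model.predict(x_new), scenario_model.predict(x_new))
```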
  • Fig. 10 is a schematic diagram of an exemplary process of determining a responsibility judgment result based on a judgment model according to some embodiments of the present specification.
  • the process 1000 may be executed by the processing device 112 (for example, the determination module 430) or other processing devices.
  • the process 1000 may be stored in a storage device (for example, the storage device 130 or a storage unit of a processing device) in the form of a program or instruction.
  • when the processor 220 or the module shown in FIG. 4 executes the program or instruction, the process 1000 may be implemented.
  • the process 1000 may utilize one or more additional operations not described below, and/or not be completed by one or more operations discussed below.
  • the order of operations shown in FIG. 10 is not restrictive.
  • the judgment module 430 can input the characteristics of the service request into the judgment model, and the judgment model can output the judgment result of whether the service provider is the party responsible for the service accident.
  • the determination module 430 may input the characteristics 1001 of the service request into the judgment model 1002, and determine the judgment result 1003 of whether the service provider is the party responsible for the service accident through the output of the judgment model 1002.
  • the determination module 430 may also obtain the target responsibility scenario 1004 corresponding to the service provider based on the first preset rule 1005. For more details about the first preset rule, please refer to FIG. 11 and related descriptions, which will not be repeated here.
  • the judgment model 1002 may be a classification model.
  • the classification model may include, but is not limited to, a KNN (k-nearest neighbors) model, perceptron model, naive Bayes model, decision tree model, logistic regression model, support vector machine model, random forest model, neural network model, etc., or any combination thereof.
  • Fig. 11 is a flowchart of an exemplary process of determining a target responsibility scenario according to some embodiments of the present specification.
  • the process 1100 may be executed by a processing device (for example, the determination module 430) or other processing devices.
  • the process 1100 may be stored in a storage device (for example, the storage device 130 or a storage unit of a processing device) in the form of a program or instruction.
  • when the processor 220 or the module shown in FIG. 4 executes the program or instruction, the process 1100 may be implemented.
  • the process 1100 may utilize one or more additional operations not described below, and/or not be completed by one or more operations discussed below.
  • the order of operations shown in FIG. 11 is not restrictive.
  • Step 1110 Process the characteristics of the service request based on the first preset rule, and determine at least one candidate responsibility scenario.
  • the characteristics of the service request may include the basic information of the service request, the portrait information of the service provider, the portrait information of the service requester, the communication information between the service provider and the service requester, etc., or any combination thereof.
  • the first preset rule may be a screening, analysis, or judgment condition related to the characteristics of the service request.
  • the first preset rule may be a standard operating procedure (Standard Operation Procedure, SOP) used to determine the responsibility scenario of the service provider.
  • the SOP may be a set of service provider responsibility judgment standards implemented for the service request complained by the service requester (for which a service accident has occurred).
  • cluster analysis may be performed on related information (for example, complaint information corresponding to historical service requests) of sample service requests (for example, historical service requests) to abstract the service provider's responsibility scenarios, thereby determining the first preset rule.
  • the characteristics of historical service requests may be analyzed by a clustering algorithm, similar characteristics may be clustered into groups, and the corresponding responsibility scenarios may be abstracted based on the grouped characteristics.
  • the clustering algorithm may include k-means clustering algorithm, fuzzy c-means clustering algorithm, hierarchical clustering algorithm, Gaussian clustering algorithm, minimum spanning tree (MST)-based clustering algorithm, kernel k-means Clustering algorithm, density-based clustering algorithm, etc.
  • the first preset rule may also be a preset judgment experience.
  • the first preset rule may be a judgment experience determined based on historical judgment data.
  • the first preset rule can be adjusted according to different situations. For example, different cities or regions may correspond to different first preset rules; different time periods may correspond to different first preset rules, and so on.
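  • As an illustration of abstracting responsibility scenarios from historical service requests through cluster analysis, the sketch below applies k-means to a placeholder feature matrix; the features, cluster count, and scenario names are assumptions, not part of this specification:

```python
# Sketch: cluster features of historical service requests so that each
# cluster can be abstracted into a candidate responsibility scenario.

import numpy as np
from sklearn.cluster import KMeans

historical_features = np.random.rand(500, 6)   # e.g. waiting time, distance, response delay, ...
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(historical_features)

# Each cluster would then be inspected and named, e.g.
# 0 -> "service provider did not arrive on time", 1 -> "provider answered the call late", ...
print(np.bincount(kmeans.labels_))              # cluster sizes
```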
  • Step 1120 Determine the target responsibility scenario corresponding to the service provider according to the respective priorities of the at least one candidate responsibility scenario.
  • the priority may be related to the complaint conversion rate, the number of complaints, the complaint method, and the like.
  • the complaint conversion rate corresponding to the candidate responsibility scenario refers to the proportion of service requests complained about by the service requester among the service requests cancelled in the candidate responsibility scenario. For example, suppose the number of service requests cancelled in the candidate responsibility scenario within a preset time period (for example, the past 3 months) is N, where the number of service requests complained about by the service requester is X; then the complaint conversion rate is X/N. It can be understood that the higher the complaint conversion rate, the higher the priority of the responsibility scenario. For example, if the complaint conversion rate of candidate responsibility scenario A is 20% and the complaint conversion rate of candidate responsibility scenario B is 10%, then the priority of candidate responsibility scenario A is higher than that of candidate responsibility scenario B.
  • the number of complaints corresponding to the candidate responsibility scenario refers to the number of complaints from the service requester in the candidate responsibility scenario (that is, the number of service requests complained by the service requester among the cancelled service requests) .
  • continuing the above example, the number of complaints is X. It can be understood that the greater the number of complaints, the higher the priority of the responsibility scenario.
  • the number of complaints may also include the average number of complaints (for example, the number of complaints per day).
  • the method of complaint refers to the method used by the service requester when making a complaint.
  • for example, the complaint method may include video complaints, voice complaints, on-site complaints, text complaints, etc. The complaint method can reflect the degree of the service requester's complaint about the service request. It can be understood that the more intuitive the complaint method, the higher the corresponding complaint degree. For example, video complaints are more intuitive than voice complaints, and the complaint degree of video complaints is higher than the complaint degree of voice complaints.
  • the processing device may obtain historical data of at least one candidate responsibility scenario from one or more of the storage device 130, the service provider terminal 150, and the service requester terminal 140.
  • the processing device may determine the complaint conversion rate, the number of complaints, the complaint method, etc., respectively corresponding to at least one candidate responsibility scenario based on the historical data.
  • the processing device may determine the priority corresponding to the at least one candidate responsibility scenario based on one or more of the complaint conversion rate, the number of complaints, and the complaint manner. For example, the processing device may determine the respective priorities of at least one candidate responsibility scenario based only on the complaint conversion rate. The higher the complaint conversion rate, the higher the priority.
  • the processing device may determine the respective priorities of at least one candidate responsibility scenario based only on the number of complaints. The more complaints, the higher the priority. For another example, the processing device may determine the respective priorities of at least one candidate responsibility scenario based on the complaint mode only. The more intuitive the complaint method, the higher the priority. For another example, the processing device may determine the priority corresponding to at least one candidate responsibility scenario based on multiple of the complaint conversion rate, the number of complaints, the complaint method, etc., where the complaint conversion rate, the number of complaints, the complaint method, etc. may correspond to different weights. The weight can be the default value of the system, or it can be adjusted according to different situations.
  • the priority may be preset.
  • the preset priority can be stored in a storage device (for example, the storage device 130) described elsewhere in this specification.
  • the processing device can access the storage device and read the priority corresponding to the at least one candidate responsibility scenario from the storage device.
  • the priority can also be determined in other ways.
  • the priority can be abstracted through the first preset rule or other judgment experience. This specification does not limit the way of determining the priority.
  • the priority can be dynamically updated. For example, taking a specific candidate responsibility scenario as an example, as time goes by, the number of cancelled service requests (and service requests that are complained about) in the candidate responsibility scenario changes accordingly. Parameters related to the priority of the candidate responsibility scenario (for example, the conversion rate of complaints, the number of complaints, and the way of complaining) are also changing accordingly.
  • the priority of the candidate responsibility scene can also be dynamically updated to more accurately determine the final target responsibility scene.
  • the priority may be updated at every fixed time period. The fixed time period can be one hour, one day, one week, etc.
  • the priority may not be updated at a fixed frequency, but updated when a preset condition is met, for example, the increase in the number of cancelled service requests in a certain candidate responsibility scenario exceeds a preset threshold.
  • the processing device may determine the candidate responsibility scenario with the highest priority as the target responsibility scenario. For example, the processing device may rank at least one candidate responsibility scene based on priority, and determine the highest ranked candidate responsibility scene as the target responsibility scene. In some embodiments, the processing device may determine a candidate responsibility scenario whose priority exceeds a preset threshold as the target responsibility scenario.
  • in some embodiments of this specification, at least one candidate responsibility scenario may be determined based on rules, and the target responsibility scenario may be further determined according to the respective priorities of the at least one candidate responsibility scenario. For example, the candidate responsibility scenario with the highest priority is determined as the target responsibility scenario and fed back to the service provider. Correspondingly, the service provider's recognition of the feedback results can be effectively improved.
  • the processing device can directly determine the target responsibility scenario based on the complaint conversion rate, the number of complaints, the complaint method, etc., without determining the respective priorities of at least one candidate responsibility scenario.
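  • The priority-based selection described above could be sketched as a weighted score over the complaint conversion rate, the number of complaints, and a complaint-method weight, as in the following example; all weights and numbers are hypothetical:

```python
# Sketch: rank candidate responsibility scenarios by a weighted priority score.
# Scenario names, weights, and statistics below are invented for illustration.

candidates = {
    "provider did not arrive on time": {"conversion": 0.20, "count": 120, "method": 0.9},
    "provider answered the call late": {"conversion": 0.10, "count": 300, "method": 0.5},
}
weights = {"conversion": 0.5, "count": 0.3, "method": 0.2}
max_count = max(c["count"] for c in candidates.values())

def priority(c):
    # normalise the complaint count so the three terms are comparable
    return (weights["conversion"] * c["conversion"]
            + weights["count"] * c["count"] / max_count
            + weights["method"] * c["method"])

target_scenario = max(candidates, key=lambda name: priority(candidates[name]))
print(target_scenario)  # the candidate responsibility scenario with the highest priority
```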
  • Fig. 12 is a flowchart of an exemplary process of training a judgment model according to some embodiments of the present specification.
  • the process 1200 may be executed by the processing device 112 (for example, the obtaining module 410) or other processing devices.
  • the process 1200 may be stored in a storage device (for example, the storage device 130 or a storage unit of a processing device) in the form of a program or instruction.
  • when the processor 220 or the module shown in FIG. 4 executes the program or instruction, the process 1200 may be implemented.
  • the process 1200 may utilize one or more additional operations not described below, and/or not be completed by one or more operations discussed below.
  • the order of operations shown in FIG. 12 is not restrictive.
  • Step 1210 Obtain multiple sample service requests.
  • the sample service request may be a historical service request in which service incidents have occurred.
  • the sample service request may be a cancelled historical service request.
  • the sample service request may be a historical service request that was cancelled and complained by the service requester.
  • the historical service request may be a service request within a certain period of time in the past, for example, a service request within 1 month, 3 months, 12 months, and so on.
  • the processing device may obtain sample service requests in a variety of ways from one or more of the storage device (for example, the storage device 130), the service provider terminal 150, and the service requester terminal 140.
  • the processing device can randomly sample one or more sample service requests from the storage device.
  • the processing device may obtain all historical service requests in which service accidents have occurred from the storage device, and establish a sample service request library based on the foregoing historical service requests. The processing device may randomly select one or more sample service requests from the sample service request library.
  • Step 1220 Annotate multiple sample service requests to obtain annotation information corresponding to the multiple sample service requests.
  • the labeling information includes at least whether the sample service provider of the sample service request is the sample responsible party for the service accident.
  • the marked sample service requests can be divided into two categories: one is the service request for which the sample service provider is responsible, and the other is the service request for which the sample service provider is not responsible.
  • the output of the judgment model may be the judgment result of whether the service provider is the party responsible for the service accident.
  • the label information corresponding to the sample service request may only label whether the sample service provider is the sample responsible party for the service accident.
  • the liability judgment model can simultaneously output the judgment result of whether the service provider is the party responsible for the service accident and the corresponding target liability scenario.
  • in addition to the labeling information indicating whether the sample service provider is the sample responsible party (which may be referred to as the "first labeling information"), the labeling information may also include the sample target responsibility scenario corresponding to the sample responsible party (which may be referred to as the "second labeling information").
  • the processing device may annotate multiple sample service requests based on multiple methods. For example, the processing device may mark the sample service request corresponding to the sample service provider being the responsible party as “1”, and mark the sample service request corresponding to the sample service provider not being the responsible party as “0”.
  • the processing device may determine the annotation information corresponding to the multiple sample service requests based on the second preset rule. For example, the label information corresponding to the sample service request cancelled due to "driver being late" is "the responsible party of the sample is the sample service provider", and the “sample target responsibility scenario is that the sample service provider is late". As described in conjunction with FIG. 11, the second preset rule and the first preset rule may be the same or different.
  • multiple methods such as manual labeling and model labeling may be used to label the sample service request. This specification does not limit the marking method of sample service requests.
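  • A minimal sketch of rule-based labeling in the spirit of the second preset rule is given below; the cancellation reasons, labels, and scenario names are invented for illustration:

```python
# Sketch: map a (hypothetical) cancellation reason to the first label
# (1 = sample service provider responsible, 0 = not) and, optionally, a
# sample target responsibility scenario.

LABELING_RULES = {
    "driver being late": (1, "the sample service provider is late"),
    "passenger entered wrong destination": (0, None),
}

def label_sample(cancellation_reason):
    return LABELING_RULES.get(cancellation_reason, (0, None))

print(label_sample("driver being late"))  # -> (1, 'the sample service provider is late')
```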
  • Step 1230 Extract sample features corresponding to the multiple sample service requests respectively.
  • the sample characteristics may include basic information of the sample service request, portrait information of the sample service provider, portrait information of the sample service requester, communication information between the sample service provider and the sample service requester, etc., or any combination thereof .
  • Step 1240 training to obtain a judgment model based on the annotation information and the sample characteristics corresponding to the multiple sample service requests, respectively.
  • the judgment model is used to determine whether the service provider is the party responsible for the service accident.
  • the sample service request for which the sample service provider is the sample responsible for the service accident can be taken as a positive sample, and the service request for which the sample service provider is not the sample responsible for the service accident can be taken as a negative sample.
  • model training can be performed based on a preset algorithm to obtain a judgment model.
  • the preset algorithm may include an extreme gradient boosting algorithm (eXtreme Gradient Boosting, Xgboost), a support vector machine algorithm (Support Vector Machine, SVM), a random forest algorithm (Random Forest, RF), and the like.
  • the preset algorithms may also include neural network algorithms, sorting algorithms, regression algorithms, instance-based algorithms, normalization algorithms, decision tree algorithms, Bayesian algorithms, clustering algorithms, association rule algorithms, Deep learning algorithms, dimensionality reduction algorithms, etc. or any combination thereof.
  • Neural network algorithms may include recurrent neural network, perceptron neural network, back propagation, Hopfield network, self-organizing map (SOM), learning vector quantization (LVQ), and so on.
  • Regression algorithms can include ordinary least squares, logistic regression, stepwise regression, multivariate adaptive regression splines, locally estimated scatterplot smoothing, and so on.
  • Sorting algorithms may include insertion sort, selection sort, merge sort, heap sort, bubble sort, shell sort, comb sort, count sort, bucket sort, radix sort, etc.
  • Example-based algorithms may include K nearest neighbor (KNN), learning vector quantization (LVQ), self-organizing map (SOM), and so on.
  • the normalization algorithm may include ridge regression, lasso algorithm (LASSO), elastic network, and so on.
  • Decision tree algorithms can include classification and regression trees (CART), Iterative Dichotomiser 3 (ID3), chi-square automatic interaction detection (CHAID), decision stumps, multivariate adaptive regression splines (MARS), gradient boosting machines (GBM), and so on.
  • Bayesian algorithms may include naive Bayesian algorithms, averaged one-dependence estimators (AODE), or Bayesian belief networks (BBN), etc.
  • Kernel-based algorithms can include radial basis function (RBF) or linear discriminant analysis (LDA), and so on.
  • Clustering algorithms can include k-means clustering algorithm, fuzzy c-means clustering algorithm, hierarchical clustering algorithm, Gaussian clustering algorithm, minimum spanning tree (MST)-based clustering algorithm, kernel k-means clustering algorithm, density-based Clustering algorithm and so on.
  • the association rule algorithm may include the Apriori algorithm or the equivalence class transformation (Eclat) algorithm.
  • Deep learning algorithms may include restricted Boltzmann machines (RBM), deep belief networks (DBN), convolutional networks, stacked autoencoders, and so on.
  • the dimensionality reduction algorithm may include principal component analysis (PCA), partial least square regression (PLS), Sammon mapping, multidimensional scaling (MDS), projection pursuit, etc.
  • the judgment model may be a supervised learning model.
  • the processing device may obtain the judgment model based on the algorithm training used for training the supervised learning model.
  • Exemplary algorithms may include the gradient boosting decision tree (GBDT) algorithm, decision tree algorithm, random forest algorithm, logistic regression algorithm, support vector machine (SVM) algorithm, naive Bayes algorithm, adaptive boosting algorithm, K-nearest neighbor (KNN) algorithm, Markov chain algorithm, etc., or any combination thereof.
  • the judgment model may be an unsupervised learning model.
  • the processing device may obtain the judgment model based on the algorithm training used to train the unsupervised learning model.
  • Exemplary algorithms may include the k-means clustering algorithm, hierarchical clustering algorithm, density-based spatial clustering of applications with noise (DBSCAN) algorithm, self-organizing map algorithm, etc., or any combination thereof.
  • the judgment model may be a reinforcement learning model.
  • the processing device may obtain the judgment model based on the algorithm training used to train the reinforcement learning model.
  • Exemplary algorithms may include deep reinforcement learning algorithms, inverse reinforcement learning algorithms, apprentice learning algorithms, etc., or any combination thereof.
  • the processing device may train the initial judgment model to determine the judgment model.
  • the initial judgment model may be stored in a storage device (for example, the database 130) or other memory (for example, ROM or RAM).
  • when a preset condition is met, the training ends.
  • the preset condition may be that the result of the loss function converges or is less than a preset threshold, the number of iterations reaches the preset number, and so on.
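  • The iterative training and stopping conditions described above can be sketched with a toy gradient-descent loop; the model (a simple logistic regression), the data, the learning rate, and the thresholds are illustrative stand-ins rather than the actual judgment model:

```python
# Sketch: update parameters iteratively until the loss converges below a
# preset threshold or a preset number of iterations is reached.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # sample features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # stand-in labels (provider responsible or not)

w, b = np.zeros(4), 0.0
lr, max_iter, loss_threshold = 0.1, 1000, 1e-3
prev_loss = np.inf

for iteration in range(max_iter):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    if abs(prev_loss - loss) < loss_threshold:  # preset condition: loss has converged
        break
    prev_loss = loss
    grad = p - y
    w -= lr * (X.T @ grad) / len(y)             # iterative parameter update
    b -= lr * grad.mean()

print(f"stopped after {iteration} iterations, loss={loss:.4f}")
```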
  • the model training process constructs the mapping relationship between the characteristics of a service request and whether the service provider is responsible for the service accident. Correspondingly, after the model training is completed, it can be determined, according to the mapping relationship and the characteristics of any service request, whether the service provider of the service request is the party responsible for the service accident. This can effectively improve the accuracy and reliability of the judgment result of the service request and, at the same time, improve the judgment efficiency for the service request.
  • in some embodiments of this specification, the cancelled historical service requests are obtained as the sample service requests, the samples are marked, and the responsibility judgment model is trained according to the characteristics of each marked sample service request, which can make the obtained judgment model more reliable. Further, when whether the service provider of a service request is responsible is judged according to the judgment model, the accuracy and reliability of the judgment result are better, thereby improving the service experience of the service provider and the service requester.
  • FIG. 13 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of this specification.
  • the process 1300 may be executed by the processing device 112 or other processing devices.
  • the process 1300 may be stored in a storage device (for example, the storage device 130 or a storage unit of a processing device) in the form of a program or instruction.
  • when the processor 220 or the module shown in FIG. 4 executes the program or instruction, the process 1300 may be implemented.
  • the process 1300 may utilize one or more additional operations not described below, and/or not be completed by one or more operations discussed below.
  • the order of operations shown in FIG. 13 is not restrictive.
  • Step 1310 Obtain a service request. In some embodiments, this step 1310 may be performed by the obtaining module 410. For more details about obtaining the service request, refer to step 510 and its related description, which will not be repeated here.
  • Step 1320 Extract the characteristics of the service request, where the characteristics include at least communication information. In some embodiments, this step 1320 may be performed by the extraction module 420. For more details about extracting the characteristics of the service request, refer to step 520 and its related description, which will not be repeated here.
  • Step 1330 Process the features based on the judgment model, and determine the responsibility judgment result of the service accident. In some embodiments, this step 1330 may be performed by the determination module 430. For more details on determining the responsibility determination result of the service accident, refer to step 530 and its related description, which will not be repeated here.
  • FIG. 14 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of this specification.
  • the process 1400 may be executed by the processing device 112 or other processing devices.
  • the process 1400 may be stored in a storage device (for example, the storage device 130 or a storage unit of a processing device) in the form of a program or instruction.
  • when the processor 220 or the module shown in FIG. 4 executes the program or instruction, the process 1400 may be implemented.
  • the process 1400 may utilize one or more additional operations not described below, and/or not be completed by one or more operations discussed below.
  • the order of operations shown in FIG. 14 is not restrictive.
  • Step 1410 Obtain a service request.
  • this step 1410 may be performed by the obtaining module 410.
  • For more details about obtaining the service request, refer to step 510 and its related description, which will not be repeated here.
  • Step 1420 Extract the characteristics of the service request.
  • this step 1420 may be performed by the extraction module 420.
  • Step 1430 Process the characteristics based on the liability model and determine the result of the responsibility judgment of the service accident.
  • the result of the responsibility judgment includes: whether the service provider of the service request is the responsible party for the service accident, and, if the service provider is the responsible party, the target responsibility scenario corresponding to the service provider.
  • this step 1430 may be performed by the determination module 430. For more details on determining the responsibility determination result of the service accident, refer to step 530 and its related description, which will not be repeated here.
  • FIG. 15 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of this specification.
  • the process 1500 may be executed by the processing device 112 or other processing devices.
  • the process 1500 may be stored in a storage device (for example, the storage device 130 or a storage unit of a processing device) in the form of a program or instruction.
  • when the processor 220 or the module shown in FIG. 4 executes the program or instruction, the process 1500 may be implemented.
  • the process 1500 may utilize one or more additional operations not described below, and/or not be completed by one or more operations discussed below.
  • the order of operations shown in FIG. 15 is not restrictive.
  • the service request is made through an order.
  • the service request can be initiated by initiating an order.
  • the evaluation of the order is actually the evaluation of the service request.
  • the basic information of the service request may include the order information.
  • a service accident occurs in the service request, which may cause the service request to be cancelled.
  • determining responsibility for service incidents may include the following steps:
  • Step 1510 Obtain the order information of the cancelled service request and the communication information between the service provider and the service requester corresponding to the cancelled service request.
  • the order information is the basic information of the order.
  • the order information may include one or a combination of the following: driving distance, passenger boarding location, order location, order time, order cancellation time, and so on.
  • Step 1520 based on the order information and communication information, use the pre-stored judgment model to determine the responsibility of the cancelled service request.
  • the pre-stored judgment model is a machine learning model that has been trained or established in advance. Regarding the training or establishment of the judgment model, refer to Figure 17 and its related descriptions, which will not be repeated here.
  • Some embodiments of this specification provide a method for determining responsibility for a service accident. After determining that a cancelled service request has occurred, the order information of the cancelled service request and the communication information between the service provider (for example, the driver) and the service requester (for example, the passenger) are obtained, and then, based on the order information and the communication information, the pre-stored responsibility determination model is used to determine the responsibility of the cancelled service request, for example, to determine whether the driver's cancellation of the service request is the driver's responsibility.
  • the service requester and the service provider can use the chat function built into the app to send information to each other. For example, the driver and the passenger negotiate about possible changes such as the pick-up point and the pick-up time, thereby forming the communication information between the driver and the passenger.
  • FIG. 16 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of this specification.
  • the process 1600 may be executed by a processing device (for example, the processing device 112 or other processing devices).
  • the process 1600 may be stored in a storage device (for example, the storage device 130 or a storage unit of a processing device) in the form of a program or instruction.
  • when the processor 220 or the module shown in FIG. 4 executes the program or instruction, the process 1600 may be implemented.
  • the process 1600 may utilize one or more additional operations not described below, and/or not be completed by one or more operations discussed below.
  • the order of operations shown in FIG. 16 is not restrictive.
  • the basic information of the service request may include order information.
  • a service accident occurs in the service request, which may cause the service request to be cancelled.
  • determining responsibility for service incidents may include the following steps:
  • Step 1610 establish a judgment model.
  • Step 1620 Obtain the order information of the cancelled service request and the communication information between the service provider and the service requester corresponding to the cancelled service request. For details, see step 1510, which will not be repeated here.
  • Step 1630 Based on the order information and communication information, the responsibility judgment model is used to determine the responsibility of the cancelled service request.
  • a judgment model for determining the responsibility of the service provider for canceling the service request is established. Specifically, a large number of sample service requests are obtained, and the communication information, order information, and historical information of the sample service request provider and the sample service requester are used as features to train the judgment model to help the judgment model cancel service requests in various scenarios Make the correct judgment.
  • the judgment model may be XGBoost (Extreme Gradient Boosting). For details, refer to Figure 17 and related descriptions, which will not be repeated here.
  • FIG. 17 is an exemplary flowchart of a method for establishing a judgment model according to some embodiments of the present specification. As can be seen from Figure 15 and Figure 16 and related descriptions, responsibility judgments can be made on the processing of order information and communication information based on the judgment model. Correspondingly, the establishment or training of the judgment model in the process 1700 may include the following steps:
  • Step 1710 Obtain sample order information of the sample service request, and sample communication information between the sample service provider and the sample service requester corresponding to the sample service request.
  • Step 1720 Obtain pre-stored historical information.
  • the portrait information of the service provider may be labeled information abstracted based on the service provider’s social attributes, living habits, service provision behavior, or historical information, etc.
  • the portrait information of the service requester refers to the labeled information abstracted from information such as the service requester's social attributes, living habits, service requesting behavior, or historical information.
  • the pre-stored historical information may be historical information of the service requester and/or historical information of the service provider.
  • the historical information includes one or a combination of the following: driver service score, driver complaint rate, passenger credit score, and the like.
  • the driver's service score, the driver's complaint rate, and the passenger credit score may be obtained within a preset interval, for example, the driver's service score in the past month, the rate of complaints from passengers in the past month, and so on.
  • Step 1730 Establish a judgment model based on the sample order information of the sample service request, the sample communication information and historical information between the sample service provider and the sample service requester corresponding to the sample service request.
  • in some embodiments, establishing the judgment model may include: performing word segmentation processing on the sample communication information between the sample service requester and the sample service provider corresponding to the sample service request to obtain multiple words; converting each word into a word vector; converting the multiple word vectors into a text vector; and establishing a responsibility judgment model based on the text vector, the sample order information, and the historical information of the sample service requests.
  • the sample communication information is segmented to obtain the word vector of each word.
  • the word vectors are spliced into a two-dimensional matrix according to the appearance order of the words, and then the text vector of the sample communication information is obtained through convolution and maximum pooling.
  • the text vector of the sample communication information is combined with the sample order information and historical information to predict whether the service provider (for example, the driver) is responsible.
  • in some embodiments, before converting each word into a word vector, the method further includes: marking the communicator corresponding to each of the multiple words; and/or performing validity filtering on the multiple words.
  • in some embodiments, before converting each word into a word vector, the word may be marked, that is, marking whether the speaker of the word is the sample service provider (for example, a driver) or the sample service requester (for example, a passenger), or marking the time of the word. Alternatively, meaningless words (that is, words with low importance, such as "hello") may be removed, that is, redundant text content is discarded, so as to improve the speed and accuracy of model training.
  • establishing a judgment model includes the following steps:
  • Step 1 Preprocess the sample communication content of the sample communication information between the sample service provider and the sample service requester.
  • the sample communication content needs to be segmented first, that is, "2019/12/14 12:01 driver: hello, I am about to arrive at the pick-up point" is divided into "2019/12/14 12:01", "Driver", "Hello", ",", "I", "Coming soon", "Arrival", "Boarding point". Secondly, it is necessary to mark the timestamp of the text content after word segmentation, mark whether the speaker is the sample service provider or the sample service requester, and remove meaningless words (for example, "hello"), that is, discard redundant text content.
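  • A rough Python sketch of this preprocessing step is shown below; the parsing pattern and the stop-word list are assumptions made only for this example:

```python
# Sketch: split one communication line into a timestamp, a speaker tag, and
# words, then drop meaningless words. Parsing pattern and stop words assumed.

import re

STOP_WORDS = {"hello", ","}

def preprocess(line):
    # expected layout: "<date> <time> <speaker>: <content>"
    m = re.match(r"(\S+ \S+)\s+(\w+):\s*(.*)", line)
    timestamp, speaker, content = m.group(1), m.group(2), m.group(3)
    words = [w for w in re.split(r"[\s,]+", content) if w and w.lower() not in STOP_WORDS]
    return {"timestamp": timestamp, "speaker": speaker, "words": words}

print(preprocess("2019/12/14 12:01 driver: hello, I am about to arrive at the pick-up point"))
# -> {'timestamp': '2019/12/14 12:01', 'speaker': 'driver',
#     'words': ['I', 'am', 'about', 'to', 'arrive', 'at', 'the', 'pick-up', 'point']}
```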
  • Step 2 Convert the word into a digital vector.
  • through the word2vec model and the text content after word segmentation, the word vector of each word is obtained, where the word2vec model is a model used to generate word vectors.
  • the specific principle of the word2vec model is shown in Figure 7 and its related descriptions, which will not be repeated here.
  • Step 3 Pre-training of the convolutional neural network.
  • text information is used on the marked training set to predict whether the service provider is ultimately responsible.
  • the purpose is to convert all word vectors of a complete communication message into a text vector.
  • the specific method is as follows: the communication information corresponding to a service request is processed in steps 1 and 2 to obtain valid words and the corresponding word vectors.
  • the target word segmentation result includes valid words.
  • Step 4 Add the text characteristics of the sample communication information of the sample service provider and the sample service requester to the initial judgment model for training.
  • the text feature of the sample communication information may include a text vector of the sample communication information.
  • XGBoost is used as the model for predicting whether the service provider is responsible, and the text vector of length C obtained from the sample communication information through steps 1 to 3, together with the sample order information and historical information, is added to the model as sample features for training.
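  • The following sketch illustrates this step with the xgboost package (assumed to be available): the text vector is concatenated with order information and historical information and used to fit an XGBoost classifier; all feature values, dimensions, and hyperparameters are made up for illustration:

```python
# Sketch: concatenate text vector (length C), order info, and historical info
# into sample features and train an XGBoost classifier. Synthetic data only.

import numpy as np
import xgboost as xgb

n_samples, C = 300, 16
text_vectors = np.random.rand(n_samples, C)    # from steps 1 to 3
order_info = np.random.rand(n_samples, 4)      # e.g. distance, times, locations
history_info = np.random.rand(n_samples, 3)    # e.g. service score, complaint rate, credit score
labels = np.random.randint(0, 2, n_samples)    # 1 = service provider responsible

sample_features = np.hstack([text_vectors, order_info, history_info])

model = xgb.XGBClassifier(n_estimators=100, max_depth=4)
model.fit(sample_features, labels)

print(model.predict_proba(sample_features[:1]))  # probability of responsibility
```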
  • Fig. 18 is a block diagram of an apparatus for determining responsibility for a service accident according to some embodiments of this specification.
  • the service request processing apparatus 1800 includes: an extraction module 420 and a determination module 430.
  • the extraction module 420 is used to extract the characteristics of the service request.
  • the extraction module 420 may be used to obtain the order information of the cancelled order and the communication information between the service requester and the service provider corresponding to the cancelled order.
  • the determination module 430 is used to determine the responsibility determination result.
  • the determination module 430 may use a pre-stored responsibility determination model to determine the responsibility of the cancelled service request based on the order information and communication information.
  • the apparatus 1800 obtains the order information of the cancelled service request and the communication information between the service provider (for example, the driver) and the service requester (for example, the passenger) after determining that a cancelled service request has occurred, and then, according to the order information and the communication information, uses the pre-stored responsibility judgment model to determine the responsibility of the cancelled service request, for example, to determine whether the driver's cancellation of the order is the driver's responsibility.
  • Fig. 19 is a block diagram of an apparatus for determining responsibility for a service accident according to some embodiments of this specification.
  • the service request processing apparatus 1800 includes: an extraction module 420, a determination module 430, and a model establishment module 1910.
  • the obtaining module 410 may be used to obtain a judgment model.
  • the acquisition module 410 may include multiple sub-modules.
  • the acquisition module 410 may include a model establishment module 1910.
  • the model establishment module 1910 can establish a judgment model.
  • the model establishment module 1910 can obtain sample order information of the sample service request, sample communication information between the sample service provider and the sample service requester corresponding to the sample service request , Obtain pre-stored historical information, and establish a responsibility judgment model based on sample order information, sample communication information and historical information.
  • For more details about establishing the judgment model by the model establishment module 1910, refer to FIG. 17 and its related description, which will not be repeated here.
  • FIG. 20 is a flowchart of an exemplary process of training a judgment model according to some embodiments of the present specification.
  • the subject of execution of the process 2000 may be a computer, a server, and other devices with data processing functions.
  • the sample service request may be a historical service request that was cancelled and complained by the service requester.
  • Training the judgment model may include the following steps:
  • Step 2010 According to the service complaint information of the service requester, from the multiple cancelled historical service requests, determine the service request being complained as a sample service request.
  • In the process of requesting a service, the service requester may be dissatisfied with the requested service for various reasons, and may therefore complain about the requested service through the service requester's terminal device, generating service complaint information.
  • the service complaint information generated for a large number of service requesters may be stored in the database (for example, the storage device 130) of the service platform, so as to be used as historical data for reference during big data analysis.
  • the service complaint information of the service requester for multiple cancelled historical service requests may be obtained from a database (for example, the storage device 130).
  • For example, service request A, service request B, and service request C have each been complained about by at least one service requester in a past period of time; that is, service request A, service request B, and service request C all correspond to service complaint information. Therefore, the multiple service requests with service complaint information can be used as sample service requests.
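  • As a small illustration of this selection step, assuming the cancelled historical service requests and the complaint records are held in two pandas DataFrames (the column names used here are assumptions, not part of this specification), the sample service requests can be picked out as follows:

```python
import pandas as pd

cancelled = pd.DataFrame({"request_id": ["A", "B", "C", "D"],
                          "cancel_time": ["2019-12-01", "2019-12-02", "2019-12-03", "2019-12-04"]})
complaints = pd.DataFrame({"request_id": ["A", "A", "C"],
                           "complaint_text": ["driver late", "no answer", "wrong pickup"]})

# A cancelled request that appears in the complaint records becomes a sample service request.
sample_requests = cancelled[cancelled["request_id"].isin(complaints["request_id"])]
print(sample_requests["request_id"].tolist())   # ['A', 'C']
```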
  • Step 2020: The sample service request is marked according to the first preset rule, so that the sample service request carries first label information, and the first label information is used to indicate whether the sample service provider is the sample responsible party for the service accident.
  • The first preset rule may be a standard operating procedure (SOP).
  • For more details about the first preset rule, refer to step 1110; for more details about the first label information, refer to step 1220; these will not be repeated here.
  • Step 2030 Perform feature extraction on the sample service request to obtain sample features of the sample service request.
  • the marked sample service requests can be divided into two types, one is the sample service request for which the sample service provider is responsible, and the other is the sample service request for which the sample service provider is not responsible.
  • Each type of sample service request can include multiple sample service requests.
  • the sample characteristics of each annotated sample service request can be obtained.
  • For more details about the sample characteristics, refer to step 1230, which will not be repeated here.
  • Step 2040 Perform model training according to the first label information and sample characteristics to obtain a judgment model, which is used to determine whether the service provider is the party responsible for the service accident.
  • For more details about model training, please refer to step 1240, which will not be repeated here.
  • In the training method of the judgment model provided by some embodiments of this specification, historical service requests that were cancelled and have service complaints are obtained and used as sample service requests, the samples are labeled, and the judgment model is obtained by training on the sample characteristics of each labeled sample service request. This makes the obtained judgment model more reliable. Furthermore, when the judgment model is used to judge whether a service request is the provider's responsibility, the accuracy and reliability of the judgment result are better, thereby improving the service experience of the service provider and the service requester.
  • FIG. 21 is a flowchart of an exemplary process of training a judgment model according to some embodiments of the present specification.
  • the subject of execution of the process 2100 may be a computer, a server, and other devices with data processing functions.
  • the sample service request may be a historical service request that was cancelled and complained by the service requester.
  • Training the judgment model may include the following steps:
  • Step 2010 According to the service complaint information of the service requester, from the multiple cancelled historical service requests, determine the service request being complained as a sample service request.
  • Step 2020: The sample service request is marked according to the first preset rule, so that the sample service request carries first label information, and the first label information is used to indicate whether the service provider is the party responsible for the service accident.
  • Step 2110: Mark the sample service request according to the responsibility scenario corresponding to the first preset rule, so that the sample service request carries second label information, which is used to indicate the sample target responsibility scenario of the sample service provider.
  • The service provider's responsibility scenario means that, for a service request that is cancelled and complained about by the service requester, when it is determined that the service request was cancelled due to the service provider, the scenario reflects the reason why the service provider caused the service request to be cancelled and complained about.
  • For more details about the second label information, refer to step 1220 and its related description, which will not be repeated here.
  • Step 2030 Perform feature extraction on the sample service request to obtain sample features of the sample service request.
  • Step 2120: Perform model training according to the first label information, the second label information, and the sample characteristics to obtain a judgment model. The judgment model is used to determine whether the service provider is the party responsible for the service accident, and is also used to determine the target responsibility scenario of the service provider corresponding to the service request.
  • By training the judgment model with the responsibility-scenario label information contained in the sample service requests, when the responsibility determination result indicates that the party responsible for the service accident is the service provider, the corresponding target responsibility scenario of the service provider can be generated at the same time and fed back to the service provider, so that the service provider promptly knows why it is held responsible for the service request it provided. On the one hand, this can increase the service provider's acceptance of the responsibility determination; on the other hand, it can also help the service provider improve its own services and provide better services.
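  • One possible realization of such a model, sketched here on synthetic data, is a pair of classifiers over the same sample features: a binary classifier trained on the first label information and a multi-class classifier trained on the second label information, consulted only when the provider is judged responsible. The data, scenario codes, and the choice of GradientBoostingClassifier are illustrative assumptions, not the implementation fixed by this specification.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                      # sample features
first_label = rng.integers(0, 2, size=500)          # provider responsible or not
scenario = rng.integers(0, 3, size=500)             # e.g. 0=late answer, 1=late arrival, 2=other

responsible_model = GradientBoostingClassifier().fit(X, first_label)
mask = first_label == 1                             # scenario model trained on responsible samples only
scenario_model = GradientBoostingClassifier().fit(X[mask], scenario[mask])

x_new = X[:1]
if responsible_model.predict(x_new)[0] == 1:
    print("target responsibility scenario:", scenario_model.predict(x_new)[0])
```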
  • each second annotation information corresponds to a sample candidate responsibility scenario and the priority of the sample candidate responsibility scenario of the sample service provider.
  • One responsibility scenario may correspond to multiple service requests; for example, multiple service requests may all correspond to the scenario in which the service provider does not answer the call in time.
  • Conversely, one service request may also correspond to multiple responsibility scenarios; that is, the second label information of a sample service request may include multiple sample candidate responsibility scenarios.
  • For example, the responsibility scenarios corresponding to service request A may include: the service provider fails to answer the call in time, and the service provider fails to arrive at the destination on time.
  • Before marking the sample service request according to the responsibility scenario corresponding to the first preset rule, the method may further include: determining the priority of each responsibility scenario. For more details on determining the priority of a responsibility scenario, refer to step 1120.
  • In this way, according to the priorities of the responsibility scenarios corresponding to a service request, the responsibility scenario with the highest priority can be fed back to the service provider, thereby effectively improving the service provider's acceptance of the feedback result.
  • FIG. 22 is a flowchart of an exemplary process of determining responsibility for a service accident according to some embodiments of the present specification.
  • the execution subject of the process 2200 may also be a terminal, a server, and other devices with data processing functions.
  • the device that executes the process 2200 and the device that executes the training method of the judgment model may be the same device or different devices.
  • determining responsibility for service incidents may include the following steps:
  • Step 2210 Obtain the cancelled service request.
  • For more details about obtaining the cancelled service request, please refer to step 510, which will not be repeated here.
  • Step 2220 Use the pre-trained judgment model to process the cancelled service request, and determine the responsibility judgment result of the cancelled service request.
  • The responsibility determination result may include responsibility indication information, which is used to indicate whether the service provider is the party responsible for the cancelled service request; the judgment model is a model trained using the training method of the judgment model described above (for example, in FIG. 20 or FIG. 21).
  • the characteristics of the canceled service request may be extracted, and the extracted characteristics may be input into a pre-trained judgment model to calculate the probability that the party responsible for the canceled service request is the service provider.
  • When using the judgment model to predict whether the party responsible for the service request is the service provider, a responsibility probability threshold also needs to be preset. The calculated probability that the service provider corresponding to the cancelled service request is the responsible party is compared with the preset responsibility probability threshold; if that probability meets the preset threshold, the responsibility determination result is that the service provider is the responsible party for the cancelled service request.
  • The preset responsibility probability threshold can be set according to the actual application, and is not specifically limited here.
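  • The comparison against the preset threshold can be expressed in a few lines, as in the sketch below; the threshold value 0.6 used here is only an assumed example and is not specified by this document.

```python
def judge_responsibility(prob_provider_responsible, threshold=0.6):
    """Return True when the probability output by the judgment model meets the
    preset responsibility probability threshold (0.6 is an assumed example value)."""
    return prob_provider_responsible >= threshold

print(judge_responsibility(0.72))   # True: the provider is judged to be the responsible party
print(judge_responsibility(0.35))   # False: the provider is not judged responsible
```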
  • The responsibility determination result may also include responsibility scenario indication information, which is used to indicate the target responsibility scenario of the service provider.
  • Based on the correspondence between the characteristics of service requests and responsibility scenarios learned during the training process, the judgment model can also determine the target responsibility scenario of the service provider corresponding to the cancelled service request.
  • The responsibility scenario indication information may include indication information for multiple candidate responsibility scenarios. Further, according to the priorities of the multiple candidate responsibility scenarios, the candidate with the highest priority is determined to be the target responsibility scenario. When multiple candidate responsibility scenarios share the same highest priority, all of them can be used as target responsibility scenarios.
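  • A short sketch of this priority-based selection follows; the scenario names and priority values are made-up examples used only for illustration.

```python
candidate_scenarios = {
    "provider did not answer the call in time": 2,
    "provider did not arrive at the pickup point on time": 3,
    "provider asked the requester to cancel": 3,
}

top_priority = max(candidate_scenarios.values())
# All candidates sharing the highest priority are kept as target responsibility scenarios.
target_scenarios = [s for s, p in candidate_scenarios.items() if p == top_priority]
print(target_scenarios)
```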
  • the responsibility determination result is returned to the service provider corresponding to the cancelled service request, and the responsibility determination result includes: responsibility indication information and indication information of the target responsibility scenario. Understandably, when the result of the responsibility determination is that the service provider is not the responsible party, there is no indication information corresponding to the target responsibility scenario.
  • FIG. 23 is a flowchart of an exemplary process of training a judgment model according to some embodiments of the present specification.
  • the execution subject of the process 2300 may also be a device with data processing functions such as a terminal and a server.
  • the device that executes the process 2300 and the device that executes the training method of the judgment model may be the same device or different devices.
  • Step 2310 Obtain the appeal information of the service provider corresponding to the cancelled service request.
  • step 2320 the training data of the judgment model is updated according to the result of the responsibility judgment and the appeal information.
  • Step 2330 Perform optimization of the judgment model according to the updated training data.
  • The training data of the judgment model can be updated based on the service provider's appeal information obtained above and the responsibility determination result corresponding to that appeal information, so that the sample data of the sample service requests is continuously expanded.
  • the model can be continuously trained according to the updated data to achieve the effect of model optimization.
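  • The following sketch illustrates, on synthetic data, how an upheld appeal could flip a sample's label and trigger retraining; the data layout, the LogisticRegression stand-in for the judgment model, and the `upheld` flag are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.random.default_rng(0).normal(size=(200, 5))
y_train = np.random.default_rng(1).integers(0, 2, size=200)

def update_and_retrain(X_train, y_train, x_appealed, predicted_label, upheld):
    """Append the appealed request to the training data, flipping its label if the
    appeal was upheld, then retrain the judgment model on the updated data."""
    corrected = 1 - predicted_label if upheld else predicted_label
    X_new = np.vstack([X_train, x_appealed])
    y_new = np.append(y_train, corrected)
    return X_new, y_new, LogisticRegression(max_iter=1000).fit(X_new, y_new)

x_appealed = np.zeros((1, 5))
X_train, y_train, model = update_and_retrain(X_train, y_train, x_appealed,
                                             predicted_label=1, upheld=True)
print(model.predict(x_appealed))
```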
  • The pre-trained judgment model has the same technical effect as the judgment model obtained by the training method provided in the above-mentioned embodiments. Therefore, when responsibility is determined according to the pre-trained judgment model, the accuracy and reliability of the determination result are high, the target responsibility scenario fed back to the service provider is also accurate, and the feedback is persuasive. Furthermore, the model training sample data is updated according to the service provider's appeal results and the responsibility determination results, and model training is carried out based on the updated data, thereby optimizing the judgment model and effectively improving its accuracy in determining responsibility.
  • FIG. 24 is a block diagram of an apparatus for training an exemplary judgment model according to some embodiments of this specification.
  • the apparatus 2400 may include a first determining module 2410, a labeling module 2420, a sample feature acquiring module 2430, and a training module 2440.
  • the obtaining module 410 may be used to obtain the judgment model (for example, obtain the judgment model through training).
  • the acquiring module 410 may also include multiple sub-modules.
  • the acquisition module 410 may include a first determination module 2410, an annotation module 2420, a sample feature acquisition module 2430, and a training module 2440.
  • the first determination module 2410 is configured to determine the service request being complained as a sample service request from among the multiple cancelled historical service requests according to the service complaint information of the service requester. For more details about determining the sample service request, please refer to step 1210 and its related description, which will not be repeated here.
  • The labeling module 2420 is used to label the sample service request according to the first preset rule, so that the sample service request carries first label information, and the first label information is used to indicate whether the sample service provider is the sample responsible party for the service accident.
  • The labeling module 2420 is also used to label the sample service request according to the responsibility scenario corresponding to the first preset rule, so that the sample service request carries second label information, and the second label information is used to indicate the sample target responsibility scenario of the sample service provider.
  • For more details about the first preset rule, the first label information, and the second label information, please refer to FIG. 11 and FIG. 12 and the related descriptions, which will not be repeated here.
  • the sample feature acquisition module 2430 is used to perform feature extraction on the sample service request to obtain the sample feature of the sample service request. For more details about the sample characteristics, please refer to FIG. 12 and related descriptions, which will not be repeated here.
  • the training module 2440 is used to perform model training according to the first label information and sample characteristics to obtain a judgment model, which is used to determine whether the service provider is the party responsible for the service accident.
  • The training module 2440 is further configured to perform model training according to the first label information, the second label information, and the sample characteristics to obtain the judgment model, which is used to determine whether the service provider is the party responsible for the service accident.
  • the first determining module 2410 is further configured to determine the priority of the responsibility scenario according to the complaint conversion rate of the service requester in the responsibility scenario.
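  • For illustration, the complaint conversion rate of a scenario can be computed as the share of cancelled requests in that scenario that were actually complained about, and scenarios can then be ranked by that rate; the counts below are made-up values used only as an example.

```python
scenario_stats = {
    "provider did not answer the call in time": {"cancelled": 500, "complained": 100},
    "provider did not arrive on time":          {"cancelled": 400, "complained": 40},
}

rates = {name: s["complained"] / s["cancelled"] for name, s in scenario_stats.items()}
# A higher complaint conversion rate implies a higher priority for that scenario.
priority_order = sorted(rates, key=rates.get, reverse=True)
print(priority_order)
```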
  • Fig. 25 is a block diagram of an exemplary apparatus for determining responsibility for a service accident according to some embodiments of the present specification.
  • the apparatus 2500 may include an acquisition module 410 and a prediction module 2510.
  • the obtaining module 410 is used to obtain a service request. In some embodiments, the obtaining module 410 may obtain the cancelled service request.
  • the determination module 430 is used to determine the responsibility determination result.
  • the determination module 430 may also include multiple sub-modules.
  • the determination module 430 may include a prediction module 2510.
  • the prediction module 2510 is configured to use a pre-trained judgment model to process the cancelled service request, and determine the responsibility judgment result of the cancelled service request. For more details of the responsibility determination result, please refer to the flowcharts in Figure 5 and Figure 22.
  • Fig. 26 is a block diagram of an exemplary apparatus for determining responsibility for a service accident according to some embodiments of the present specification.
  • the apparatus 2600 may include: an acquisition module 410, a prediction module 2510, a second determination module 2610, and a return module 440.
  • the obtaining module 410 can obtain the cancelled service request.
  • As mentioned above, more details about the prediction module 2510 can be found in FIG. 25 and the related descriptions, which will not be repeated here.
  • the determination module 430 is configured to determine the target responsibility scenario based on the priority of the candidate responsibility scenario.
  • the determination module 430 may also include multiple sub-modules.
  • the determination module 430 may include a second determination module 2610.
  • the second determination module 2610 is configured to determine, from the multiple candidate responsibility scenarios, at least one candidate responsibility scenario with the highest priority as the target responsibility scenario according to the priorities of the multiple candidate responsibility scenarios. For more details about the target responsibility scenario, please refer to step 530 and its related description, which will not be repeated here.
  • the return module 440 may be used to return the responsibility determination result to the service provider or/and the service requester. For example, the return module 440 may return the responsibility determination result to the service provider corresponding to the cancelled service request. For more details on the result of the responsibility determination, see the flowcharts in Figure 5 and Figure 22.
  • Fig. 27 is a block diagram of an exemplary apparatus for determining responsibility for a service accident according to some embodiments of the present specification.
  • the apparatus 2700 may include a first determination module 2410, a labeling module 2420, an appeal information acquisition module 2710, an update module 2720, and an optimization module 2730.
  • the obtaining module 410 may be used to obtain the appeal information of the service provider, and update the judgment model based on the appeal information and the result of the responsibility judgment.
  • The obtaining module 410 may include the appeal information obtaining module 2710, the update module 2720, and the optimization module 2730.
  • the appeal information obtaining module 2710 is used to obtain the appeal information of the service provider corresponding to the cancelled service request.
  • the update module 2720 is used to update the training data of the judgment model according to the responsibility judgment result and the appeal information.
  • the optimization module 2730 is used to optimize the judgment model according to the updated training data.
  • the computer storage medium may contain a propagated data signal containing a computer program code, for example on a baseband or as part of a carrier wave.
  • the propagated signal may have multiple manifestations, including electromagnetic forms, optical forms, etc., or a suitable combination.
  • The computer storage medium may be any computer readable medium other than a computer readable storage medium, and the medium may be connected to an instruction execution system, apparatus, or device to realize communication, propagation, or transmission of the program for use.
  • the program code located on the computer storage medium can be transmitted through any suitable medium, including radio, cable, fiber optic cable, RF, or similar medium, or any combination of the above medium.
  • The computer program code required for the operation of each part of this specification can be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP and ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages.
  • the program code can run entirely on the user's computer, or as an independent software package on the user's computer, or partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing equipment.
  • The remote computer can be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or connected to an external computer (for example, via the Internet), or used in a cloud computing environment, or used as a service, such as software as a service (SaaS).
  • Numbers describing quantities of components and attributes are used in some places. It should be understood that such numbers used in the description of the embodiments are modified in some examples by the terms "about", "approximately" or "substantially". Unless otherwise stated, "about", "approximately" or "substantially" indicates that the stated number is allowed to vary by ±20%.
  • Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations, which may change depending on the desired characteristics of individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and adopt a general method of retaining digits. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of this specification are approximate values, in specific embodiments such numerical values are set as precisely as is feasible.

Abstract

一种用于判定服务事故的责任的方法。所述方法包括:获取服务请求(510),所述服务请求发生服务事故;提取所述服务请求的特征(520);以及基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果(530),所述责任判定结果至少包括:所述服务请求的服务提供方是否为所述服务事故的责任方。

Description

一种用于判定服务事故的责任的方法和系统
交叉引用
本申请要求2019年12月20日递交的申请号为201911329083.8的中国申请,以及2020年1月14日递交的申请号为202010036685.0的中国申请的优先权,其所有内容通过引用的方式包含于此。
技术领域
本说明书涉及计算机技术领域,特别涉及一种用于判定服务事故的责任的方法和系统。
背景技术
随着互联网的快速发展,在线打车服务、外卖服务等变得越来越流行。以在线打车服务为例,可能每天都会发生大量的服务事故(例如,服务请求被取消、服务请求被中断)。对于在线打车平台来说,对服务事故进行责任判定是对最为直接的管控手段。然而,服务事故对应的场景复杂且涉及的信息维度很多,难以保证责任判定的准确性。
因此,希望提供一种用于判定服务事故的责任的方法和系统,可以对服务请求进行准确的责任判定。
发明内容
本说明书实施例之一提供一种用于判定服务事故的责任的方法,所述方法包括:获取服务请求,所述服务请求发生服务事故;提取所述服务请求的特征;以及基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果至少包括:所述服务请求的服务提供方是否为所述服务事故的责任方。
本说明书实施例之一提供一种系统,所述系统包括:至少一个数据库, 所述至少一个数据库包括用于判定服务事故的责任的指令;至少一个处理器,所述至少一个处理器与所述至少一个数据库通信,其中,在执行所述指令时,所述至少一个处理器被配置为:获取服务请求,所述服务请求发生服务事故;提取所述服务请求的特征;以及基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果至少包括:所述服务请求的服务提供方是否为所述服务事故的责任方。
本说明书实施例之一提供一种用于判定服务事故的责任的系统,所述系统包括:获取模块,用于获取服务请求,所述服务请求发生服务事故;提取模块,用于提取所述服务请求的特征;以及判定模块,用于基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果至少包括:所述服务请求的服务提供方是否为所述服务事故的责任方。
本说明书实施例之一提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行上述用于判定服务事故的责任的方法。
本说明书实施例之一提供一种用于判定服务事故的责任的方法,所述方法包括:获取服务请求,所述服务请求发生服务事故;提取所述服务请求的特征,所述特征至少包括沟通信息,所述沟通信息基于所述服务请求的服务请求方和所述服务提供方之间的沟通内容确定;以及基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果包括所述服务请求的服务提供方是否为所述服务事故的责任方。
本说明书实施例之一提供一种用于判定服务事故的责任的方法,所述方法包括:获取服务请求,所述服务请求发生服务事故;提取所述服务请求的特征;以及基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果包括:所述服务请求的服务提供方是否为所述服务事故的责任方,以及若所述服务提供方为所述责任方,所述服务提供方对应的目标责任场景。
本说明书实施例之一提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取所述存储介质中的计算机执指令时,所述计算机执行上述技术方案所述方法。
附图说明
本说明书将以示例性实施例的方式进一步说明,这些示例性实施例将通过附图进行详细描述。这些实施例并非限制性的,在这些实施例中,相同的编号表示相同的结构,其中:
图1是根据本说明书一些实施例所示的示例性线上至线下(online to offline,O2O)服务系统的应用场景示意图;
图2是根据本说明书一些实施例所示的示例性计算设备的示例性硬件和/或软件的示意图;
图3是根据本说明书一些实施例所示的示例性移动设备的示例性硬件和/或软件的示意图;
图4是根据本说明书一些实施例所示的示例性处理设备的模块图;
图5是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图;
图6是根据本说明书一些实施例所示的提取服务请求的特征的示例性过程的流程图;
图7是根据本说明书一些实施例所示的示例性词向量生成模型的示意图;
图8是根据本说明书一些实施例所示的示例性文本向量生成过程的示意图;
图9是根据本说明书一些实施例所示的基于判责模型确定责任判断结果的示例性过程的示意图;
图10是根据本说明书一些实施例所示的基于判责模型确定责任判断结果的示例性过程的示意图;
图11是根据本说明书一些实施例所示的确定目标责任场景的示例性过程的流程图;
图12是根据本说明书一些实施例所示的训练判责模型的示例性过程的流程图;
图13是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图;
图14是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图;
图15是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图;
图16是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图;
图17是根据本说明书一些实施例所示的用于建立判责模型的方法的示例性流程图;
图18是根据本说明书一些实施例所示的判定服务事故的责任的装置的模块图;
图19是根据本说明书一些实施例所示的判定服务事故的责任的装置的模块图;
图20是根据本说明书一些实施例所示的训练判责模型的示例性过程的流程图;
图21是根据本说明书一些实施例所示的训练判责模型的示例性过程的流程图;
图22是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图;
图23是根据本说明书一些实施例所示的训练判责模型的示例性过程的流程图;
图24是根据本说明书一些实施例所示的示例性判责模型的训练的装置的模块图;
图25是根据本说明书一些实施例所示的示例性判定服务事故的责任的装置的模块图;
图26是根据本说明书一些实施例所示的示例性判定服务事故的责任的装置的模块图;以及
图27是根据本说明书一些实施例所示的示例性判定服务事故的责任的装置的模块图。
具体实施方式
以下描述是为了使本领域的普通技术人员能够实施和利用本说明书,并且该描述是在特定的应用场景及其要求的环境下提供的。对于本领域的普通技术人员来讲,对本说明书披露的实施例进行的各种修改是显而易见的,并且本文中定义的通则在不背离本说明书的精神及范围的情况下,可以适用于其他实施例及应用。因此,本说明书并不限于所描述的实施例,而应该被给予与权利要求一致的最广泛的范围。
本说明书中所使用的术语仅用于描述特定的示例性实施例,并不限制本说明书的范围。如本说明书使用的单数形式“一”、“一个”及“该”可以同样包括复数形式,除非上下文明确提示例外情形。还应当理解,如在本说明书中,术语“包括”、“包含”仅提示存在所述特征、整体、步骤、操作、组件和/或部件,但并不排除存在或添加一个或以上其他特征、整体、步骤、操作、组件、部件和/或其组合的情况。
在考虑了作为本说明书一部分的附图的描述内容后,本说明书的特征和特点以及操作方法、结构的相关元素的功能、各部分的组合、制造的经济性变得显而易见。然而,应当理解的是,附图仅仅是为了说明和描述的目的,并不旨在限制本说明书的范围。应当理解的是,附图并不是按比例绘制的。
本说明书中使用了流程图用来说明根据本说明书的一些实施例的系统所执行的操作。应该清楚地理解,流程图的操作可以不按顺序实现。相反,可以按照倒序或同时处理各种步骤。同时,也可以将一个或以上其他操作添加到这些流程图中。也可以从流程图中删除一个或以上操作。
本说明书的系统和方法可以应用于任何类型的按需服务。例如,本说明书的系统和方法可以应用于不同环境的运输系统,包括陆地(例如道路或越野)、水(例如河流、湖泊或海洋)、空气、航空航天等或其任意组合。运输系统的交通工具可以包括出租车、私家车、顺风车、公交车、火车、动车、高铁、地铁、船只、船舶、飞机、飞船、热气球、无人驾驶的车辆等或其任意组合。运输系统也可以包括管理和/或分配的任一运输系统,例如,发送和/或接收快递的系统。本说明书的系统和方法的应用可以包括移动设备(例如,智能电话或智能平板)应用程序、网页、浏览器插件、客户终端、客户系统、内部分析系统、人工智能机器人等或其任意组合。
本说明书中的术语“乘客”、“请求者”、“服务请求方”和“客户”可用于表示请求或订购服务的个人、实体或工具,并且可互换使用。同样地,本说明书描述的“司机”、“提供者”、“服务提供方”与“供应者”是可以互换的,是指提供服务或者协助提供服务的个人、实体或工具。本说明书中的术语“用户”用于指代可以请求服务、订购服务、提供服务或促进提供服务的个人、实体或工具。在本说明书中,术语“请求者”和“服务请求方终端”可以互换使用,术语“提供者”和“服务提供方终端”可以互换使用。
本说明书中的术语“请求”、“服务”、“服务请求”和“订单”可用于表示由乘客、请求者、服务请求方、顾客、司机、提供者、服务提供方、供应者等或其任意组合发起的请求,并且可互换使用。根据上下文,服务请求可以由乘客、请求者、服务请求方、顾客、司机、提供者、服务提供方或供应者中的任何一个接受。在一些实施例中,服务请求由司机,提供者,服务提供方或供应者接受。服务请求可以是计费的也可是免费的。
本说明书中使用的定位技术可以基于全球定位系统(GPS)、全球导航卫星系统(GLONASS)、罗盘导航系统(COMPASS)、伽利略定位系统、准天顶卫星系统(QZSS)、无线保真(WiFi)定位技术等或其任意组合。上述定位技术中的一种或以上可以在本说明书中互换使用。
图1是根据本说明书一些实施例所示的示例性线上至线下(online to offline,O2O)服务系统的应用场景示意图。在一些实施例中,O2O服务系统100可以用于对发生服务事故的服务请求进行责任判定。服务请求可以是任何基于位置的服务请求。在一些实施例中,服务请求可以与运输服务(例如,线上打车服务、快递服务)相关的请求。在一些实施例中,如图1所示,O2O服务系统100可包括服务器110、网络120、存储设备130、服务请求方终端140和服务提供方终端150。
服务器110可以是单个服务器,也可以是服务器组。服务器组可以是集中式的,也可以是分布式的(例如,服务器110可以是分布式系统)。在一些实施例中,服务器110可以是本地的,也可以是远程的。例如,服务器110可以经由网络120访问存储在服务请求方终端140、服务提供方终端150和/或存储设备130中的信息和/或数据。又例如,服务器110可以直接连接到服务请求方终端140、服务提供方终端150和/或存储设备130以访问存储的信息和/或数据。在一些实施例中,服务器110可以在云平台上实施。仅作为示例,该云平台可以包括私有云、公共云、混合云、社区云、分布云、内部云、多层云等或其任意组合。在一些实施例中,服务器110可以在如图2所示的包括一个或以上组件的计算设备200上实现。
在一些实施例中,服务器110可包括处理设备112。处理设备112可以处理与服务请求相关的信息和/或数据以执行本说明书描述的一个或以上功能。例如,处理设备112可以基于判责模型对发生服务事故的服务请求的特征进行处理,确定服务事故的责任判定结果。
在一些实施例中,处理设备112可包括一个或以上处理引擎(例如,单 核处理引擎或多核处理引擎)。处理设备112可以包括中央处理单元(CPU)、专用集成电路(ASIC)、专用指令集处理器(ASIP)、图形处理单元(GPU)、物理处理单元(PPU)、数字信号处理器(DSP)、现场可编程门阵列(FPGA)、可编程逻辑器件(PLD)、控制器、微控制器单元、精简指令集计算机(RISC)、微处理器等或其任意组合。在一些实施例中,处理设备112可以集成在服务请求方终端140或服务提供方终端150中。
网络120可以促进信息和/或数据的交换。在一些实施例中,O2O服务系统100的一个或以上组件(例如,服务器110、存储设备130、服务提供方终端150和/或服务请求方终端140)可以经由网络120将信息和/或数据发送到O2O服务系统100的其他组件。例如,服务器110可以经由网络120从存储设备130中获取发生服务事故的服务请求。又例如,服务器110可以经由网络120从服务请求方终端140获取发生服务事故的服务请求。在一些实施例中,网络120可以是有线网络或无线网络等或其任意组合。仅作为示例,网络120可以包括电缆网络、有线网络、光纤网络、电信网络、内部网络、互联网、局域网络(LAN)、广域网络(WAN)、无线局域网络(WLAN)、城域网(MAN)、公共交换电话网络(PSTN)、蓝牙网络、紫蜂网络、近场通信(NFC)网络等或其任意组合。在一些实施例中,网络120可以包括一个或以上网络接入点。例如,网络120可以包括有线或无线网络接入点,如基站和/或互联网交换点120-1、120-2、……。通过接入点,O2O服务系统100的一个或以上组件可以连接到网络120以交换数据和/或信息。
存储设备130可以存储与服务请求有关的数据和/或指令。在一些实施例中,存储设备130可以存储从服务提供方终端150和/或服务请求方终端140获取的数据。在一些实施例中,存储设备130可以存储服务器110可以执行或使用以执行本说明书中描述的示例性方法的数据和/或指令。在一些实施例中,上述数据和/或指令可以包括发生事故的服务请求、服务服务请求的特征、服务请求对应的沟通信息等。在一些实施例中,存储设备130可包括大容量存储器、 可移动存储器、易失性读写内存、只读内存(ROM)等或其任意组合。示例性的大容量存储器可以包括磁盘、光盘、固态磁盘等。示例性可移动存储器可以包括闪存驱动器、软盘、光盘、内存卡、压缩盘、磁带等。示例性易失性读写内存可以包括随机存取内存(RAM)。示例性RAM可包括动态随机存取内存(DRAM)、双倍数据速率同步动态随机存取内存(DDR SDRAM)、静态随机存取内存(SRAM)、晶闸管随机存取内存(T-RAM)、零电容随机存取内存(Z-RAM)等。示例性ROM可以包括掩模型只读内存(MROM)、可编程只读内存(PROM)、可擦除可编程只读内存(EPROM)、电可擦除可编程只读内存(EEPROM)、光盘只读内存(CD-ROM)、数字多功能磁盘只读内存等。在一些实施例中,存储设备130可以在云平台上实现。仅作为示例,该云平台可以包括私有云、公共云、混合云、社区云、分布云、内部云、多层云等或其任意组合。
在一些实施例中,存储设备130可以连接到网络120以与O2O服务系统100的一个或以上组件(例如,服务器110、服务提供方终端150、服务请求方终端140)通信。O2O服务系统100的一个或以上组件可以经由网络120访问存储设备130中存储的数据和/或指令。在一些实施例中,存储设备130可以直接连接到O2O服务系统100的一个或以上组件(例如,服务器110、服务请求方终端140、服务提供方终端150)或与之通信。在一些实施例中,存储设备130可以是服务器110的一部分。
服务请求方终端140可以包括移动设备140-1、平板计算机140-2、膝上型计算机140-3等或其任意组合。在一些实施例中,移动设备140-1可以包括智能家居设备、可穿戴设备、智能移动设备、虚拟现实设备、增强现实设备等或其任意组合。在一些实施例中,智能家居设备可以包括智能照明设备、智能电器控制装置、智能监控设备、智能电视、智能摄像机、对讲机等或其任意组合。在一些实施例中,可穿戴设备可包括智能手镯、智能鞋袜、智能眼镜、智能头盔、智能手表、智能衣服、智能背包、智能配件等或其任意组合。在一些实施 例中,智能移动设备可以包括智能电话、个人数字助理(PDA)、游戏设备、导航设备、销售点(POS)等或其任意组合。在一些实施例中,虚拟现实设备和/或增强型虚拟现实设备可以包括虚拟现实头盔、虚拟现实眼镜、虚拟现实眼罩、增强现实头盔、增强现实眼镜、增强现实眼罩等或其任意组合。例如,虚拟现实设备和/或增强现实设备可以包括Google GlassTM、Oculus RiftTM、HololensTM或Gear VRTM等。在一些实施例中,服务请求方终端140可以是具有定位技术的设备,用于定位服务请求方和/或服务请求方终端140的位置。
服务提供方终端150可以是与服务请求方终端140相似或与服务请求方终端140相同的设备。例如,服务提供方终端150可以包括移动设备150-1、平板计算机150-2、膝上型计算机150-3等或其任意组合。在一些实施例中,服务提供方终端150可以是具有定位技术的设备,用于定位服务提供方和/或服务提供方终端150的位置。在一些实施例中,服务请求方终端140和/或服务提供方终端150可以与其他定位设备通信以确定服务请求方、服务请求方终端140、服务提供方和/或服务提供方终端150的位置。在一些实施例中,服务请求方终端140和/或服务提供方终端150可以向服务器110传送定位信息。
在一些实施例中,服务请求方可以是服务请求方终端140的用户。在一些实施例中,服务请求方终端140的用户可以是除该服务请求方之外的其他人。例如,服务请求方终端140的用户A可以使用服务请求方终端140发送对应用户B的服务请求或者从服务器110接收服务确认和/或信息或指令。在一些实施例中,服务提供方可以是服务提供方终端150的用户。在一些实施例中,服务提供方终端150的用户可以是除该服务提供方之外的其他人。例如,服务提供方终端150的用户C可以通过服务提供方终端150为用户D接收服务请求和/或从服务器110处接收信息或指令。
在一些实施例中,O2O服务系统100的一个或以上组件(例如,服务器110、服务请求方终端140、服务提供方终端150)可以具有访问存储设备130的许可。在一些实施例中,O2O服务系统100的一个或以上组件可以在满足一 个或以上条件时读取和/或修改与服务请求方、服务提供方和/或公众有关的信息。例如,服务器110可以在服务完成之后读取和/或修改一个或以上的服务请求方的信息。又例如,当服务提供方终端150从服务请求方终端140接收到服务请求时,服务提供方终端150可以访问与服务请求方相关的信息,但是不能修改服务请求方的相关信息。
在一些实施例中,可以通过请求服务实现O2O服务系统100的一个或以上组件的信息交换。服务请求的对象可以为任何产品。在一些实施方案中,产品可以是有形产品或非物质产品。有形产品可包括食品、药品、商品、化学产品、电器、服装、汽车、房屋、奢侈品等或其任意组合。非物质产品可以包括服务产品、金融产品、知识产品、互联网产品等或其任意组合。互联网产品可以包括个人主机产品、网站产品、移动互联网产品、商业主机产品、嵌入式产品等或其任意组合。移动互联网产品可以用于移动终端的软件、程序、系统等或其任意组合。移动终端可以包括平板计算机、膝上型计算机、移动电话、个人数字助理(PDA)、智能手表、POS设备、车载计算机、车载电视、可穿戴设备等或其任意组合。例如,产品可以是在计算机或移动电话上使用的任何软件和/或应用。该软件和/或应用程序可以与社交、购物、交通、娱乐、学习、投资等或其任意组合相关。在一些实施例中,与运输有关系统软件和/或应用程序可以包括出行软件和/或应用程序、车辆调度软件和/或应用程序、地图软件和/或应用程序等。在车辆调度软件和/或应用程序中,车辆可包括马、马车、人力车(例如,独轮车、自行车、三轮车等)、汽车(例如,出租车、公共汽车、私人汽车等)、火车、地铁、船舶、飞行器(例如,飞机、直升机、航天飞机、火箭、热气球等)等或其任意组合。
本领域普通技术人员将理解,当O2O服务系统100的元件(或组件)执行时,元件可以通过电信号和/或电磁信号执行。例如,当服务请求方终端140向服务器110发送服务请求时,服务请求方终端140的处理器可以生成编码请求的电信号。然后,服务请求方终端140的处理器可以将电信号发送到输出端 口。若服务请求方终端140经由有线网络与服务器110通信,则输出端口可物理连接至电缆,其进一步将电信号传输给服务器110的输入端口。如果服务请求方终端140经由无线网络与服务器110通信,则服务请求方终端140的输出端口可以是一个或以上天线,其将电信号转换为电磁信号。类似地,服务提供方终端150可以通过其处理器中的逻辑电路的操作来处理任务,并且经由电信号或电磁信号从服务器110接收指令和/或服务请求。在电子设备内,例如服务请求方终端140、服务提供方终端150和/或服务器110,当处理器处理指令、发出指令和/或执行动作时,指令和/或动作通过电信号进行。例如,当处理器从存储介质(例如,存储设备130)检索或保存数据时,它可以将电信号发送到存储介质的读/写设备,其可以在存储介质中读取或写入结构化数据。该结构数据可以通过电子设备的总线,以电信号的形式传输至处理器。此处,电信号可以指一个电信号、一系列电信号和/或至少两个不连续的电信号。
图2是根据本说明书一些实施例所示的示例性计算设备的示例性硬件和/或软件的示意图。在一些实施例中,服务器110、服务请求方终端140或服务提供方终端150可以在计算设备200上实现。例如,处理设备112可以在计算设备200上实施并执行本说明书所公开的处理设备112的功能。如图2所示,计算设备200可以包括总线210、处理器220、只读存储器230、随机存储器240、通信端口250、输入/输出260和硬盘270。
处理器220可以执行计算指令(程序代码)并执行本说明书描述的O2O服务系统100的功能。计算指令可以包括程序、对象、组件、数据结构、过程、模块、功能(该功能指本说明书中描述的特定功能)等。例如,处理器220可以处理从O2O服务系统100的其他任何组件获取的图像或文本数据。在一些实施例中,处理器220可以包括微控制器、微处理器、精简指令集计算机(RISC)、专用集成电路(ASIC)、应用特定指令集处理器(ASIP)、中央处理器(CPU)、图形处理单元(GPU)、物理处理单元(PPU)、微控制器单元、数字信号处理器(DSP)、现场可编程门阵列(FPGA)、高级RISC机(ARM)、可编程逻 辑器件以及能够执行一个或多个功能的任何电路和处理器等或其任意组合。仅为了说明,图2中的计算设备200只描述了一个处理器,但需要注意的是,本说明书中的计算设备200还可以包括多个处理器。
计算设备200的存储器(例如,只读存储器(ROM)230、随机存储器(RAM)240、硬盘270等)可以存储从O2O服务系统100的任何其他组件获取的数据/信息。示例性的ROM可以包括掩模ROM(MROM)、可编程ROM(PROM)、可擦除可编程ROM(PEROM)、电可擦除可编程ROM(EEPROM)、光盘ROM(CD-ROM)和数字通用盘ROM等。示例性的RAM可以包括动态RAM(DRAM)、双倍速率同步动态RAM(DDR SDRAM)、静态RAM(SRAM)、晶闸管RAM(T-RAM)和零电容(Z-RAM)等。
输入/输出260可以用于输入或输出信号、数据或信息。在一些实施例中,输入/输出260可以包括输入装置和输出装置。示例性输入装置可以包括键盘、鼠标、触摸屏和麦克风等或其任意组合。示例性输出装置可以包括显示设备、扬声器、打印机、投影仪等或其任意组合。示例性显示装置可以包括液晶显示器(LCD)、基于发光二极管(LED)的显示器、平板显示器、曲面显示器、电视设备、阴极射线管(CRT)等或其任意组合。
通信端口250可以连接到网络以便数据通信。连接可以是有线连接、无线连接或两者的组合。有线连接可以包括电缆、光缆或电话线等或其任意组合。无线连接可以包括蓝牙、Wi-Fi、WiMax、WLAN、ZigBee、移动网络(例如,3G、4G或5G等)等或其任意组合。在一些实施例中,通信端口250可以是标准化端口,如RS232、RS485等。在一些实施例中,通信端口250可以是专门设计的端口。
图3是根据本说明书一些实施例所示的示例性移动设备的示例性硬件和/或软件的示意图。如图3所示,移动设备300可以包括通信单元310、显示单元320、图形处理器(GPU)330、中央处理器(CPU)340、输入/输出单元350、内存360、存储单元370等。在一些实施例中,移动设备300也可以包括任何其 它合适的组件,包括但不限于系统总线或控制器(图中未显示)。在一些实施例中,操作系统361(例如,iOS、Android、Windows Phone等)和应用程序362可以从存储单元370加载到内存360中,以便由CPU340执行。应用程序362可以包括浏览器或用于从O2O服务系统100接收文字、图像、音频或其他相关信息的应用程序。信息流的用户交互可以通过输入/输出单元350实现,并且通过网络120提供给处理设备112和/或O2O服务系统100的其他组件。
为了实现在本说明书中描述的各种模块、单元及其功能,计算设备或移动设备可以用作本说明书所描述的一个或多个组件的硬件平台。这些计算机或移动设备的硬件元件、操作系统和编程语言本质上是常规的,并且本领域技术人员熟悉这些技术后可将这些技术适应于本说明书所描述的系统。具有用户界面元件的计算机可以用于实现个人计算机(PC)或其他类型的工作站或终端设备,如果适当地编程,计算机也可以充当服务器。
图4是根据本说明书一些实施例所示的示例性处理设备的模块图。如图4所示,用于判定服务事故的责任的系统的处理设备112可以包括获取模块410、提取模块420、判定模块430和返回模块440。
获取模块410可以用于获取服务请求。服务请求发生服务事故。例如,服务请求可以是被取消的服务请求。关于服务请求的更多细节可以参见步骤510及其相关描述,此处不再赘述。
获取模块410还可以用于获取判责模型(例如,通过训练获取判责模型)。在一些实施例中,获取模块410可以获取多个样本服务请求、样本服务请求对应的标注信息以及样本服务请求对应的样本特征,并基于多个样本服务请求及其对应标注信息和样本特征训练得到判责模型。关于判责模型的训练参见图12及其相关描述,此处不再赘述。
获取模块410还可以对判责模型进行更新。在一些实施例中,获取模块410还可以用于获取服务提供方的申诉信息,并基于申诉信息和责任判断结果更新判责模型。关于判责模型的更新可以的更多细节参见530及其相关描述,此 处不再赘述。
提取模块420可以用于提取服务请求的特征。特征包括:沟通信息、服务请求的基本信息、服务提供方的画像信息和/或服务请求方的画像信息。在一些实施例中,提取模块420可以基于特征中的文本信息(例如,沟通信息),确定对应的文本向量特征(例如,目标文本向量)。关于特征的更多细节可以参见步骤520及其相关描述,关于确定目标文本向量的更多细节参见图6及其相关描述。
判定模块430可以用于确定责任判责结果。责任判定结果至少包括:服务请求的服务提供方是否为服务事故的责任方;若责任判定结果表明服务提供方为责任方,责任判定结果还包括:服务提供方对应的目标责任场景。在一些实施例中,判定模块430可以基于判责模型对特征进行处理,确定服务事故的责任判定结果。在一些实施例中,判定模块430可以确定候选责任场景,以及基于候选责任场景的优先级确定目标责任场景。关于责任判定结果、目标责任场景及其优先级的更多细节参见步骤530、1120,此处不再赘述。
返回模块440可以用于将责任判定结果返回给服务提供方或/和服务请求方。
应当理解,图4所示的系统及其模块可以利用各种方式来实现。例如,在一些实施例中,系统及其模块可以通过硬件、软件或者软件和硬件的结合来实现。其中,硬件部分可以利用专用逻辑来实现;软件部分则可以存储在存储器中,由适当的指令执行系统,例如微处理器或者专用设计硬件来执行。本领域技术人员可以理解上述的方法和系统可以使用计算机可执行指令和/或包括在处理器控制代码中来实现,例如在诸如磁盘、CD或DVD-ROM的载体介质、诸如只读存储器(固件)的可编程的存储器或者诸如光学或电子信号载体的数据载体上提供了这样的代码。本说明书的系统及其模块不仅可以有诸如超大规模集成电路或门阵列、诸如逻辑芯片、晶体管等的半导体、或者诸如现场可编程门阵列、可编程逻辑设备等的可编程硬件设备的硬件电路实现,也可以用例如 由各种类型的处理器所执行的软件实现,还可以由上述硬件电路和软件的结合(例如,固件)来实现。
需要注意的是,以上对于O2O服务系统100的处理设备112及其模块的描述,仅为描述方便,并不能把本说明书限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该系统的原理后,可能在不背离这一原理的情况下,对各个模块进行任意组合,或者构成子系统与其他模块连接。例如,获取模块410可以和提取模块420可以是两个不同的模块,也可以合并成同一个模块。诸如此类的变形,均在本说明书的保护范围之内。
图5是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图。在一些实施例中,流程500可以由处理设备(例如,处理设备112或其他处理设备)执行。例如,流程500可以以程序或指令的形式存储在存储设备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程500。在一些实施例中,流程500可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图5所示的操作的顺序并非限制性的。
步骤510,获取服务请求,该服务请求发生服务事故。在一些实施例中,步骤510可以由获取模块410执行。
服务请求可以指由乘客、请求者、服务请求方、用户、司机、提供者、服务提供方、供应者等或其任意组合发起的请求。
在一些实施例中,服务请求可以是任何基于位置的服务请求。例如,服务请求可以是与运输服务(例如,线上打车服务、快递服务)相关的请求。在一些实施例中,服务请求可以是实时请求或预约请求。在本说明书中,实时请求可以指请求者期望在当前时刻或与当前时刻的间隔小于预设阈值的指定时间接受服务。预约请求可以指请求者期望在与当前时刻的间隔大于预设阈值的指定时间接受服务。在一些实施例中,预设阈值可以是系统默认值,也可以根据不同情况调整。例如,预设阈值可以是3分钟、5分钟、10分钟、20分钟、30 分钟、1小时等。又例如,在交通高峰时段,预设阈值可以设置得相对较小(例如,10分钟);而在非高峰时段(例如,上午10:00-12:00),时间阈值可以设置得相对较大(例如,1小时)。
在一些实施例中,对于不同的应用场景,服务请求方和服务提供方可以是不同的。例如,对于在线打车服务,服务请求方可以是乘客,服务提供方可以是司机,此时对应的服务请求可以是打车请求。又例如,对于外卖服务,服务请求方可以是用户,服务提供方可以是骑手,此时对应的服务请求可以是外卖请求。为了方便描述,本说明书实施例以“在线打车服务”场景为例进行说明。
服务事故可以指与服务请求相关的异常情况。在一些实施例中,服务事故可以包括服务请求被取消、服务请求被中断、服务请求被延迟等。在一些实施例中,服务事故可能与负面事件相关。负面事件可以包括交通事故、服务提供方提供的服务超时、服务提供方提供的服务有误、服务请求方对服务提供方投诉等。在一些实施例中,服务事故可能是有多种原因导致的。
例如,对于“服务请求被取消”,在一些情况下,服务请求被取消可能是由服务提供方导致的。例如,由于服务提供方未及时接听电话、服务提供方的车辆未及时到达上车地点,导致服务请求被服务请求方取消(可能还伴随服务请求方对服务提供方的投诉)。在一些情况下,服务请求被取消可能是由服务请求方导致的。例如,由于服务请求方未及时接听电话、服务请求方未及时达到上车地点、服务请求方临时改变行程等,导致服务请求被取消(某些情况下还伴随服务请求方对服务提供方的恶意投诉)。
又例如,对于“服务请求被中断”,在一些情况下,服务请求被中断可能与客观事件相关。例如,在服务提供方行驶至上车地点的途中,由于发生交通事故、服务提供方的车辆发生故障、服务提供方因健康原因无法到达等,导致服务请求被中断。
又例如,对于“服务请求被延迟”,在一些情况下,服务请求被延迟可 能是由服务提供方或服务请求方导致的。例如,由于服务提供方未及时接收服务请求、服务提供方迟到、服务提供方的信号出现网络延迟、服务请求方未及时到达上车地点等,导致服务请求被延迟。
若服务请求发生服务事故(例如,被取消),则可能进一步被服务提供方或者服务请求方投诉。在一些实施例中,被取消的服务请求中可能包含被投诉的服务请求。
在一些实施例中,获取模块410可以通过多种方式获取发生服务事故的服务请求。
在一些实施例中,获取模块410可以从存储设备(例如,存储设备130)中获取发生服务事故的服务请求。例如,获取模块410可以从存储设备(例如,数据库130)中的存储的服务请求数据,获取被取消的服务请求。示例的,可以根据服务请求的里程数、收费信息(里程数不满足该服务请求的目标里程数、收费为零)等,确定被取消的服务请求。
在一些实施例中,获取模块410可以通过网络120从服务提供方终端150或服务请求方终端140中获取发生服务事故的服务请求。例如,获取模块410可以从服务提供方终端150中获取服务提供方在当前时刻被中断的一个或多个服务请求。
可以理解,由于存储设备(例如,存储设备130)、服务提供方终端150、服务请求方终端140中可能会存在大量的服务请求数据,在判定服务事故的责任时,需要从上述海量服务请求数据中筛选出发生事故的服务请求。因此,通过上述获取方式,可以快速并准确地筛选出发生服务事故的服务请求,从而提升对服务事故进行责任判定的效率和准确性。
步骤520,提取服务请求的特征。在一些实施例中,步骤520可以由提取模块420执行。
在一些实施例中,服务请求的特征可以至少包括沟通信息。在一些实施例中,沟通信息可以基于服务请求的服务请求方和服务提供方之间的沟通内容 确定。在一些实施例中,沟通内容可以通过文字、语音、图像、视频等形式呈现。例如,沟通内容可以是服务提供方与服务请求方之间的文字聊天记录。又例如,沟通内容可以是服务提供方与服务请求方之间的语音聊天记录、通话录音等。服务提供方和服务请求方可以通过应用程序内自带的聊天功能向对方发送语音信息,其中可能包括服务提供方与服务请求方对上车地点及上车时间等可能发生变动的问题进行协商的通话记录等。
在一些实施例中,提取模块420可以对沟通信息进行处理,提取该沟通信息对应的文本向量。关于提取沟通信息的文本向量的更多细节可以参见图6及其相关描述,此处不再赘述。
在一些实施例中,在提取沟通信息对应的文本向量前,提取模块420还可以对沟通信息进行预处理。例如,提取模块420可以将语音形式的沟通内容转化为对应的文字形式的沟通内容。又例如,提取模块420可以对语音形式的沟通内容进行降噪处理(例如,通过声学模型进行降噪处理)。
在一些实施例中,服务请求的特征可以包括服务请求的基本信息、服务提供方的画像信息、服务请求方的画像信息等或其任意组合。
在一些实施例中,服务请求的基本信息可以包括出发地、目的地、上车地点、出发时间、里程、预估价、司机接驾距离(例如,司机从接受服务请求时的位置至上车地点间的距离)、司机接受服务请求时的时间、司机接受服务请求时的位置、司机与乘客是否顺路、与服务事故相关的信息(例如,司机从接受服务请求到服务请求被取消之间的时间间隔或里程、服务请求被取消时的时间、司机是否到达上车地点)等。
在一些实施例中,服务请求的基本信息还可以包括车辆信息。车辆信息可包括车辆类型、车龄、车辆总里程、车辆成本、车辆收入、车辆价格、车牌号码、每公里油耗、剩余燃油、后备箱大小、其他设备的信息(例如、携带的急救医疗设备的信息、灭火设备的信息等)、车辆的位置信息等或其任意组合。
在一些实施例中,服务提供方的画像信息是指根据服务提供方的社会属 性、生活习惯、提供服务行为和/或历史信息等信息而抽象出的标签化信息。例如,服务提供方的历史信息可以包括服务提供方的服务分数、服务提供方的被投诉率等。又例如,服务提供方的历史信息可以包括历史预设时间段内(例如,近1至3个月内)的服务请求取消率、被取消的服务请求中服务提供方的有责率、服务提供方的服务流水(例如,服务提供方所提供的所有服务的价钱)、服务提供方的服务量(即服务提供方所提供的所有服务的数量)等。
在一些实施例中,服务请求方的画像信息是指根据服务请求方的社会属性、生活习惯请求服务行为和/或历史信息等信息而抽象出的标签化信息。例如,服务请求方的历史信息可以包括服务请求方的信用分数、服务请求方的信用等级等。又例如,服务请求方的历史信息可以包括服务请求方发起请求的取消率、服务请求方被投诉率、被取消的服务请求中服务请求方的有责率、服务请求方的流水、服务请求方的服务量等。
需要说明的是,上述仅示例性的列举了一部分服务请求的特征,具体在实际应用中,服务请求的特征可以不限于上述所列举的特征。
在一些实施例中,提取模块420可以通过多种方式提取上述特征。在一些实施例中,提取模块420可以从O2O服务系统100的一个或以上组件(例如,存储设备130、服务提供方终端150、服务请求方终端140等)或经由网络120从外部源提取上述特征。在一些实施例中,提取模块420可以通过特征提取算法提取服务请求的特征。特征提取算法可以包括HOG特征提取算法、LBP特征提取算法、Haar特征提取算法、LoG特征提取算法、Harris角点特征提取算法、SIFT特征提取算法、SURF特征提取算法等,或其任意组合。
在一些实施例中,服务请求的特征可以以多种形式表示。例如,服务请求的特征可以以非实值(例如,向量、字符、字符串、代码、图形等)形式表示。
步骤530,基于判责模型对特征进行处理,确定服务事故的责任判定结果。在一些实施例中,步骤530可以由判定模块430执行。
在一些实施例中,责任判定结果可以包括服务请求的服务提供方是否为服务事故的责任方。在一些实施例中,若责任判定结果表明服务提供方为服务事故的责任方,则责任判定结果还包括服务提供方对应的目标责任场景。
在一些实施例中,“责任场景”可以在一定程度上体服务事故发生的原因。例如,服务提供方对应的责任场景可以是“服务提供方未及时接听电话”、“车辆未及时到达目的地”等。又例如,服务请求方对应的责任场景可以是“服务请求方发起错误的服务请求”、“服务请求方发起恶意投诉”等。
在一些实施例中,责任场景还可以体现服务事故对应的严重程度。在一些实施例中,不同的责任场景可以对应不同的严重程度。例如,责任场景“服务提供方导致车辆交通事故”对应的严重程度可以为“非常严重”;而责任场景“服务提供方未及时到达目的地”对应的严重程度可以为“中度严重”。
在一些实施例中,责任判定模型可以为机器学习模型。例如,机器学习模型可以包括但不限于卷积神经网络模型、循环神经网络模型、XGBoost模型、决策树模型、GBDT(Gradient Boosted Decision Tree/Grdient Boosted Regression Tree)模型、线性回归模型等。在一些实施例中,判责模型可以包括一个或多个用于执行分类任务的分类模型。在一些实施例中,分类模型可以包括但不限于KNN(k-nearestneighbors)模型、感知机模型、朴素贝叶斯模型、决策树模型、逻辑斯蒂回归模型、支持向量机模型、随机森林模型、神经网络模型等或其任意组合。
在一些实施例中,判定模块430可以将服务请求的特征输入判责模型,进而判责模型可以输出服务提供方是否为服务事故的责任方的判定结果。在这种情况下,判责模型可以包括用于执行二分类任务的子模型。该子模型可以用于对输入的服务请求的特征进行二分类,输出服务请求的服务提供方是否为服务事故的责任方。例如,输出责任判定结果为“是”或“否”。进一步地,判定模块430还可以基于预设规则确定服务提供方对应的目标责任场景。更多细节可以参见图10及其相关描述,此处不再赘述。
在一些实施例中,判定模块430可以将服务请求的特征输入判责模型,进而判责模型可以同时输出服务提供方是否为服务事故的责任方的判定结果以及对应的目标责任场景。在这种情况下,判责模型可以包括用于执行二分类任务的第一子模型以及用于执行多分类任务的第二子模型。第一子模型可以用于对输入的服务请求的特征进行二分类,输出服务请求的服务提供方是否为服务事故的责任方。第二子模型可以用于对输入的服务请求的特征进行多分类,输出目标责任场景(例如,“服务提供方未按时到达服务请求地点”、“服务提供方接收服务请求超时”)。更多细节可以参见图9及其相关描述,此处不再赘述。
在一些实施例中,判定模块430可以将服务请求的上述特征输入判责模型,判责模型可以输出服务请求方为服务事故责任方的概率,进一步判定模块430可以根据输出的概率确定服务提供方是否为服务事故的责任方。例如,如果模型输出的概率值大于预设概率阈值,则判定模块430可以确定服务请求方为服务事故责任方。预设概率阈值可以是系统默认值,也可以根据不同情况调整。
在一些实施例中,训练模块(图中未体现)可以基于多组标注的训练样本训练判责模型。具体地,训练模块可以将标注的训练样本输入初始判责模型,通过迭代更新初始判责模型的参数以确定最终的判责模型。在一些实施例中,训练模块可以通过各种方法(例如,梯度下降法)训练判责模型。关于训练判责模型的更多细节可以参见图12及其相关描述,此处不再赘述。
在一些实施例中,确定责任判定结果后,判定模块430还可以将责任判定结果发送至服务提供方。服务提供方在接收责任判定结果后,如果对责任判定结果有异议,可以向系统发送申诉信息。在一些实施例中,训练模块可以获取申诉信息并基于申诉信息和责任判定结果,更新判责模型。相应地,基于服务提供方的反馈,训练模块可以更新训练样本的标注,并基于不断扩充的训练样本优化已有的模型,从而提升判责模型输出责任判定结果的准确率。
根据本说明书实施例,可以基于判责模型判定服务请求的服务提供方是 否为服务事故的责任方,并确定对应的目标责任场景。还可以将目标责任场景发送至服务提供方,以方便服务提供方及时了解其作为服务事故责任方的原因。一方面可以提高服务提供方对判定其有责的认可度,另一方面,也可以帮助服务提供方更好地改善自身服务,以提供更优质的服务。
应当注意的是,上述有关流程500的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程500进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。
图6是根据本说明书一些实施例所示的提取服务请求的特征的示例性流程的流程图。在一些实施例中,流程600可以由处理设备(例如,处理设备112或其他处理设备)执行。例如,流程600可以以程序或指令的形式存储在存储设备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程600。在一些实施例中,流程600可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图6所示的操作的顺序并非限制性的。
在一些实施例中,流程600可以由处理设备112(例如,提取模块420)或其他处理设备执行。例如,流程600可以以程序或指令的形式存储在存储设备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程600。在一些实施例中,流程600可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图6所示的操作的顺序并非限制性的。
步骤610,对沟通信息进行分词处理,确定至少一个目标分词结果。
在一些实施例中,分词处理是指将连续文本(例如,一句话)按照一定的规则划分成若干词或短语序列。例如,假设文本为“北京机场进站口”,对该文本进行分词处理后的结果可以是“北京/机场/进站口”。相应地,目标分词结果是对沟通信息的文本进行分词处理后得到的结果。例如,目标分词结果可 以包括沟通信息中独立的词、短语、标点符号或具有确定意义的其它语义单元等。
在一些实施例中,进行分词处理的分词方法可以包括但不限于基于词典的分词方法、基于理解的分词方法、基于统计的分词方法(例如,N元文法(N-gram)模型、隐马尔科夫模型等)、基于规则的分词方法(例如,最小匹配算法、最大匹配算法、逆向最大匹配算法、逐字匹配算法、N-最短路径分词算法等)等。
在一些实施例中,处理设备(例如,处理设备112)可以对沟通信息进行分词,得到至少一个初步分词结果。进一步地,处理设备可以基于上述至少一个初步分词结果分别对应的属性特征,对上述至少一个初步分词结果进行过滤,确定至少一个目标分词结果。
在一些实施例中,处理设备可以基于预设规则确定至少一个初步分词结果。例如,处理设备可以基于最小词单元或符号单元对沟通信息进行分词以得到至少一个初步分词结果。相应地,至少一个初步分词结果可以包括名词、动词、标点符号等。仅作为示例,假设沟通信息为“2019/12/14 12:01司机:你好,我即将到达上车点”,至少一个初步分词结果可以包括“2019/12/14 12:01”、“司机”、“你好”、“,”、“我”、“即将”、“到达”、“上车点”。又例如,假设沟通信息为“时间:2019/12/14地点:某地人物:甲、乙”,至少一个初步分词结果可以包括“时间”、“2019/12/14”、“地点”、“某地”、“人物”、“甲”、“乙”。
在一些实施例中,以一个特定初步分词结果为例,属性特征可以指类型、性质、实际含义等能够体现其特性的信息。例如,属性特征可以包括初步分词结果对应的沟通方(例如,服务请求方、服务提供方、客服)、初步分词结果的重要性等。在本说明书中,“重要性”可以体现该初步分词结果在特定应用场景中的沟通中的重要程度或意义大小。例如,以本说明书中“在线打车”场景为例,假设初步分词结果为“我/去/步行街”,“去”在此处仅为一个表达动 作的连接动词,因此“去”的重要性低于“我”和“步行街”。
在一些实施例中,确定至少一个初步分词结果后,处理设备可以基于属性特征对其进行过滤处理。例如,处理设备可以将重要性较低的分词结果去除。仅作为示例,假设沟通信息为“2019/12/14 12:01司机:你好,我即将到达上车点”,至少一个初步分词结果为“2019/12/14 12:01”、“司机”、“你好”、“,”、“我”、“即将”、“到达”、“上车点”。处理设备可以将“你好”及标点等无异议的初步分词结果去除,得到至少一个目标分词结果。
在一些实施例中,处理设备可以基于语义信息对至少一个初步分词结果进行处理,确定至少一个目标分词结果。例如,处理设备可以根据最大语义单元,合并一个或多个对应最小语义单元的初步分词结果,确定目标分词结果。仅作为示例,假设沟通信息为“我去西湖旁边的浙江大学”,至少一个初步分词结果为“我/去/西湖/旁边/的/浙江/大学”,进而处理设备可以合并对应最小语义单元的初步分词结果以确定目标分词结果“我/去/西湖旁边/的/浙江大学”。
步骤620,将至少一个目标分词结果转化为至少一个目标分词向量。
在一些实施例中,处理设备(例如,处理设备112)可以通过分词模型将至少一个目标分词结果转化为至少一个目标分词向量。在一些实施例中,分词模型可以包括但不限于word2vec模型、N元文法(N-gram)模型、CBOW模型等。
在一些实施例中,处理设备(例如,处理设备112)可以通过编码算法将至少一个目标分词结果转化为至少一个目标分词向量。在一些实施例中,编码算法可以包括但不限于one-hot编码、N-gram算法等。
步骤630,基于至少一个目标分词向量,确定沟通信息对应的目标文本向量。
在一些实施例中,处理设备(例如,处理设备112)可以对至少一个目标分词向量进行拼接,确定沟通信息对应的目标文本向量。
在一些实施例中,处理设备(例如,处理设备112)可以按至少一个目标 分词结果在沟通信息中的出现顺序,将对应的至少一个分词向量拼接为二维矩阵(其中,每行表示一个分词向量,行数表示至少一个目标分词结果的数量),再通过卷积和最大池化得到沟通信息对应的目标文本向量。
在一些实施例中,处理设备(例如,处理设备112)还可以通过其他模型或算法确定目标文本向量。其中,上述其他模型和算法可以包括但不限于词袋模型、Word2Vec模型、N元文法(N-gram)、Bert模型等。
应当注意的是,上述有关流程600的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程600进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。
图7是根据本说明书一些实施例所示的示例性词向量生成模型的示意图。在一些实施例中,结合上文所述,处理设备(例如,处理设备112)可以通过分词模型将至少一个目标分词结果转化为至少一个目标分词向量。以word2vec模型为例,如图7所示,word2vec模型包括输入层、隐藏层和输出层。对于至少一个目标分词结果中的每一个,可以确定该目标分词结果对应的one-hot向量。例如,假设有3个目标分词结果,则3个目标分词结果分别对应的one-hot向量可以是(1,0,0)、(0,1,0)和(0,0,1)。随后,word2vec模型可以自动学习两个权重W与W’,用于one-hot向量的维度转换。具体地,基于权重W,可以将一个长度为K(如K=10000000)的one-hot向量V1(x1,x2,x3...,xk,...xV)转换为一个长度只有N(如N=5)的向量h(h1,h2,h3...,hi,...hN);而基于权重W’,则可以将将h转换回一个长度为K的向量V2(y1,y2,y3...,yj,...yV),其中,V1和V2分别对应两个目标分词结果且该两个目标分词结果在沟通信息文本中的距离小于预设阈值(预设阈值可以是系统默认值,也可以根据不同情况调整调整,例如,预设阈值可以为3)。因此,W V×N={w ki},即W为一个V×N的矩阵;W’ N×V={w’ ij},即W’为一个N×V的矩阵,而W中第j行的向量为目标分词结果Vj对应的目标分词向量。
图8是根据本说明书一些实施例所示的示例性文本向量生成过程的示意图。结合图7所述,可以基于词向量生成模型确定至少一个目标分词结果分别对应的至少一个目标分词向量。进一步地,如图8所示,可以通过卷积神经网络,基于至少一个目标分词向量,确定沟通信息对应的目标文本向量。具体地,可以将至少一个目标分词向量拼接二维矩阵,其中,二维矩阵中每行表示一个目标分词向量,二维矩阵的行数表示至少一个目标分词向量的数量。随后,可以通过卷积和最大池化,对二维矩阵进行进一步处理,得到沟通信息对应的目标文本向量。在一些实施例中,还可以通过优化log loss(对数损失函数),不断优化卷积神经网络中卷积层的权重,从而得到更优的目标文本向量。
图9是根据本说明书一些实施例所示的基于判责模型确定责任判断结果的示例性过程的示意图。在一些实施例中,流程900可以由处理设备112(例如,判定模块430)或其他处理设备执行。例如,流程900可以以程序或指令的形式存储在存储设备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程900。在一些实施例中,流程900可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。
结合步骤530所述,判定模块430可以将服务请求的特征输入判责模型,进而判责模型可以同时输出服务提供方是否为服务事故的责任方的判定结果以及对应的目标责任场景。具体地,如图9所示,判定模块430可以将服务请求的特征901输入判责模型902,并通过判责模型902的输出确定服务提供方是否为服务事故的责任方的判定结果903-1以及对应的目标责任场景903-2。具体地,判责模型902可以包括责任方判定子模型902-1和责任场景判定子模型902-2。其中,责任方判定子模型902-1可以用于判定服务请求的服务提供方是否为服务事故的责任方,责任场景判定子模型902-2可以用于判定服务事故的责任场景。
在一些实施例中,责任方判定子模型902-1和/或责任场景判定子模型902-2可以是分类模型。在一些实施例中,分类模型可以包括但不限于KNN (k-nearestneighbors)模型、感知机模型、朴素贝叶斯模型、决策树模型、逻辑斯蒂回归模型、支持向量机模型、随机森林模型、神经网络模型等或其任意组合。
图10是根据本说明书一些实施例所示的基于判责模型确定责任判断结果的示例性过程的示意图。在一些实施例中,流程1000可以由处理设备112(例如,判定模块430)或其他处理设备执行。例如,流程1000可以以程序或指令的形式存储在存储设备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程1000。在一些实施例中,流程1000可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图10所示的操作的顺序并非限制性的。
结合步骤530所述,判定模块430可以将服务请求的特征输入判责模型,进而判责模型可以输出服务提供方是否为服务事故的责任方的判定结果。如图10所示,判定模块430可以将服务请求的特征1001输入判责模型1002,并通过判责模型1002的输出确定服务提供方是否为服务事故的责任方的判定结果1003。
在一些实施例中,判定模块430还可以基于第一预设规则1005,得到服务提供方对应的目标责任场景1004。关于第一预设规则的更多细节可以参见图11及其相关描述,此处不再赘述。
在一些实施例中,判责模型1002可以是分类模型。在一些实施例中,分类模型可以包括但不限于KNN(k-nearestneighbors)模型、感知机模型、朴素贝叶斯模型、决策树模型、逻辑斯蒂回归模型、支持向量机模型、随机森林模型、神经网络模型等或其任意组合。
图11是根据本说明书一些实施例所示的确定目标责任场景的示例性过程的流程图。在一些实施例中,流程1100可以由处理设备(例如,判定模块430)或其他处理设备执行。例如,流程1100可以以程序或指令的形式存储在存储设 备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程1100。在一些实施例中,流程1100可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图11所示的操作的顺序并非限制性的。
步骤1110,基于第一预设规则对服务请求的特征进行处理,确定至少一个候选责任场景。
如前文所述,服务请求的特征可以包括服务请求的基本信息、服务提供方的画像信息、服务请求方的画像信息、服务提供方与服务请求方的沟通信息等或其任意组合。相应地,第一预设规则可以是与服务请求的特征相关的筛选、分析或判断条件。例如,第一预设规则可以是用于判断服务提供方的责任场景的标准作业程序(Standard OperationProcedure,SOP)。具体例如,SOP可以是针对被服务请求方投诉的服务请求(其发生了服务事故)所执行的一套服务提供方责任判定标准。
在一些实施例中,可以通过对样本服务请求(例如,历史服务请求)的相关信息(例如,历史服务请求对应的投诉信息)进行聚类分析,以抽象出服务提供方的责任场景,从而确定第一预设规则。在一些实施例中,可以通过聚类算法分析历史服务请求的特征,将类似的特征聚类分组,并基于上述分组后的特征抽象确定相应的责任场景。在一些实施例中,聚类算法可以包括k均值聚类算法、模糊c均值聚类算法、分层聚类算法、高斯聚类算法、基于最小生成树(MST)的聚类算法、核k均值聚类算法、基于密度的聚类算法等。
在一些实施例中,第一预设规则还可以是预设的判责经验。例如,第一预设规则可以是基于历史判责数据确定的判责经验。在一些实施例中,可以根据不同情况调整第一预设规则。例如,不同的城市或地区可以对应不同的第一预设规则;不同的时间段可以对应不同的第一预设规则等。
步骤1120,根据至少一个候选责任场景分别对应的优先级,确实服务提供方对应的目标责任场景。
在一些实施例中,优先级可以与投诉转化率、投诉次数、投诉方式等相关。
以一个特定的候选责任场景为例,该候选责任场景对应的投诉转化率指该候选责任场景下被取消的服务请求中被服务请求方投诉的服务请求的比例。例如,假设预设时间段内(例如,过去3个月)在该候选责任场景下被取消的服务请求的数量为N,其中被服务请求方投诉的服务请求的数量为X,则投诉转化率为X/N。可以理解,投诉率越高,该责任场景的优先级越高。例如,候选责任场景A的投诉转化率为20%,候选责任场景B的投诉转化率为10%。那么,候选责任场景A的优先级高于候选责任场景B。
仍以一个特定的候选责任场景为例,该候选责任场景对应的投诉次数指该候选责任场景下服务请求方投诉的次数(即被取消的服务请求中被服务请求方投诉的服务请求的数量)。例如,假设预设时间段内(例如,过去3个月)在该候选责任场景下被取消的服务请求的数量为N,其中被服务请求方投诉的服务请求的数量为X,则投诉次数为X。可以理解,投诉次数越多,该责任场景的优先级越高。在一些实施例中,投诉次数还可以包括平均投诉次数(例如,每天的投诉次数)。
投诉方式指服务请求方进行投诉时所采用的方式。例如,视频投诉、语音投诉、现场投诉、文字投诉等。投诉方式可以体现用户对服务请求的投诉程度。可以理解,投诉方式越直观,对应的投诉程度越高。例如,视频投诉比语音投诉更加直观,则视频投诉的投诉程度高于语音投诉的投诉程度。
在一些实施例中,处理设备(例如,处理设备112)可以从存储设备130、服务提供方终端150、服务请求方终端140中的一个或多个,获取至少一个候选责任场景的历史数据。处理设备可以基于历史数据确定至少一个候选责任场景分别对应的投诉转化率、投诉次数、投诉方式等。进一步地,处理设备可以基于投诉转化率、投诉次数、投诉方式等中的一个或多个,确定至少一个候选责任场景分别对应的优先级。例如,处理设备可以仅基于投诉转化率确定至少一 个候选责任场景分别对应的优先级。投诉转化率越高,优先级越高。又例如,处理设备可以仅基于投诉次数确定至少一个候选责任场景分别对应的优先级。投诉次数越多,优先级越高。又例如,处理设备可以仅基于投诉方式确定至少一个候选责任场景分别对应的优先级。投诉方式越直观,优先级越高。又例如,处理设备可以基于投诉转化率、投诉次数、投诉方式等中的多个确定至少一个候选责任场景分别对应的优先级,其中投诉转化率、投诉次数、投诉方式等可以对应不同的权重。权重可以是系统默认值,也可以根据不同情况调整。
在一些实施例中,优先级可以是预先设定的。预先设定的优先级可以存储在本说明书任意位置所述的存储设备(例如,存储设备130)中。处理设备可以访问存储设备并从中读取至少一个候选责任场景分别对应的优先级。
在一些实施例中,也可以通过其他方式确定优先级。例如,可以通过第一预设规则或其他判责经验抽象出优先级。本说明书对于优先级的确定方式不作限制。
在一些实施例中,可以对优先级进行动态更新。例如,以一个特定的候选责任场景为例,随着时间推移,该候选责任场景下被取消的服务请求(以及其中被投诉的服务请求)的数量在随之变化。与该候选责任场景的优先级相关的参数(例如,投诉转化率、投诉次数、投诉方式)也在随之变化。相应地,也可以该候选责任场景的优先级进行动态更新,以更准确地确定最终的目标责任场景。在一些实施例中,可以在每个固定时段,对优先级进行更新。固定时段可以为一小时、一天、一周等。在一些实施例中,也可以不按固定频率对优先级进行更新,而是满足预设条件时则进行更新,例如,某候选责任场景下被取消的服务请求的增加数量超过预设阈值。
在一些实施例中,处理设备可以将优先级最高的候选责任场景确定为目标责任场景。例如,处理设备可以基于优先级对至少一个候选责任场景进行排序,并将排序最高的候选责任场景确定为目标责任场景。在一些实施例中,处理设备可以将优先级超过预设阈值的候选责任场景确定为目标责任场景。
根据本说明书一些实施例,根据判责模型输出的责任判定结果为服务提供方是服务事故的责任方时,可以进一步基于规则确定至少一个候选责任场景。并进一步根据至少一个候选责任场景分别对应的优先级,确定目标责任场景。例如,将优先级最高的候选责任场景确定为目标责任场景并反馈给服务提供方。相应地,可以有效地提高服务提供方对反馈结果的认可度。
应当注意的是,上述有关流程1100的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程1100进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。例如,处理设备可以直接基于投诉转化率、投诉次数、投诉方式等确定目标责任场景,而无需确定至少一个候选责任场景分别对应的优先级。
图12是根据本说明书一些实施例所示的训练判责模型的示例性过程的流程图。在一些实施例中,流程1200可以由处理设备112(例如,获取模块410)或其他处理设备执行。例如,流程1200可以以程序或指令的形式存储在存储设备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程1200。在一些实施例中,流程1200可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图12所示的操作的顺序并非限制性的。
步骤1210,获取多个样本服务请求。
在一些实施例中,样本服务请求可以是发生服务事故的历史服务请求。例如,样本服务请求可以是被取消的历史服务请求。又例如,样本服务请求可以是被取消且被服务请求方投诉的历史服务请求。在一些实施例中,历史服务请求可以是过去一定时间段内的服务请求,例如,1个月内、3个月内、12个月内的服务请求等。
在一些实施例中,处理设备(例如,处理设备112)可以从存储设备(例如,存储设备130)、服务提供方终端150和服务请求方终端140中的一个或多个以多种方式获取样本服务请求。例如,处理设备可以从存储设备中随机抽取 一个或多个样本服务请求。又例如,处理设备可以从存储设备中获取所有发生服务事故的历史服务请求,并基于上述历史服务请求建立样本服务请求库。处理设备可以从样本服务请求库中随机抽取一个或多个样本服务请求。
步骤1220,对多个样本服务请求进行标注,得到多个样本服务请求分别对应的标注信息。
在一些实施例中,对于多个样本服务请求中的每一个,标注信息至少包括样本服务请求的样本服务提供方是否为服务事故的样本责任方。相应地,进行标注后的样本服务请求可以分为两类:一类是样本服务提供方有责的服务请求,另一类是样本服务提供方无责的服务请求。
在一些实施例中,结合图10所述,判责模型的输出可以是服务提供方是否为服务事故的责任方的判定结果。相应地,样本服务请求对应的标注信息可以仅标注样本服务提供方是否为服务事故的样本责任方。
在一些实施例中,结合图9所述,判责模型可以同时输出服务提供方是否为服务事故的责任方的判定结果以及对应的目标责任场景。相应地,对于多个样本服务请求中的每一个,若标注信息表明样本服务提供方样本责任方(可以称之为“第一标注信息”),标注信息还可以包括样本责任方对应的样本目标责任场景(可以称之为“第二标注信息”)。
在一些实施例中,处理设备可以基于多种方式对多个样本服务请求进行标注。例如,处理设备可以将样本服务提供方是样本责任方对应的样本服务请求标注为“1”,将样本服务提供方不是样本责任方对应的样本服务请求标注为“0”。
在一些实施例中,处理设备可以基于第二预设规则,确定多个样本服务请求分别对应的标注信息。例如,因“司机迟到”而被取消的样本服务请求对应的标注信息为“样本责任方为样本服务提供方”、“样本目标责任场景为样本服务提供方迟到”。结合图11所述,第二预设规则与第一预设规则可以相同或不同。
在一些实施例中,可以对样本服务请求采用人工标注、模型标注等多种方法进行标注。对于样本服务请求的标注方式,本说明书不作限制。
步骤1230,提取所述多个样本服务请求分别对应的样本特征。
结合步骤520所述,样本特征可以包括样本服务请求的基本信息、样本服务提供方的画像信息、样本服务请求方的画像信息、样本服务提供方与样本服务请求方的沟通信息等或其任意组合。关于样本特征的更多细节可以参见图5及其相关描述,此处不再赘述。
步骤1240,基于多个样本服务请求分别对应的标注信息和样本特征,训练得到判责模型。
在本说明书一些实施例中,判责模型用于判定服务提供方是否是服务事故的责任方。相应地,可以将样本服务提供方为服务事故的样本责任方的样本服务请求作为正样本,将样本服务提供方不是服务事故的样本责任方的服务请求作为负样本。进一步地,可以根据正样本和负样本所对应的样本特征,基于预设算法进行模型训练,得到判责模型。
在一些实施例中,预设算法可以包括极端梯度提升算法(eXtreme Gradient Boosting,Xgboost)、支持向量机算法(Support Vector Machine,SVM)、随机森林算法(Random Forest,RF)等。
在一些实施例中,预设算法还可以包括包括神经网络算法、排序算法、回归算法、基于实例的算法、归一化算法、决策树算法、贝叶斯算法、聚类算法、关联规则算法、深度学习算法、降维算法等或其任意组合。神经网络算法可以包括递归神经网络、感知器神经网络、反向传播、霍普菲尔得(Hopfield)网络、自组织映射(SOM)、学习矢量量化(LVQ)等。回归算法可以包括普通最小二乘法、逻辑回归、逐步回归、多元自适应回归样条、本地散点平滑估计等。排序算法可以包括插入排序、选择排序、合并排序、堆排序、冒泡排序、希尔排序(shell sort)、梳排序、计数排序、桶排序、基数排序等。基于实例的算法可以包括K最近邻(KNN)、学习矢量量化(LVQ)、自组织映射(SOM) 等。归一化算法可以包括岭回归、套索算法(LASSO)、弹性网络等。决策树算法可以包括分类和回归树(CART)、迭代二叉树三代(ID3)、卡方自动交互检测(CHAID)、决策树桩、多元自适应回归样条(MARS)、梯度增强机(GBM)等。贝叶斯算法可以包括朴素贝叶斯算法、平均一阶估计器(AODE)或贝叶斯信念网络(BBN)等。基于核的算法可以包括、径向基函数(RBF)或线性判别分析(LDA)等。聚类算法可以包括k均值聚类算法、模糊c均值聚类算法、分层聚类算法、高斯聚类算法、基于最小生成树(MST)的聚类算法、核k均值聚类算法、基于密度的聚类算法等。关联规则算法可以包括先验算法或等价类变换(Eclat)算法等。深度学习算法可以包括受限玻尔兹曼机(RBN)、深度信念网络(DBN)、卷积网络、栈式自编码器等。降维算法可以包括主成分分析(PCA)、偏最小二乘回归(PLS)、萨蒙(Sammon)映射、多维标度(MDS)、投影寻踪等。
在一些实施例中,判责模型可以是监督学习模型。处理设备可以基于用于训练监督学习模型的算法训练得到判责模型。示例性算法可以包括梯度提升决策树(GBDT)算法、决策树算法、随机森林算法、逻辑回归算法、支持向量机(SVM)算法、朴素贝叶斯算法、自适应增强算法、K最近邻(KNN)算法、马尔可夫链算法等或其任意组合。
在一些实施例中,判责模型可以是无监督学习模型。处理设备可以基于用于训练无监督学习模型的算法训练得到判责模型。示例性算法可以包括k均值聚类算法、分层聚类算法、具有噪声的基于密度的聚类方法(DBSCAN)算法、自组织映射算法等或其任意组合。
在一些实施例中,判责模型可以是强化学习模型。处理设备可以基于用于训练强化学习模型的算法训练得到判责模型。示例性算法可包括深度强化学习算法、逆强化学习算法、学徒学习算法等或其任意组合。
在一些实施例中,处理设备可以通过训练初始判责模型以确定判责模型。初始判责模型可以存储在存储设备(例如,数据库130)或其他存储器(例如, ROM或RAM)中。
在一些实施例中,当训练的判责模型满足预设条件时,训练结束。其中,预设条件可以是损失函数结果收敛或小于预设阈值、迭代次数达到预设次数等。
模型训练过程也就是构建服务提供方是否为服务事故责任方与服务请求的特征之间的映射关系。相应地,可以根据模型完成后所得到的映射关系,根据任意服务请求的特征,确定该服务请求的服务提供方是否为服务事故的责任方。可以有效提高服务请求的判定结果的准确性和可靠性,同时,提高服务请求的判定效率。
综上所述,根据本说明书实施例所提供的判责模型的训练方法,获取被取消的历史服务请求作为样本服务请求,进行样本标注,并根据各标注后的样本服务请求的特征训练获取判责模型,可以使得获取的判责模型更加可靠,进一步地,根据该判责模型进行服务请求是否有责的判定时,判定结果准确性和可靠性更好,从而提高了服务提供方和服务请求方的服务体验度。
图13是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图。在一些实施例中,流程1300可以由处理设备112或其他处理设备执行。例如,流程1300可以以程序或指令的形式存储在存储设备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程1300。在一些实施例中,流程1300可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图13所示的操作的顺序并非限制性的。
步骤1310,获取服务请求。在一些实施例中,该步骤1310可以由获取模块410执行。关于获取服务请求的更多细节参见步骤510及其相关描述,此处不再赘述。
步骤1320,提取服务请求的特征,特征至少包括沟通信息。在一些实施例中,该步骤1320可以由提取模块420执行。关于提取服务请求的特征的更多细节参见步骤520及其相关描述,此处不再赘述。
步骤1330,基于判责模型对特征进行处理,确定服务事故的责任判定结果。在一些实施例中,该步骤1330可以由判定模块430执行。关于确定服务事故的责任判定结果的更多细节参见步骤530及其相关描述,此处不再赘述。
图14是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图。在一些实施例中,流程1400可以由处理设备112或其他处理设备执行。例如,流程1400可以以程序或指令的形式存储在存储设备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程1400。在一些实施例中,流程1400可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图14所示的操作的顺序并非限制性的。
步骤1410,获取服务请求。在一些实施例中,该步骤1410可以由获取模块410执行。关于获取服务请求的更多细节参见步骤510及其相关描述,此处不再赘述。
步骤1420,提取服务请求的特征。在一些实施例中,该步骤1420可以由获取模块410执行。关于提取服务请求的特征的更多细节参见步骤520及其相关描述,此处不再赘述。
步骤1430,基于判责模型对特征进行处理,确定服务事故的责任判定结果,责任判定结果包括:服务请求的服务提供方是否为服务事故的责任方,若服务提供方为责任方,服务提供方对应的目标责任场景。在一些实施例中,该步骤1430可以由获取模块430执行。关于确定服务事故的责任判定结果的更多细节参见步骤530及其相关描述,此处不再赘述。
图15是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图。在一些实施例中,流程1500可以由处理设备112或其他处理设备执行。例如,流程1500可以以程序或指令的形式存储在存储设备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程1500。在一些实施例中,流程1500可以利用以下未 描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图15所示的操作的顺序并非限制性的。
在一些实施例中,服务请求通过订单的方式进行,例如,可以通过发起订单发起服务请求,对订单的评价实际是对服务请求的评价,在一些实施例中,服务请求的基础信息可以包括订单信息。如步骤510所述,服务请求发生服务事故,可能导致服务请求被取消。在一些实施例中,判定服务事故的责任(简称责任判定)可以包括以下步骤:
步骤1510,获取已取消的服务请求的订单信息、已取消的服务请求对应的服务提供方与服务请求方的沟通信息。
在一些实施例中,订单信息为订单基本信息,对于共享出行领域,订单信息可以包括以下一种或多种的组合:接驾距离、乘客上车地点、接单地点、接单时间、取消订单时间等。
步骤1520,根据订单信息和沟通信息,利用预存的判责模型对已取消的服务请求订进行责任判定。
在一些实施例中,预存的判责模型为训练好或提前建立好的机器学习模型。关于判责模型的训练或建立参见图17及其相关描述,此处不再赘述。
关于基于判责模型进行责任判责的更多细节参见图5及其相关描述。
本说明书一些实施例提供的判定服务事故的责任的方法,在确定出现已取消的服务请求后,获取该已取消的服务请求的订单信息以及服务提供方(例如,驾驶员或司机)与服务请求方(例如,乘客)的沟通信息,进而根据订单信息和沟通信息,利用预存的责任判定模型对已取消的服务请求进行责任判定,例如,判断驾驶员取消服务请求的这一行为是否是驾驶员的责任,通过本说明书的一些实施例,能够实现结合服务提供方与服务请求方的沟通信息对服务请求被取消进行判责,提升判责准确率,避免因误判而导致的服务提供方被投诉。
在一些实施例中,在服务提供方接到服务请求,在准备提供或者提供服务过程中(例如,司机前来接乘客的途中),服务请求方和服务提供方可以通 过应用程序内自带的聊天功能给对方发送信息,例如,驾驶员和乘客对上车点及上车时间等可能发生变动的问题进行协商,从而形成驾驶员与乘客的沟通信息。
图16是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图。在一些实施例中,流程1600可以由处理设备(例如,处理设备112或其他处理设备)执行。例如,流程1600可以以程序或指令的形式存储在存储设备(例如,存储设备130或处理设备的存储单元)中,当处理器220或图4所示的模块执行程序或指令时,可以实现流程1600。在一些实施例中,流程1600可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图16所示的操作的顺序并非限制性的。
如图15所述,服务请求的基础信息可以包括订单信息。如步骤510所述,服务请求发生服务事故,可能导致服务请求被取消。在一些实施例中,判定服务事故的责任(简称责任判定)可以包括以下步骤:
步骤1610,建立判责模型。
关于建立判责模型的更多细节可以参见图17及其相关描述,此处不再赘述。
步骤1620,获取已取消的服务请求的订单信息、已取消的服务请求对应的服务提供方与服务请求方的沟通信息。具体参见步骤1510,此处不再赘述。
步骤1630,根据订单信息和沟通信息,利用判责模型对已取消的服务请求进行责任判定。
在该实施例中，建立用于对服务提供方取消服务请求进行责任判定的判责模型。具体地，获取大量样本服务请求，将样本服务提供方和样本服务请求方的沟通信息、订单信息、历史信息作为特征训练得到判责模型，帮助判责模型对各种场景下的服务请求取消做出正确判责。在一些实施例中，判责模型可以是XGBoost(Extreme Gradient Boosting,极端梯度提升)。具体参见图17及其相关描述，此处不再赘述。
图17是根据本说明书一些实施例所示的用于建立判责模型的方法的示例性流程图。如图15和图16及其相关说明可以看出,可以基于判责模型对订单信息和沟通信息的处理进行责任判定。相应的,流程1700中建立或训练判责模型可以包括以下步骤:
步骤1710,获取样本服务请求的样本订单信息、样本服务请求对应的样本服务提供方与样本服务请求方的样本沟通信息。
步骤1720,获取预存的历史信息。
如步骤520所述，服务提供方的画像信息可以是根据服务提供方的社会属性、生活习惯、提供服务行为或历史信息等信息而抽象出的标签化信息，服务请求方的画像信息是指根据服务请求方的社会属性、生活习惯、请求服务行为或历史信息等信息而抽象出的标签化信息。在一些实施例中，预存的历史信息可以是服务请求方的历史信息和/或服务提供方的历史信息。
在一些实施例中，对于共享出行领域，历史信息包括以下一种或其组合：驾驶员服务分数、驾驶员被投诉率、乘客信用分数等。在一些实施例中，驾驶员服务分数、驾驶员被投诉率、乘客信用分数可以为预设时间段内得到的，例如，驾驶员过去一个月的服务分数、过去一个月被乘客投诉的投诉率等。
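以下给出一个按预设时间段（例如过去一个月）统计历史信息的示意性代码草图，假设使用pandas库，其中的字段名与数据均为假设示例：

```python
# 示意性示例：统计驾驶员过去一个月的平均服务分数与被投诉率
import pandas as pd

# 假设的历史订单记录：driver_id、完成时间、服务分数、是否被投诉
records = pd.DataFrame({
    "driver_id": ["d1", "d1", "d1", "d2"],
    "finish_time": pd.to_datetime(["2019-11-20", "2019-12-01", "2019-12-10", "2019-12-11"]),
    "service_score": [4.9, 4.7, 5.0, 4.2],
    "complained": [0, 1, 0, 0],
})

cutoff = pd.Timestamp("2019-12-14") - pd.Timedelta(days=30)   # 预设时间段：过去一个月
recent = records[records["finish_time"] >= cutoff]

history = recent.groupby("driver_id").agg(
    avg_service_score=("service_score", "mean"),   # 过去一个月的平均服务分数
    complaint_rate=("complained", "mean"),         # 过去一个月的被投诉率
)
print(history)
```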
步骤1730,根据样本服务请求的样本订单信息、样本服务请求对应的样本服务提供方与样本服务请求方的样本沟通信息和历史信息,建立判责模型。
在一些实施例中,建立判责模型可以包括:对样本服务请求对应的样本服务请求方与样本服务提供方的样本沟通信息进行分词处理,得到多个单词;将每个单词转换为词向量;将多个词向量转换为文本向量;根据文本向量、样本服务请求的样本订单信息和历史信息,建立责任判定模型。
在该实施例中,将样本沟通信息进行分词,得到每个单词的词向量。接着按单词的出现顺序将词向量拼接为二维矩阵,再通过卷积和最大池化得到样本沟通信息的文本向量,最后将样本沟通信息的文本向量与样本订单信息和历史信息相结合去预测服务提供方(例如,司机)是否有责。
在一些实施例中,在将每个单词转换为词向量之前,还包括:对多个单词进行沟通方标记;和/或对多个单词进行有效性过滤。
在一些实施例中,在将每个单词转换为词向量之前,可将单词进行标记,即标记单词的说话方是样本服务提供方(例如,司机)还是样本服务请求方(例如,乘客),或标记单词的时间。或者,将无意义的词(即,重要性低的词,例如,你好)去掉,即丢弃掉冗余的文本内容,以提高模型训练的速度和准确度。
通过样本订单信息、历史信息,结合样本沟通信息建立有效的判责模型,以提高判责准确性。
在一些实施例中,基于样本沟通信息,建立判责模型(例如,服务提供方取消服务请求的判责模型),包括以下步骤:
步骤1、对样本服务提供方和样本服务请求方的样本沟通信息的样本沟通内容进行预处理。
在一些实施例中,首先需要将样本沟通内容进行分词,即将“2019/12/14 12:01司机:你好,我即将到达上车点”分为“2019/12/14 12:01”、“司机”、“你好”、“,”、“我”、“即将”、“到达”、“上车点”。其次需要标记分词后的文本内容的时间戳、说话方是样本服务提供方还是样本服务请求方,以及将无意义的词(例如,“你好”)去掉,即丢弃掉冗余的文本内容。
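以下是上述分词、标记与有效性过滤步骤的一个示意性代码草图，假设使用jieba分词库，其中的停用词表、正则表达式与消息格式均为假设示例：

```python
# 示意性示例：对样本沟通内容进行分词、标记说话方与时间戳，并过滤无意义词
import re
import jieba

STOPWORDS = {"你好", "，", "。", "的"}     # 假设的无意义词(停用词)表

def preprocess(message):
    # message形如"2019/12/14 12:01司机：你好，我即将到达上车点"
    m = re.match(r"(\d{4}/\d{2}/\d{2} \d{2}:\d{2})(司机|乘客)：(.*)", message)
    timestamp, speaker, content = m.group(1), m.group(2), m.group(3)
    words = jieba.lcut(content)                         # 分词
    valid = [w for w in words if w not in STOPWORDS]    # 有效性过滤，丢弃冗余文本
    # 为每个有效单词标记说话方与时间戳
    return [{"word": w, "speaker": speaker, "time": timestamp} for w in valid]

print(preprocess("2019/12/14 12:01司机：你好，我即将到达上车点"))
```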
步骤2、将单词转换为数字向量。
通过word2vec模型处理分词后的文本内容，得到每个单词的词向量，其中word2vec模型是用来产生词向量的模型。word2vec模型的具体原理参见图7及其相关描述，此处不再赘述。
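以下是利用word2vec模型得到词向量的一个示意性代码草图，假设使用gensim 4.x版本的Word2Vec实现，其中的语料与参数均为假设示例：

```python
# 示意性示例：基于分词后的沟通文本训练word2vec模型，得到每个单词的词向量
from gensim.models import Word2Vec

# 假设的分词后语料：每条沟通信息对应一个单词列表
corpus = [
    ["司机", "即将", "到达", "上车点"],
    ["乘客", "马上", "出发", "上车点"],
]

model = Word2Vec(sentences=corpus, vector_size=50, window=5, min_count=1, sg=1)
vector = model.wv["上车点"]        # 某个单词的50维词向量
print(vector.shape)                # (50,)
```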
步骤3、卷积神经网络的预训练。
在一些实施例中，在已经标注好的训练集上使用文本信息去预测服务提供方最终是否有责，其目的是为了将一个完整的沟通信息的所有单词向量转化为一个文本向量，具体方式如下：将一个服务请求对应的沟通信息经过步骤1、步骤2的处理后得到有效单词和对应的词向量，在本说明书中，目标分词结果包括有效单词。接着如图8所示，按有效单词的出现顺序拼接成二维矩阵（每行都是对应一个单词的词向量，而行数则代表沟通信息中的有效单词有多少个），再通过卷积（卷积层层数设置为C）和最大池化得到这条沟通信息的文本向量（大小为1×C），并使用这个文本向量去预测最终司机是否有责。此外，可通过优化log loss（对数损失函数），来不断优化卷积神经网络中卷积层的权重，并最终得到更优的文本向量。
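以下给出上述卷积与最大池化过程的一个示意性代码草图，假设使用PyTorch实现，卷积输出通道数即文本向量长度C，并通过对数损失（log loss）进行预训练；其中的网络结构与参数均为假设示例：

```python
# 示意性示例：将一条沟通信息的词向量矩阵经卷积和最大池化得到1×C的文本向量
import torch
import torch.nn as nn

C = 16  # 假设的文本向量长度(卷积输出通道数)

class TextCNN(nn.Module):
    def __init__(self, embed_dim=50, num_filters=C, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size, padding=1)
        self.fc = nn.Linear(num_filters, 1)      # 用文本向量预测服务提供方是否有责

    def forward(self, x):                         # x: (batch, 单词数, 词向量维度)
        h = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, C, 单词数)
        text_vec = torch.max(h, dim=2).values           # 最大池化 -> (batch, C)的文本向量
        logit = self.fc(text_vec).squeeze(-1)
        return text_vec, logit

model = TextCNN()
words = torch.randn(1, 8, 50)                    # 假设一条沟通信息有8个有效单词
label = torch.tensor([1.0])                      # 1表示样本服务提供方有责
text_vec, logit = model(words)
loss = nn.BCEWithLogitsLoss()(logit, label)      # 对数损失，用于优化卷积层权重
loss.backward()
print(text_vec.shape)                            # torch.Size([1, 16])
```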
步骤4、将样本服务提供方和样本服务请求方的样本沟通信息的文本特征加入初始判责模型进行训练。
在一些实施例中,样本沟通信息的文本特征可以包括样本沟通信息的文本向量。
在一些实施例中,使用XGBoost作为预测服务提供方是否有责的模型,将样本沟通信息通过步骤1至步骤3得到的长度为C的文本向量与样本订单信息、历史信息一起作为样本特征加入模型进行训练。
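以下给出将文本向量与样本订单信息、历史信息拼接后训练XGBoost判责模型的示意性代码草图，假设使用xgboost库的sklearn接口，其中的特征与参数均为假设示例：

```python
# 示意性示例：拼接文本向量、订单信息与历史信息，训练XGBoost判责模型
import numpy as np
import xgboost as xgb

n_samples, C = 500, 16
text_vectors = np.random.rand(n_samples, C)      # 步骤1至步骤3得到的长度为C的文本向量(假设)
order_features = np.random.rand(n_samples, 5)    # 样本订单信息特征(如接驾距离等，假设)
history_features = np.random.rand(n_samples, 3)  # 历史信息特征(如驾驶员被投诉率等，假设)
labels = np.random.randint(0, 2, n_samples)      # 标注信息：服务提供方是否有责

X = np.hstack([text_vectors, order_features, history_features])  # 拼接为样本特征
clf = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
clf.fit(X, labels)

print(clf.predict_proba(X[:3])[:, 1])            # 前3个样本的有责概率
```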
图18是根据本说明书一些实施例所示的判定服务事故的责任的装置的模块图。在一些实施例中,服务请求处理装置1800包括:提取模块420和判定模块430。
如前所述,提取模块420用于提取服务请求的特征。在一些实施例中,提取模块420可以用于获取已取消订单的订单信息、已取消订单对应的服务请求方与服务提供方的沟通信息。
如前所述,判定模块430用于确定责任判定结果。在一些实施例中,判定模块430可以根据订单信息和沟通信息,利用预存的责任判定模型对已取消的服务请求进行责任判定。
装置1800在确定出现已取消的服务请求后，获取该已取消的服务请求的订单信息以及服务提供方（例如，驾驶员）与服务请求方（例如，乘客）的沟通信息，进而根据订单信息和沟通信息，利用预存的责任判定模型对已取消的服务请求进行责任判定，例如，判断驾驶员取消订单的这一行为是否是驾驶员的责任。
图19是根据本说明书一些实施例所示的判定服务事故的责任的装置的模块图。在一些实施例中，服务请求处理装置1900包括：提取模块420、判定模块430和模型建立模块1910。
关于提取模块420和判定模块430的更多细节参见图18。
如前所述，获取模块410可以用于获取判责模型。在一些实施例中，获取模块410可以包括多个子模块。例如，获取模块410可以包括模型建立模块1910。在一些实施例中，模型建立模块1910可以建立判责模型，例如，模型建立模块1910可以获取样本服务请求的样本订单信息、样本服务请求对应的样本服务提供方与样本服务请求方的样本沟通信息，获取预存的历史信息，根据样本订单信息、样本沟通信息和历史信息，建立责任判定模型。关于模型建立模块1910建立判责模型的更多细节参见图17及其相关描述，此处不再赘述。
图20是根据本说明书一些实施例所示的训练判责模型的示例性过程的流程图。流程2000的执行主体可以是计算机、服务器等具有数据处理功能的设备。
如步骤1210所述,样本服务请求可以是被取消且被服务请求方投诉的历史服务请求。在一些实施例中,训练判责模型可以包括:
步骤2010,根据服务请求方的服务投诉信息,从被取消的多个历史服务请求中,确定被投诉的服务请求作为样本服务请求。
在一些实施例中,服务请求方在请求服务的过程中,会因为多种原因,对请求的服务产生不满,从而会通过服务请求方的终端设备对所请求的服务进行投诉,生成服务投诉信息。针对大量的服务请求方生成的服务投诉信息可以存储于服务平台的数据库(例如,存储设备130)中,以进行大数据分析时,作为历史数据进行参考。
在一些实施例中，可以从数据库（例如，存储设备130）中获取服务请求方对于取消的多个历史服务请求的服务投诉信息。例如：服务请求A、服务请求B、服务请求C，在过去的一段时间内，被至少一个服务请求方投诉，也即，服务请求A、服务请求B、服务请求C均对应有多个服务投诉信息。故可以将具有服务投诉信息的多个服务请求作为样本服务请求。
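以下是从被取消的历史服务请求中筛选出被投诉的服务请求作为样本服务请求的示意性代码草图，其中的数据结构与字段名均为假设示例：

```python
# 示意性示例：根据服务投诉信息，从被取消的历史服务请求中确定样本服务请求
cancelled_requests = [
    {"request_id": "A", "complaints": ["未及时接听电话"]},
    {"request_id": "B", "complaints": []},
    {"request_id": "C", "complaints": ["绕路", "态度差"]},
]

# 将具有服务投诉信息的被取消服务请求作为样本服务请求
sample_requests = [r for r in cancelled_requests if r["complaints"]]
print([r["request_id"] for r in sample_requests])   # ['A', 'C']
```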
关于历史服务请求和样本服务请求的更多细节可以参见图12及其相关描述,此处不再赘述。
步骤2020,根据第一预设规则,对样本服务请求进行标注,使得样本服务请求中标注有:第一标注信息,第一标注信息用于指示样本服务提供方是否为服务事故的样本责任方。
如步骤1110所述，第一预设规则可以是SOP。关于第一预设规则的更多细节可以参见步骤1110，第一标注信息的更多细节参见步骤1220，此处不再赘述。
步骤2030,对样本服务请求进行特征提取,得到样本服务请求的样本特征。
在一些实施例中,被标注后的样本服务请求可以分为两类,一类是样本服务提供方有责的样本服务请求,另一类是样本服务提供方无责的样本服务请求。每一类样本服务请求中均可以包括多个样本服务请求。在一些实施例中,可以获取每一个标注后的样本服务请求的样本特征。
关于样本特征的更多细节可以参见步骤1230,此处不再赘述。
步骤2040,根据第一标注信息以及样本特征,进行模型训练,得到判责模型,判责模型用于判定服务提供方是否为服务事故的责任方。
关于模型训练的更多细节可以参见步骤1240,此处不再赘述。
本说明书的一些实施例所提供的判责模型的训练方法，通过获取历史的具有服务投诉信息的被取消的服务请求，将其作为样本服务请求，进行样本标注，并根据各标注后的样本服务请求的样本特征，训练获取判责模型，使得获取的判责模型更加可靠，进一步地，根据该判责模型进行服务请求是否有责的判定时，判定结果准确性和可靠性更好，从而提高了服务提供方和服务请求方的服务请求体验度。
图21是根据本说明书一些实施例所示的训练判责模型的示例性过程的流程图。流程2100的执行主体可以是计算机、服务器等具有数据处理功能的设备。
如步骤1210所述,样本服务请求可以是被取消且被服务请求方投诉的历史服务请求。在一些实施例中,训练判责模型可以包括:
步骤2010,根据服务请求方的服务投诉信息,从被取消的多个历史服务请求中,确定被投诉的服务请求作为样本服务请求。
步骤2020,根据第一预设规则,对样本服务请求进行标注,使得样本服务请求中标注有:第一标注信息,第一标注信息用于指示服务提供方是否为服务事故的责任方。
步骤2110，根据第一预设规则对应的责任场景，对样本服务请求进行标注，使得样本服务请求中标注有：第二标注信息，第二标注信息用于指示样本服务提供方的样本目标责任场景。
服务提供方的责任场景是指：对于被取消且被服务请求方投诉的服务请求，当判定服务请求被取消是由服务提供方导致时，服务提供方导致服务请求被取消且被投诉的具体原因。
关于第二标注信息的更多细节可以参见步骤1220及其相关描述,此处不再赘述。
步骤2030,对样本服务请求进行特征提取,得到样本服务请求的样本特征。
步骤2120,根据第一标注信息、第二标注信息以及样本特征,进行模型训练,得到判责模型;判责模型用于判定服务提供方是否为服务事故的责任方,还用于判定服务请求对应服务提供方的目标责任场景。
关于训练判责模型的更多细节可以参见图12及其相关描述,此处不再赘述。
根据样本服务请求中包含的责任场景的标注信息，训练获取判责模型，使得当责任判定结果表明服务事故的责任方为服务提供方时，可以同时生成服务提供方对应的目标责任场景，并将目标责任场景反馈给服务提供方，以使得服务提供方及时了解其所提供的服务请求有责的原因。一方面可以提高服务提供方对判定其有责的认可度，另一方面，也可以帮助服务提供方更好地改善自身的服务，以提供更优质的服务。
在一些实施例中，若样本服务请求标注有多个第二标注信息，每个第二标注信息对应样本服务提供方的一个样本候选责任场景及该样本候选责任场景的优先级。
需要说明的是，任意一个责任场景下均可以对应包含多个服务请求，例如：对于“服务提供方未及时接听电话”这一责任场景，可能存在100个服务请求均是由于该责任场景而收到了服务请求方的投诉。同样地，对于任意服务请求，其对应的责任场景也可以包括多个，也即，样本服务请求的第二标注信息可以包括多个样本候选责任场景。例如，服务请求A对应的责任场景可以包括：服务提供方未及时接听电话、服务提供方未按时到达目的地。
在一些实施例中，在根据第一预设规则对应的责任场景，对样本服务请求进行标注之前，还可以包括：确定责任场景的优先级。关于确定责任场景的优先级的更多细节可以参见步骤1120。
在根据判责模型判定出服务请求为服务提供方有责,且反馈责任场景给服务提供方时,可以根据该服务请求所对应的责任场景的优先级,将优先级最高的责任场景反馈给服务提供方,从而有效提高服务提供方对反馈结果的认可度。
图22是根据本说明书一些实施例所示的判定服务事故的责任的示例性过程的流程图。流程2200的执行主体也可以为终端、服务器等具备数据处理功能的设备。其中,执行流程2200的设备和执行判责模型的训练方法(例如,图20和/或图21)的设备可以为同一设备,也可以为不同的设备。
如步骤510所述,服务请求发生服务事故,可能导致服务请求被取消。在一些实施例中,判定服务事故的责任(简称责任判定)可以包括以下步骤:
步骤2210,获取被取消的服务请求。
关于获取被取消的服务请求的更多细节可以参见步骤510,此处不再赘述。
步骤2220,采用预先训练的判责模型,对被取消的服务请求进行处理,确定被取消的服务请求的责任判定结果。
在一些实施例中,责任判定结果可以包括:责任指示信息,责任指示信息用于指示服务提供方是否为被取消的服务请求的责任方;判责模型为采用上述的判责模型的训练方法(例如,图20或图21)训练得到的模型。
在一些实施例中，可以提取该被取消的服务请求的特征，将提取到的特征输入预先训练的判责模型中，计算该被取消的服务请求的责任方为服务提供方的概率。在一些实施例中，在采用判责模型预测服务请求的责任方是否为服务提供方时，还需要预设责任概率阈值，从而可以将计算得到的被取消的服务请求对应的服务提供方为责任方的概率与预设的责任概率阈值进行比较，若被取消的服务请求对应的服务提供方为责任方的概率满足预设阈值，则责任判定结果为服务提供方为责任方。其中，预设的责任概率阈值可以根据实际应用进行设定，此处不做具体限制。
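以下是将判责模型输出的有责概率与预设责任概率阈值进行比较的示意性代码片段，其中的阈值取值与演示模型均为假设示例，实际可根据应用设定：

```python
# 示意性示例：根据预设责任概率阈值确定服务提供方是否为责任方
import numpy as np
from sklearn.linear_model import LogisticRegression

def judge_responsibility(model, features, threshold=0.6):
    proba = model.predict_proba([features])[0, 1]   # 服务提供方为责任方的概率
    return {"provider_responsible": bool(proba >= threshold), "probability": float(proba)}

# 用一个假设的小模型演示用法(实际应替换为训练好的判责模型)
demo_model = LogisticRegression().fit(np.random.rand(50, 4), np.random.randint(0, 2, 50))
print(judge_responsibility(demo_model, np.random.rand(4)))
```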
在一些实施例中,若责任指示信息指示服务提供方为被取消的服务请求的责任方,责任判定结果还可以包括:责任场景指示信息,责任场景指示信息用于指示服务提供方的目标责任场景。
在一些实施例中,在模型判定被取消的服务请求的责任方为服务提供方时,进一步地,判责模型还可以根据判责模型训练过程中,服务请求的特征与责任场景的对应关系,确定该被取消的服务请求对应的服务提供方的目标责任场景。
在一些实施例中，若被取消的服务请求对应的服务提供方的候选责任场景为多个，则责任场景指示信息可以包括：多个责任场景的指示信息。进一步地，根据多个候选责任场景的优先级，从多个候选责任场景中，将优先级最高的至少一个候选责任场景确定为目标责任场景。其中，当存在多个候选责任场景的优先级相同时，可以将该多个候选责任场景均作为目标责任场景。
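以下是根据候选责任场景的优先级确定目标责任场景（并处理优先级相同的情况）的示意性代码片段，其中的场景名称与优先级数值均为假设示例：

```python
# 示意性示例：从多个候选责任场景中选出优先级最高的作为目标责任场景
candidate_scenes = [
    {"scene": "未及时接听电话", "priority": 3},
    {"scene": "未按时到达上车点", "priority": 5},
    {"scene": "提前点击到达", "priority": 5},
]

top = max(s["priority"] for s in candidate_scenes)
# 优先级相同的多个候选责任场景均作为目标责任场景
target_scenes = [s["scene"] for s in candidate_scenes if s["priority"] == top]
print(target_scenes)   # ['未按时到达上车点', '提前点击到达']
```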
在一些实施例中，向被取消的服务请求对应的服务提供方返回责任判定结果，责任判定结果包括：责任指示信息，以及目标责任场景的指示信息。可以理解的是，当责任判定结果为服务提供方不为责任方时，则无对应的目标责任场景的指示信息。
图23是根据本说明书一些实施例所示的训练判责模型的示例性过程的流程图。流程2300的执行主体也可以为终端、服务器等具备数据处理功能的设备。其中,执行流程2300的设备和执行判责模型的训练方法(例如,图20和/或图21)的设备可以为同一设备,也可以为不同的设备。
步骤2310,获取被取消的服务请求对应的服务提供方的申诉信息。
关于申诉信息的更多细节可以参见图5及其相关描述,此处不再赘述。
步骤2320,根据责任判定结果,以及申诉信息,对判责模型的训练数据进行更新。
关于对判责模型的训练数据进行更新的更多细节可以参见图12及其相关描述,此处不再赘述。
步骤2330,根据更新后的训练数据进行判责模型的优化。
在一些实施例中,可以根据上述获取的服务提供方的申诉信息以及服务提供方的申诉信息所对应的责任判定结果,对判责模型的训练数据进行更新,不断扩充样本服务请求中的样本数据。从而可以根据更新后的数据,不断的对模型进行训练,以达到模型优化的效果。
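以下是根据申诉信息及其对应的责任判定结果更新训练数据并重新训练判责模型的示意性代码草图，其中的数据结构与更新策略均为假设示例：

```python
# 示意性示例：利用申诉复核结果扩充训练数据，并重新训练判责模型以实现优化
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

X_train = np.random.rand(200, 10)                 # 既有训练样本特征(假设)
y_train = np.random.randint(0, 2, 200)            # 既有标注(是否有责)

# 申诉复核后的新样本：特征及复核确认的责任判定结果
appeal_features = np.random.rand(20, 10)
appeal_labels = np.random.randint(0, 2, 20)

# 更新训练数据
X_train = np.vstack([X_train, appeal_features])
y_train = np.concatenate([y_train, appeal_labels])

# 根据更新后的训练数据重新训练，实现判责模型的优化
model = GradientBoostingClassifier().fit(X_train, y_train)
print("更新后训练样本数:", X_train.shape[0])
```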
通过采用预先训练的判责模型，对被取消的服务请求进行责任判定。其中，预先训练的判责模型具有与采用上述实施例所提供的判责模型的训练方法训练得到的判责模型相同的技术效果，从而使得根据该预先训练的判责模型进行责任判定时，判定结果的准确性和可靠性较高，且反馈给服务提供方的目标责任场景的准确性也较高，具有较好的说服力。还进一步地根据服务提供方的申诉结果及责任判定结果，对模型训练的样本数据进行更新，并根据更新后的数据进行模型训练，从而对判责模型进行了优化，有效提升了判责模型进行责任判定的准确率。
图24是根据本说明书一些实施例所示的示例性判责模型的训练的装置的模块图。如图24所示,装置2400可以包括第一确定模块2410、标注模块2420、样本特征获取模块2430和训练模块2440。
如前所述,获取模块410可以用于获取判责模型(例如,通过训练获取判责模型)。在一些实施例中,获取模块410还可以包括多个子模块。例如,获取模块410可以包括第一确定模块2410、标注模块2420、样本特征获取模块2430和训练模块2440。
第一确定模块2410用于根据服务请求方的服务投诉信息,从被取消的多个历史服务请求中,确定被投诉的服务请求作为样本服务请求。关于确定样本服务请求的更多细节可以参见步骤1210及其相关描述,此处不再赘述。
标注模块2420用于根据第一预设规则,对样本服务请求进行标注,使得样本服务请求中标注有:第一标注信息,第一标注信息用于指示样本服务提供方是否为服务事故的样本责任方。标注模块2420,还用于根据第一预设规则对应的责任场景,对样本服务请求进行标注,使得样本服务请求中标注有:第二标注信息,第二标注信息用于指示样本服务提供方的样本目标责任场景。关于第一预设规则、第一标注信息和第二标注信息的更多细节可以参见图11和图12及其相关描述,此处不再赘述。
样本特征获取模块2430用于对样本服务请求进行特征提取,得到样本服务请求的样本特征。关于样本特征的更多细节可以参见图12及其相关描述,此处不再赘述。
训练模块2440用于根据第一标注信息以及样本特征,进行模型训练,得到判责模型,判责模型用于判定服务提供方是否为服务事故的责任方。训练模块2440还用于根据第一标注信息、第二标注信息以及样本特征,进行模型训练,得到判责模型。关于训练判责模型的更多细节可以参见图12及其相关描述,此 处不再赘述。
在一些实施例中,第一确定模块2410还用于根据责任场景下服务请求方的投诉转化率,确定责任场景的优先级。
图25是根据本说明书一些实施例所示的示例性判定服务事故的责任的装置的模块图。装置2500可以包括获取模块410和预测模块2510。
如前所述,获取模块410用于获取服务请求。在一些实施例中,获取模块410可以获取被取消的服务请求。
如前所述,判定模块430用于确定责任判定结果。在一些实施例中,判定模块430还可以包括多个子模块。例如,判定模块430可以包括预测模块2510。在一些实施例中,预测模块2510用于采用预先训练的判责模型,对被取消的服务请求进行处理,确定被取消的服务请求的责任判定结果。责任判定结果的更多细节可以参见图5、图22等流程图。
图26是根据本说明书一些实施例所示的示例性判定服务事故的责任的装置的模块图。装置2600可以包括:获取模块410、预测模块2510、第二确定模块2610和返回模块440。
如前所述,获取模块410可以获取被取消的服务请求。
如前所述,关于预测模块2510的更多细节可以参见图25及其相关描述,此处不再赘述。
如前所述,判定模块430用于基于候选责任场景的优先级确定目标责任场景。在一些实施例中,判定模块430还可以包括多个子模块。例如,判定模块430可以包括第二确定模块2610。在一些实施例中,第二确定模块2610用于根据多个候选责任场景的优先级,从多个候选责任场景中,确定优先级最高的至少一个候选责任场景为目标责任场景。关于目标责任场景的更多细节可以参见步骤530及其相关描述,此处不再赘述。
如前所述，返回模块440可以用于将责任判定结果返回给服务提供方或/和服务请求方。例如，返回模块440可以向被取消的服务请求对应的服务提供方返回责任判定结果。关于责任判定结果的更多细节参见图5、图22等流程图。
图27是根据本说明书一些实施例所示的示例性判定服务事故的责任的装置的模块图。装置2700可以包括第一确定模块2410、标注模块2420、申诉信息获取模块2710、更新模块2720和优化模块2730。
关于第一确定模块2410、标注模块2420的更多细节参见图24及其相关描述,此处不再赘述。
如前所述，获取模块410可以用于获取服务提供方的申诉信息，并基于申诉信息和责任判定结果更新判责模型。在一些实施例中，获取模块410可以包括申诉信息获取模块2710、更新模块2720和优化模块2730。
申诉信息获取模块2710,用于获取被取消的服务请求对应的服务提供方的申诉信息。
更新模块2720,用于根据责任判定结果,以及申诉信息,对判责模型的训练数据进行更新。
优化模块2730,用于根据更新后的训练数据进行判责模型的优化。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外，本领域技术人员可以理解，本说明书的各方面可以通过若干具有可专利性的种类或情况进行说明和描述，包括任何新的和有用的工序、机器、产品或物质的组合，或对他们的任何新的和有用的改进。相应地，本说明书的各个方面可以完全由硬件执行、可以完全由软件（包括固件、常驻软件、微码等）执行、也可以由硬件和软件组合执行。以上硬件或软件均可被称为“数据块”、“模块”、“引擎”、“单元”、“组件”或“系统”。此外，本说明书的各方面可能表现为位于一个或多个计算机可读介质中的计算机产品，该产品包括计算机可读程序编码。
计算机存储介质可能包含一个内含有计算机程序编码的传播数据信号,例如在基带上或作为载波的一部分。该传播信号可能有多种表现形式,包括电磁形式、光形式等,或合适的组合形式。计算机存储介质可以是除计算机可读存储介质之外的任何计算机可读介质,该介质可以通过连接至一个指令执行系统、装置或设备以实现通讯、传播或传输供使用的程序。位于计算机存储介质上的程序编码可以通过任何合适的介质进行传播,包括无线电、电缆、光纤电缆、RF、或类似介质,或任何上述介质的组合。
本说明书各部分操作所需的计算机程序编码可以用任意一种或多种程序语言编写,包括面向对象编程语言如Java、Scala、Smalltalk、Eiffel、JADE、Emerald、C++、C#、VB.NET、Python等,常规程序化编程语言如C语言、Visual Basic、Fortran2003、Perl、COBOL2002、PHP、ABAP,动态编程语言如Python、Ruby和Groovy,或其他编程语言等。该程序编码可以完全在用户计算机上运行、或作为独立的软件包在用户计算机上运行、或部分在用户计算机上运行部分在远程计算机运行、或完全在远程计算机或处理设备上运行。在后种情况下,远程计算机可以通过任何网络形式与用户计算机连接,比如局域网(LAN)或广域网(WAN),或连接至外部计算机(例如通过因特网),或在云计算环境中,或作为服务使用如软件即服务(SaaS)。
此外，除非权利要求中明确说明，本说明书所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用，并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例，但应当理解的是，该类细节仅起到说明的目的，附加的权利要求并不仅限于披露的实施例，相反，权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如，虽然以上所描述的系统组件可以通过硬件设备实现，但是也可以只通过软件的解决方案得以实现，如在现有的处理设备或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本说明书披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本说明书实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本说明书对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本说明书一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本说明书引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本说明书作为参考。与本说明书内容不一致或产生冲突的申请历史文件除外,对本说明书权利要求最广范围有限制的文件(当前或之后附加于本说明书中的)也除外。需要说明的是,如果本说明书附属材料中的描述、定义、和/或术语的使用与本说明书所述内容有不一致或冲突的地方,以本说明书的描述、定义和/或术语的使用为准。
最后,应当理解的是,本说明书中所述实施例仅用以说明本说明书实施例的原则。其他的变形也可能属于本说明书的范围。因此,作为示例而非限制,本说明书实施例的替代配置可视为与本说明书的教导一致。相应地,本说明书的实施例不仅限于本说明书明确介绍和描述的实施例。

Claims (32)

  1. 一种方法,用于判定服务事故的责任,包括:
    获取服务请求,所述服务请求发生服务事故;
    提取所述服务请求的特征;以及
    基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果至少包括:所述服务请求的服务提供方是否为所述服务事故的责任方。
  2. 根据权利要求1所述的方法,所述特征至少包括沟通信息,所述沟通信息基于所述服务请求的服务请求方和所述服务提供方之间的沟通内容确定。
  3. 根据权利要求2所述的方法,所述提取所述服务请求的特征包括:
    对所述沟通信息进行分词处理,确定至少一个目标分词结果;
    将所述至少一个目标分词结果转化为至少一个目标分词向量;以及
    基于所述至少一个目标分词向量,确定所述沟通信息对应的目标文本向量。
  4. 根据权利要求3所述的方法,所述对所述沟通信息进行分词处理,确定至少一个目标分词结果包括:
    对所述沟通信息进行分词,得到至少一个初步分词结果;以及
    基于所述至少一个初步分词结果分别对应的属性特征,对所述至少一个初步分词结果进行过滤,确定至少一个目标分词结果。
  5. 根据权利要求4所述的方法,对于所述至少一个初步分词结果中的每一个,所述属性特征包括所述初步分词结果对应的沟通方或所述初步分词结果的重要性中的至少一种。
  6. 根据权利要求1所述的方法,所述特征包括所述服务请求的基本信息、所述服务提供方的画像信息或服务请求方的画像信息中的至少一种。
  7. 根据权利要求1所述的方法,若所述责任判定结果表明所述服务提供方为所述责任方,所述责任判定结果还包括:所述服务提供方对应的目标责任场景。
  8. 根据权利要求1所述的方法,若所述责任判定结果表明所述服务提供方为所述责任方,所述方法还包括:
    基于第一预设规则对所述特征进行处理,确定至少一个候选责任场景;以及
    根据所述至少一个候选责任场景分别对应的优先级，确定所述服务提供方对应的目标责任场景。
  9. 根据权利要求8所述的方法,所述优先级与投诉转化率、投诉次数或投诉方式中的至少一种相关。
  10. 根据权利要求1所述的方法,所述方法还包括:
    将所述责任判定结果发送至所述服务提供方。
  11. 根据权利要求1所述的方法,所述方法还包括:
    获取所述服务提供方的申诉信息;以及
    基于所述申诉信息和所述责任判定结果,更新所述判责模型。
  12. 根据权利要求1所述的方法,所述判责模型通过训练过程获取,所述训练过程包括:
    获取多个样本服务请求,所述多个样本服务请求为发生所述服务事故的历史服务请求;
    对所述多个样本服务请求进行标注，得到所述多个样本服务请求分别对应的标注信息，其中，对于所述多个样本服务请求中的每一个，所述标注信息至少包括：所述样本服务请求的样本服务提供方是否为所述服务事故的样本责任方；
    提取所述多个样本服务请求分别对应的样本特征;以及
    基于所述多个样本服务请求分别对应的所述标注信息和所述样本特征,训练得到所述判责模型。
  13. 根据权利要求12所述的方法,对于所述多个样本服务请求中的每一个,若所述标注信息表明所述样本服务提供方为所述样本责任方,所述标注信息还包括所述样本责任方对应的样本目标责任场景。
  14. 根据权利要求12所述的方法,所述对所述多个样本服务请求进行标注,得到所述多个样本服务请求分别对应的标注信息,包括:
    基于第二预设规则,确定所述多个样本服务请求分别对应的标注信息。
  15. 一种系统,包括:
    至少一个数据库,所述至少一个数据库包括用于判定服务事故的责任的指令;
    至少一个处理器,所述至少一个处理器与所述至少一个数据库通信,其中,在执行所述指令时,所述至少一个处理器被配置为:
    获取服务请求,所述服务请求发生服务事故;
    提取所述服务请求的特征;以及
    基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果至少包括:所述服务请求的服务提供方是否为所述服务事故的责任方。
  16. 根据权利要求15所述的系统,所述特征至少包括沟通信息,所述沟通信息基于所述服务请求的服务请求方和所述服务提供方之间的沟通内容确定。
  17. 根据权利要求16所述的系统,为了提取所述服务请求的特征,所述至少一个处理器被配置为:
    对所述沟通信息进行分词处理,确定至少一个目标分词结果;
    将所述至少一个目标分词结果转化为至少一个目标分词向量;以及
    基于所述至少一个目标分词向量,确定所述沟通信息对应的目标文本向量。
  18. 根据权利要求17所述的系统,为了对所述沟通信息进行分词处理,确定至少一个目标分词结果,所述至少一个处理器被配置为:
    对所述沟通信息进行分词,得到至少一个初步分词结果;以及
    基于所述至少一个初步分词结果分别对应的属性特征,对所述至少一个初步分词结果进行过滤,确定至少一个目标分词结果。
  19. 根据权利要求18所述的系统,对于所述至少一个初步分词结果中的每一个,所述属性特征包括所述初步分词结果对应的沟通方或所述初步分词结果的重要性中的至少一种。
  20. 根据权利要求15所述的系统,所述特征包括所述服务请求的基本信息、所述服务提供方的画像信息或服务请求方的画像信息中的至少一种。
  21. 根据权利要求15所述的系统,若所述责任判定结果表明所述服务提供方为所述责任方,所述责任判定结果还包括:所述服务提供方对应的目标责任场景。
  22. 根据权利要求15所述的系统，若所述责任判定结果表明所述服务提供方为所述责任方，所述至少一个处理器被配置为：
    基于第一预设规则对所述特征进行处理,确定至少一个候选责任场景;以及
    根据所述至少一个候选责任场景分别对应的优先级，确定所述服务提供方对应的目标责任场景。
  23. 根据权利要求22所述的系统,所述优先级与投诉转化率、投诉次数或投诉方式中的至少一种相关。
  24. 根据权利要求15所述的系统,所述至少一个处理器进一步被配置为:
    将所述责任判定结果发送至所述服务提供方。
  25. 根据权利要求15所述的系统,所述至少一个处理器进一步被配置为:
    获取所述服务提供方的申诉信息;以及
    基于所述申诉信息和所述责任判定结果,更新所述判责模型。
  26. 根据权利要求15所述的系统,为了通过训练过程获取所述判责模型,所述至少一个处理器被配置为:
    获取多个样本服务请求,所述多个样本服务请求为发生所述服务事故的历史服务请求;
    对所述多个样本服务请求进行标注,得到所述多个样本服务请求分别对应的标注信息,其中,对于所述多个样本服务请求中的每一个,所述标注信息至少包括:所述样本服务请求的样本服务提供方是否为所述服务事故的样本责任方;
    提取所述多个样本服务请求分别对应的样本特征;以及
    基于所述多个样本服务请求分别对应的所述标注信息和所述样本特征，训练得到所述判责模型。
  27. 根据权利要求26所述的系统,对于所述多个样本服务请求中的每一个,若所述标注信息表明所述样本服务提供方为所述样本责任方,所述标注信息还包括所述样本责任方对应的样本目标责任场景。
  28. 根据权利要求26所述的系统,为了对所述多个样本服务请求进行标注,得到所述多个样本服务请求分别对应的标注信息,所述至少一个处理器被配置为:
    基于第二预设规则,确定所述多个样本服务请求分别对应的标注信息。
  29. 一种系统,用于判定服务事故的责任,所述系统包括:
    获取模块,用于获取服务请求,所述服务请求发生服务事故;
    提取模块,用于提取所述服务请求的特征;以及
    判定模块,用于基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果至少包括:所述服务请求的服务提供方是否为所述服务事故的责任方。
  30. 一种计算机可读存储介质，所述存储介质存储计算机指令，当计算机读取存储介质中的计算机指令后，计算机执行如下用于判定服务事故的责任的方法：
    获取服务请求,所述服务请求发生服务事故;
    提取所述服务请求的特征;以及
    基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果至少包括:所述服务请求的服务提供方是否为所述服务事故的责任方。
  31. 一种方法,用于判定服务事故的责任,包括:
    获取服务请求,所述服务请求发生服务事故;
    提取所述服务请求的特征,所述特征至少包括沟通信息,所述沟通信息基于所述服务请求的服务请求方和所述服务提供方之间的沟通内容确定;以及
    基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果包括所述服务请求的服务提供方是否为所述服务事故的责任方。
  32. 一种方法,用于判定服务事故的责任,包括:
    获取服务请求,所述服务请求发生服务事故;
    提取所述服务请求的特征;以及
    基于判责模型对所述特征进行处理,确定所述服务事故的责任判定结果,所述责任判定结果包括:
    所述服务请求的服务提供方是否为所述服务事故的责任方,以及
    若所述服务提供方为所述责任方,所述服务提供方对应的目标责任场景。


