2018.08.07 - 2018.08.08
Task 1. OKBQA Platform
As research in the field of artificial intelligence has intensified, increasing attention has been paid to building human knowledge as structured knowledge bases that machines can understand, and to utilizing such knowledge bases. These knowledge bases power improved information retrieval systems such as the Google Knowledge Graph, and serve as the foundation for the services of AI agents like Apple Siri, Amazon Echo, and IBM Watson.
OKBQA (Open Knowledge Base and Question Answering) is a community aimed at constructing such knowledge bases and question answering systems built on them, and in particular at providing the architecture and platform for resource disclosure, integration, and collaboration. OKBQA has held hackathons since 2014 and has exchanged technology and built systems with domestic and international experts through international venues such as COLING and SIGIR.
At the OKBQA Hackathon 2018, Task 1. OKBQA Platform aims:
to learn how to use OKBQA frameworks and modules
to learn how to contribute to OKBQA system and evaluate it
The OKBQA platform is a knowledge-based QA system composed of various sub-modules. The knowledge base is constructed from RDF (Resource Description Framework) data, which consists of triples of the form <Entity 1, Attribute, Entity 2>. A query language such as SPARQL must be used to access this knowledge base.
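As a minimal sketch of the <Entity 1, Attribute, Entity 2> representation described above, the snippet below stores a few triples in memory and looks them up by partial pattern, much as a triple store does. The entities and attributes are illustrative examples, not taken from an actual OKBQA knowledge base.

```python
# Illustrative RDF-style triples: <Entity 1, Attribute, Entity 2>.
# These facts are made up for demonstration purposes only.
triples = [
    ("KAIST", "locatedIn", "Daejeon"),
    ("KAIST", "numberOfStudents", "10504"),
    ("Daejeon", "country", "South Korea"),
]

def match(subject=None, predicate=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# All facts about KAIST:
print(match(subject="KAIST"))
```

SPARQL generalizes exactly this kind of pattern matching, with variables in place of the `None` wildcards used here.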
To provide an intuitive way of accessing this data, a process is required that interprets the user's natural language question (e.g., "How many students in KAIST?") and converts it into a SPARQL query.
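One of the simplest strategies for this conversion is template filling, sketched below. The template, entity name, property name, and example.org URIs are all hypothetical placeholders; an actual OKBQA pipeline splits this work across separate modules for template generation, disambiguation, and query generation.

```python
# A hedged sketch of template-based question-to-SPARQL conversion.
# The URIs and names below are illustrative placeholders only.
TEMPLATE = (
    "SELECT ?count WHERE {{ "
    "<http://example.org/resource/{entity}> "
    "<http://example.org/property/{prop}> ?count . }}"
)

def to_sparql(entity, prop):
    """Fill the fixed template with a resolved entity and property."""
    return TEMPLATE.format(entity=entity, prop=prop)

# "How many students in KAIST?" -> entity: KAIST, property: numberOfStudents
print(to_sparql("KAIST", "numberOfStudents"))
```

The hard part of the pipeline is deciding *which* template, entity, and property a question maps to; the final query generation step shown here is comparatively mechanical.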
Task 1. OKBQA Platform provides opportunities to utilize and improve pre-built frameworks, architectures, and sub-modules. More specifically, we aim to:
Encourage participation in the OKBQA collaboration platform through hands-on training
Understand the question answering system technologies and datasets
Establish your own question answering system through a small contribution to the OKBQA system
Integrate and evaluate it in the OKBQA architecture
There are no special restrictions on participating in Task 1. Open KB-based QA.
However, if you are familiar with the following techniques, you can collaborate more efficiently:
The Evaluator is a module that evaluates a given question answering system based on the user's choice of modules. Users can use configuration options to choose which modules to use for each step of the QA system, and data options to choose which QA dataset to use for the evaluation. Once the Evaluator runs, it sends the natural language questions in the QA dataset to the Controller one by one, and compares the gold answers of the dataset with the answer list returned by the Controller. More details on the QA dataset, input, and output can be found in our GitHub.
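The Evaluator's comparison loop can be sketched as follows. The dataset contents and the stub `controller` function are illustrative stand-ins for the real OKBQA interfaces, and the scoring rule (a question counts as correct if any returned answer matches a gold answer) is one simple choice among several.

```python
# A minimal sketch of the Evaluator loop: send each question to the
# Controller and compare its answers against the gold answers.
# All data and the controller stub are illustrative, not real interfaces.
dataset = [
    {"question": "How many students in KAIST?", "answers": {"10504"}},
    {"question": "Where is KAIST located?", "answers": {"Daejeon"}},
]

def controller(question):
    """Stub Controller: a real one would run the full QA module pipeline."""
    canned = {
        "How many students in KAIST?": ["10504"],
        "Where is KAIST located?": ["Seoul"],  # deliberately wrong answer
    }
    return canned.get(question, [])

def evaluate(dataset):
    """Score a question as correct if any returned answer is a gold answer."""
    correct = sum(
        1
        for item in dataset
        if set(controller(item["question"])) & item["answers"]
    )
    return correct / len(dataset)

print(evaluate(dataset))  # 1 of 2 questions answered correctly -> 0.5
```

Real QA benchmarks typically report precision, recall, and F1 over answer sets rather than this single accuracy number, but the send-and-compare structure is the same.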