Yong Rui, Vicente Ivan Sanchez Carmona, Mohsen Pourvali, Yun Xing, Wei-Wen Yi, Hui-Bin Ruan, Yu Zhang. Knowledge Mining: A Cross-disciplinary Survey. Machine Intelligence Research, vol. 19, no. 2, pp. 89-114, 2022. https://doi.org/10.1007/s11633-022-1323-6

Knowledge Mining: A Cross-disciplinary Survey

doi: 10.1007/s11633-022-1323-6
More Information
  • Author Bio:

    Yong Rui received the B. Sc. degree in electrical engineering from Southeast University, China in 1991, the M. Sc. degree in electrical engineering from Tsinghua University, China in 1994, and the Ph. D. degree in electrical and computer engineering from University of Illinois at Urbana-Champaign (UIUC), USA in 1999. He is currently the Chief Technology Officer and Senior Vice President of Lenovo Group, China. He is a Fellow of ACM, IEEE, IAPR, SPIE, CCF and CAAI, and a Foreign Member of Academia Europaea. He holds 70 patents, and is the recipient of the prestigious 2018 ACM SIGMM Technical Achievement Award and 2016 IEEE Computer Society Edward J. McCluskey Technical Achievement Award. His research interests include multimedia, artificial intelligence, big data and knowledge mining. E-mail: yongrui@lenovo.com (Corresponding author)

    Vicente Ivan Sanchez Carmona received the B. Eng. and M. Eng. degrees in computer engineering from National Autonomous University of Mexico, Mexico in 2008 and 2011, and the Ph. D. degree in computer science from University College London, UK in 2018. He is currently a researcher in Lenovo′s AI Lab, China. He has served as a reviewer for different conferences such as AAAI, ACL, CoNLL and COLING, among others. His research interests include artificial intelligence, behavioral science, cognitive science and human-computer interaction. E-mail: vcarmona@lenovo.com

    Mohsen Pourvali received the Ph. D. degree in computer science from Ca′ Foscari University of Venice, Italy in 2017. During the Ph. D. period, he worked on text summarization and document enrichment. Currently, he is an advisory researcher at the AI Lab in Lenovo. He is an experienced lecturer with a demonstrated history of teaching in universities. His research interests include explainable artificial intelligence and knowledge graphs, especially domain-adaptive information extraction. E-mail: mpourvali@lenovo.com ORCID iD: 0000-0003-2653-9613

    Yun Xing received the B. Sc. degree in optical information science and technology from Beijing Institute of Technology, China in 2012, and the M. Eng. degree in electronics and optics from Polytech Orleans, France in 2016. Currently, he is an NLP researcher in the AI Lab at Lenovo Research, China. His research interests include natural language processing in machine learning and deep learning. E-mail: xingyun44@hotmail.com

    Wei-Wen Yi received the B. Eng. and M. Eng. degrees in information and communication engineering from Beijing University of Posts and Telecommunications, China in 2017 and 2020, respectively. Currently, she is a natural language processing researcher at Lenovo Research, China. She received the Best Paper Award of the EAI International Conference on Communications and Networking in China, China in 2018. She is a member of EAI and IEEE. Her research interests include named entity recognition, relation extraction and entity linking. E-mail: yiww1@lenovo.com

    Hui-Bin Ruan received the M. Sc. degree in computer technology from Soochow University, China in 2020. Currently, she is a researcher in natural language processing at Lenovo, China. Her research interests include discourse parsing, text classification and entity linking. E-mail: ruanhb2@lenovo.com

    Yu Zhang received the B. Eng. degree in human factors, the B. Sc. (Minor) degree in applied mathematics and the M. Sc. degree in engineering physics from Beihang University, China in 2008, 2008 and 2011, respectively. He is currently the technical assistant to the chief technology officer at Lenovo, China, and a Ph. D. candidate in computer science at Southeast University, China. His research interests include human-computer interaction and human-centered AI. E-mail: zhangyu29@lenovo.com

  • Received Date: 2021-09-15
  • Accepted Date: 2022-01-29
  • Publish Online: 2022-03-10
  • Publish Date: 2022-04-01
  • Knowledge mining is a widely active research area across disciplines such as natural language processing (NLP), data mining (DM), and machine learning (ML). The overall objective of extracting knowledge from a data source is to create a structured representation that allows researchers to better understand such data and operate upon it to build applications. Each of these disciplines has produced an ample body of research, proposing different methods that can be applied to different data types. A significant number of surveys have been carried out to summarize research works in each discipline. However, no survey has presented a cross-disciplinary review in which traits from different fields are exposed to further stimulate research ideas and to build bridges among these fields. In this work, we present such a survey.


  • 1 First-order logic (FOL) allows us to represent facts (objects and their relations) through predicates[4]. In this way, a predicate accounts for a relation type, and a predicate symbol refers to the name of a relation. The arity of a predicate indicates the number of arguments it can receive. Representing factual knowledge using FOL not only aligns with the traditional way of representing knowledge in artificial intelligence[4], but it also satisfies some useful characteristics of a representation for natural language, such as verifiability, avoidance of ambiguity, inference, and expressiveness, among others[5].
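    For instance (an illustration of ours, not an example taken from the survey), the fact that Paris is the capital of France can be written with the binary (arity-2) predicate $capitalOf$ as $capitalOf(Paris, France)$; combined with a rule such as $\forall x \forall y\; capitalOf(x, y) \rightarrow city(x)$, it licenses the inference $city(Paris)$.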
    2 Usually, knowledge bases contain several thousand semantic relation types.
    3 A token can be a word or a punctuation mark.
    4 Probably, the earliest work for NER using an LSTM is that of [22].
    5 Character embeddings can be pre-trained on a corpus and then fine-tuned on the target dataset via a convolutional neural network.
    6 Negative instances can be generated by pairing entities that have no relation in the knowledge base, assigning them the label no-relation, and extracting features from sentences where these two entities co-occur.
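    A minimal Python sketch of this pairing scheme (the entity names and KB contents below are hypothetical):

        import itertools

        # Hypothetical knowledge base: (head, tail) pairs known to hold some relation.
        kb_pairs = {("Obama", "Hawaii"), ("Gates", "Microsoft")}
        # Entities observed co-occurring in the corpus.
        entities = ["Obama", "Hawaii", "Gates", "Microsoft"]

        negatives = []
        for head, tail in itertools.permutations(entities, 2):
            if (head, tail) not in kb_pairs:
                # The pair holds no relation in the KB: label it no-relation; features
                # would then be extracted from sentences where the two entities co-occur.
                negatives.append((head, tail, "no-relation"))
        print(negatives[:3])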
    7 Recall, as shown in (4) in Section 2.5, cannot be computed since we do not know all the instances from which relation labels are to be recovered. To alleviate this problem, different recall levels are computed over increasingly bigger samples, after the system′s predictions are ranked from higher to lower confidence scores.
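    A rough sketch of such ranked evaluation (toy data; all names are ours):

        # Toy predictions: (entity_pair, relation, confidence score) triples.
        preds = [(("e1", "e2"), "born_in", 0.9), (("e3", "e4"), "works_at", 0.7),
                 (("e5", "e6"), "born_in", 0.4), (("e7", "e8"), "works_at", 0.2)]
        gold = {(("e1", "e2"), "born_in"), (("e5", "e6"), "born_in")}  # incomplete by design

        preds.sort(key=lambda p: p[2], reverse=True)   # higher to lower confidence
        for k in (1, 2, 4):                            # increasingly bigger samples
            hits = sum((pair, rel) in gold for pair, rel, _ in preds[:k])
            print(f"recall level at top-{k}: {hits / len(gold):.2f}")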
    8 Association rules extracted from transactional data are a type of Boolean association rules where an item either appears or does not appear in the rule. In turn, we can see these types of rules as FOL-like rules. For example, the association rule $diapers \to beer$ can be written as $buys(X, diapers) \to buys(X, beer)$, as noted in [79].
    9 As a note, a plausible explanation behind the association rule $diapers \to beer$ seems to be a change in the activities of people who used to frequent bars but can no longer do so because they now have the activity of parenting. Thus, these people now purchase the target product in supermarkets rather than in a bar[80].
    10 The support count of an itemset can be seen as the unnormalized support of a rule. Formally, the support of a rule can be computed as the joint probability of the itemsets in the antecedent and consequent of the rule: $p(A \cap B)$.
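    A minimal sketch of these quantities over a toy transaction set (our example, in the spirit of the diapers-beer rule):

        # Support of the rule A -> B estimated from transactions.
        transactions = [
            {"diapers", "beer", "milk"},
            {"diapers", "beer"},
            {"diapers", "bread"},
            {"milk", "bread"},
        ]
        A, B = {"diapers"}, {"beer"}
        support_count = sum((A | B) <= t for t in transactions)  # unnormalized support
        support = support_count / len(transactions)              # joint probability p(A ∩ B)
        confidence = support_count / sum(A <= t for t in transactions)
        print(support_count, support, confidence)                # 2, 0.5, 0.666...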
    11 Sometimes also referred to as explainable artificial intelligence (xAI).
    12 Or set-of-parameters-level.
    13 As noted in [113, 114], a decision tree can be converted into IF-THEN logic rules, where each internal node of the tree in a path from top to bottom represents an antecedent (the IF part of the rule), and leaf nodes deciding on the class label of an instance represent consequents (the THEN part of the rule). Thus, a path on a decision tree has a logic rule counterpart of the form IF $f_i = value_i$ AND $f_j = value_j$ AND $f_k = value_k$ THEN $class\_label = \hat{y}$, where $f_i$, $f_j$ and $f_k$ represent features of the input space.
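    A minimal sketch of this tree-to-rules conversion (using scikit-learn as one possible tool; the survey does not prescribe a library):

        from sklearn.datasets import load_iris
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = load_iris(return_X_y=True)
        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        # Each root-to-leaf path printed below corresponds to one IF-THEN rule:
        # the internal-node tests are the antecedents, the leaf class the consequent.
        print(export_text(tree, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]))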
    14 We note that the scope of a proxy model can either be at the local or global level. While a local explanation of an ML system targets a single prediction, a global explanation aims to account for the predictive behavior of the black-box system across a collection of instances; thus, a global proxy model can explain the system′s behavior for any input instance. Choosing one or the other explanation type usually corresponds to the algorithmic complexity of the method to extract the proxy model (global explanations may be NP-hard to compute in some cases[115]).
    15 It is assumed that each input neuron in a neural network receives an input value (a feature) that is human-understandable. For example, if the concept that the target NN learns is to classify houses as cheap or expensive, possible input features are the number of rooms in a house, the age of the house, or the size of the house in square meters.
    16 This system makes a prediction by applying a sigmoid function to the dot product of the vector representations corresponding to an entity pair and a relation.
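    A minimal numpy sketch of that scoring function (the vectors and their dimensionality here are illustrative, not the model′s actual parameters):

        import numpy as np

        rng = np.random.default_rng(0)
        v_pair = rng.normal(size=50)   # vector representation of an entity pair
        v_rel = rng.normal(size=50)    # vector representation of a relation

        # Predicted probability that the relation holds for the entity pair:
        # a sigmoid applied to the dot product of the two representations.
        p = 1.0 / (1.0 + np.exp(-(v_pair @ v_rel)))
        print(p)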
    17 We hold the same assumption as in Section 4.1, namely that the input features of a neural network are human interpretable.
    18 As we saw in Section 4.1, a single instance is perturbed to generate the behavioral dataset; thus, the number of predictions obtained will be proportional to the number of perturbations performed. However, some of the perturbations may yield out-of-domain instances which are not representative of the domain the black-box system was trained on, and these instances may elicit an inconsistent behavior from the black-box system.
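    A sketch of this perturbation step (the black-box model below is a stand-in of ours; any trained classifier would do):

        import numpy as np

        rng = np.random.default_rng(0)
        black_box = lambda X: (X.sum(axis=1) > 0).astype(int)  # stand-in black box

        x = np.array([0.5, -1.2, 3.0])                   # the single instance
        # Gaussian perturbations around x; large noise may leave the training domain.
        perturbed = x + rng.normal(scale=0.5, size=(100, x.size))
        behavioral_dataset = list(zip(perturbed, black_box(perturbed)))
        print(len(behavioral_dataset))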
    19 Domains such as medicine require applications to extract, as accurately as possible, knowledge such as gene and drug entities and their different types of relations from biomedical texts, or explanations from machine learning systems predicting relationships between entities.
  • [1]
    U. Fayyad, G. Piatetsky-Shapiro, P. Smyth. From data mining to knowledge discovery in databases. AI Magazine, vol. 17, no. 3, pp. 37–54, 1996. DOI: 10.1609/aimag.v17i3.1230.
    [2]
    S. Riedel, L. M. Yao, A. McCallum, B. M. Marlin. Relation extraction with matrix factorization and universal schemas. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, ACL, Atlanta, USA, pp. 74−84, 2013.
    [3]
    A. S. d′Avila Garcez, K. Broda, D. M. Gabbay. Symbolic knowledge extraction from trained neural networks: A sound approach. Artificial Intelligence, vol. 125, no. 1−2, pp. 155–207, 2001. DOI: 10.1016/S0004-3702(00)00077-1.
    [4]
    S. Russell, P. Norvig. Artificial Intelligence: A Modern Approach, 3rd ed., Harlow, UK: Pearson Education, 2010.
    [5]
    D. Jurafsky, J. H. Martin. Speech and Language Processing, [Online], Available: https://web.stanford.edu/~jurafsky/slp3/ed3book_dec302020.pdf, 2021.
    [6]
    T. Rocktäschel, S. Singh, S. Riedel. Injecting logical background knowledge into embeddings for relation extraction. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, ACL, Denver, USA, pp. 1119−1129, 2015. DOI: 10.3115/v1/N15-1118.
    [7]
    S. Hochreiter, J. Schmidhuber. Long short-term memory. Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997. DOI: 10.1162/neco.1997.9.8.1735.
    [8]
    J. Devlin, M. W. Chang, K. Lee, K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), ACL, Minneapolis, USA, pp. 4171−4186, 2019. DOI: 10.18653/v1/N19-1423.
    [9]
    E. F. Tjong Kim Sang, F. De Meulder. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the 7th Conference on Natural Language Learning at HLT-NAACL 2003, ACL, Edmonton, Canada, pp. 142−147, 2003.
    [10]
    E. Hovy, M. Marcus, M. Palmer, L. Ramshaw, R. Weischedel. OntoNotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, Association for Computational Linguistics, New York City, USA, pp. 57−60, 2006.
    [11]
    J. P. C. Chiu, E. Nichols. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, vol. 4, pp. 357–370, 2016. DOI: 10.1162/tacl_a_00104.
    [12]
    R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, P. Kuksa. Natural language processing (Almost) from scratch. The Journal of Machine Learning Research, vol. 12, pp. 2493–2537, 2011.
    [13]
    A. Passos, V. Kumar, A. McCallum. Lexicon infused phrase embeddings for named entity resolution. In Proceedings of the 18th Conference on Computational Natural Language Learning, ACL, Ann Arbor, USA, pp. 78−86, 2014. DOI: 10.3115/v1/W14-1609.
    [14]
    D. E. Appelt, J. R. Hobbs, J. Bear, D. J. Israel, M. Tyson. FASTUS: A finite-state processor for information extraction from real-world text. In Proceedings of the 13th International Joint Conference on Artificial Intelligence, Morgan Kaufmann, Chambery, France, pp. 1172−1178, 1993.
    [15]
    T. Eftimov, B. K. Seljak, P. Korošec. A rule-based named-entity recognition method for knowledge extraction of evidence-based dietary recommendations. PLoS One, vol. 12, no. 6, Article number e0179488, 2017. DOI: 10.1371/journal.pone.0179488.
    [16]
    H. Isozaki, H. Kazawa. Efficient support vector classifiers for named entity recognition. In Proceedings of the 19th International Conference on Computational Linguistics, ACL, Taipei, China, pp. 1−7, 2002. DOI: 10.3115/1072228.1072282.
    [17]
    J. D. Lafferty, A. McCallum, F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, Morgan Kaufmann Publishers Inc., San Francisco, USA, pp. 282−289, 2001.
    [18]
    A. McCallum, W. Li. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of the 7th Conference on Natural Language Learning at HLT-NAACL 2003, ACL, Edmonton, Canada, pp. 188−191, 2003. DOI: 10.3115/1119176.1119206.
    [19]
    Z. H. Huang, W. Xu, K. Yu. Bidirectional LSTM-CRF models for sequence tagging. [Online], Available: https://arxiv.org/abs/1508.01991, 2015.
    [20]
    X. Z. Ma, E. Hovy. End-to-end sequence labeling via Bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL, Berlin, Germany, pp. 1064−1074, 2016. DOI: 10.18653/v1/P16-1101.
    [21]
    G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami, C. Dyer. Neural architectures for named entity recognition. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, ACL, San Diego, USA, pp. 260−270, 2016. DOI: 10.18653/v1/N16-1030.
    [22]
    J. Hammerton. Named entity recognition with long short-term memory. In Proceedings of the 7th Conference on Natural Language Learning at HLT-NAACL 2003, ACL, Edmonton, Canada, pp. 172−175, 2003. DOI: 10.3115/1119176.1119202.
    [23]
    A. Akbik, D. Blythe, R. Vollgraf. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, ACL, Santa Fe, USA, pp. 1638−1649, 2018.
    [24]
    A. Akbik, T. Bergmann, R. Vollgraf. Pooled contextualized embeddings for named entity recognition. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), ACL, Minneapolis, USA, pp. 724−728, 2019. DOI: 10.18653/v1/N19-1078.
    [25]
    K. Liu, Y. Fu, C. Q. Tan, M. S. Chen, N. Y. Zhang, S. F. Huang, S. Gao. Noisy-labeled NER with confidence estimation. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, ACL, pp. 3437−3445, 2021. DOI: 10.18653/v1/2021.naacl-main.269.
    [26]
    D. J. Zeng, K. Liu, Y. B. Chen, J. Zhao. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of Conference on Empirical Methods in Natural Language Processing, ACL, Lisbon, Portugal, pp. 1753−1762, 2015. DOI: 10.18653/v1/D15-1203.
    [27]
    E. Sandhaus. The New York Times Annotated Corpus LDC2008T19, Linguistic Data Consortium, Philadelphia, USA, 2008. DOI: 10.35111/77ba-9x74.
    [28]
    Y. H. Zhang, V. Zhong, D. Q. Chen, G. Angeli, C. D. Manning. Position-aware attention and supervised data improve slot filling. In Proceedings of Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Copenhagen, Denmark, pp. 35−45, 2017. DOI: 10.18653/v1/D17-1004.
    [29]
    H. Ji, R. Grishman, H. T. Dang, K. Griffitt, J. Ellis. Overview of the TAC knowledge base population track. In Proceedings of Text Analysis Conference, 2010.
    [30]
    G. R. Doddington, A. Mitchell, M. A. Przybocki, L. A. Ramshaw, S. M. Strassel, R. M. Weischedel. The automatic content extraction (ACE) program – tasks, data, and evaluation. In Proceedings of the 4th International Conference on Language Resources and Evaluation, European Language Resources Association, Lisbon, Portugal, pp. 837−840, 2004.
    [31]
    S. M. Strassel, M. A. Przybocki, K. Peterson, Z. Y. Song, K. Maeda. Linguistic resources and evaluation techniques for evaluation of cross-document automatic content extraction. In Proceedings of the 6th International Conference on Language Resources and Evaluation, European Language Resources Association, Marrakech, Morocco, pp. 2706−2709, 2008.
    [32]
    I. Hendrickx, S. N. Kim, Z. Kozareva, P. Nakov, D. Séaghdha, S. Padó, M. Pennacchiotti, L. Romano, S. Szpakowicz. SemEval-2010 Task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, Association for Computational Linguistics, Uppsala, Sweden, pp. 33−38, 2010.
    [33]
    M. Banko, O. Etzioni. The tradeoffs between open and traditional relation extraction. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, ACL, Columbus, USA, pp. 28−36, 2008.
    [34]
    R. C. Bunescu, R. J. Mooney. Subsequence kernels for relation extraction. In Proceedings of the 18th International Conference on Neural Information Processing Systems, MIT Press, Vancouver, Canada, pp. 171−178, 2005.
    [35]
    G. D. Zhou, J. Su, J. Zhang, M. Zhang. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL, Ann Arbor, USA, pp. 427−434, 2005. DOI: 10.3115/1219840.1219893.
    [36]
    A. Culotta, J. Sorensen. Dependency tree kernels for relation extraction. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL, Barcelona, Spain, pp. 423−429, 2004. DOI: 10.3115/1218955.1219009.
    [37]
    Q. Li, H. Ji. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL, Baltimore, USA, pp. 402−412, 2014.
    [38]
    M. Miwa, M. Bansal. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL, Berlin, Germany, pp. 1105−1116, 2016. DOI: 10.18653/v1/P16-1105.
    [39]
    B. W. Yu, Z. Y. Zhang, X. B. Shu, T. W. Liu, Y. B. Wang, B. Wang, S. J. Li. Joint extraction of entities and relations based on a novel decomposition strategy. In Proceedings of the 24th European Conference on Artificial Intelligence, Santiago de Compostela, Spain, pp. 2282−2289, 2020.
    [40]
    T. J. Fu, P. H. Li, W. Y. Ma. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL, Florence, Italy, pp. 1409−1418, 2019. DOI: 10.18653/v1/P19-1136.
    [41]
    X. Y. Li, F. Yin, Z. J. Sun, X. Y. Li, A. Yuan, D. Chai, M. X. Zhou, J. W. Li. Entity-relation extraction as multi-turn question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL, Florence, Italy, pp. 1340−1350, 2019. DOI: 10.18653/v1/P19-1129.
    [42]
    I. Beltagy, K. Lo, A. Cohan. SciBERT: A pretrained language model for scientific text. In Proceedings of Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Association for Computational Linguistics, Hong Kong, China, pp. 3615−3620, 2019. DOI: 10.18653/v1/D19-1371.
    [43]
    H. Y. Zheng, R. Wen, X. Chen, Y. F. Yang, Y. Y. Zhang, Z. H. Zhang, N. Y. Zhang, B. Qin, X. Ming, Y. F. Zheng. PRGC: Potential relation and global correspondence based joint relational triple extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), ACL, pp. 6225−6235, 2021. DOI: 10.18653/v1/2021.acl-long.486.
    [44]
    T. Lai, H. Ji, C. X. Zhai, Q. H. Tran. Joint biomedical entity and relation extraction with knowledge-enhanced collective inference. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), ACL, pp. 6248−6260, 2021. DOI: 10.18653/v1/2021.acl-long.488.
    [45]
    J. Wang, W. Lu. Two are better than one: Joint entity and relation extraction with table-sequence encoders. In Proceedings of Conference on Empirical Methods in Natural Language Processing, ACL, pp. 1706−1721, 2020. DOI: 10.18653/v1/2020.emnlp-main.133.
    [46]
    M. Mintz, S. Bills, R. Snow, D. Jurafsky. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, ACL, Suntec, Singapore, pp. 1003−1011, 2009.
    [47]
    C. J. Xiao, Y. Yao, R. B. Xie, X. Han, Z. Y. Liu, M. S. Sun, F. Lin, L. Y. Lin. Denoising relation extraction from document-level distant supervision. In Proceedings of Conference on Empirical Methods in Natural Language Processing, ACL, pp. 3683−3688, 2020. DOI: 10.18653/v1/2020.emnlp-main.300.
    [48]
    S. Riedel, L. M. Yao, A. McCallum. Modeling relations and their mentions without labeled text. In Proceedings of European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, Berlin, Germany, pp. 148−163, 2010. DOI: 10.1007/978-3-642-15939-8_10.
    [49]
    T. G. Dietterich, R. H. Lathrop, T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, vol. 89, no. 1-2, pp. 31–71, 1997. DOI: 10.1016/S0004-3702(96)00034-3.
    [50]
    G. L. Ji, K. Liu, S. Z. He, J. Zhao. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, AAAI, San Francisco, USA, pp. 3060−3066, 2017.
    [51]
    Y. K. Lin, S. Q. Shen, Z. Y. Liu, H. B. Luan, M. S. Sun. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL, Berlin, Germany, pp. 2124−2133, 2016. DOI: 10.18653/v1/P16-1200.
    [52]
    Z. X. Ye, Z. H. Ling. Distant supervision relation extraction with intra-bag and inter-bag attentions. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), ACL, Minneapolis, USA, pp. 2810−2819, 2019. DOI: 10.18653/v1/N19-1288.
    [53]
    G. Y. Wang, W. Zhang, R. X. Wang, Y. L. Zhou, X. Chen, W. Zhang, H. Zhu, H. J. Chen. Label-free distant supervision for relation extraction via knowledge graph embedding. In Proceedings of Conference on Empirical Methods in Natural Language Processing, ACL, Brussels, Belgium, pp. 2246−2255, 2018. DOI: 10.18653/v1/D18-1248.
    [54]
    A. Bordes, N. Usunier, A. Garcia-Durán, J. Weston, O. Yakhnenko. Translating embeddings for modeling multi-relational data. In Proceedings of the 26th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, pp. 2787−2795, 2013.
    [55]
    Z. Wang, J. W. Zhang, J. L. Feng, Z. Chen. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, AAAI, Quebec City, Canada, pp. 1112−1119, 2014.
    [56]
    Y. K. Lin, Z. Y. Liu, M. S. Sun, Y. Liu, X. Zhu. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, AAAI, Austin, USA, pp. 2181−2187, 2015.
    [57]
    T. Hasegawa, S. Sekine, R. Grishman. Discovering relations among named entities from large corpora. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, ACL, Barcelona, Spain, pp. 415−422, 2004. DOI: 10.3115/1218955.1219008.
    [58]
    B. Rosenfeld, R. Feldman. Clustering for unsupervised relation identification. In Proceedings of the 16th ACM Conference on Information and Knowledge Management, Association for Computing Machinery, Lisbon, Portugal, pp. 411−418, 2007. DOI: 10.1145/1321440.1321499.
    [59]
    L. M. Yao, A. Haghighi, S. Riedel, A. McCallum. Structured relation discovery using generative models. In Proceedings of Conference on Empirical Methods in Natural Language Processing, ACL, Edinburgh, UK, pp. 1456−1466, 2011.
    [60]
    L. M. Yao, S. Riedel, A. McCallum. Unsupervised relation discovery with sense disambiguation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL, Jeju Island, Korea, pp. 712−720, 2012.
    [61]
    B. N. Min, S. M. Shi, R. Grishman, C. Y. Lin. Ensemble semantics for large-scale unsupervised relation extraction. In Proceedings of Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, ACL, Jeju Island, Korea, pp. 1027−1037, 2012.
    [62]
    D. Marcheggiani, I. Titov. Discrete-state variational autoencoders for joint discovery and factorization of relations. Transactions of the Association for Computational Linguistics, vol. 4, pp. 231–244, 2016. DOI: 10.1162/tacl_a_00095.
    [63]
    T. T. Tran, P. Le, S. Ananiadou. Revisiting unsupervised relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL, pp. 7498−7505, 2020. DOI: 10.18653/v1/2020.acl-main.669.
    [64]
    L. B. Soares, N. FitzGerald, J. Ling, T. Kwiatkowski. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL, Florence, Italy, pp. 2895−2905, 2019. DOI: 10.18653/v1/P19-1279.
    [65]
    X. Han, T. Y. Gao, Y. K. Lin, H. Peng, Y. L. Yang, C. J. Xiao, Z. Y. Liu, P. Li, J. Zhou, M. S. Sun. More data, more relations, more context and more openness: A review and outlook for relation extraction. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, ACL, Suzhou, China, pp. 745−758, 2020.
    [66]
    A. Yates, M. Banko, M. Broadhead, M. Cafarella, O. Etzioni, S. Soderland. TextRunner: Open information extraction on the web. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics, ACL, Rochester, USA, pp. 25−26, 2007.
    [67]
    A. Fader, S. Soderland, O. Etzioni. Identifying relations for open information extraction. In Proceedings of Conference on Empirical Methods in Natural Language Processing, ACL, Edinburgh, UK, pp. 1535−1545, 2011.
    [68]
    L. Del Corro, R. Gemulla. ClausIE: Clause-based open information extraction. In Proceedings of the 22nd International Conference on World Wide Web, Association for Computing Machinery, Rio de Janeiro, Brazil, pp. 355−366, 2013. DOI: 10.1145/2488388.2488420.
    [69]
    G. Stanovsky, J. Michael, L. Zettlemoyer, I. Dagan. Supervised open information extraction. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), ACL, New Orleans, USA, pp. 885−895, 2018. DOI: 10.18653/v1/N18-1081.
    [70]
    Y. Ro, Y. Lee, P. Kang. Multi²OIE: Multilingual open information extraction based on multi-head attention with BERT. In Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2020, Association for Computational Linguistics, pp. 1107−1117, 2020. DOI: 10.18653/v1/2020.findings-emnlp.99.
    [71]
    C. G. Wang, X. Liu, Z. Chen, H. Y. Hong, J. Tang, D. Song. Zero-shot information extraction as a unified text-to-triple translation. In Proceedings of 2021 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, pp. 1225−1238, 2021. DOI: 10.18653/v1/2021.emnlp-main.94.
    [72]
    R. D. Wu, Y. Yao, X. Han, R. B. Xie, Z. Y. Liu, F. Lin, L. Y. Lin, M. S. Sun. Open relation extraction: Relational knowledge transfer from supervised data to unsupervised data. In Proceedings of Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, ACL, Hong Kong, China, pp. 219−228, 2019. DOI: 10.18653/v1/D19-1021.
    [73]
    Y. L. Shen, X. Y. Ma, Z. Q. Tan, S. Zhang, W. Wang, W. M. Lu. Locate and label: A two-stage identifier for nested named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), ACL, pp. 2782−2794, 2021. DOI: 10.18653/v1/2021.acl-long.216.
    [74]
    T. Mikolov, I. Sutskever, K. Chen, G. Corrado, J. Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, pp. 3111−3119, 2013.
    [75]
    J. Pennington, R. Socher, C. D. Manning. GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, ACL, Doha, Qatar, pp. 1532−1543, 2014. DOI: 10.3115/v1/D14-1162.
    [76]
    A. Radford, K. Narasimhan, T. Salimans, I. Sutskever. Improving Language Understanding by Generative Pre-Training, [Online], Available: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf, 2021.
    [77]
    C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Q. Zhou, W. Li, P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, vol. 21, no. 140, pp. 1–67, 2020.
    [78]
    W. Cui, X. Chen. Open rule induction. In Proceedings of the 35th Conference on Neural Information Processing Systems, 2021.
    [79]
    J. W. Han, M. Kamber, J. Pei. Data Mining: Concepts and Techniques, 3rd ed., Berlin, Germany: Morgan Kaufmann Publishers, 2011.
    [80]
    J. Leskovec, A. Rajaraman, J. D. Ullman. Mining of Massive Datasets, [Online], Available: http://infolab.stanford.edu/~ullman/mmds/book.pdf, 2021.
    [81]
    J. W. Han, Y. Z. Sun, X. F. Yan, P. S. Yu. Mining knowledge from data: An information network analysis approach. In Proceedings of the IEEE 28th International Conference on Data Engineering, IEEE, Arlington, USA, pp. 1214−1217, 2012. DOI: 10.1109/ICDE.2012.145.
    [82]
    P. N. Tan, M. Steinbach, A. Karpatne, V. Kumar. Introduction to Data Mining, 2nd ed., USA: Pearson, 2018.
    [83]
    R. Agrawal, R. Srikant. Fast algorithms for mining association rules in large databases. In Proceedings of the 20th International Conference on Very Large Data Bases, Santiago, Chile, pp. 487−499, 1994.
    [84]
    J. W. Han, J. Pei, Y. W. Yin. Mining frequent patterns without candidate generation. ACM SIGMOD Record, vol. 29, no. 2, pp. 1–12, 2000. DOI: 10.1145/335191.335372.
    [85]
    M. J. Zaki. Scalable algorithms for association mining. IEEE Transactions on Knowledge and Data Engineering, vol. 12, no. 3, pp. 372–390, 2000. DOI: 10.1109/69.846291.
    [86]
    M. C. Liu, J. F. Qu. Mining high utility itemsets without candidate generation. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, Association for Computing Machinery, Maui, USA, pp. 55−64, 2012. DOI: 10.1145/2396761.2396773.
    [87]
    Z. H. Deng, S. L. Lv. PrePost+: An efficient N-lists-based algorithm for mining frequent itemsets via Children-parent equivalence pruning. Expert Systems with Applications, vol. 42, no. 13, pp. 5424–5432, 2015. DOI: 10.1016/j.eswa.2015.03.004.
    [88]
    J. F. Qu, B. Hang, Z. Wu, Z. B. Wu, Q. Gu, B. Tang. Efficient mining of frequent itemsets using only one dynamic prefix tree. IEEE Access, vol. 8, pp. 183722–183735, 2020. DOI: 10.1109/ACCESS.2020.3029302.
    [89]
    R. Srikant, R. Agrawal. Mining generalized association rules. In Proceedings of the 21st International Conference on Very Large Data Bases, Zurich, Switzerland, pp. 407−419, 1995.
    [90]
    J. W. Han, Y. J. Fu. Discovery of multiple-level association rules from large databases. In Proceedings of the 21st International Conference on Very Large Data Bases, Zurich, Switzerland, pp. 420−431, 1995.
    [91]
    J. W. Han, J. Pei, Y. W. Yin, R. Y. Mao. Mining frequent patterns without candidate generation: A frequent-pattern tree approach. Data Mining and Knowledge Discovery, vol. 8, no. 1, pp. 53–87, 2004. DOI: 10.1023/B:DAMI.0000005258.31418.83.
    [92]
    R. Agrawal, R. Srikant. Mining sequential patterns. In Proceedings of the 11th International Conference on Data Engineering, IEEE, Taipei, China, pp. 3−14, 1995. DOI: 10.1109/ICDE.1995.380415.
    [93]
    R. Srikant, R. Agrawal. Mining quantitative association rules in large relational tables. In Proceedings of ACM SIGMOD International Conference on Management of Data, ACM, Montreal, Canada, pp. 1−12, 1996. DOI: 10.1145/233269.233311.
    [94]
    R. Agrawal, H. Mannila, R. Srikant, H. Toivonen, A. I. Verkamo. Fast discovery of association rules. Advances in Knowledge Discovery and Data Mining, U. M. Fayyad, G. Piatetsky-Shapiro, Eds., Menlo Park, USA: American Association for Artificial Intelligence, pp. 307–328, 1996.
    [95]
    T. Fukuda, Y. Morimoto, S. Morishita, T. Tokuyama. Data mining using two-dimensional optimized association rules: Scheme, algorithms, and visualization. In Proceedings of ACM SIGMOD Conference on Management of Data, Association for Computing Machinery, Montreal, Canada, pp. 13−23, 1996.
    [96]
    B. Lent, A. Swami, J. Widom. Clustering association rules. In Proceedings of the 13th International Conference on Data Engineering, IEEE, Birmingham, UK, pp. 220−231, 1997. DOI: 10.1109/ICDE.1997.581756.
    [97]
    K. Yoda, T. Fukuda, Y. Morimoto, S. Morishita, T. Tokuyama. Computing optimized rectilinear regions for association rules. In Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining, AAAI, Newport Beach, USA, pp. 96−103, 1997.
    [98]
    M. Kamber, J. W. Han, J. Y. Chiang. Metarule-guided mining of multi-dimensional association rules using data cubes. In Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining, AAAI, Newport Beach, USA, pp. 207−210, 1997.
    [99]
    Y. Aumann, Y. Lindell. A statistical theory for quantitative association rules. Journal of Intelligent Information Systems, vol. 20, no. 3, pp. 255–283, 2003. DOI: 10.1023/A:1022812808206.
    [100]
    L. Q. Geng, H. J. Hamilton. Interestingness measures for data mining: A survey. ACM Computing Surveys, vol. 38, no. 3, Article number 9, 2006. DOI: 10.1145/1132960.1132963.
    [101]
    J. Blanchard, F. Guillet, R. Gras, H. Briand. Using information-theoretic measures to assess association rule interestingness. In Proceedings of the 5th IEEE International Conference on Data Mining, IEEE, Houston, USA, pp. 66−73, 2005. DOI: 10.1109/ICDM.2005.149.
    [102]
    J. W. Han, H. Cheng, D. Xin, X. F. Yan. Frequent pattern mining: Current status and future directions. Data Mining and Knowledge Discovery, vol. 15, no. 1, pp. 55–86, 2007. DOI: 10.1007/s10618-006-0059-1.
    [103]
    P. Fournier-Viger, J. C. W. Lin, B. Vo, T. T. Chi, J. Zhang, H. B. Le. A survey of itemset mining. WIREs Data Mining and Knowledge Discovery, vol. 7, no. 4, Article number e1207, 2017. DOI: 10.1002/widm.1207.
    [104]
    C. C. Aggarwal, Y. Li, J. Y. Wang, J. Wang. Frequent pattern mining with uncertain data. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Paris, France, pp. 29−38, 2009. DOI: 10.1145/1557019.1557030.
    [105]
    W. S. Gan, J. C. W. Lin, P. Fournier-Viger, H. C. Chao, P. S. Yu. HUOPM: High-utility occupancy pattern mining. IEEE Transactions on Cybernetics, vol. 50, no. 3, pp. 1195–1208, 2020. DOI: 10.1109/TCYB.2019.2896267.
    [106]
    C. M. Chen, L. L. Chen, W. S. Gan, L. N. Qiu, W. P. Ding. Discovering high utility-occupancy patterns from uncertain data. Information Sciences, vol. 546, pp. 1208–1229, 2021. DOI: 10.1016/j.ins.2020.10.001.
    [107]
    B. Vo, L. T. T. Nguyen, N. Bui, T. D. D. Nguyen, V. N. Huynh, T. P. Hong. An efficient method for mining closed potential high-utility itemsets. IEEE Access, vol. 8, pp. 31813–31822, 2020. DOI: 10.1109/ACCESS.2020.2974104.
    [108]
    R. Andrews, J. Diederich, A. B. Tickle. Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, vol. 8, no. 6, pp. 373–389, 1995. DOI: 10.1016/0950-7051(96)81920-4.
    [109]
    V. I. S. Carmona. Experimental Analysis of Representation Learning Systems, Ph. D. dissertation, University College London, UK, 2018.
    [110]
    R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, D. Pedreschi. A survey of methods for explaining black box models. ACM Computing Surveys, vol. 51, no. 5, Article number 93, 2019. DOI: 10.1145/3236009.
    [111]
    S. Thrun. Extracting rules from artificial neural networks with distributed representations. In Proceedings of the 7th International Conference on Neural Information Processing Systems, Denver, USA, pp. 505−512, 1994.
    [112]
    M. W. Craven, J. W. Shavlik. Extracting tree-structured representations of trained networks. In Proceedings of the 8th International Conference on Neural Information Processing Systems, Denver, USA, pp. 24−30, 1995.
    [113]
    J. Huysmans, K. Dejaeger, C. Mues, J. Vanthienen, B. Baesens. An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decision Support Systems, vol. 51, no. 1, pp. 141–154, 2011. DOI: 10.1016/j.dss.2010.12.003.
    [114]
    A. A. Freitas. Comprehensible classification models: A position paper. ACM SIGKDD Explorations Newsletter, vol. 15, no. 1, pp. 1–10, 2013. DOI: 10.1145/2594473.2594475.
    [115]
    H. Jacobsson. Rule extraction from recurrent neural networks: A taxonomy and review. Neural Computation, vol. 17, no. 6, pp. 1223–1263, 2005. DOI: 10.1162/0899766053630350.
    [116]
    L. Breiman, J. H. Friedman, R. A. Olshen, C. J. Stone. Classification and Regression Trees, New York, USA: Wadsworth Int. Group, 1984.
    [117]
    M. T. Ribeiro, S. Singh, C. Guestrin. “Why Should I Trust You?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Association for Computing Machinery, San Francisco, USA, pp. 1135−1144, 2016. DOI: 10.1145/2939672.2939778.
    [118]
    P. Domingos. Knowledge discovery via multiple models. Intelligent Data Analysis, vol. 2, no. 3, pp. 187–202, 1998. DOI: 10.3233/IDA-1998-2303.
    [119]
    I. Sánchez, T. Rocktäschel, S. Riedel, S. Singh. Towards extracting faithful and descriptive representations of latent variable models. In Proceedings of AAAI Spring Symposium on Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches, AAAI, Stanford, USA, pp. 35−38, 2015.
    [120]
    I. Sanchez Carmona, S. Riedel. Extracting interpretable models from matrix factorization models. In Proceedings of International Conference on Cognitive Computation: Integrating Neural and Symbolic Approaches, Montreal, Canada, pp. 78−84, 2015.
    [121]
    G. Peake, J. Wang. Explanation mining: Post hoc interpretability of latent factor models for recommendation systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, ACM, London, United Kingdom, pp. 2060−2069, 2018. DOI: 10.1145/3219819.3220072.
    [122]
    A. C. Gusmão, A. H. C. Correia, G. De Bona, F. G. Cozman. Interpreting embedding models of knowledge bases: A pedagogical approach. In Proceedings of ICML Workshop on Human Interpretability in Machine Learning, Stockholm, Sweden, pp. 79−86, 2018.
    [123]
    S. B. Thrun. Extracting Provably Correct Rules from Artificial Neural Networks, Technical Report, University of Bonn, Germany, 1993.
    [124]
    J. R. Zilke, E. L. Mencía, F. Janssen. DeepRED – rule extraction from deep neural networks. In Proceedings of the 19th International Conference on Discovery Science, Springer, Bari, Italy, pp. 457−473, 2016. DOI: 10.1007/978-3-319-46307-0_29.
    [125]
    M. Sato, H. Tsukimoto. Rule extraction from neural networks via decision tree induction. In Proceedings of International Joint Conference on Neural Networks, IEEE, Washington, USA, pp. 1870−1875, 2001. DOI: 10.1109/IJCNN.2001.938448.
    [126]
    R. Setiono, H. Liu. Understanding neural networks via rule extraction. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, Montreal, Canada, pp. 480−485, 1995.
    [127]
    B. S. Yang, W. T. Yih, X. D. He, J. F. Gao, L. Deng. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, USA, 2015.
    [128]
    W. J. Murdoch, A. Szlam. Automatic rule extraction from long short term memory networks. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 2017.
    [129]
    S. M. Lundberg, S. I. Lee. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, USA, pp. 4768−4777, 2017.
    [130]
    S. Bang, P. T. Xie, H. Lee, W. Wu, E. Xing. Explaining a black-box by using a deep variational information bottleneck approach. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, AAAI, pp. 11396−11404, 2021.
    [131]
    M. Pourvali, Y. C. Jin, C. Sheng, Y. Meng, L. Wang, M. S. Gorkovenko, C. J. Hu. Path-based visual explanation. In Proceedings of the 9th CCF International Conference on Natural Language Processing and Chinese Computing, Springer, Zhengzhou, China, pp. 454−466, 2020. DOI: 10.1007/978-3-030-60457-8_37.
    [132]
    A. Jacovi, Y. Goldberg. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL, pp. 4198−4205, 2020. DOI: 10.18653/v1/2020.acl-main.386.
    [133]
    J. Sippy, G. Bansal, D. S. Weld. Data staining: A method for comparing faithfulness of explainers. In Proceedings of ICML Workshop on Human Interpretability in Machine Learning, 2020.
    [134]
    J. Bastings, S. Ebert, P. Zablotskaia, A. Sandholm, K. Filippova. “Will you find these shortcuts?” A protocol for evaluating the faithfulness of input salience methods for text classification, [Online], Available: https://arxiv.org/pdf/2111.07367.pdf, 2021.
    [135]
    J. Yuan, O. Nov, E. Bertini. An exploration and validation of visual factors in understanding classification rule sets. In Proceedings of IEEE Visualization Conference, IEEE, New Orleans, USA, pp. 6−10, 2021. DOI: 10.1109/VIS49827.2021.9623303.
    [136]
    I. Lage, E. Chen, J. He, M. Narayanan, B. Kim, S. J. Gershman, F. Doshi-Velez. Human evaluation of models built for interpretability. In Proceedings of the 7th AAAI Conference on Human Computation and Crowdsourcing, AAAI Press, Stevenson, USA, pp. 59−67, 2019.
    [137]
    A. McCallum, D. Jensen. A Note on the Unification of Information Extraction and Data Mining using Conditional-Probability, Relational Models, [Online], Available: https://scholarworks.umass.edu/cs_faculty_pubs/42/, 2021.
    [138]
    R. J. Mooney, R. Bunescu. Mining knowledge from text using information extraction. ACM SIGKDD Explorations Newsletter, vol. 7, no. 1, pp. 3–10, 2005. DOI: 10.1145/1089815.1089817.
    [139]
    Q. Z. Xie, X. Z. Ma, Z. H. Dai, E. Hovy. An interpretable knowledge transfer model for knowledge base completion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL, Vancouver, Canada, pp. 950−962, 2017. DOI: 10.18653/v1/P17-1088.
    [140]
    L. A. Galárraga, C. Teflioudi, K. Hose, F. Suchanek. AMIE: Association rule mining under incomplete evidence in ontological knowledge bases. In Proceedings of the 22nd International Conference on World Wide Web, Association for Computing Machinery, Rio de Janeiro, Brazil, pp. 413−422, 2013. DOI: 10.1145/2488388.2488425.
    [141]
    S. Riedel, S. Singh, G. Bouchard, T. Rocktäschel, I. Sanchez. Towards two-way interaction with reading machines. In Proceedings of the 3rd International Conference on Statistical Language and Speech Processing, Springer, Budapest, Hungary, pp. 1−7, 2015. DOI: 10.1007/978-3-319-25789-1_1.
    [142]
    K. A. Kaufman, R. S. Michalski. From data mining to knowledge mining. Data Mining and Data Visualization, C. R. Rao, E. J. Wegman, J. L. Solka, Eds., Amsterdam, Netherlands: Elsevier, pp. 47−75, 2005. DOI: 10.1016/S0169-7161(04)24002-0.