Abstract
The cost of deriving actionable knowledge from large datasets has been decreasing thanks to a convergence of positive factors: low-cost data generation, inexpensively scalable storage and processing infrastructure (the cloud), software frameworks and tools for massively distributed data processing, and parallelisable data analytics algorithms. An often overlooked observation, however, is that none of these elements is immutable; rather, they all evolve over time. In particular, as the underlying datasets change, the value of the knowledge derived from them may decay unless it is preserved by reacting to those changes. Our broad research goal is to develop models, methods, and tools for reacting to changes selectively, balancing costs and benefits through complete or partial re-computation of some of the underlying processes. In this paper we present an initial model for reasoning about change and re-computation, and show how analysis of the detailed provenance of derived knowledge informs re-computation decisions. We illustrate the main ideas through a real-world case study in genomics, namely the interpretation of human variants in support of genetic diagnosis.
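To make the cost/benefit trade-off concrete, the following is a minimal, purely illustrative sketch, not the model defined in the paper: it assumes each derived output carries provenance naming its input datasets, together with an estimated re-computation cost and an estimated value of keeping the output fresh. All names (`DerivedOutput`, `outputs_to_refresh`, the dataset identifiers) and all numbers are hypothetical.

```python
# Illustrative sketch only: a toy cost/benefit rule for selective
# re-computation. All fields and figures below are hypothetical.
from dataclasses import dataclass


@dataclass
class DerivedOutput:
    name: str
    inputs: set            # provenance: ids of input datasets used
    recompute_cost: float  # estimated cost of re-running the process
    refresh_value: float   # estimated value of an up-to-date result


def outputs_to_refresh(outputs, changed_inputs):
    """Select outputs whose provenance overlaps the changed inputs
    and whose estimated refresh value exceeds re-computation cost."""
    selected = []
    for out in outputs:
        if out.inputs & changed_inputs:                  # affected, per provenance
            if out.refresh_value > out.recompute_cost:   # worth refreshing
                selected.append(out.name)
    return selected


if __name__ == "__main__":
    outputs = [
        DerivedOutput("variant_report_A", {"refseq_v1", "clinvar_v3"}, 5.0, 9.0),
        DerivedOutput("variant_report_B", {"clinvar_v3"}, 8.0, 2.0),
        DerivedOutput("variant_report_C", {"refseq_v1"}, 1.0, 1.5),
    ]
    # Suppose a new reference-database release invalidates clinvar_v3.
    print(outputs_to_refresh(outputs, {"clinvar_v3"}))
    # -> ['variant_report_A']  (B is affected but judged not worth the cost)
```

The point of the sketch is only the decision structure: provenance identifies which derived outputs a change can affect, and a cost/benefit comparison then narrows those down to the ones actually worth re-computing.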
Original language | English |
---|---|
Publication status | Published - 2016 |
Event | 8th USENIX Workshop on the Theory and Practice of Provenance, TaPP 2016 - Washington, United States. Duration: 8 Jun 2016 → 9 Jun 2016 |
Conference
Conference | 8th USENIX Workshop on the Theory and Practice of Provenance, TaPP 2016 |
---|---|
Country/Territory | United States |
City | Washington |
Period | 8/06/16 → 9/06/16 |
Bibliographical note
Funding Information: This work is funded in part by UK EPSRC grant no. EP/N01426X/1.
Publisher Copyright: © TaPP 2016 - 8th USENIX Workshop on the Theory and Practice of Provenance. All rights reserved.
Keywords
- Big data analytics
- Data change
- Data refresh
- Provenance
ASJC Scopus subject areas
- General Computer Science