Program and Keynotes
The conference takes place at: Comwell Hvide Hus, Vesterbro 2, 9000 Aalborg.
The complete proceedings are available here.
MONDAY, JUNE 30
TUESDAY, JULY 1
| Time | Event |
| --- | --- |
| 09:00 | Research Session 3: Data Mining (Chair: Lars Dannecker) |
| | Data Perturbation for Outlier Detection Ensembles |
| | A Subspace Filter Supporting the Discovery of Small Clusters in Very Noisy Datasets |
| | DivIDE: Efficient Diversification for Interactive Data Exploration |
| | Local Context Selection for Outlier Ranking in Graphs with Multiple Numeric Node Attributes |
| 11:10 | Research Session 4: Advanced Issues (Chair: Emmanuel Müller) |
| | (k, d)-Core Anonymity: Structural Anonymization of Massive Networks |
| | Matching Dominance: Capture the Semantics of Dominance for Multi-dimensional Uncertain Objects |
| | A Provable Algorithmic Approach to Product Selection Problems for Market Entry and Sustainability |
| | Distributed Data Placement to Minimize Communication Cost via Graph Partitioning |
| 13:30 | Tour and banquet |
| 23:00 | Return to hotel |
WEDNESDAY, JULY 2
Long data is data with a prominent temporal context that captures changes in the real world. Long data is being generated and collected at an unprecedented scale, and data-driven decision making is omnipresent in our society. In stark contrast, database technology in general, and the relational model in particular, are at odds with data that exhibits a prominent temporal context. Recently, however, the major database companies have significantly progressed their infrastructures to deal with temporal data. The talk works out the key requirements for managing temporal data, shows how these requirements can be mapped to simple and powerful primitives for the relational model and database systems, and identifies a range of open problems in dealing with long data.
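To make the idea of relational primitives for temporal data concrete: one recurring building block in this line of work is a join over validity intervals, where matching tuples are valid over the intersection of their intervals. The following is a minimal Python sketch of that idea, not the talk's actual formulation; all names and data are hypothetical.

```python
from collections import namedtuple

# Hypothetical valid-time relation: each row carries a [start, end) interval.
Row = namedtuple("Row", ["key", "value", "start", "end"])

def temporal_join(r, s):
    """Naive valid-time inner join: pair rows with equal keys whose
    validity intervals overlap; each result row is valid over the
    intersection of the two input intervals."""
    out = []
    for a in r:
        for b in s:
            lo, hi = max(a.start, b.start), min(a.end, b.end)
            if a.key == b.key and lo < hi:  # non-empty overlap
                out.append(Row(a.key, (a.value, b.value), lo, hi))
    return out

# Illustrative data: turbine-to-blade assignments joined with calibration periods.
r = [Row("t1", "blade-A", 1, 10)]
s = [Row("t1", "cal-7", 5, 20)]
print(temporal_join(r, s))
# [Row(key='t1', value=('blade-A', 'cal-7'), start=5, end=10)]
```

A production system would of course evaluate this with interval indexes or sort-merge strategies rather than nested loops; the sketch only shows the semantics.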
Michael H. Böhlen is a professor of computer science at the University of Zürich, where he heads the database technology group. His research interests span various aspects of data management and have focused on time-varying information, data warehousing and data analysis, and similarity search. He received his M.Sc. and Ph.D. degrees from ETH Zürich in 1990 and 1994, respectively. Before joining the University of Zürich he visited the University of Arizona for one year, and was a faculty member at Aalborg University for eight years and at the Free University of Bozen-Bolzano for six years. He was Program Co-chair of the 39th International Conference on Very Large Data Bases and served as an associate editor for ACM TODS and The VLDB Journal. He has served as a PC member for SIGMOD, VLDB, ICDE, and EDBT, and is a member of the VLDB Endowment's Board of Trustees.
Using computation and data to bring wind on par with fossil fuel
Vestas creates value daily from petabytes of data, drawn from more than 35,000 wind turbines and from simulations. This added value is important in making carbon-neutral energy competitive with fossil fuels. Working with data at the petabyte scale is not feasible with relational databases. In partnership with IBM Research in Almaden, Vestas has succeeded in obtaining SQL-like capability that scales. This talk addresses how big data has become big business for Vestas and how the technical challenges were overcome.
Anders Rhod Gregersen is the chief specialist in high-performance and data-heavy computing at Vestas Wind Systems A/S. At Vestas he designed and operates the Firestorm supercomputer, the third-largest commercially used supercomputer in the world at the time of its installation. Before Vestas, Anders enabled university supercomputers in the Nordic countries to analyse the vast data streams from the largest machine in the world, the Large Hadron Collider (LHC) at CERN, Geneva. He is the vice-chair of the Industrial Advisory Committee of PRACE (Partnership for Advanced Computing in Europe).